Practical: The Fallacy of Induction

This is the third and final post in a series about the value of practical work in science. In the first post I suggested that science trainee teachers (and possibly some qualified teachers too) have a tendency to make assumptions about the value, and the learning, associated with practical work in science. In the second post I illustrated this with an example and briefly tackled two questions I think are important: whether or not children enjoying practical work is sufficient justification, and whether or not just doing practical will make them better at it. I left a third question hanging and ask it again now.

Do children learn important science ideas and/or develop their understanding from seeing the theory ‘in the flesh’? Often trainee teachers think that this is self-evident. I’m not convinced.

Some of the most useful work on children’s ideas and misconceptions in science was completed by Rosalind Driver and colleagues in the 1990s. I think this is an essential resource for all science teachers because of the evidence that knowledge of children’s misconceptions is an important distinguishing feature between more and less effective teachers. Some may find elements of the suggestions for classroom practice overly constructivist but for me, as well as the identification of a whole range of misconceptions, the other really useful idea I have taken from this work is the ‘fallacy of induction’.
The fallacy of induction is the mistaken belief that children, when presented with relevant evidence, for example from practical work, will tend to work out (induce) the appropriate scientific theory.

The problem is that correct scientific theories are often simple when you know them, but are tremendously hard to generate directly from evidence. After all, it took a lot of very skilled scientific thinkers hundreds of years to do this the first time. What’s worse, children inevitably develop naïve theories as they grow up, so in secondary school they are sometimes not just trying to learn correct scientific thinking but are trying to un-learn naïve thinking that serves them perfectly well outside the classroom.

As teachers, we can of course select learning experiences, including practical work, that provide far more scaffolding and direction than Galileo, Copernicus, Newton, Darwin, Lavoisier, Faraday or Wegener were working with but, however well we do this, I think that induction from practical work, or other activities, is doomed to failure.

For conceptually straightforward science, where there are no misconceptions to overcome, we as science teachers can clearly see how the correct scientific principle follows from the practical observations, and it is very easy to assume this will be apparent to the learners too. For the teacher, the scientific principle already exists as a complete and correct schema (like a mind map) in their long-term memory, and they know which features of the practical are relevant, so making this match is relatively easy. For the learner this is not the case. They just don’t have enough of the necessary knowledge chunked in long-term memory to manage the cognitive load – they can’t see the wood for the trees. Like many cognitive load problems, it may be possible to scaffold or adapt the activity sufficiently to allow children to see the wood, but you have to question whether a forest is the right starting place, or whether a nice piece of rough-sawn timber from B&Q might be a better option.

Where there are misconceptions, Driver and others have suggested that cognitive conflict, created by exposure to direct evidence that the existing ideas are untenable, will help to resolve the problem. That was certainly my thinking for many years. It seems obvious that, when presented with evidence that is in conflict with their misconceptions, learners will tend to respond by correcting their ideas (their mental representations or schemas). What actually seems to happen a lot of the time is that they ignore, fail to focus on, or distort the evidence, so that their naïve theory survives and may even be reinforced. This explains why so many intelligent people stuck with Aristotle’s ideas about force and motion for a thousand years despite blatant evidence to the contrary.

The ideas of Daniel Kahneman and others help to explain why people have an overwhelming tendency to respond in this way. David Didau in his #WrongBook is also very good on the reasons why our response to contradictory evidence tends to be irrational.

My personal experience is that I have eventually learned the situations where my quick thinking will be wrong and I need to over-write it with the correct scientific idea. For something like weight and mass I can pretty much do this automatically, but for something more taxing, like the tendency to notice speed and mistakenly reason as if force and acceleration behaved in the same way, the best I can do is stop myself and know that I need to think very hard and apply Newton’s Laws with great care.
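
To make the contrast concrete (a gloss of my own, in standard notation, not something from the original posts): the naive, Aristotelian intuition treats force as tracking velocity, so anything moving must have a force pushing it, whereas Newton’s second law ties the net force to acceleration:

\[ \text{naive: } F \propto v \qquad\qquad \text{Newton: } F_{\text{net}} = ma \]

An object coasting at constant velocity has zero net force on it – exactly the case where quick thinking gives the wrong answer.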

I don’t think typical practical work ever produces enough clarity in either the results or the conclusions to even begin to address these stubborn misconceptions. I love asking hinge questions, like those in the Veritasium videos, that throw up misconceptions, but the next step is to tackle the problem head on. I don’t think there are many situations where children can discover scientific principles directly through practical work, and I think it even less likely that misconceptions can be effectively challenged and addressed that way.

So, what role does that leave for practical work in teaching science? I think, if you’ve read this far, you might be thinking there isn’t much practical work in my science teaching, and that perhaps the children taught by my @SotonEd trainee teachers aren’t getting much either, and what little they are getting is restricted to training in purely practical skills – accurate measuring, and manipulation of equipment. Not so! For me, practical work is terrific for the stage beyond basic theoretical knowledge, for three reasons:

First, science is stuffed with abstract concepts and there is good evidence that concrete representations help children to understand them. I think sometimes physical models are more useful, but practical work can often play this role. For example, you can find a good, clear explanation (with diagrams and perhaps photographs) of chromatography in any textbook, but I think the actual physical process of separating out ‘black’ ink colours makes a big difference to children’s grasp of what this really looks like, and of the time scale – that painfully slow diffusion – over which it happens.

Secondly, when new knowledge is acquired it is tightly tied to the original context. Deeper understanding comes from making this knowledge more flexible and filtering out the key points from the peripheral detail. Practical work provides an excellent additional context through which the scientific principle can be seen. Another way to think of this is that children often need to encounter the same idea in several different ways before it sticks; again, a practical can provide this.

Finally, there is something joyful about seeing abstract theory writ large (or often actually quite small) in the fabric of the universe. Science differs from other subjects because it is humankind’s ultimate attempt to describe, and perhaps even understand, the physical world around us. As science teachers, we need to be careful not to think that children see practical work the way we do, but if we ever lose the joy then it’s time to do something else.

Driver, R. (1994) The fallacy of induction in science teaching. Chapter 3 in Levinson, R. (ed.) Teaching Science. London: Routledge.

Nuthall, G. (2007) The Hidden Lives of Learners. Wellington: NZCER Press.

Pashler, H., Bain, P.M., Bottge, B.A., Graesser, A., Koedinger, K., McDaniel, M. and Metcalfe, J. (2007) Organizing Instruction and Study to Improve Student Learning: IES Practice Guide. Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education.

Sadler, P.M. and Sonnert, G. (2016) Understanding Misconceptions: Teaching and Learning in Middle School Physical Science. American Educator. 2016 (Spring).

Shtulman, A. and Valcarcel, J. (2012) Scientific knowledge suppresses but does not supplant earlier intuitions. Cognition. 124(2) pp. 209-215.

Thorn, C.J., Bissinger, K., Thorn, S. and Bogner, F.X. (2016) “Trees Live on Soil and Sunshine!”: Coexistence of Scientific and Alternative Conception of Tree Assimilation. PLoS ONE. 11(1).

Willingham, D. (2002) Inflexible Knowledge: The First Step to Expertise. American Educator. 2002 (Winter).

Practical: Grinding Frustration

I’ve seen a lot of trainee teachers knacker lessons up with a well-run, but ultimately pointless, practical. Whole-class practicals, in particular, are massively time-consuming, with many filling an entire lesson. If all that has been learned in that time is “It went blue, sir” then I don’t think that’s good enough.

The problem is usually a confusion over learning objectives. My previous blog set out the way I see learning objectives in relation to practical work, but I’ll recount an example. I went to visit a promising trainee teacher a few years ago; the lesson was part of a unit on separation techniques, I think, and certainly part of a sequence on chromatography. When I looked at the lesson plan, saw that it was mainly going to be chromatography of pigments from leaves, and that the learning objective was “To separate leaf pigments by chromatography”, I tried to help by asking her what she actually wanted the children to learn. I just couldn’t get past “I want them to separate the yellow pigment from the green chlorophyll”. It’s not fair to pull the rug just before an observation, so I let it go and waited to see.

Bless those lovely Y8s; they chopped and crushed and ground their little hearts out. They followed the instructions as well as they could, set up their chromatography paper (several submerged the spot in the propanone), and then did a little write-up whilst they waited for the chromatograms to be ready. Some got a bit of green and yellow differentiation and the rest didn’t. Whilst they were working I went round and asked a few questions, such as “Can you tell me why you’re doing this?” and “What’s the point of chromatography?” I didn’t even get half answers, just a pretty much universal “Don’t know”.

In the feedback session I didn’t get any further, really. The trainee teacher was very disappointed with the lesson. She carefully evaluated the quality of the practical work and made some perceptive comments about maybe splitting the practical into sections and briefing more closely to ensure the leaves were finely chopped, the amount of propanone was reduced, and the papers were set up correctly. But she completely and stubbornly failed to identify the problem, which was that her ‘learning objective’ wasn’t about learning at all; it was about getting the practical to work. Had the chromatograms come out well, she would have been satisfied with the lesson. Even when I directly asked the questions “Did the children understand the process?” and “Did the children understand what chromatography was?”, and pointed out that they had been unable to tell me anything about these things, she couldn’t really see that this was a much bigger problem than the poor results.

There are plenty more examples where that came from. Some worked nicely, as expected. Some didn’t. All suffered from a sense at the planning stage that the practical somehow justified its own existence just by being practical. Often I find a defensiveness about practical work that I don’t see when pointing out other misaligned learning objectives. That sense that practical is self-justifying can be difficult to change. Why is this difficult?

In the end this boils down to the questions of whether or not (a) children enjoying practical work is sufficient justification, (b) just doing practical will make them better at it, and (c) children will learn important science ideas and/or develop their understanding from seeing the theory ‘in the flesh’. Often I think trainee teachers believe, perhaps subconsciously, that some or all of these are self-evident. I’ll tackle (a) and (b) here and leave (c) to another blog.

For me, enjoyment can’t ever be an end in itself for what happens in science lessons; that just reflects my personal belief in what school is for – no evidence presented. On the other hand, if enjoyment leads to better learning, higher motivation, more time doing science outside lessons, improved post-16 take-up, and so on, then the judgement is maybe about the balance between enjoyment and learning. I don’t have the expertise to offer a definitive review of the evidence, but I’ve certainly been influenced by Abrahams (2009) Does Practical Work Really Motivate? and I’m not convinced practical work is as critical to motivation as is often assumed. The ASPIRES final report makes a brief reference to reduced interest in science after Y9, which might or might not correlate with reduced practical; personally I think it is the GCSE curriculum content, and looming exams, that are to blame, but I can’t offer more than a hunch.

Is it good teaching to explain how to do something tricky and complicated, and then get the children to try lots of examples with very general (that one’s good, that one’s bad) feedback? No, of course not. So why would practical skills be any different? Most of us have had years and years of experience through school, university, and maybe the classroom, to hone our practical skills. Many of us have probably also taken things to bits and re-built them, and developed fine motor and problem-solving skills through art and craft and cooking and all sorts. We tend to massively underestimate how difficult it is to extract chlorophyll from leaves, prepare a cheek cell slide, or connect up lamps in parallel. The cognitive load of these things, for children, is very high. In the lesson described above, the instruction sheet and the teacher were both clear about the level of the propanone on the chromatography paper, but at least a third of the class submerged the spot. There was just too much new information for them. These things need breaking down, step by step, with practice, or at least immediate feedback, at each stage. Without this, children just get used to practicals not working half the time (and working more often for the ‘smart’ kids and more rarely for the others) and accept that this is the way of the world. Sometimes there is value in unexpected results, but not if a shrug of the shoulders is the typical response. If we are trying to teach practical skills then we need to plan carefully for those skills, and get precise and accurate work from the children.

Which takes me back to that chromatography lesson. I would have been very happy if the learning objective had been something like “To improve practical skills: use of mortar and pestle to extract plant material; setting up chromatograms; precise working” and then the trainee teacher’s reflection would have been at least a useful starting point. That was an aspect of the intention, but actually, if I’m being generous and assuming the practical wasn’t just picked because it was on the SoW, the stronger intention was something vague about understanding chromatography better by doing a practical example. Failure to separate learning practical skills from developing understanding is a big problem but this idea that doing a practical will improve understanding is, I think, the worst mistake.

Next blog coming up…

Practical: Young people’s views on science education from the Wellcome Trust Science Education Tracker

This morning the TES published a confusing article on key findings from the Wellcome Trust Science Education Tracker. This is a survey of over 4,000 young people in Y10-Y13, asking about their views on their science education and careers. The TES don’t even seem to have managed a link, but the tracker, including a breakdown of the questions and responses, is at https://wellcome.ac.uk/what-we-do/our-work/young-peoples-views-science-education

Hopefully readers from the science education community will have quickly got past the ‘hands thrown up in horror’ headline and be asking whether the survey tells us anything useful about the quantity or quality of practical work in schools and colleges. Actually, there are 144 questions and only 3 are about practical work. There is a mine of useful data here for questions around post-16 STEM participation, science capital, and availability of and participation in triple GCSE, which has been a problematic issue, but that’s probably best seen through the lens of the ASPIRES2 work. Hopefully they’ll blog about the survey results at some point.

I’ve only had a quick look but these are my first impressions of the 3 questions (T66-T68) directly asking about practical work.

Firstly, some caution is always required when dealing with self-report measures, and also with the way the responses are reported. For example (T66), young people might well have different views on what constitutes “Designing and carrying out an experiment / investigation” and “A practical project lasting more than one lesson”, but I can’t see how any of last year’s Y10 or Y11 could not have completed an ISA across multiple lessons. The responses to these two questions were about 75% and 55% respectively, with 10% responding “None of these”. What were the 10% doing? Did at least 35% (the remainder once the 55% and the 10% are accounted for) squeeze an ISA into one lesson, or do their ISAs in only one of the two KS4 years? How many didn’t think an ISA was an investigation (perhaps justifiably)? My take on this is that we need the responses to the same question for the current Y10 to see the impact of the new GCSEs, otherwise we are discussing history, but I’m not convinced about the merits of practical projects and multiple-lesson investigations anyway.

Secondly, it’s important to interpret the findings critically. About 1/3 were happy with the amount of practical work and nearly 2/3 would have liked more. As pointed out in @alomshaha’s excellent video, this might be because practical is an easy option, not because it is the best way to improve learning. Even children have a keen awareness of this issue; in the Student Review of the Science Curriculum (Murray & Reiss 2003), about 70% had “Doing an experiment in class” in their top 3 most enjoyable activities (along with watching a video and going on a trip) but only about 40% put it in their top 3 “Most useful and effective activities”.

However, there is one thing in the data we ought to be thinking about. These are the figures for “When doing practical work, how often would you say that you just followed the instructions without understanding the purpose of the work?”

[Chart: distribution of responses to this question]

That suggests this statement is true for maybe 1/3 of practicals; this concurs with a lot of practice I see out in schools (from trainee teachers, mostly, but I have a suspicion it’s quite widespread). I think this is a problem.

It’s really good to see a School Science Review article by Millar & Abrahams (2009) here on the AQA website. This is a summary of a significant piece of work they, and some others, did looking at the effectiveness of practical work. Essentially, the problem they identify is confusion over learning objectives. Just like all lesson planning, the objectives need to drive the activities, not the other way round. Whole-class practicals form such a big and obvious chunk of a lesson that it’s really easy to start planning from the activity. The trouble is that you then lose sight of the wood for the trees, so that a successful practical outcome becomes the real objective – the one you focus on – although the lesson actually has an objective related to knowledge and application of science content. You then emphasise the procedure and just hope the children understand how it relates to the science content. And the children then just follow your instructions (hence the survey response) and, as Millar and Abrahams put it, the emphasis becomes “producing the phenomenon”.

Millar and Abrahams go on to suggest there are three broad categories of learning objectives that are served by practical work and, based on a related article, I’ve broken these down further. I find this really helpful in getting a clearer focus on what purpose the practical serves in the lesson, and therefore the best way to approach it.

[Table: the three broad categories of learning objectives for practical work, broken down further]

If conceptual understanding is what you want, then the children need to spend time thinking about the practical in relation to the relevant content. There are maybe three options here:

  • Whole-class practical with lots of time afterwards to do work on how the practical is a demonstration of the content.
  • Whole-class practical with a very high level of practical competence, so children have capacity to think about the content.
  • Demo or video (maybe simulation) so children don’t have to think about manipulating equipment and the teacher can direct their attention with questions and explanations.

The second of these could come from prior learning, but could also be a result of very careful briefing. This is, I think, what @oliviaparisdyer is describing in her blog post about practical work. It is certainly how I remember my excellent O-Grade Chemistry teacher doing it, several decades back into the last century.

If investigative skills are what you want, then don’t try to teach conceptual understanding at the same time, and remember that, as science graduates, we tend to massively underestimate the complexity of designing and conducting a full investigation. That’s why the ISAs were such an unpleasant exercise in trying to get children to temporarily remember enough to hit whatever ridiculous coursework target grade they had. I’ve had the good fortune to work with A-Level students on some terrific independent projects (for A-Level Physics and EPQ) but even post-16 they are barely ready for high-quality work. In my experience, either very high levels of scaffolding, or acceptance of interesting but very rickety work, are needed for 11-16 classes, though that may not be true for all teachers.

Finally, if practical skills are what you want, then again you need to focus on them. Something like reacting copper(II) oxide with sulfuric acid and then filtering and evaporating to get copper(II) sulfate involves a stack of excellent practical skills to do well. This would be a great practical for improving those skills; I think it’s a massive waste of time for learning the chemistry of metal oxide + acid reactions. By all means combine the two, so do the practical in that unit, and start or finish with the chemistry, but don’t expect the children to learn anything about the chemistry content whilst trying not to scald or gas themselves and – more hopefully – trying to produce nice blue crystals.
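
For reference, the content side of that practical boils down to one standard line of chemistry – the general metal oxide + acid → salt + water pattern. The equation below is the textbook version, spelled out here for completeness rather than quoted from the original post:

\[ \mathrm{CuO(s) + H_2SO_4(aq) \rightarrow CuSO_4(aq) + H_2O(l)} \]

One line on the board covers the chemistry; the hour of skilled filtering, evaporating and crystallising is where the practical earns its keep.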

This blog is already a bit long; next post I’ll try to use an example to explore these ideas about confused objectives a bit further, and then I’ll try and write another post on why children don’t automatically develop understanding from seeing a scientific principle ‘in the flesh’ and about Driver’s excellent Fallacy of Induction.

Linking ITT and workforce data: a step in the right direction

I had the great pleasure of meeting Becky Allen back at the beginning of the year for a bit of a discussion about the work Education Datalab were doing on matching teacher training records to the School Workforce Census. I suspect a pretty monumental amount of effort has gone into nailing down the final details since then but two of the three linked reports are now published. I suggest you start here to either have a quick look at the key findings, or to access the full reports. So far I’ve just read the NCTL one.

It is immediately apparent that this is something the DfE ought to have done years ago. There is a lot of talk of evidence-based policy-making, but any kind of genuine commitment to such a thing would have seen this sort of data analysis set up prior to the seismic changes to ITT that have been implemented since 2010. Hey-ho; better late than never.

In theory this methodology could be used for a much longer-term project that might start generating some really useful data on the impact of various approaches to training teachers. It is easy to pick up this work and think it is limited to evaluating structural issues about ITT routes but if you consider the richness of a data set that can pretty much link every teacher in the maintained sector back to their ITT experiences, there is almost unlimited potential. Inevitably, for ITT providers, there is a pretty steady (and self-selecting) drift out of contact over the years after qualification. This work potentially solves that problem for research on any aspect of ‘what works’ in ITT. That’s something for the future; what of the findings here?

It would be tremendously easy for a lot of people in ITE to say “I told you so” in regard to the Teach First retention figures. Actually, I think the useful questions are more subtle than that but figures first. Using the lower-bound numbers, traditional HEI-led routes have about 60% of those initially recruited working as teachers in the maintained sector in their third year after qualifying. SCITTs are higher at 70% (but these would have been the early adopters). School Direct hasn’t been running long enough to have figures. Teach First is under 50%.

[Graph: Education Datalab analysis of retention rates by ITT route]

However, there are several things to remember about Teach First. Their qualifying year involves teaching potentially difficult classes, mostly in schools with more challenging behaviour, with variable levels of in-school/in-class support, whereas university-led trainee teachers are supernumerary, on lighter timetables, and working in a wider range of schools, rarely including those in a category or graded 3. Teach First participants are also possibly more likely to continue to work in more challenging schools, although that is an assumption I would want to see data on, because certainly some participants move from TF schools to schools at the opposite end of the socio-economic spectrum.

There are also a few things to remember about HEI-led courses. Financial survival, and the need to make up the numbers across all the shortage subjects, probably mean that in these subjects the HEI-led cohort has a longer tail than for any other route. SCITTs may have some of these pressures too but, particularly in the years covered by this report, are likely to have had the opportunity to be more selective. I suspect it’s the other way round for subjects like PE, English and history, where the larger scale of HEIs generates a larger pool of applicants compared to SCITTs. Since shortage subjects make up the bulk of an HEI cohort, you would expect a lower qualification rate, and also some marginal grade 2s where support (or lack of it) in their employing school might determine success in their NQT year. As pointed out right at the beginning, the report can’t tell us anything about what would happen to the same trainee teachers if they were trained via a different route.

Teach First recruitment has been astonishingly successful. Having seen the marketing machine in action, and with access to funding that very few providers can match, that is perhaps not completely surprising, but it has been terrific nonetheless. This means they probably have the strongest cohort of all at the start of training. For me, the critical question to ask is: if Teach First training were more like the HEI-led route, or a SCITT, would there be hundreds more high-quality teachers still in the classroom? There is no way to tell from this report but, anecdotally, the Teach First participants I have worked with would all have had excellent outcomes on the HEI-led course or School Direct programmes I mainly work on. What I don’t know is whether they would have gone into teacher training at all.

If Teach First is mainly putting people who would never have tried teaching into struggling schools with teacher recruitment problems, to do a decent job for two or three years, then that is probably a justifiable use of public money; if it is putting potentially high-quality, long-career teachers through training in a way that knocks an additional 10-20% off retention, that doesn’t look so good. I suppose there might be other benefits; I’m unconvinced by these, but make up your own mind. Sam Freedman sets out the most positive case here.

What about the other findings?

  • Three regions of England – North East, North West and South West – appear to have large numbers of newly qualified teachers who do not join a state-sector school immediately after achieving QTS.
    • This is pretty good evidence that the NCTL need to sort out the Teacher Supply Model, but that was already very apparent. We are waiting on tenterhooks for the announcement on allocation methodology (so presumably they are desperately trying to invent something at the moment); let’s hope they don’t make another almighty cock-up!
  • Those studying on undergraduate with QTS courses have low initial retention rates in the profession, though we cannot know whether this results from subsequent choices made by the individual or recruitment decisions made by schools.
    • They do, but the data also shows they catch up later. I suspect that if you have a B.Ed., sooner or later teaching becomes the best option for a professional career, whereas PGCE-holders have their undergraduate degree as an alternative option (depending a bit on the subject).
  • Teach First has very high two year retention rates, but thereafter their retention is poorer than other graduate routes.
    • I’m hoping, perhaps in vain, that the move away from QTS might link teacher development across from ITT into the first year(s) of post-qualification employment and get a bit of the two-year TF programme effect into other routes.
  • Ethnic minority teacher trainees have very low retention rates.
    • I suspect this is because they are much more likely to have limited experience of the UK education system if educated abroad, and are also more likely to be EAL, both of which, in my experience, can affect classroom relationships. It would be enormously useful to have data that separates UK- and non-UK-educated teachers, and to drill down a bit. In my part of the world, UK-educated BME applicants are thin on the ground, but I don’t notice anything that would lower their retention rate.
  • Individuals who train part-time or who are older have much poorer retention rates, which may simply reflect other family commitments that interfere with continuous employment records.
    • UoS doesn’t do part-time. I have a hunch that retention might actually be better for older trainee teachers on our Science PGCE – they mostly need a proper job to pay mortgages, whereas younger trainees often don’t have that commitment. On the other hand, whilst older trainees are nearly all tremendous people to work with, developing into a good teacher is partly about developing habits that are effective in the classroom, and I think changing habits gets harder as you get older. The classroom is also a very fast-moving environment when you are a novice, and again I think adapting to this gets harder with age. Older trainees are quite often particularly good at developing relationships with teenagers, though, so it’s swings and roundabouts, maybe.

So those are my first thoughts. I think we have some way to go to get stable and effective initial teacher education that is structurally sound and therefore has the potential for continuous improvement. NCTL have tried quite hard to break what we had; now we need to take the best of the many pieces and put them back together again, hopefully ending up with something better than before. High quality evidence is a key part of this process, as are people in high places who are prepared to pay attention to it. This report is a very important step in the right direction.

Venn and the Art of Categorisation Maintenance

This post is based on my presentation at the Durrington High School Teach Meet #DHSTM16. It was the best TeachMeet that I’ve attended so far. Massive thanks to @shaun_allison and everyone else involved. Do follow the DHS Class Teaching blog.

Although I now work on the PGCE courses at the University of Southampton, I used to be a proper teacher, and this is the TRUE story of how I learned to teach the photoelectric effect.

It doesn’t matter if you’ve never heard of the photoelectric effect. Einstein won the Nobel Prize in Physics for his work on this, and it represents a crucial step in the development of quantum mechanics – trivial stuff – but it is only a very small part of the A-Level spec. It might be helpful to know that, at its core, is a simple conservation of energy concept (KS3), but the photoelectric effect sets this in an unfamiliar and complicated sub-atomic context.
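
For anyone who wants the conservation of energy statement itself, this is the standard textbook form (not something from the original talk): a photon of frequency f carries energy hf; the work function \(\phi\) is the energy needed to free an electron from the metal surface; whatever is left over appears as the photoelectron’s kinetic energy:

\[ hf = \phi + E_{k,\text{max}} \]

where h is Planck’s constant and \(E_{k,\text{max}}\) is the maximum kinetic energy of an emitted photoelectron.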

The first time I came to teach it, I knew it might be hard. Reviews of cognitive psychology suggest there might be about six techniques that pretty consistently improve the effectiveness of teaching and learning. One of them is pairing words with graphics to support explanations.

The idea is to provide students with graphics and then link oral explanations closely to these. It seems to be important to keep the graphics as simple diagrams without lots of labelling or other text. So that’s what I did; it was a great explanation! Then I added complexity with a practical demo. Then assessment. And they hadn’t a clue.

I did my best to repair the damage, went back over the explanation, asked scaffolded questions, modelled exam questions. It wasn’t awful but my experience is that, once students think something is difficult to understand, it’s very hard to come back from there.

The second time I came to teach it, I knew that pairing words with graphics wasn’t enough. Reviews of cognitive psychology suggest there might be about six techniques that pretty consistently improve the effectiveness of teaching and learning. Another of them is that abstract concepts should be linked to concrete representations. Perhaps I needed to do that more effectively. After some hard thinking I came up with a great analogy based on a coconut shy. This activity took about five minutes for the students to do, and another five to review with them; then came my killer explanation (pairing words with graphics), and complexity was added with a practical demo. Then assessment. And this time they all understood.

So we moved on to do some past paper questions and suddenly they were all at sea again. Despite taking a step back and modelling some of the exam questions for them, I couldn’t completely shake off a feeling in the class that the photoelectric effect was just too hard. It was very frustrating.

The third time I came to teach it, I knew that linking abstract concepts to concrete representations, and pairing words with graphics, helped a lot with understanding, but there was still a problem. I reviewed the past paper questions and started to wonder whether the rather odd term ‘photoelectron’ might be causing difficulty. Because I thought they were clear about what an electron was, and what a photon was, I had assumed they would be fine with me telling them that a photoelectron was just an electron. But what if they weren’t completely happy with electrons and photons?

So I tried this, giving the students a list of features to add to the diagram.

[Venn diagram: two overlapping sets, electrons and photons]

And, at the third time of asking, I finally nailed it! This time they not only got it but could handle past paper questions; they didn’t even find it hard.

I’ve used Venn diagrams as a categorisation exercise a lot since then and found them very valuable. They’re really good for when things are similar or related but with important differences. I think, as a teacher, it’s easy to assume that the distinctions are clear when, for students who have less certain concepts, actually the overlap is a major source of confusion. The Venn diagram forces them to focus on the similarities and differences; it makes them think hard about these; and possibly it reduces cognitive load because there are no other distractions.

I usually give students the points to go on the diagram (on the board or a slide, although you can cut and stick if that floats your boat) but for revision, or possibly in a few cases if you are setting up a debate, this can be left open.

You can also do diagrams with three sets but my experience is that this only works if it’s fairly straightforward or familiar.

The other categorisation exercise I use a lot doesn’t have a name, as far as I know, but it looks like this:

[Diagram: a ‘target’ of concentric rings for sorting factors by importance]

Students draw this on a sheet of paper and then have to put crucial factors in the middle, moderately important ones in the outer ring, and unimportant ones outside. It works for the same reason as the Venn diagram but this time it is about relative importance rather than similarities and differences.

If you haven’t already tried this kind of categorisation exercise, I recommend having a go. Sir Tim Brighouse was talking at the beginning of the TeachMeet about low effort, high impact, and I think this fits the bill.

It is also easy to ask probing questions around these exercises. “Why did you put X there?” is all that’s required. And that takes me back to those six or so techniques that pretty consistently seem to improve the effectiveness of teaching and learning. I guess you might want the full list.

  • Pairing words with graphics
  • Linking abstract concepts with concrete representations
  • Posing probing questions
  • Repeatedly alternating worked examples and problems that students must solve
  • Retrieval practice
  • Distributing practice

The review comes via the What Works Clearinghouse at the Institute of Education Sciences – a branch of the US Department of Education. It reviews evidence on the effectiveness of interventions (similar to what the EEF does in the UK) using a very high minimum standard of evidence. This list has been around for nearly ten years, but several items weren’t picked up elsewhere until pretty recently; perhaps they should have been.

Best wishes

Stability at Last? Guaranteed longer-term allocation of places for the best providers

Hope is like a road in the country; there was never a road but when many people walk on it, the road comes into existence. ~Lin Yutang

This is the third in a hopeful series of posts about things I can possibly envisage happening in ITE following on from the Schools White Paper 2016. It all depends on the quality of implementation, and the DfE don’t have a great track record, so this may represent a triumph of hope over experience, but you never know…

2.28 We will seek to recognise both the best university and school-led ITT through guaranteed, longer-term allocation of training places, allowing providers to plan their provision into the future.

Guaranteed, longer-term allocation of training places, allowing providers to plan their provision into the future is an enormously welcome proposal. Everyone delivering ITE – HEIs and schools – has been facing the same problem of inability to commit to long-term development because of the level of uncertainty around allocations and therefore income. At Southampton we had a year with a PE PGCE running for one trainee teacher, RE numbers went down to 0 and then back up to 10, Chemistry dropped to 3 and then went so high we couldn’t fill it, and one of the best local SD alliances had their allocation cut in core subjects because they refused to put quantity over quality the previous year. However, the DfE need to be very careful not to think this is about protecting Cambridge rather than, say, Southampton. Although Cambridge History had a major fright this year, they had previously been protected for several years as a Grade 1 HEI whereas our allocations have been all over the place since the inception of SD. The problem of planning and development is probably most acute for providers like us and right now is not the time to be imagining that it might be possible to sort Grade 1 wheat (plenty of whom haven’t been inspected under the new, tougher Ofsted framework) from Grade 2 chaff.

I’ve been reading a bit recently about Professional Development Schools in the USA. There is a fine potential model (and some bitter experience) there for the Centres of Excellence for ITE proposed in the White Paper. Third-hand murmuring suggests that the NCTL and DfE haven’t got much of a clue what these Centres of Excellence might look like: whether they would be pretty ubiquitous, with most existing providers involved, or rare and exclusive; whether there would be strict criteria based on Ofsted reports, academic credentials, ITE Performance Profiles, etc., or some kind of ‘making the case’ bidding process; whether there would be a regional aspect to the allocation, or ten in London and none in the north-east, say.

It is clear to me that ITE in this country is currently too fragmented. As the number of both routes and providers has multiplied several times over, large numbers of small organisations (including the relatively small education schools at many universities) have been left struggling to cope with the administrative and organisational burden of running teacher training. All these small organisations are operating in parallel and endlessly duplicating work. There is a clear case for consolidation into more formal and semi-permanent partnerships, not only schools with an HEI or SCITT but actually between several HEI/SCITTs and the schools across their combined partnerships. Ideologically this may go against the grain of fierce local competition allegedly driving up standards, but the economies of scale could provide the capacity to really develop quality in a way that is difficult at the moment. Quite rightly, partnership quality has tended to be at the heart of evaluation of ITE provision, and there is evidence from PDSs in the USA, and various programmes here (not least Teach First), to suggest that the Centres of Excellence policy could be a really important driver towards this better state of play. It needs a bit of time to develop ideas, to get some initial models up and running and learn lessons from these, but an end goal of large, strong, local but world-leading partnerships, in which school leadership and coal-face expertise combine with high quality academic research and experience of supporting trainee teachers, is worth taking the time to do properly. As long as we don’t have a ridiculously rushed bidding process, set up before anyone actually knows what they’re looking for, this could be something to celebrate. DfE, don’t let us down!

Solving the Puzzle by Finding the Pieces: New quality criteria for ITE

Hope is like a road in the country; there was never a road but when many people walk on it, the road comes into existence. ~Lin Yutang

This is the second in a hopeful series of posts about things I can envisage happening in ITE following on from the Schools White Paper 2016. It all depends on the quality of implementation, and the DfE don’t have a great track record, so this may represent a triumph of hope over experience, but you never know…

2.29. New quality criteria will focus on areas such as the quality of training programmes, the effectiveness of providers in recruiting high quality trainees, and the impact of those trainees on standards of teaching in schools. We will assess providers’ ability to meet these criteria and will, in future, factor this into the allocation of training places.

This one is a bit of a puzzle. At the moment ITE providers are held accountable by Ofsted and, until this year, the inspection grade has strongly affected HEI allocations. Ofsted have already had several goes at adjusting the framework for inspection; there is no reason to think another version will be of benefit. I guess ‘high quality trainees’ might imply further pressure to take only 2:1s and above, but that’s silly at a time of teacher shortages, when the link between academic qualifications and teacher effectiveness is marginal at best. We’re actually sorely lacking in hard evidence about how selection criteria and training experiences relate to teacher effectiveness and/or retention. An excellent outcome from this White Paper would be funding for research to fill the knowledge gap on effective selection; there is plenty of appetite for this work, and expertise in the Russell Group universities and other education research organisations like Education Datalab.

Imagine a longitudinal study that tracks teachers from their UCAS application, through their training, and on into their teaching career. If this were big and good enough, it would be possible not only to show clearly how degree classification relates to effectiveness and retention, but also to examine lots of other aspects of selection and training, and their impact down the line. There is potential here for a genuinely world-leading, evidence-based approach to ITE. That would be amazing.