Salaries, bursaries, and fees: too few, too much, too painful?

Justine Greening’s Conservative Party conference speech included an announcement of a pilot student loan repayment scheme for MFL and science teachers in 25 target local authorities. Details were initially a bit sketchy but have now been clarified. How much this will be worth depends on career progression, but for a teacher working FT for the full 10 years, moving steadily up the scale and picking up a TLR at some point, it could average out at getting on for £1,000 per year.

This announcement follows a pledge in the Conservative election manifesto, comes along with changes to bursaries for maths teachers, and also echoes a similar but different suggestion from Russell Hobby, the new CEO of Teach First. I thought it might be time to think out loud about the issues around funding, fees and bursaries for ITT. So here is where I would start if this were my decision.

I think that the bursaries for secondary shortage subjects are too high. With the bursary paid over 10 months, and no tax or NI deductions, a fair number of my trainee teachers are taking home a fatter monthly pay packet than anyone else in the department where they are training, including the HoD. Regardless of the impact of the bursaries on recruitment, that just isn’t right.

What’s worse is that we have almost no evidence on the impact. How many extra trainee teacher applications are made because of the bursaries? How many of those are of decent quality? How many more trainee teachers do we actually recruit? What is retention like for these additional teachers? How many train with only half-hearted intentions of teaching in the maintained sector (or in this country, or at all…)? How many stick out the training through concern procedures/teaching module referrals, to the detriment of children, even though they are no longer intending to teach? These questions have been asked, and the DfE and NCTL have been found wanting.

Those are the obvious questions about the impact of bursaries. The wider question is what else might make an impact on the recruitment and retention problem? There are various people and organisations chipping away at the task of bringing some decent evidence to bear on this but I think we are probably still in the realm of informed hunches. These are mine.

There has been a lot of discussion about having bursaries with strings attached as a way of increasing their impact. Singapore has this kind of system but I don’t think it is right for the UK. I think the spectre of a massive financial penalty for failing to qualify and/or take a teaching job in the maintained sector would be really off-putting to a large number of prospective applicants in shortage subjects. This is the wrong image.

I think, if we really want to improve recruitment, then ITT needs to feel equivalent to the graduate training schemes that attract large numbers of high calibre applicants to high-profile companies, and I’m sure that sort of positioning is one element of Teach First’s recruitment success. This means all teachers should just be paid a sustainable amount during their training.

Some already are paid (SD Salaried, Teach First, and tiny numbers of Troops to Teachers and Researchers in Schools) but it is difficult to just extend this because universities are not in a position to start employing their own students, and the recruitment pattern from SD clearly shows that if all ITT recruitment is done by schools, the shortage subjects will fall woefully short of national targets. However, whilst universities are probably still going to have to work with bursaries rather than salaries, there is potential for considerable re-branding.

The level of this basic salary/bursary ought to be set in relation to qualified salaries and I think the current unqualified teacher rate is probably about right (in relation to qualified salaries only – the real terms cuts to salaries since 2010 are a different issue). However, there is a clear difference in the opportunities outwith teaching for science and maths graduates compared to history, PE and primary, for example, and I think there is a place for a shortage-subject uplift, but this still ought to be in line with likely NQT salaries. These are surprisingly varied but evidence suggests the average in science is maybe £3K more than basic M1 so to me that seems like a justifiable uplift during training. Caution suggests making this change gradually and keeping a careful eye on recruitment, though.

The DfE currently spends about £170 million on bursaries. I think the additional cost of paying a basic salary to all trainee teachers with just an uplift for some would be about £200 million – wallet out! On the other hand, something over £400 million is spent on ‘wasted training’ due to failure to retain these teachers (based on IFS figures). I think the evidence from other countries suggests that the image and status of teaching as a career can have a huge impact on both the quality of the pool of recruits, and their subsequent retention. I think gimmicky bursaries are not helping this image; I think bursaries with strings attached would almost certainly damage it; I think a clear commitment to paid training might help.

Fees for ITT are the other issue. Tempted though I am by the idea of fees being paid for all trainee teachers, I don’t think we’re ready for that yet. With student loan forgiveness or repayment, I think it’s reasonable for trainee teachers on non-salaried routes to use Tuition Fee Loans to cover that cost. However, the pilot student loan repayment scheme, with its 10-year maximum extent, would probably only just about get this £9000+ debt paid off, leaving undergraduate loans untouched. That’s not very tempting compared to choosing a different career and never having the extra debt in the first place.

I assume there is a reason why the pilot is covering loan repayment costs rather than actually writing off debt – I guess it is easier to administer that way. In the long run, though, I think that writing off debt is the preferable option. Partly it just somehow feels better, and as I’ve already said, I think a lot of this is about the image. I also think covering loan repayments is regressive – it is worth £1350/year for someone on £40K, and nothing at all to someone on £23K. That could be seen as leveraging career progression and TLR payments, but it also favours those negotiating higher starting salaries (more often men), and anyone working in London or the London fringe.
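That regressive pattern follows directly from income-contingent repayment. Here is a minimal sketch, assuming a 9% repayment rate on income above a £25,000 threshold (illustrative figures consistent with the examples above; actual plan terms vary by loan type and year):

```python
# Sketch of why covering loan *repayments* is regressive.
# Assumed (illustrative) terms: repay 9% of income above a 25,000 threshold.

def annual_repayment(salary, threshold=25_000, rate=0.09):
    """Annual student loan repayment under an income-contingent scheme."""
    return max(0.0, (salary - threshold) * rate)

# The scheme covers whatever the teacher repays, so its value scales
# with salary: nothing at 23K, and around 1,350/year at 40K.
for salary in (23_000, 30_000, 40_000):
    print(salary, round(annual_repayment(salary)))
```

The point the sketch makes is that the benefit is zero precisely for the lowest-paid teachers, and grows with every pound of salary above the threshold.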

So I think the DfE ought to look at moving from covering loan repayment costs to writing off debt. This would allow ITT tuition fees to be written off quite quickly – perhaps over three years (although it would be worth looking more closely at the SWC data to see if there is any kink in the retention data that would suggest a particular career point that it ought to cover). Meanwhile, other student loans could be written off at the slower rate. This would maintain the attraction of undergraduate loan repayment, whilst also making the tuition fees less of a burden, provided trainee teachers go on into a teaching career.

I assume there is also a reason for targeting whole local authority areas rather than specifically schools that have particularly difficult recruitment and retention problems. Again, perhaps it is simpler, since a school is clearly either in Portsmouth, or not, whereas targeting schools above a certain FSM or other threshold might leave teachers in borderline schools dipping in and out of the scheme as random fluctuations change which side of the threshold the school sits. Perhaps more generally, a focus on FSMs might push recruitment towards large cities and leave out some coastal and rural schools with tremendous recruitment difficulties. I don’t know what’s best but it should be relatively simple to model. If the DfE haven’t already done so, that needs to happen alongside evaluation of the pilot.
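The borderline-school worry really is simple to model. A toy simulation (all numbers invented for illustration): a school whose true FSM rate sits just under a 30% eligibility threshold, measured each year with sampling noise, will flicker in and out of any threshold-based scheme.

```python
# Toy model (invented numbers): a school with a true FSM rate of 29%,
# measured annually with noise, against a 30% eligibility threshold.
# Teachers there would see the scheme switch on and off between years.

import random

random.seed(1)  # reproducible run

def eligible_years(true_fsm=0.29, threshold=0.30, noise=0.03, years=10):
    """Eligibility flag for each of `years` annual noisy measurements."""
    return [random.gauss(true_fsm, noise) >= threshold for _ in range(years)]

flags = eligible_years()
print(flags)
print("eligibility switches:", sum(a != b for a, b in zip(flags, flags[1:])))
```

Targeting whole local authorities avoids this flicker entirely, at the cost of less precise targeting – which is presumably the trade-off the DfE would need to model.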

My final thought relates to Teach First and a couple of tweets exchanged with @russellhobby the new CEO of Teach First. I think Teach First has brought some tremendous new ideas into ITT, has demonstrated that teaching can be an attractive career at a time when that was proving difficult, and has undoubtedly produced some very effective teachers. However, it’s important to remain aware that this route is significantly more expensive for the taxpayer than other main routes, and retention over 5 years is a fair bit lower. My concern was that any loan reduction based on FSM thresholds would immediately start benefitting all Teach First participants as soon as they started employment at the beginning of their first year, whilst university-led trainees were racking up more debt. And whilst there are obvious reasons for targeting schools on the basis of FSMs, that also neatly guarantees Teach First participants a benefit whilst leaving it only as a possibility for other routes. I may just be biased! My suggestion of accelerated write-off for PGCE loans might be unfair in the other direction. The current pilot seems equally fair to all routes and I’m cautiously hopeful that it’s a step in the right direction.

 


Learning Styles, Universities, and ITT

Sharks have teeth that slope inward; once they get their jaws on something, it isn’t getting away. I don’t think @tombennett71 has teeth that slope inwards (although, like me, he grew up in a land and era of pragmatic dentistry, far from the pearly white, braced and straightened paradise of the Home Counties) but there is no doubt that he has had his teeth into Learning Styles for some time now, and he’s not letting go. Mostly this is a good thing but I think the recent little flurry of tweets about the failings of university-led ITT involved tarring us with a rather broad brush.

So I thought it might be useful to flag up a few things, partly to try to explain why this is (or has been) an issue in ITE, and partly to explain why Googling “x university” + “learning styles” is a poor way of finding out about what is happening.

The people working on PGCE courses in universities have a mixture of backgrounds. Nearly all will be qualified and experienced teachers. Some will have moved straight from a school job into a ‘Teaching Fellow’ post – one with no research element to their contract – and some will be Lecturers or more senior academics with both a teaching and a research role. There is, I think, an assumption that we all have a lot of time to stay abreast of broad swathes of academic literature but for most of us that is not really any more true than it is for a teacher or senior leader in school. Those with a research element to their contracts of course have an obligation to be right up-to-date with their narrow area of expertise but that’s quite time consuming; Teaching Fellows have a full-time job teaching and looking after trainee teachers, and probably other students.

And so what happens in universities is, in my limited experience, the same as what happens in schools. We draw on our personal experience, the reading we have done, and everything else we pick up along the way – particularly from colleagues – to plan and teach as well as we can. Learning styles was so ubiquitous in schools that, without enough critical thought, it was bound to end up in some teacher training. Hopefully we have all learned our lesson about taking on ideas that sound convincing without actually checking the supporting evidence, and will pay more attention to critical analysis, such as Sharp, Bowker and Byrne (2008) – from colleagues at @BGUlincoln (ironically), @UniofExeter and @SotonEd. However, the rush to embrace Mindset in schools suggests ResearchEd, and anyone in a position to influence schools’ thinking, have their work cut out.

Direct confirmation from recent trainee teachers is one thing; trawling Google for university webpages that mention learning styles is quite another. If you Google “Soton learning styles” you will find this page on ‘picking your profile’. Feel free to read (if you must) but it is a classic example of the genre. How can we square this with a research-informed PGCE? Dig a bit deeper and you’ll see this has nothing to do with the Education School, never mind the PGCE. This is not a primary or secondary school where the SLT oversees all teaching and learning. The Education School makes up less than 2% of the university. Holding us accountable for this webpage, or implying it affects our PGCE, is like holding the literacy lead in a primary school responsible for the way literature is taught in a secondary that happens to be in the same MAT.

For what it’s worth, here is our position statement on learning styles. If anyone else wants to make use of it, or adapt it (or offer constructive criticism in the comments), then, provided due credit is given, you are welcome to do so.

University of Southampton Education School Initial Teacher Education: Position Statement on Learning Styles

As a leading school of education, with an internationally recognised research profile, and a long and successful history of initial teacher education, we are committed to ensuring that our teacher education programmes are fully research-informed and reflect current best practice.

There has been considerable confusion, and a significant amount of poor practice, associated with learning styles. In particular, the matching hypothesis suggests learning is enhanced when the mode of delivery is matched to the preferred learning style of a student. This is a widespread belief but is not supported by the evidence (Pashler, et al., 2009). There is a plethora of different, disconnected models, many based on incoherent theory and/or tests which are neither reliable nor valid (Coffield, et al., 2004) but it is the Visual, Aural, Read/Write and Kinaesthetic model (VARK), often simplified to Visual, Aural, Kinaesthetic (VAK) which is typically encountered in school settings (Sharp, et al., 2008) to the potential detriment of children.

However, there is evidence that presenting new ideas in several different ways is helpful to learners. This is not because of simplistic differences in learning styles but because learners benefit from multiple opportunities to relate new ideas to their existing knowledge (Nuthall, 2007), and benefit from multiple modes of presentation. This is particularly true when information is presented in both verbal and graphical form at the same time (Pashler, et al., 2007).

It is therefore our policy in initial teacher education at the University of Southampton Education School to avoid presenting simple learning styles models (including VAK), and the matching hypothesis, as useful. Since the VAK model and the matching hypothesis are still so prevalent, this misconception should be actively addressed. However, it is appropriate to encourage teachers to present new ideas in a variety of ways that are appropriate for the material (Willingham, 2005) and to emphasise the particular strategy of combining verbal and graphical formats where relevant.

However, it is also recognised that learning styles, now more often referred to as cognitive styles, is an area of active research. More sophisticated models are being developed which better reflect the evidence from cognitive psychology and neuroscience (Kozhevnikov, et al., 2014) and it is clear that there are differences in the way in which people approach problems and learning opportunities. Evidence suggests that, whilst different people will have different capacities (Kozhevnikov, 2007), most will adapt their approach depending on the situation, and most people can benefit from developing their ability to apply different approaches to best meet a particular challenge (Zhang, et al., 2012). This is often described as a meta-cognitive skill. Work has been carried out to try to provide a framework for teachers to make use of this understanding (Evans & Waring, 2015) but our view is that this is not yet adequately tested in classroom settings, and may not be simple enough to be of practical benefit to teachers working in complex school and college settings. In accordance with our commitment to draw on the best available evidence from research, we will keep this situation under review.

Bibliography
Coffield, F., Moseley, D., Hall, E. & Ecclestone, K., 2004. Learning Styles and Pedagogy in Post-16 Learning: A Systematic and Critical Review, London: Learning and Skills Research Centre, LSDA.
Donker, A. et al., 2014. Effectiveness of learning strategy instruction on academic performance: A meta-analysis. Educational Research Review, Volume 11, pp. 1-26.
Evans, C. & Kozhevnikov, M., 2013. Styles of practice in higher education: Exploring approaches to teaching and learning. London: Routledge.
Evans, C. & Waring, M., 2012. Application of styles in educational instruction and assessment. In: L. F. Zhang, R. J. Sternberg & S. Rayner, eds. The Handbook of Intellectual Styles. New York: Springer, pp. 297-330.
Evans, C. & Waring, M., 2015. Understanding Pedagogy: Developing a Critical Approach to Teaching and Learning. Abingdon: Routledge.
Kozhevnikov, M., 2007. Cognitive styles in the framework of modern psychology: toward an integrated framework of cognitive style. Psychological Bulletin, Volume 133, pp. 464-481.
Kozhevnikov, M., Evans, C. & Kosslyn, S., 2014. Cognitive Style as Environmentally Sensitive Individual Differences in Cognition: A Modern Synthesis and Applications in Education, Business, and Management. Psychological Science in the Public Interest, 15(1), pp. 3-33.
Nuthall, G., 2007. The Hidden Lives of Learners. Wellington: NZCER Press.
Pashler, H. et al., 2007. Organizing Instruction and Study to Improve Student Learning, Washington DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education.
Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R., 2009. Learning styles: concepts and evidence. Psychological Science in the Public Interest, 9(3), pp. 105-119.
Sharp, J. G., Bowker, R. & Byrne, J., 2008. VAK or VAK-uous? Toward the trivialisation of learning and the death of scholarship. Research Papers in Education, 23(3), pp. 293-314.
Willingham, D. T., 2005. Visual, auditory, and kinaesthetic learners need visual, auditory, and kinaesthetic instruction? American Educator, Volume Spring 2005, pp. 31-35.
Zhang, L. F., Sternberg, R. J. & Rayner, S., 2012. Handbook of intellectual styles: Preferences in cognition, learning and thinking. New York: Springer.

 

Why isn’t Engelmann in our library?

Following the little kerfuffle over the new BPP University PGCE, there has been a bit of discussion about whether or not Theory of instruction: Principles and applications by Engelmann and Carnine, should be available in university libraries for PGCE or other Education students.

It isn’t in my university’s library. I’m open to persuasion that it should be but here is what I think at the moment. I would be interested in any corrections or constructive comments.

Firstly, I think it is important to be clear that Direct Instruction (DI, with capitals) refers to specific programmes of scripted lesson sequences. The original DISTAR programmes were designed for EYFS and KS1. More recently, programmes for older primary children and an intervention for struggling readers have been developed. The approach is very specific, to the point where there is a rubric for deciding if a programme is genuinely Direct Instruction or not. It is worth quickly skimming this just to see its length (133 pp.) and level of detail. However, if you want to understand DI you might be a lot better off looking at this really excellent interactive tutorial for psychology students at Athabasca University.

By contrast, direct instruction (without capitals) tends to refer to any teaching that involves the teacher setting out the expected learning clearly, as opposed to providing learning activities that help the learner to construct their own learning. Typically, this is narrowed further to a situation in which a teacher is engaged in whole-class interactive teaching: laying out clear explanations, asking questions, modelling, and then getting children to practice independently, perhaps initially with scaffolding that is gradually withdrawn.

DI is clearly a good example of direct instruction, but the overwhelming majority of direct instruction is definitely not DI. Use that 133 page rubric to check if you’re not convinced!

Project Follow Through provided pretty conclusive evidence that the DISTAR programmes were very effective for disadvantaged young children in the USA in the ’70s. Arguably that’s a bit dated but I think, given the strength of the evidence, it is highly likely that a similar evaluation now would produce a similarly positive result. Is that also evidence in support of direct instruction generally? Yes, I think so, but only weakly. Why? Because it’s not clear which elements of DISTAR made it so effective.

Here are the features of DI:

  • Attention focused on the teacher.
  • Active, cued responding, as a group and individually.
  • Frequent feedback and correction.
  • High pace.
  • Children taught in small groups, grouped by ability.
  • Scripted presentation.
  • Meticulously designed instruction.
  • Faultless communication achieved through application of logic.
  • Sequence of examples and non-examples, leading to generalisation.

It is these last four, and arguably the grouping, that mean DI is not conclusive evidence in favour of direct instruction generally. Firstly, there is other evidence suggesting that expertly designed, scripted lessons may be more effective than lessons planned by individual teachers. Secondly, most direct instruction does not use the logical system of examples and non-examples that is central to DI.

Much better evidence in favour of direct instruction is available. The two main sources I make use of are the ubiquitous Kirschner, Sweller & Clark (2006) (or the lighter professional version of this paper, Clark, Kirschner & Sweller (2012)) and Muijs & Reynolds (2011). The latter is a very good overview of the evidence on effective teaching and learning from two important figures in the Educational Effectiveness Research movement. I should state that Daniel is a colleague and David used to be, but some time on Google will probably establish their credentials and some of their thinking.

But if DI and direct instruction are both effective, why are my students not reading about both? Well, I think that DI is really about curriculum design and I think it is at completely the wrong level, with its incredibly meticulous approach, for anyone planning sequences of lessons as a trainee or early-career teacher, or a HoD squeezing out a new SoW in their evenings and weekends. There are things that have worked for me in the past, that are similar to some features of DI, and I share these as part of the PGCE I work on. There has been work done elsewhere (I’m particularly aware of Kris Boulton) but I haven’t come across anything fully developed, nor have I come across any of the NIFDI programmes in use over here. If I’m missing something then do please let me know. I think there should probably be more comparison of examples and categorisation exercises in teaching, and therefore in ITE, but until I’ve read an accessible overview that my trainee teachers can properly make use of I don’t think there is very much they can usefully take from reading Theory of Instruction.

There is another argument I’m aware of. As a Russell Group university with one of the leading schools of education in the country, perhaps this book should be available for students engaging with the traditional/progressive debate, and maybe part of this is to be aware of the way in which DISTAR was sidelined despite the quality of evidence in its favour from Project Follow Through, but Theory of Instruction isn’t going to tell them much about that. Possibly a later publication deals with that well – if anyone thinks that there is a book specifically on this topic that beats the journal articles that the students can already access then let me know and I’ll pass on the recommendation to colleagues whose modules head in that direction. However, as far as the PGCE goes, I’m going to upset some people by saying that I don’t think the progressive/traditional debate is very useful and I don’t think I’m the first person to suggest that focusing on evidence-informed ideas about effective teaching is more helpful to trainee teachers than getting into the sociology of education. I therefore intend to stick with direct instruction, and will park Direct Instruction until I am convinced of its utility: convince me!

References:

Clark R., Kirschner P. & Sweller J. (2012) Putting Students on the Path to Learning: The case for fully guided instruction. American Educator, Spring 2012

Engelmann S. and Carnine D. (1991). Theory of instruction: Principles and applications. Eugene, OR: ADI Press

Kirschner P., Sweller J. and Clark R. (2006) Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist. 41(2) pp.75-86

Muijs D. and Reynolds D. (2011). Effective Teaching: Evidence and Practice. London: Sage Publications Ltd.

NKOTB – the BPP PGCE

The announcement of the new BPP PGCE in the TES yesterday generated a lot of twittering, most of which struck me as knee-jerk in nature, unduly negative, and rather uninformed. Now, given the nature of Robert Peal’s polemic against the educational establishment – which ended by describing all those people who (whether right or wrong) had dedicated their working lives to trying to provide a good education for our children, as a persistent national embarrassment – I’m not overly bothered that criticisms that he only trained 6 years ago, hasn’t even been in the classroom for all of that time, and therefore doesn’t have the necessary experience to run the course, are a bit personal. However, I do think trying to tar the subject tutor team with the same brush is unfair since some of them can count their teaching experience in decades and all those I know anything about have clearly been effective teachers and have thought long and hard about teaching within their subject specialisms.

However, my main issue with a lot of the reaction is the misunderstanding of the nature of the PGCE in relation to QTS and the role of the university and the SCITT. I can forgive anyone for not being able to understand the current complexity of ITE in England – I regularly fail to explain it clearly to prospective trainee teachers myself – but a lot of the negative commentary and questioning clearly assumes that the BPP PGCE is all the training that is provided and I very much doubt that’s accurate.

Now, I don’t know exactly how this new PGCE integrates with the Pimlico-London SCITT so instead of speculating, this is what I do know, based on working with various SCITTs and HEIs.

A SCITT is an accredited ITT provider, based around a school or group of schools and approved by the NCTL to recruit trainee teachers and recommend the award of QTS. As such, it is their responsibility to ensure that all the Teachers’ Standards are met, that trainee teachers get suitable training (including things like professional expectations, subject knowledge, planning, behaviour training, understanding of SEND issues, assessment and so on) and good support, and that the quality of NQTs at the end of the course is appropriate. Ofsted’s view on this is that all NQTs should be exceeding the Teachers’ Standards at the point QTS is awarded and if Ofsted inspect and conclude that the training has omissions or the NQTs are not good enough, then a Grade 3 (or 4) effectively shuts them down.

All of this relates only to QTS; a SCITT is not a university and cannot award a PGCE, which is a postgraduate qualification, usually at Level 7 with associated Masters credits. Typically the PGCE consists of two or three assignments totalling 12,000 words. These will usually include relating practice to theory and may include collecting data from lessons but, importantly, the PGCE is an academic qualification and the grading has absolutely no link to the actual quality of classroom practice of the trainee teacher.

Some SCITTs therefore just do QTS but most have a contract with a university to provide the additional PGCE. This gives the training more kudos and makes it more portable e.g. to Australia. How much input the university has depends entirely on the nature of that contract. Here are some examples that I know exist:

  • The university train the SCITT tutors to deliver the PGCE and never work directly with the trainee teachers. The university set the PGCE assignments but the SCITT tutors do the input and mark the work, and the university just moderate the marking and make the award.

  • The university provide all the teaching and assessment of the PGCE assignments. This might involve the trainee teachers going to the university, or university tutors coming to the SCITT.

  • The university provide all the teaching and assessment of the PGCE assignments and also some subject-specific training. This is what we do at Southampton. The advantage for the SCITT is just related to economies of scale. A small SCITT may only have 20-30 trainee teachers. Pulling an experienced teacher out of the classroom for every subject, for a dozen days a year, to do subject-specific training, is difficult, whereas a university has enough trainee teachers for there to be someone (like me) in each subject for whom this is the core of a full-time post.

Set against this background, the BPP PGCE model looks perfectly acceptable to me. The university are supplying some training, which is likely to be more subject-specific than the SCITT can provide on their own, and the PGCE assignments, which the SCITT can’t provide at all. The only thing that is a bit unusual is that BPP itself is not an accredited ITE provider and it doesn’t have a track-record, or any intention, of producing education research. I’m not convinced that’s a problem. In some universities, the QTS and PGCE work is done by tutors who have a research element to their contract. In others, like Southampton, there is some overlap but most of the tutors are on teaching-only contracts. I don’t think that reduces the quality of what we do at all.

Finally, PGCE tutors are typically qualified at Masters level, and I think that’s not entirely the case for the BPP PGCE. On the other hand the intellectual capability of the team is clearly strong and it is the university’s job to ensure standards are appropriate, which presumably includes having an external examiner from another university.

So, it’s different in flavour, and it’s sort of starting from scratch. I think the content of ITE generally, and this course in particular, is a legitimate, and very important, area for debate (and helpfully some clear indication of this is already available for English and history). I think knee-jerk responses of “this must be rubbish” because of who it is and how it is set up, are unhelpful. We train 40 000 teachers each year in this country. In that context BPP is a tiny but potentially interesting innovation. Let’s engage with what they are actually doing (or ignore it, if preferred), and see how it goes.

KS1 SATs: A parent’s perspective

Last night was the final parents’ evening before my son moves from Infant to Junior school. This is a big change for us all; we will be turning right instead of left when we get through the school gate in the morning, and there will be NO AFTERNOON BREAK!!!

Possibly there will be other differences, but it’s the afternoon break my son is focusing on for now.

As a secondary teacher, I have a passing idea of what ‘working towards’, ‘working at’, and ‘working in greater depth’ might mean in relative terms, but nothing more. I have a peripheral awareness of the somewhat traumatic events around the changes to KS1 and KS2 assessment. Beyond that, I’m not so different to other primary school parents (perhaps a bit more forgiving – good grief some parents expect miracles of perfection for £5k/year) so I thought I would jot down my experience of the end-of-key-stage reporting from the other side of the school gates.

I have to say, the school have been absolutely brilliant about how they have handled the SATs. There was one meeting on a Tuesday afternoon that neither my wife nor I could get to (amazingly, we are sometimes both working – I have very few criticisms of my son’s school but a bit more awareness that we are not always available to either attend meetings or build castles/sew Robin Hood outfits/bake cakes/construct papier mache dinosaurs would be one). In the absence of a helpful video recording or something, I tentatively probed to see if there was any sense of pressure coming on the children; my son had no inkling at all. The closest he could come to understanding what I was asking was the weekly spelling test. The other parents said the same thing. Now, given this is an Infant School, I think that shows exceptional professionalism and possibly low cunning on the part of his teachers. Not only did they avoid putting any pressure on the kids but they seem to have actually managed to do all the assessments without the children even noticing. That is really impressive.

At the parents’ evening we got the results. So, as a parent, what matters to me? Well, the teacher in me is slightly appalled by the parent in me but, hey, suspendisse discimus – I largely don’t care about the detail of what he can and can’t do; I do care how he compares to other children in Y2. That is almost completely the opposite of what I want as a teacher from any assessment process. As a teacher, it doesn’t matter to me where someone is in the pecking order because you can only start from where you’re at. As a teacher I want specific detail that helps me to find the gaps and fix them.

But, as a parent, I’m middle-class enough to think that academic success matters, and have read enough to know that this is actually true, so I want to know my son is doing well enough to be in the top part of the heap. I don’t care whether or not he is right at the top (the local GP’s son seems to have that spot sewn up anyway). It was exactly the same when the children were grouped by achievement for maths and guided reading – I know how much better it is likely to be in the long term to be in the top groups, rather than the middle ones, even if working hard to catch up with lots of movement between groups. I just wanted to know he was doing well, rather than struggling – that there wasn’t a chance that some drawbridge would be raised and he would be left to swim the moat.

So, from that perspective “working at greater depth at the expected standard” is just fine (yesss – get in!), however arbitrary the assessment process. I’m sure as a teacher I would be tearing my hair out, and I really have very little idea of what this standard is, but it tells me he is doing pretty well, and that’s actually all I need to know. I do hope his Y2 and Y3 teachers do a good job of communicating the specifics as part of the school transition but he is only moving to a classroom 50m across the playground.

As a further bonus, we found out at parents’ evening that, although his maths was always going to be ‘at greater depth’, his reading and writing were not as secure. As a consequence he has been in a small group doing some carefully targeted extra work, particularly on writing. In other words, his progress has been of greater value to the school than the progress of other children and so he has had more than his fair share of attention. As a parent, that’s great; from my professional perspective it doesn’t seem very fair. Also, those are points in the bag now for the Infant School, but it’s going to be tougher to add the expected value for the Junior School (as Education Datalab has pointed out, although that analysis shows additional subtlety). I just hope that in Y6, the teachers are as good at keeping the pressure off the children and he doesn’t get flogged to make some unreasonable amount of progress.

To finish, I have to say his teachers have been absolutely brilliant across all three years, and I am incredibly grateful for the way they have guided him, and his classmates, through the amazing complexity of learning to read, write, and the basics of maths. I am painfully aware of the obtuseness of the English language in a way I never was before (just trying to help him untangle tough, though, through, thought, and thorough, at the moment – good grief!). I am more aware than ever of how critical an extensive vocabulary and cultural literacy has been for my son at school, and what some children miss out on in this respect. I’ve been astonished to see the way successful, professional adults – some holding down jobs paying more than twice as much as mine – can be pushed around by their own children, and I’ve discovered that I like KS1 kids a lot!

I hope Junior School is going to be just as good.

And you should read Tom Sherrington’s blog on assessment standards and the bell curve – same idea, I think, but a different perspective.


Practical: The Fallacy of Induction

This is the third and final post in a series about the value of practical work in science. In the first post I suggested that science trainee teachers (and possibly some qualified teachers too) have a tendency to make assumptions about the value, and the learning, associated with practical work in science. In the second post I illustrated this with an example and briefly tackled two questions I think are important: whether or not children enjoying practical work is sufficient justification, and whether or not just doing practical will make them better at it. I left a third question hanging and ask it again now.

Do children learn important science ideas and/or develop their understanding from seeing the theory ‘in the flesh’? Often trainee teachers think that this is self-evident. I’m not convinced.

Some of the most useful work on children’s ideas, and misconceptions in science was completed by Rosalind Driver and colleagues in the 1990s. I think this is an essential resource for all science teachers because of the evidence that knowledge of children’s misconceptions is an important distinguishing feature between more and less effective teachers. Some may find elements of the suggestions for classroom practice overly constructivist but for me, as well as the identification of a whole range of misconceptions, the other really useful idea I have taken from this work is the ‘fallacy of induction’.
The fallacy of induction is the mistaken belief that children, when presented with relevant evidence, for example from practical work, will tend to work out (induce) the appropriate scientific theory.

The problem is that correct scientific theories are often simple when you know them, but are tremendously hard to generate directly from evidence. After all, it took a lot of very skilled scientific thinkers hundreds of years to do this the first time. What’s worse, children inevitably develop naïve theories as they grow up, so in secondary school they are sometimes not just trying to learn correct scientific thinking but are trying to un-learn naïve thinking that serves them perfectly well outside the classroom.

As teachers, we can of course select learning experiences, including practical work, that provide far more scaffolding and direction than Galileo, Copernicus, Newton, Darwin, Lavoisier, Faraday or Wegener were working with but, however well we do this, I think that induction from practical work, or other activities, is doomed to failure.

For conceptually straightforward science, where there are no misconceptions to overcome, I think that, as science teachers, we can clearly see how the correct scientific principle follows from the practical observations and it is very easy to assume this will be apparent to the learners too. For the teacher, the scientific principle already exists as a complete and correct schema (like a mind map) in their long-term memory, and they know which features of the practical are relevant, so making this match is relatively easy. For the learner this is not the case. They just don’t have enough of the necessary knowledge chunked in long-term memory to manage the cognitive load – they can’t see the wood for the trees. Like many cognitive load problems, it may be possible to scaffold or adapt the activity sufficiently to allow children to see the wood, but you have to question whether a forest is the right starting place, or whether a nice piece of rough sawn timber from B&Q might be a better option.

Where there are misconceptions, Driver and others have suggested that cognitive conflict, created by exposure to direct evidence that the existing ideas are untenable, will help to resolve the problem. That was certainly my thinking for many years. It seems obvious that, when presented with evidence that is in conflict with their misconceptions, learners will tend to respond by correcting their ideas (their mental representations or schemas). What actually seems to happen a lot of the time is that they ignore, fail to focus on, or distort the evidence, so that their naïve theory survives and may even be reinforced. This explains why so many intelligent people stuck with Aristotle’s ideas about force and motion for a thousand years despite blatant evidence to the contrary.

The ideas of Daniel Kahneman and others help to explain why people have an overwhelming tendency to respond in this way. David Didau in his #WrongBook is also very good on the reasons why our response to contradictory evidence tends to be irrational.

My personal experience is that I have eventually learned the situations where my quick thinking will be wrong and I need to over-write it with the correct scientific idea. For something like weight and mass I can pretty much do this automatically, but with something more taxing, like the tendency to look at speed and mistakenly reason as if force and acceleration behave in the same way, the best I can do is stop myself and know that I need to think very hard and apply Newton’s Laws with great care.

I don’t think typical practical work ever produces enough clarity in either the results or the conclusions to even begin to address these stubborn misconceptions. I love asking hinge questions, like the Veritasium videos, that throw up misconceptions, but the next step is to tackle the problem head on. I don’t think there are many situations where children can discover scientific principles directly through practical work and I think it even less likely that misconceptions can be effectively challenged and addressed.

So, what role does that leave for practical work in teaching science? I think, if you’ve read this far, you might be thinking there isn’t much practical work in my science teaching, and that perhaps the children taught by my @SotonEd trainee teachers aren’t getting much either, and what little they are getting is restricted to training in purely practical skills – accurate measuring, and manipulation of equipment. Not so! For me, practical work is terrific for the stage beyond basic theoretical knowledge, for three reasons:

First, science is stuffed with abstract concepts and there is good evidence that concrete representations help children to understand these abstract concepts. I think sometimes physical models are more useful but practical work can often play this role. For example, you can find a good, clear explanation (with diagrams and perhaps photographs) of chromatography in any textbook but I think the actual physical process of separating out ‘black’ ink colours makes a big difference to children’s grasp of what this really looks like, and the time scale – that painfully slow diffusion – over which it happens.

Secondly, when new knowledge is acquired it is tightly tied to the original context. Deeper understanding comes from making this knowledge more flexible and filtering out the key points from the peripheral detail. Practical work provides an excellent additional level of complexity through which the scientific principle can be seen. Another way to think of this is that children often need to encounter the same idea in several different ways before it sticks; again, a practical can provide this.

Finally, there is something joyful about seeing abstract theory writ large (or often actually quite small) in the fabric of the universe. Science differs from other subjects because it is humankind’s ultimate attempt to describe, and perhaps even understand, the physical world around us. As science teachers, we need to be careful not to think that children see practical work the way we do, but if we ever lose the joy then it’s time to do something else.

Driver R. (1994). The fallacy of induction in science teaching. Chapter 3 in Levinson, R. ed. (1994) Teaching Science. London: Routledge

Nuthall G. (2007) The Hidden Lives of Learners. Wellington: NZCER Press

Pashler H., Bain P.M., Bottge B.A., Graesser A., Koedinger K., McDaniel M. and Metcalfe J. (2007) Organizing Instruction and Study to Improve Student Learning: IES Practice Guide. Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education

Sadler P.M. and Sonnert G (2016) Understanding Misconceptions: Teaching and Learning in Middle School Physical Science. American Educator. 2016 (Spring)

Shtulman A. and Valcarcel J. (2012) Scientific knowledge suppresses but does not supplant earlier intuitions. Cognition. 124(2) pp.209-215

Thorn C.J., Bissinger K., Thorn S. and Bogner F.X. (2016) “Trees Live on Soil and Sunshine!”: Coexistence of Scientific and Alternative Conception of Tree Assimilation. PLoS ONE. 11(1)

Willingham D. (2002) Inflexible Knowledge: The First Step to Expertise. American Educator. 2002 (Winter)

Practical: Grinding Frustration

I’ve seen a lot of trainee teachers knacker lessons up with a well run, but ultimately pointless, practical. Whole-class practicals, in particular, are massively time-consuming with many filling an entire lesson. If all that has been learned in that time is “It went blue, sir” then I don’t think that’s good enough.

The problem is usually a confusion over learning objectives. My previous blog set out the way I see learning objectives in relation to practical work but I’ll recount an example. I went to visit a promising trainee teacher a few years ago; the lesson was part of a unit on separation techniques, I think; it was certainly part of a sequence on chromatography. When I looked at the lesson plan, saw that it was mainly going to be chromatography of pigments from leaves, and that the learning objective was “To separate leaf pigments by chromatography”, I tried to help by asking her what she actually wanted the children to learn. I just couldn’t get past “I want them to separate the yellow pigment from the green chlorophyll”. It’s not fair to pull the rug just before an observation so I let it go and waited to see. Bless those lovely Y8s; they chopped and crushed and ground their little hearts out. They followed the instructions as well as they could, set up their chromatography paper (several submerged the spot in the propanone), and then did a little write up whilst they waited for the chromatograms to be ready. Some got a bit of green and yellow differentiation and the rest didn’t. Whilst they were working I went round and asked a few questions, such as “Can you tell me why you’re doing this?” and “What’s the point of chromatography?” I didn’t even get half answers, just pretty much universal “Don’t know”.

In the feedback session I didn’t get any further, really. The trainee teacher was very disappointed with the lesson. She carefully evaluated the quality of the practical work and made some perceptive comments about maybe splitting the practical into sections and briefing more closely to ensure the leaves were finely chopped, the amount of propanone was reduced, and the papers were set up correctly. But she completely and stubbornly failed to identify the problem, which was that her ‘learning objective’ wasn’t about learning at all; it was about getting the practical to work. Had the chromatograms come out well, she would have been satisfied with the lesson. Even when I directly asked the questions “Did the children understand the process?” and “Did the children understand what chromatography was?” and pointed out that they had been unable to tell me anything about these things, she couldn’t really see that this was a much bigger problem than the poor results.

There are plenty more examples where that came from. Some worked nicely as expected. Some didn’t. All suffered irrevocably from a sense at the planning stage that the practical somehow justified its own existence just by being practical. Often, I find a defensiveness about practical work that I don’t see when pointing out other misaligned learning objectives. That sense that practical is self-justifying can be difficult to change. Why is this difficult?

In the end this boils down to the questions of whether or not (a) children enjoying practical work is sufficient justification, (b) just doing practical will make them better at it, and (c) children will learn important science ideas and/or develop their understanding from seeing the theory ‘in the flesh’. Often I think trainee teachers think, perhaps sub-consciously, that some or all of these are self-evident. I’ll tackle (a) and (b) here and leave (c) to another blog.

For me, enjoyment can’t ever be an end in itself for what happens in science lessons; that just reflects my personal belief in what school is for – no evidence presented. On the other hand, if enjoyment leads to better learning, higher motivation, more time doing science outside lessons, improved post-16 take-up, and so on, then the judgement is maybe about balance between enjoyment and learning. I don’t have the expertise to offer a definitive review of the evidence but I’ve certainly been influenced by Abrahams (2009) Does Practical Work Really Motivate? and I’m not convinced practical work is as critical to motivation as is often assumed. The ASPIRES final report makes a brief reference to reduced interest in science after Y9, which might or might not correlate with reduced practical; personally I think it is the GCSE curriculum content, and looming exams, that are to blame, but can’t offer more than a hunch.

Is it good teaching to explain how to do something tricky and complicated, and then get the children to try lots of examples with very general (that one’s good, that one’s bad) feedback? No, of course not. So why would practical skills be any different? Most of us have had years and years of experience through school, university, and maybe in the classroom, to hone our practical skills. Many of us have probably also taken things to bits and re-built them, developed fine motor and problem-solving skills through art and craft and cooking and all sorts. We tend to massively underestimate how difficult it is to extract chlorophyll from leaves, prepare a cheek cell slide, or connect up lamps in parallel. The cognitive load for these things, for children, is very high. In the lesson described above, the instruction sheet and the teacher were both clear about the level of the propanone on the chromatography paper, but at least a third of the class submerged the spot. There was just too much new information for them. These things need breaking down, step by step, with practice or at least immediate feedback at each stage. Without this, children just get used to practicals not working half the time (and working more often for the ‘smart’ kids and more rarely for the others) and accept this is the way of the world. Sometimes there is value in unexpected results, but not if a shrug of the shoulders is the typical response. If we are trying to teach practical skills then we need to plan carefully for those skills, and get precise and accurate work from the children.

Which takes me back to that chromatography lesson. I would have been very happy if the learning objective had been something like “To improve practical skills: use of mortar and pestle to extract plant material; setting up chromatograms; precise working” and then the trainee teacher’s reflection would have been at least a useful starting point. That was an aspect of the intention, but actually, if I’m being generous and assuming the practical wasn’t just picked because it was on the SoW, the stronger intention was something vague about understanding chromatography better by doing a practical example. Failure to separate learning practical skills from developing understanding is a big problem but this idea that doing a practical will improve understanding is, I think, the worst mistake.

Next blog coming up…