Alex Weatherall and David Didau have just had an interesting Twitter exchange. David has written a blog post suggesting that a fundamental point about AfL may be flawed – he’s suggesting that learning is invisible, that only performance can be seen, and that therefore the idea of checking learning regularly in order to guide teaching is misguided, since you cannot check what you cannot see.
After a bit of to-ing and fro-ing and some 🙂 to pour oil on troubled waters (opportunity missed to estimate the size of hydrocarbon molecules), Alex tweeted “I *think* it’s due to some fundamental difference between how Science and English are taught. I will ponder.” and David riposted “That would assume that learning is visible in science?”
This made me sit up and take notice – I’ve been vaguely wondering how far teaching approaches (knowledge v skills, constructivist v didactic, etc.) are subject-dependent, and I think maybe they are more so than a lot of folk are assuming.
So what about assessing learning? Well, I do think there could be a major difference between English and science. If I teach some concept in science, e.g. Newton’s 1st Law, then it’s fairly easy to almost immediately ask some questions to see if the students have got it (in this case I will probably have used several familiar situations as part of the teaching, and I can just use several different familiar situations in my hinge questions and ask about the size of the forces acting). There will be a clear difference between the answers from students who’ve ‘got it’ and those who haven’t. This is because I’m testing whether they are still holding onto misconceptions (which give wrong answers) or are actually applying Newton’s 1st Law (which gives correct answers). On the basis of this, I can decide whether I need to have another go at getting the concept across. I would be really surprised if science teachers don’t all see this as an inherent part of good science teaching. That doesn’t negate the need to come back to the same concept in a subsequent lesson – David suggested three times, but it’s often a lot more than that for deep-rooted physics misconceptions – but it makes a big difference to how this particular lesson proceeds.

In English (and I apologise if my lack of expertise shows in the quality of my examples), if a lesson is about how to relate Dickens’ personal experiences of Victorian England to his empathy with characters in Oliver Twist, then I can see how it might be hard to do anything to ‘assess learning’ that isn’t asking pupils to just regurgitate what they’ve just heard/said/written/done, with no indication of whether this has transferred to long-term memory. You certainly can’t just ask a question about some other Dickens novel. Equally, if you’ve just taught them about correct use of apostrophe’s by showing them examples and getting them to spot errors (yes, its ironic!), then what can you do in a few moments to check that they have understood – get them to tell you what you just told them were the rules? Well, actually, I still think there might be a role for AfL in deciding whether spotting twenty errors is enough or whether another twenty would be a good idea, but I can see the difficulty in really knowing if they’ve ‘got it’ until they write some blog post at a later date and either fill it with apostrophic errors or do’nt.