Some notes for those who weren’t in the sessions I attended. Or who were, but perhaps held a completely different view.
Ben Goldacre – Keynote
Ben Goldacre’s stand-up skills were tested to the maximum as he wrestled with a faulty connection from his Mac to the projector. Carefully planned comic visual gags were wasted on a room full of teachers willing him to do well. As soon as he stopped trying to rely on his PowerPoint, it all became much better.
What I learned:
Why bother with Macs? They just cause more problems than they’re worth. *ducks*
Qualitative research is a proper discipline. You can’t dismiss it just because it’s based on interviews, subjectivity and lack of numerical data. Learn how to do it properly and then use it – it has its place.
An RCT is probably a big thing and needs a network. Ben suggested several ways that an RCT is relatively simple to put together. However, throwaway comments such as “all you need are about 100 or 200 schools that are kind of doing okay and don’t mind either way” brought home to me the lack of infrastructure in existence (at least that I’m aware of) to set up these trials. This was corroborated by Sam Freedman’s presentation later on where he explained that the UK has practically run out of capacity to carry out more RCTs anyway.
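Ben’s “100 or 200 schools” figure is roughly what a standard power calculation gives for a cluster-randomised trial. Here is a back-of-envelope sketch of my own (not from the talk; the effect size, intra-cluster correlation and pupils-per-school figures are illustrative assumptions):

```python
from math import ceil

def schools_needed(effect=0.2, icc=0.15, pupils_per_school=25):
    """Approximate total schools for a two-arm cluster-randomised trial,
    using the normal-approximation sample-size formula with a two-sided
    5% significance level and 80% power. All defaults are illustrative."""
    z_alpha, z_beta = 1.96, 0.84
    # Clustering inflates the required sample: pupils in the same school
    # resemble each other, so each extra pupil adds less information.
    design_effect = 1 + (pupils_per_school - 1) * icc
    per_arm = (2 * (z_alpha + z_beta) ** 2 * design_effect
               / (pupils_per_school * effect ** 2))
    return 2 * ceil(per_arm)

print(schools_needed())  # → 146 schools in total
```

With these assumed numbers the sketch lands at 146 schools, squarely in Ben’s range, and the total balloons as the intra-cluster correlation rises – which is exactly why one teacher in one classroom cannot run a meaningful RCT alone.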
Research is pointless if unread. Kind of an obvious one, but Ben Goldacre illustrated a similar problem that exists in the paragon of RCT virtue that is Medicine: the BMJs left stacked up in doctors’ toilets, unread in their plastic packaging.
Teachers need to be taught Research Methods 101. Important if teachers are to be able to generate questions for researchers (which is the ultimate goal of linking researchers with practitioners). If I can’t spot the flaws in a study, how can I usefully identify the gaps that need further study? Also necessary if I am to evaluate qualitative, observational, quantitative or aggregated data with any meaningful application for my practice. I actually think that most teachers would love to have a CPD session based around the dissection of a journal article – much like the Journal Clubs that Ben mentioned as being present in the high-performing education systems in Singapore and China.
We all need to take a step back and slow down. And then speed up. The time frame for teaching practice to become better informed by educational research might be long. Decades long. All the more reason to press on right away!
What kind of nutter plans a conference without a lunch break or networking time? Kind of agree, but also glad that so much was packed in! Made the trip to the pub afterwards that much more delicious.
Dr Carol Davenport
Although Ben Goldacre explicitly said that one teacher in one classroom cannot carry out a meaningful RCT in any true sense of the word, I’m sure a lot of teachers thought along the lines of “I can accept that, but I still want to do something NOW.” Hence my curiosity in finding out about Action Research. This session was billed as one where “participants will gain a basic understanding of how to carry out action research and look at different data collection tools that they can use in their classroom”.
What I learned:
Action research is good reflective practice. The pro-forma suggested ways in which teachers can improve their reflective practice using the language of research. So instead of a simple “What Went Well” – “Even Better If”, the teacher is encouraged to think in advance about what problem they are being asked to tackle, set themselves success criteria and then put together a timeline with metrics (or research methods) for measuring success.
Enlightened SLT could use this to set Performance Management targets. When asked for contributions and comments, two members of the audience (whom I couldn’t see through the crowd) said that they would be using something similar to the approach outlined by Dr Carol Davenport to help members of staff set targets for their performance reviews.
Dr Kay Yeomans
With all the talk about the importance of research in schools, there seemed to be relatively little exploring how universities and schools already work together – namely through the Widening Participation agenda. Together with her colleagues, Dr Kay Yeomans presented an overview of a collaboration between UEA and 10 schools in the Norfolk/Suffolk region, part of a nascent project funded in the short term by an RCUK grant of £150,000. Given the work we are engaged in at The Brilliant Club, this seemed perhaps the most pertinent session, although it was made clear that this was not strictly speaking WP work as such.
What I learned:
University departments want to engage with schools. But the main problem they have is understanding how schools work. This was an issue also raised by other speakers such as Laura McInerney, who flagged up how difficult some researchers found it to deal with the fact that a teacher’s working day is a mix of strictly compartmentalised contact time and random, unpredictable behavioural incidents. Rachel (Jones?) – Head of Biology at the City of Norwich School – has helped deal with the administration of a relatively large-scale public engagement project across some 10 schools and school groups in the region, and seemingly lends her expertise on how ‘schools work’, allowing UEA to liaise more effectively with them.
University education departments are underused by other subject departments. Despite liaising with many schools, UEA does not seem to have engaged its own Department of Education in this project. In general, education departments, which among other things research what works in schools, should surely be well placed to advise other subject departments on how to work with schools.
There is a sore lack of meaningful links between secondary and tertiary education – for teachers and for pupils. A huge amount seems to have been invested by secondary and primary schools in developing stronger links. In my experience secondary schools typically have a member of SLT with responsibility for cross-phase transition who gets to know the communities that their pupils are coming from. The gap from secondary to tertiary is equally daunting, and yet universities and schools could do more to forge excellent links that mean every pupil understands what a university is, how to get there and so on. This will probably be the topic of a future blogpost.
If money is invested, impact evaluation is crucial. Given that the money for the RCUK project is limited, it’s very important to evaluate the short-term impact (and long-term potential) of the collaborations in order for them to turn into something sustainable. The examples were exciting (inter-disciplinary projects on bee-keeping that involved the arts, sciences, high-attainers and naughty pupils alike), but Dr Yeomans herself mentioned that it remains to be seen to what extent all of these projects can be continued in the long term.
Kids know what research is. Sort of. An interesting survey of all pupils (of different ages) showed that they strongly believe that research is a worthwhile activity and one that applies across different careers. Interestingly, they also recognised research as a prestigious activity. I think we at The Brilliant Club could carry out a similar survey with our pupils; we live in a society where if you’re bright, people tend to say “oh, you should be a doctor or a lawyer”; rarely are you told that you should choose a subject you love and pursue it to the deepest level you can.
Laura McInerney
As a Brilliant Club tutor, I knew that Laura would be stellar, and she was. (A bit cheeky to claim her as one of our own?) Laura’s perspective is manifold and rather unusual: she has taught in a secondary school in the UK and she is an educational researcher. She gave a presentation asking us teachers to come up with education’s equivalent of mathematics’ Millennium Prize Problems.
What I learned:
Researchers struggle with different constraints. Where teachers are chiefly concerned with the day-to-day of teaching and the school timetable/calendar, researchers need to consider the timing of their project, and these logistical priorities can clash. More importantly, the research should focus on the cognitive or social development of young people, as this will allow for quicker and more effective implementation of any findings – the crucial obstacle often being a lack of justification behind any changes that new research might indicate as beneficial.
Focusing our efforts on narrow problems gives more bang for our buck. As David Weston pointed out, the solutions to what might initially be thought of as cutting-edge or “edge-of-practice” problems might already be out there, but lost in the disparate world of educational research. Focusing efforts on answering some of these questions might enable others to go off and unearth these hidden gems.
Give us principles. This should be the request made of any research that claims to have made a breakthrough. In order to practically apply an individual research finding to your own classroom, it’s not enough to be given a resource and a new-fangled lesson plan. You need to work out the underlying principles and then make them work for you within that ‘Black Box’.
Some teachers aren’t satisfied with pragmatism. Overall Laura impressed everyone and I heard many comments about what a loss to the classroom she is. Judging by some questions at the end, however, there is a feeling among some teachers that it’s all very well identifying a set number of problems at the “edge-of-practice”, but that this ignores some of the more important issues that need to be addressed in the education system in general. For my part, I like this approach for two reasons. First, it puts the needs of the child explicitly at the centre of any research question. Secondly, it will inevitably lead to changes in systems if the current system doesn’t allow for what the findings show to be the most effective way of teaching.
Daisy Christodoulou
Daisy’s talk added nuance to her previous discussions of the works of Dan Willingham among others. Recognising the importance of evidence-based practice (EBP), she was hoping to show how implementing research can nevertheless be challenging when you do not understand the root causes of the effect that the research seems to have proved.
What I learned:
We should be wary. There’s always an opportunity cost when choosing to invest in one type of EBP over another. The case brought here was the huge amount of money spent in California on reducing class sizes to no significant beneficial effect, after previous studies had shown there might be some benefit. To me this seemed a classic case of mistaking correlation for causation. Quite how research could be implemented so badly is beyond me. Maybe there should just be a generic red warning siren atop the DfE that goes off whenever someone starts a sentence with “All schools need to have XYZ”, with XYZ costing an obscene amount of money!
Teachers don’t all know what theory means. Or as Daisy put it: “in the English speaking world we seem to have a distrust of theory.” It apparently makes us imagine something spurious that can be dismissed in the face of fact. This ignores the true meaning of the word in an academic sense as more of an inference that is based on a series of established facts. This reminded me of pupils in my class who argued against the likelihood of evolution by saying ‘oh but that’s just a theory.’
No-one is original, but some of us are ignorant. Daisy quoted Keynes: “Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” This apposite citation had earlier in the day been echoed by Dan Willingham who claimed that every teacher inevitably has a theory of how children learn. It’s interesting to note that Keynes’ quote goes on to make a telling observation about conspiracy theorists: “Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back. I am sure that the power of vested interests is vastly exaggerated compared with the gradual encroachment of ideas.” Applies to the fears around many of the structures in the British education system perhaps?
Reading is not a transferable skill. Skills are domain specific, and the best readers are those who know a little about a lot and can therefore make more links.
Familiarity is not the same as knowledge. Making reference to Nickerson and Adams’ 1979 penny experiment, we conclude that recognising something doesn’t mean that you can remember it. This reminded me of some of the things that Matthew Syed wrote in Bounce about the difference between purposeful practice and practice per se; it’s the reason that Mo Farah practises sprinting and hill runs as much as he does long-distance running. You cannot assume that doing a lot of one thing will make you good at it – you have to break the thing down into its constituent parts and work on each in turn.
There’s a real challenge in making an apostrophe memorable. In a series of themed lessons where pupils worked in ‘tribal family’ groups on a series of writing tasks, it was the family drama that the pupils remembered and not the grammar implicit in the tasks. Not surprising perhaps, but turn this phenomenon on its head and you have a conundrum: how could you tap into the fact that we seem to remember narrative so well in order to teach something like the apostrophe in English?
There’s nothing wrong with a young teacher telling it like it is. I liked Daisy’s approach, which was led by confessional anecdotes from her own teaching experience. To criticise her for having only a few years in the classroom misses the point: she has reflected on those few years with a depth and honesty that not everyone chooses to bring. As a result she has chosen to pursue cognitive science as the key that can improve the practice of others – and that’s a good thing in and of itself.
Sam Freedman
For this talk Sam took off his Teach First hat and put on his old, battered policy adviser DfE-branded headwear.
What I learned:
Politicians want us to blog and tweet more. The three principal filters around a politician are advisers, civil servants and then the wider research community. But all politicians use Twitter (as do their advisers and civil servants). Sam implied that politicians are all too aware of the confirmation bias inherent in the current set-up of filters, but there are grounds for optimism that social media can actually cut through these structures.
There is a place for ideology. Sam reminded us that focusing on EBP was in many ways an ideological move made by Blair positioning the Third Way as a route of pragmatism that all could rally behind. Moreover, where evidence is insufficient, ideology will point the way so it behoves us to continue engaging with ideology, particularly when EBP will come into conflict with democracy. Just because something is empirically right, say, doesn’t mean that it will be accepted as reasonable by the electorate.
The civil service suffers from a ‘consultant culture’ where top civil servants move on every two or three years. Just as a civil servant has got to grips with the issues in their department, the demands of career progression move them on. The irony of the parallel between this and the perception of the Teach First programme as a two-year ‘stint’ in the classroom was not lost on me.
Educational research in the UK is at capacity. Compared to the enormous resources available in the USA, what we have in the UK may seem piddling. Of more concern is the fact that there are, according to Sam, no more researchers available for organisations such as the EEF to give grants to for RCTs and such.
Professor Coe vs Ollie Orange
It was nice to go and witness a debate rather than a presentation at the end of the day and this did not disappoint. Both speakers took turns to explain their interpretation and opinion of ‘effect size’ as a viable metric for determining what works.
What I learned:
Pure maths has no time for ‘effect size’ metrics. Ollie Orange attacked the formulas that Hattie used to determine effect size across a variety of studies in order to work out what works. As a mere ‘BA’ myself, I did my best to follow the thread of his argument, which seemed to centre on the claim that the metric is mathematically flawed and that it is impossible to meaningfully compare study A with study B when completely different methodologies and metrics have been used.
You have to draw the line in the sand somewhere or you’ll stand still in search of perfection. To mix metaphors. Professor Coe was quite up front about the fact that it was a “crude tool” for working out what works, but was better than nothing.
Hattie’s gold standard 0.4 metric does not define what’s ‘good’. Both Mr Orange and Professor Coe agreed that an effect size of ‘0.4’ is meaningless without context and should therefore not be used in abstract terms to work out if a teaching method is worthwhile or not.
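The point about context is easy to see if you compute an effect size by hand. A minimal sketch of Cohen’s d, the standardised mean difference behind Hattie’s numbers (my own illustration, not from the debate; the test scores are invented):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(treated, control):
    """Standardised mean difference: raw gain divided by the pooled
    standard deviation of the two groups."""
    n1, n2 = len(treated), len(control)
    s1, s2 = stdev(treated), stdev(control)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treated) - mean(control)) / pooled

# The same 4-point raw gain looks wildly different depending on spread:
narrow = cohens_d([54, 56, 58], [50, 52, 54])   # tight scores: d = 2.0
wide   = cohens_d([30, 56, 82], [26, 52, 78])   # spread scores: d ≈ 0.15
```

The raw gain is identical in both cases; only the spread of scores differs, yet one comparison sails past the 0.4 ‘hinge point’ and the other falls well short – which is exactly why a bare 0.4 is meaningless without context, and why comparing d across studies with different populations and measures is so fraught.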
Hope this was of interest. We then went to the pub, but my impressions of that part of the day I’ll save for another blog post.