Rebecca Allen and Simon Burgess
What is the best way to deal with under-performing schools? This is a key policy concern for any education system. There clearly has to be a mechanism for identifying such schools. But what should then be done with schools that are identified as failing their pupils? There are important trade-offs to consider: rapid intervention may be an over-reaction to a freak year of poor performance, but a more measured approach may condemn many cohorts of students to under-achievement.
This is the issue that Ofsted tackles. Its inspection system identifies failing schools and supervises their recovery. How effective is this? Is it even positive, or does labelling a school as failing push it to ever lower outcomes for its students?
It’s not clear what to expect. Ofsted inspections are often dreaded, and a fail judgement is seen as disastrous. It has been argued that such a judgement triggers a ‘spiral of decline’, with teachers and pupils deserting the school, leading to further falls in performance. But it might also be a fresh start, with a renewed focus on teaching and learning, leading to an improvement in exam scores. Equally, we might expect nothing much to happen: after all, the policy ‘treatment’ for schools given a Notice to Improve is very light touch. It is neither strongly supportive (typically no or few extra resources) nor strongly punitive or directive (schools face no sanctions or restrictions on their actions). Schools are instructed to focus intensively on pupil performance, and are told to expect a further inspection within a year. In addition – and possibly the most important factor – the judgement that the school is failing is a public one, usually widely reported in the local press.
Our research shows that the Ofsted inspection system works. Schools that just failed their Ofsted significantly improved their performance over the next few years, relative to schools that just passed. The impact is statistically significant and sizeable. In terms of the internationally comparable metric of effect sizes, our main results suggest an improvement of around 10% of a standard deviation of pupil scores. This is a big effect, with a magnitude similar to a number of large-scale education interventions. Translated into an individual pupil’s GCSE grades, this amounts to a one grade improvement (for example, B to A) in one or two GCSEs. From the school’s perspective, the gain is an extra five percentage points in the proportion of pupils gaining five or more GCSEs at grades A*-C.
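To make the comparison behind these numbers concrete, here is a purely illustrative Python sketch of the underlying idea: contrast outcomes in schools just below the pass threshold with those just above it, and express the gap in pupil scores in standard-deviation units (an effect size). The simulated data, variable names and the 0.1 SD gain built into the simulation are all hypothetical; this is not the authors’ estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each school's distance from the pass/fail cut-off and
# its pupils' later exam scores. None of these numbers come from the study.
n = 1000
inspection_margin = rng.uniform(-1, 1, n)   # distance from the pass/fail threshold
just_failed = inspection_margin < 0         # schools just below the threshold

# Simulate later pupil scores, building in a modest post-inspection gain
# (about 0.1 SD) for just-failing schools, echoing the headline result.
pupil_scores = rng.normal(0, 1, n) + 0.1 * just_failed

# Compare mean outcomes either side of the threshold, restricted to a narrow
# band so the schools being compared are genuinely similar ("just" failed/passed).
band = np.abs(inspection_margin) < 0.2
gap = pupil_scores[band & just_failed].mean() - pupil_scores[band & ~just_failed].mean()

# Express the gap as an effect size: the difference in standard-deviation units.
effect_size = gap / pupil_scores[band].std()
print(f"Estimated effect size: {effect_size:.2f} SD")
```

The point of the sketch is only to show why effect sizes are a useful common currency: a gap of around 0.10 SD in pupil scores is what the text translates into roughly one grade in one or two GCSEs.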
Our findings suggest that the turn-around arises from genuine improvements in teaching and learning, not from gaming exam performance by switching to easier courses. First, the impact is significantly higher in the second year after the visit than in the first, and remains at that level into the third and fourth years after the inspection. So it is not simply a quick fix to satisfy the inspectors when they return twelve months later. Second, we find a stronger effect on the school’s average GCSE score than on the headline measure of the percentage of students gaining at least five good passes; if the schools’ responses were aimed at cosmetic improvement, we would expect the reverse. We also find similar positive effects on maths results and on English results.
It could be argued that these results are implausibly large, given that the ‘treatment’ is so light touch and schools are given no new resources to improve their performance. But the instruction to the school to improve its performance may empower headteachers and governors to take a tougher and more proactive line on school and teacher performance. This channel for improvement may be far from minor: behavioural economics has provided a good deal of evidence on the importance of norms, and school management learning that what they might have considered satisfactory performance is in fact unacceptable may have a major effect. The second part of the treatment derives from the judgement being a public statement, which carries a degree of public shame for the school leadership. Ofsted fail judgements are widely reported in the local press, and this is not usually treated as a trivial or ignorable announcement about the school. It seems plausible that this too will be a major spur to action for the school.
Where do we go from here? Our results suggest that Ofsted’s identification of just-failing schools, combined with the Notice to Improve measure, is an effective policy, triggering the turn-around of these schools. We need to be clear that our research does not address the question of what to do about schools that comprehensively fail their Ofsted inspection. Possibly this light-touch approach can be extended. Since leaving the headship of Mossbourne Academy to become the new Chief Inspector at Ofsted, Sir Michael Wilshaw has argued that schools just above the fail grade should also be tackled: that ‘satisfactory’ performance is in fact unsatisfactory. Interventions in such ‘coasting’ or ‘just-ok’ schools are very likely to take the same form as Notice to Improve. Our results suggest that this is potentially a fruitful development, with some hope of significant returns.
This research is available on the CMPO website and the IoE website.
Puzzled by the assertion that “schools are given no new resources to improve their performance”. Local authorities have prioritised schools in special measures (SM) and with a Notice to Improve (NTI) over the years. This is how under-performing schools came into their line of sight.