
Reforming school performance tables

Rebecca Allen and Simon Burgess

Yesterday the Government published its response to the Wolf Review on Vocational Education. The Response sets out a number of proposals, accepting all of the Review’s recommendations. These include the eye-catching scheme to ensure that young people who do not achieve a grade C in English and maths at age 16 continue studying those subjects to age 19.

The response also proposes reforms to school performance tables. This is based on a recognition that schools’ behaviour in selecting qualifications for their students is strongly influenced by the incentive structure they face. A crucial component of this structure is the published school performance tables. These tables are important in influencing parental choice of school, and school leadership teams pay them a lot of attention.

From this year, the content of the performance tables will change quite significantly. The long-standing measure of the percentage of students achieving at least 5 A* to C grades will be retained. But in addition, a differential average points score will be published for each school, which provides information on how well the school does for students at the lower and upper ends of the ability distribution, as well as at the average:

“It is vital that performance indicators do not inadvertently cause schools to concentrate on particular groups of pupils at the expense of others. To avoid this we will continue to include performance measures, like average point scores, which capture the full range of outcomes for pupils of all abilities. In addition, from 2011 the performance tables will show for each school the variation in performance of low attaining pupils, high attaining pupils and those performing as expected.”  (Wolf Review of Vocational Education, Government Response, p. 6).

This is a step forward. In our analysis we argued for exactly this measure: average GCSE points score, presented at three points in the ability distribution, low ability, average and high ability. Our criteria were functionality of the performance measure, relevance to parents and also comprehensibility. A measure is relevant if it informs parents about the performance of children very similar to their own in ability and social characteristics.  It is comprehensible if it is given to them in a metric that they can meaningfully interpret.  It is functional if it helps parents to answer the question: “In which feasible choice school will my child achieve the highest exam score?”.  Overall, this performance measure came out on top. We also described ways that the information could best be displayed for parents: paper-based and web-based delivery mechanisms.
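To make the idea of the measure concrete, here is a minimal sketch of how average GCSE points could be reported at three points of the ability distribution. The field names and prior-attainment cut-offs are illustrative assumptions, not the official definitions used in the published tables.

```python
# Hypothetical sketch: average GCSE points by prior-attainment band.
# Cut-offs and data are invented for illustration only.
from statistics import mean

def band(prior_score, low_cut=20, high_cut=30):
    """Classify a pupil by prior attainment (illustrative cut-offs)."""
    if prior_score < low_cut:
        return "low"
    if prior_score > high_cut:
        return "high"
    return "expected"

def school_measure(pupils):
    """pupils: list of (prior_score, gcse_points) tuples for one school.
    Returns the mean GCSE points for low, expected and high attainers."""
    groups = {"low": [], "expected": [], "high": []}
    for prior, points in pupils:
        groups[band(prior)].append(points)
    return {b: (mean(v) if v else None) for b, v in groups.items()}

# Five pupils in one (made-up) school:
pupils = [(18, 280), (25, 340), (27, 355), (33, 420), (35, 465)]
measure = school_measure(pupils)
```

A parent of a low-attaining child would then compare the `"low"` figure across feasible schools, rather than a single average that may be driven by a school's high attainers.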

A second issue is that the “price”, or GCSE-equivalent points, of the new vocational exams seems set to change. Precise details are unclear at the moment. It is worth making the point again that schools will have an eye on the performance table impact of the courses they offer to students. If vocational qualifications are to be worth fewer points than they receive at present, there is a danger that schools will not be keen to promote them to students who are unlikely to score highly on more academic courses. In turn, this may make schools less keen to accept low ability pupils.

Of course, the old league table measure of the percentage with 5 A* to C grades is staying. Perhaps there is a performance management version of Gresham’s Law and good performance measures will drive out bad ones. If parents come to rely more on the new differential measure, the media will give it more prominence and the grip of the “%5A*-C” measure on the public mind will finally begin to weaken.

“Systemic failure” in Welsh Education

Simon Burgess

The release this week of the latest round of international comparative education results produced some fascinating findings. Not least of these was the outcome for Wales, characterised by the Welsh Education Minister as alarming and “unacceptable”.

The PISA (Programme for International Student Assessment) results derive from a standardised international assessment of 15-year-olds, run by the OECD. They show that Wales has fallen further behind since the last tests in 2006, and scored worse than before in each of reading, maths and science. Scores in Wales have fallen relative to England and are now “cast adrift from England, Scotland and Northern Ireland”. The Wales Education Minister, Leighton Andrews, described the results as reflecting “systemic failure”.

What might that systemic failure be? One leading candidate is highlighted in our recent research on accountability mechanisms for state schools. We argue that the decision in 2001 by the Welsh Assembly Government (WAG) to stop the publication of school performance tables or “league tables” has resulted in a significant deterioration in GCSE performance in Wales. The effect is sizeable and statistically significant. It amounts to around 2 GCSE grades per pupil per year; that is, achieving a grade D rather than a B in one subject. This is a substantial effect, equivalent to the impact of raising class size from 30 to 38 pupils.

Although our results are based on a study of the GCSE scores school-by-school, Figure 1 gives a very stark impression of the overall effect. Students in England and Wales were performing very similarly up to 2001, but thereafter the fraction gaining 5 good passes has strongly diverged.

We take each secondary school in Wales, and match it up to a very similar school in England. This “matching” is based on pupils’ prior attainment, neighbourhood poverty and school funding among other factors. We then track the progress (or value added) students make in these schools before and after the league tables reform, comparing the Welsh school with its English match. Our analysis explicitly takes account of the differential funding of schools in England and Wales, and the greater poverty rates found in neighbourhoods in Wales.
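The matched comparison described above can be sketched as a simple difference-in-differences calculation. This is a minimal illustration of the logic, not the paper's actual estimation: it assumes each Welsh school has already been paired with a similar English school, and the value-added scores are invented for illustration.

```python
# Hypothetical sketch of a matched difference-in-differences comparison.
# All numbers are made up; the real study uses school-level GCSE value
# added with controls for funding and neighbourhood poverty.

def did_effect(pairs):
    """pairs: list of dicts holding value-added scores for a Welsh school
    and its English match, before and after the 2001 reform.
    Returns the mean difference-in-differences across matched pairs."""
    effects = []
    for p in pairs:
        welsh_change = p["wales_after"] - p["wales_before"]
        english_change = p["england_after"] - p["england_before"]
        effects.append(welsh_change - english_change)
    return sum(effects) / len(effects)

pairs = [
    {"wales_before": 50.0, "wales_after": 48.0,
     "england_before": 50.0, "england_after": 52.0},
    {"wales_before": 45.0, "wales_after": 44.0,
     "england_before": 46.0, "england_after": 48.0},
]
effect = did_effect(pairs)  # negative => Wales fell behind its matches
```

Because each Welsh school is differenced against its own pre-reform trend and against a closely matched English school, stable differences between the two countries (such as the common introduction of GCSE-equivalent qualifications) net out of the estimate.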

Why should the removal of school league tables lead to a fall in school performance? Part of the effect is through the removal of information to support parental choice of school. The performance tables allow parents to identify and then apply to the higher scoring schools, and to identify and perhaps avoid the low scoring schools. The resulting lack of applications puts pressure on the latter schools to improve. But this is not all of the story. Perhaps as important is the simple public scrutiny of performance, and in particular the public identification of the low scoring schools. This “naming and shaming” means that low scoring schools in England are under great pressure to improve, whereas the same schools in Wales are more able to hide and to coast.

Our work has attracted criticism, including a charge from teacher unions of using an “ideological theory”. A more thoughtful critic has accused us of a “howler” in the analysis: not noting the introduction of the original GCSE-equivalent qualifications. In fact, since these were introduced equivalently in both countries, they simply net out of the comparison.

Responding to our research, the Welsh Assembly Government said “wait for the PISA results”. These results are now in, and do not make happy reading. No doubt there are many factors underlying the relative performance of Wales and England, but the diminution of public accountability for the country’s schools is surely one of them.