It’s an odd phrase, expected progress, as it seems to have two meanings. First, there is the expectation of the teacher, which is based on an in-depth knowledge of the pupil; not only their start point but everything else: their specific needs, attitude to learning, the support and engagement of parents, and whether or not the pupil has breakfast. And then there is the definition we build into our systems, which is essentially a straight line drawn from pupils’ prior attainment at EYFS, KS1 or KS2. A one-size-fits-all approach, all for the sake of convenience - a simplistic algorithm and a neat metric. Needless to say, the two usually do not match, but all too often we wilfully ignore the former in favour of the latter, and plough on with blind faith in the straight line.
The problem is the assumption of ‘linearity’ - that all pupils learn in the same way, at the same rate, and follow the magic gradient. We know it’s not true but we go along with it because we have to make pupils fit the system, even if it means shooting ourselves in the foot in the process.
The other problem with ‘expected progress’ - other than it not existing - is that it sounds, well, mediocre. Language is important, and if we choose to adopt the phrase ‘expected progress’ then we also need a definition for ‘above expected progress’ as well. And this is where things start to get messy. It wasn’t that long ago that I saw an Ofsted report state that ‘according to the school’s own tracking data, not enough pupils are making more than expected progress’. The school was hamstrung by the points system they used, which only really allowed those who were behind at the start of the year, and had apparently caught up, to make more than expected progress. Everyone else had to settle for expected.
But putting aside the still popular levels and points-style methods, we have a problem in those schools taking a ‘point in time’, age-related approach.
It’s quite simple really, and was perfectly illustrated by a recent conversation in which I asked a headteacher, who was talking about percentages of pupils making expected progress, to define it. They gave a puzzled look, as if it were a bizarre question:
“Well, that’s staying at expected. If they were expected before and still at expected now, then they’ve made expected progress, surely?”
“And what about those at greater depth?”
“That’s sticking at greater depth of course.”
“So, how do ‘greater depth’ pupils make above expected progress?”
Problem number 1: in this system, pupils with high start points cannot be shown to have made 'above expected' progress. I asked another question: “what about those pupils that were working towards? What’s expected progress for them?”
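The ceiling the headteacher describes can be sketched as a toy model. To be clear, the band names, their ordering and the function below are illustrative assumptions, not any real tracking system:

```python
# Toy model of a 'point in time' tracking system, where progress is judged
# solely by comparing a pupil's band now with their band before.
# The bands and their ordering are assumptions for illustration only.
BANDS = ["working towards", "expected", "greater depth"]

def progress_label(before: str, after: str) -> str:
    """Classify progress the way the point-in-time system would."""
    diff = BANDS.index(after) - BANDS.index(before)
    if diff > 0:
        return "above expected"
    if diff == 0:
        return "expected"
    return "below expected"

# A pupil already at 'greater depth' has no higher band to move into,
# so this system can never report them as making above expected progress.
print(progress_label("working towards", "expected"))   # moved up a band
print(progress_label("greater depth", "greater depth"))  # stuck at the ceiling
```

Under this model the only pupils who can register ‘above expected’ progress are those who started in a lower band, which is precisely the flaw in the conversation above.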
“To stay at working towards,” was the reply.
Is it? Is that really our expectation for those pupils? To remain below? Obviously there are some who were working towards and probably will remain so; but there are also pupils, such as EAL pupils, who accelerate through curriculum content. And then there is another group of low prior attaining pupils, who do not have SEND and are not EAL, but often do not catch up. These may well be disadvantaged pupils for whom the pupil premium is intended to help close the gap. Our expectations for all these pupils may be different. They do not fit on a nice neat line.
Expected progress is many things. It is catching up, closing gaps, overcoming barriers and deepening understanding. It is anything but simple and linear. What we’re really trying to convey is whether or not pupils are making good progress from their particular start points, taking their specific needs into account.
That may not roll off the tongue quite as easily, but surely it’s more meaningful than ‘expected progress’.
Further reading: Why measuring pupil progress involves more than taking a straight line. Education Datalab, March 2015. https://ffteducationdatalab.org.uk/2015/03/why-measuring-pupil-progress-involves-more-than-taking-a-straight-line/
*credit to Daisy Christodoulou, whose book title I've blatantly copied.