Wednesday, 24 October 2018

2018 ASP summary template for primary schools

The 2018 version of the ASP summary template is free to download here.

A few tweaks to last year's template: it now takes account of the time series and the 3-year average.

Feel free to modify, copy and share. Just credit the source, and please download it before attempting to complete it (it will open in Word Online; to download, click the three dots at the top right of the browser window).

If you are confused by the 'Impact scores' concept (and who can blame you? I made that term up, by the way), the idea is to find the minimum score required to improve an overall progress score from below average (orange or red) to average (yellow), or from average (yellow) to above average (green). The former is the most critical, and often it's a case of just removing one pupil from the data.

Schools that are below average (orange) will have a negative progress score (e.g. -1.9) and a confidence interval that is entirely negative (e.g. -3.6 to -0.2). If the confidence interval does not include the national average of zero - i.e. it does not cross the zero line - then the score is deemed significantly below average (as in the example given above).
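If it helps to see that banding logic spelled out, here's a minimal sketch in Python (the function name and colour labels are my own shorthand, not anything official from ASP):

```python
def significance_band(ci_lower, ci_upper):
    """Band a progress score by whether its confidence interval
    crosses the national average of zero."""
    if ci_upper < 0:
        # The whole interval sits below zero
        return "significantly below average (orange/red)"
    if ci_lower > 0:
        # The whole interval sits above zero
        return "significantly above average (green)"
    # The interval crosses zero: not significantly different from average
    return "average (yellow)"

# The example above: progress score -1.9, confidence interval -3.6 to -0.2
print(significance_band(-3.6, -0.2))  # significantly below average (orange/red)
```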

It would be neat to find out whether removing one pupil would improve our data from below average (orange) to average (yellow). Let's return to the example above. We take the upper limit of the confidence interval (the right-hand number, i.e. -0.2). This is how far the progress score is from average - how far the confidence interval is from the zero line (safety!). Essentially, if every pupil's progress score increased by 0.2, the overall score would be in line with average, but that doesn't really help.

A better approach is to take that figure of -0.2 and multiply it by the number of pupils included in the progress measures (clearly stated in ASP). Let's say that's 30 pupils:

-0.2 × 30 pupils = -6.

This means that by removing just one pupil with an individual progress score below -6, the 'below average' (orange) indicator should change to 'average' (yellow).
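As a quick sketch of that calculation (again in Python; the function name is my own invention, and this is a rule of thumb rather than ASP's own maths, since ASP will recalculate the interval for the smaller cohort):

```python
def impact_score(ci_limit, num_pupils):
    """Rule of thumb: multiply a confidence interval limit by the
    number of pupils included in the progress measure. The result is
    the individual pupil score that would need to be removed to
    shift the overall band."""
    return ci_limit * num_pupils

# Orange -> yellow: upper limit -0.2, 30 pupils in the measure
print(impact_score(-0.2, 30))  # -6.0
# Removing one pupil scoring below -6 should lift the upper limit
# of the recalculated interval to around zero.
```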

Note: if your progress scores are average (yellow) and you want to determine what it would take to make them above average (green), use the lower limit of the confidence interval (the left-hand figure) instead. The same applies: multiply it by the number of pupils, and removing a pupil with a progress score lower than the result should change the overall score from average to above average.
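The same one-line sketch covers that average-to-above case, using the lower limit (the figures here are hypothetical):

```python
# Yellow -> green: use the lower limit instead. Hypothetical figures:
# confidence interval -0.4 to 2.6, 30 pupils in the measure.
print(-0.4 * 30)  # -12.0
# Removing one pupil with a progress score below -12 should push the
# lower limit above zero, i.e. from average (yellow) to above (green).
```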

Hope that makes some kind of sense. If it doesn't, tweet me and I'll do my best to explain it again.

Saturday, 20 October 2018

Making expected progress?*

‘Expected progress’ was a DfE accountability measure until 2015. ‘Inspectors must not use this term when referring to progress for 2016 or current pupils’ (Ofsted Inspection Update, March 2017).

It’s an odd phrase, expected progress, as it seems to have two meanings. First, there is the expectation of the teacher, which is based on in-depth knowledge of the pupil: not only their start point but everything else - their specific needs, their attitude to learning, the support and engagement of parents, and whether or not the pupil has breakfast. And then there is the definition we build into our systems, which is essentially a straight line drawn from pupils’ prior attainment at EYFS, KS1 or KS2. A one-size-fits-all approach, all for the sake of convenience: a simplistic algorithm and a neat metric. Needless to say, the two usually do not match, but all too often we wilfully ignore the former in favour of the latter and plough on with blind faith in the straight line.

The problem is the assumption of ‘linearity’ - that all pupils learn in the same way, at the same rate, and follow the magic gradient. We know it’s not true, but we go along with it because we have to make pupils fit the system, even if it means shooting ourselves in the foot in the process.

The other problem with ‘expected progress’ - other than it not existing - is that it sounds, well, mediocre. Language is important, and if we choose to adopt the phrase ‘expected progress’ then we also need a definition of ‘above expected progress’. And this is where things start to get messy. It wasn’t that long ago that I saw an Ofsted report state that ‘according to the school’s own tracking data, not enough pupils are making more than expected progress’. The school was hamstrung by the points system it used, which only really allowed those pupils who were behind at the start of the year, and had apparently caught up, to show more than expected progress. Everyone else had to settle for expected.

But even putting aside the still-popular levels and points-style methods, there is a problem in those schools taking a ‘point in time’/age-related approach.

Why?

Quite simple really, and perfectly illustrated by a recent conversation in which I asked a headteacher, who was talking about percentages of pupils making expected progress, to define it. They gave me a puzzled look, as if it were a bizarre question:

“Well, that’s staying at expected. If they were expected before and still at expected now, then they’ve made expected progress, surely?”

Sounds logical. 

“And what about those at greater depth?”

“That’s sticking at greater depth of course.”

“So, how do ‘greater depth’ pupils make above expected progress?”

“They can’t.”

Problem number 1: in this system, pupils with high start points cannot be shown to have made 'above expected' progress. So I asked another question: “What about those pupils that were working towards? What’s expected progress for them?”

“To stay at working towards,” was the reply.

Is it? Is that really our expectation for those pupils? To remain below? Obviously there are some who were working towards and probably will remain so; but there are also pupils, such as those with EAL, who accelerate through curriculum content. And then there is another group of low prior attaining pupils, who do not have SEND and are not EAL, but who often do not catch up. These may well be the disadvantaged pupils for whom the pupil premium is intended to help close the gap. Our expectations for all these pupils may be different. They do not fit on a nice neat line.

Expected progress is many things. It is catching up, closing gaps, overcoming barriers and deepening understanding. It is anything but simple and linear. What we’re really trying to convey is whether or not pupils are making good progress from their particular start points, taking their specific needs into account.

That may not roll off the tongue quite as easily, but surely it’s more meaningful than ‘expected progress’.

Isn’t it?

Further reading: Why measuring pupil progress involves more than taking a straight line, Education Datalab, March 2015. https://ffteducationdatalab.org.uk/2015/03/why-measuring-pupil-progress-involves-more-than-taking-a-straight-line/

*Credit to Daisy Christodoulou, whose book title I've blatantly copied.

Thursday, 18 October 2018

Trust

Here's a thing. In conversations with senior leaders, both online and in the real world, I often get asked about restricting teaching staff's access to data, or even locking down tracking systems entirely. These requests fall into two broad themes:

1) Limiting teachers' access to data so that they can only see those pupils for whom they are responsible.

2) Locking down the system after the 'data drop' or 'assessment window'.

Let's have a think about this for a minute. Why are some senior leaders wanting to do this? What are their concerns? Essentially it boils down to mistrust of teachers and a fear that data will be manipulated. But what sort of culture exists in a school where such levels of mistrust have taken root? How did they get to this point? It's possible that such concerns are well founded, that manipulation of data has occurred; and I have certainly heard some horror stories, one of which came to light during an inspection. That didn't end well, believe me. But often it's just suspicion: suspicion that teachers will change the data of another class to make their own class look better, or will alter the previous year's end-of-year assessments for their current class to make the baseline lower, or will tweak data to ensure it fits the desired school narrative or, most commonly, to ensure it matches their target.

Suspicion and mistrust. How desperately sad is that?

Golden Rule #1: separate teacher assessment from performance management. But how common is it for teachers to be set targets that are then reviewed in the light of assessment data the teachers themselves are responsible for generating? I regularly hear of teachers being told that 'all pupils must make 3.5 points of progress per year' or that '85% must be at age-related expectations by the end of the year', with the final judgement based on the data those same teachers enter onto the system - on how many learning objectives they've ticked. It is a fallacy to think you can achieve high-quality, accurate data under such a regime.

Teacher assessment should be focused on supporting children's learning, not on monitoring teacher performance. You cannot hope to have insightful data if teachers are looking over their shoulders when assessing pupils, tempted to change data to make things look better than they really are. Perverse incentives are counterproductive and a risk to system integrity: they skew data to such an extent that it ceases to have any meaning or value, rendering it useless. Senior leaders need a warts-and-all picture of learning, not some rose-tinted, target-biased view that gets exposed when the SATs results turn up. Teachers need to be able to assess without fear, and that evidently requires a big culture shift in many schools.

The desire to lock down systems and restrict teacher access is indicative of how assessment data is viewed in many schools: as an instrument of accountability, rather than a tool for teaching and learning. If teachers are manipulating data, or are suspected of doing so, then senior leaders should take a long hard look at the regime and culture in their school rather than resorting to such drastic measures.

It is symptomatic of a much wider problem.