Saturday, 20 October 2018

Making expected progress?

‘Expected progress’ was a DfE accountability measure until 2015. Inspectors must not use this term when referring to progress for 2016 or current pupils (Ofsted Inspection Update, March 2017).

It’s an odd phrase, expected progress, as it seems to have two meanings. First, there is the expectation of the teacher, which is based on an in-depth knowledge of the pupil: not only their start point but everything else - their specific needs, attitude to learning, the support and engagement of parents, and whether or not the pupil has breakfast. And then there is the definition we build into our systems, which is essentially a straight line drawn from pupils’ prior attainment at EYFS, KS1 or KS2. A one-size-fits-all approach, all for the sake of convenience - a simplistic algorithm and a neat metric. Needless to say, the two usually do not match, but all too often we wilfully ignore the former in favour of the latter, and plough on with blind faith in the straight line.
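To make the criticism concrete, here is a minimal sketch of that ‘simplistic algorithm’. Every name and point value below is hypothetical, invented purely to illustrate the one-size-fits-all assumption; it is not taken from any real tracking system:

```python
# A hypothetical 'straight line' progress model of the kind many tracking
# systems bake in: the same gradient for every pupil, regardless of context.

def expected_points(start_points: float, terms_elapsed: int,
                    points_per_term: float = 1.0) -> float:
    """Expected attainment = start point + (fixed gradient x time elapsed)."""
    return start_points + points_per_term * terms_elapsed

# Two pupils with the same start point get identical 'expectations',
# whatever their needs, attitudes or circumstances.
for name in ("Pupil A", "Pupil B"):
    print(name, expected_points(start_points=12.0, terms_elapsed=6))
```

Everything the teacher actually knows about the pupil is absent from that function’s inputs, which is precisely the problem.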

The problem is the assumption of ‘linearity’ - that all pupils learn in the same way, at the same rate, and follow the magic gradient. We know it’s not true but we go along with it because we have to make pupils fit the system, even if it means shooting ourselves in the foot in the process.

The other problem with ‘expected progress’ - other than it not existing - is that it sounds, well, mediocre. Language is important, and if we choose to adopt the phrase ‘expected progress’ then we also need a definition for ‘above expected progress’ as well. And this is where things start to get messy. It wasn’t that long ago that I saw an Ofsted report state that ‘according to the school’s own tracking data, not enough pupils are making more than expected progress’. The school was hamstrung by the points system it used, which only really allowed those pupils that were behind at the start of the year, and had apparently caught up, to make more than expected progress. Everyone else had to settle for expected.

But putting aside the still-popular levels and points-style methods, we have a problem in those schools taking a ‘point in time’/age-related approach too.

Why?

Quite simple really, and perfectly illustrated by a recent conversation in which I asked a headteacher, who was talking about percentages of pupils making expected progress, to define the term. They gave me a puzzled look, as if it were a bizarre question:

“Well, that’s staying at expected. If they were expected before and still at expected now, then they’ve made expected progress, surely?”

Sounds logical. 

“And what about those at greater depth?”

“That’s sticking at greater depth of course.”

“So, how do ‘greater depth’ pupils make above expected progress?”

“They can’t.”

Problem number 1: in this system, pupils with high start points cannot be shown to have made 'above expected' progress. I asked another question: “What about those pupils that were working towards? What’s expected progress for them?”

“To stay at working towards,” was the reply.

Is it? Is that really our expectation for those pupils? To remain below? Obviously there are those that were working towards that probably will remain so; but there are also pupils, such as those with EAL, who accelerate through curriculum content. And then there is another group of low prior attaining pupils, who do not have SEND and are not EAL, but often do not catch up. These may well be disadvantaged pupils for whom the pupil premium is intended to help close the gap. Our expectations for all these pupils may be different. They do not fit on a nice neat line.

Expected progress is many things. It is catching up, closing gaps, overcoming barriers and deepening understanding. It is anything but simple and linear. What we’re really trying to convey is whether or not pupils are making good progress from their particular start points, taking their specific needs into account.

That may not roll off the tongue quite as easily, but surely it’s more meaningful than ‘expected progress’.

Isn’t it?

Further reading: Why measuring pupil progress involves more than taking a straight line. Education Datalab, March 2015. https://ffteducationdatalab.org.uk/2015/03/why-measuring-pupil-progress-involves-more-than-taking-a-straight-line/

*credit to Daisy Christodoulou, whose book title I've blatantly copied.

Thursday, 18 October 2018

Trust

Here's a thing. In conversations with senior leaders, both online and in the real world, I often get asked about restricting access to data for teaching staff, or even locking down tracking systems entirely. This seems to take two broad forms:

1) Limiting a teacher's access so that they can only see data relating to those pupils for whom they are responsible.

2) Locking down the system after the 'data drop' or 'assessment window'.

Let's have a think about this for a minute. Why are some senior leaders wanting to do this? What are their concerns? Essentially it boils down to mistrust of teachers and fear that data will be manipulated. But what sort of culture exists in a school where such levels of mistrust have taken root? How did they get to this point? It's possible that such concerns are well founded, that manipulation of data has occurred; and I have certainly heard some horror stories, one of which came to light during inspection. That didn't end well, believe me. But often it's just suspicion: suspicion that teachers will change the data of another class to make their own class look better, or will alter the previous year's end-of-year assessments for their current class to make the baseline lower, or will tweak data to ensure it fits the desired school narrative or, most commonly, matches their target.

Suspicion and mistrust. How desperately sad is that?

Golden Rule #1: separate teacher assessment from performance management. But how common is it for teachers to be set targets that are then reviewed in the light of assessment data the teacher is responsible for generating? I regularly hear of teachers being told that 'all pupils must make 3.5 points' progress per year' or that '85% must be at age-related expectations by the end of the year', with the final judgement based on the data those teachers enter onto the system - on how many learning objectives they've ticked. It is a fallacy to think you can achieve high-quality, accurate data under such a regime.

Teacher assessment should be focused on supporting children's learning, not on monitoring teacher performance. You cannot hope to have insightful data if teachers have one eye over their shoulder when assessing pupils, and are tempted to change data in order to make things look better than they really are. Perverse incentives are counterproductive and a risk to the integrity of the whole system: they will cause data to be skewed to such an extent that it ceases to have any meaning or value. Senior leaders need a warts-and-all picture of learning, not some rose-tinted, target-biased view that gets exposed when the SATs results turn up. Teachers need to be able to assess without fear, and that evidently requires a big culture shift in many schools.

The desire to lock down systems and restrict teacher access is indicative of how assessment data is viewed in many schools: as an instrument of accountability, rather than a tool for teaching and learning. If teachers are manipulating data, or are suspected of doing so, then senior leaders should take a long hard look at the regime and culture in their school rather than resorting to such drastic measures.

It is symptomatic of a much wider problem.

Friday, 21 September 2018

The Progress Delusion

I recently spoke to a headteacher of a primary school judged by Ofsted to be 'requiring improvement'. The school has been on an assessment journey in the last couple of years, ditching their old tracking system with its 'emerging-developing-secure' steps and expected progress of three points per year (i.e. levels), in favour of a simpler system and 'point in time assessment', which reflects pupils' security within the year's curriculum based on what has been taught so far. With their new approach, pupils may be assessed as 'secure' all year if they are keeping pace with the curriculum, and this is seen as making good progress. No levels, no points; just a straightforward assessment presented in progress matrices, which show those pupils that are where you expect them to be from particular start points, and those that aren't.

And then the inspection happened and the screw began to turn. Despite all the reassuring statements from the upper echelons of Ofsted, the decision to ditch the old system is evidently not popular with those now 'supporting' the school. Having pupils categorised as secure all year does not 'prove' progress, apparently; points prove progress. In order to 'prove' progress, the head has been told they need more categories so they can show more movement over shorter timescales. Rather than have a broad 'secure' band, which essentially identifies those pupils that are on track - and in which most pupils will sit all year - the school has been told to subdivide each band into three in order to demonstrate progress. This means having something along the lines of:

BLW- BLW= BLW+
WTS- WTS= WTS+
SEC- SEC= SEC+
GDS- GDS= GDS+

The utter wrongness of this is staggering for so many reasons:

1) Having more categories does not prove anything other than that someone invented more categories. The amount of progress pupils make is not proportional to the number of categories a school has in its tracking system. That's just stupid: it's like halving the length of an hour in order to get twice as much done (see the sketch after this list).

2) It is made-up nonsense. It is unlikely there will be a strict definition of these categories, so teachers will be guessing where to place pupils. Unless of course they link it to the number of objectives achieved, and that way lies an even deeper, darker hell.

3) Teacher assessment will be compromised. The main purpose of teacher assessment is to support pupils' learning and yet here we risk teachers making judgements with one eye over their shoulder. The temptation to start pupils low and move them through as many sub-bands as possible is huge. The data will then have no relation to reality.

4) It increases workload for no reason other than to satisfy the demands of external agencies. The sole reason for doing this is to keep the wolf from the door; it will in no way improve anything for any pupil in that school, and the teachers know it. Those teachers now have to track more, and more often, and make frequent decisions as to which category they are going to place each pupil into. How? Why? It's the assessment equivalent of pin the tail on the donkey.

5) It is contrary to recent Ofsted guidance. Amanda Spielman, in a recent speech, stated: "We do not expect to see 6 week tracking of pupil progress and vast elaborate spreadsheets. What I want school leaders to discuss with our inspectors is what they expect pupils to know by certain points in their life, and how they know they know it. And crucially, what the school does when it finds out they don’t! These conversations are much more constructive than inventing byzantine number systems which, let’s be honest, can often be meaningless." Evidently there are many out there that are unaware of this guidance, or are wilfully ignoring it.
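To illustrate point 1 with some invented numbers (the thresholds below are made up purely for demonstration), here is a sketch of how subdividing bands manufactures 'movement' out of exactly the same learning:

```python
# One pupil, one year's learning, two grains of scale. The extra 'movement'
# the sub-bands reveal is an artefact of the scale, not extra progress.
# All thresholds are invented for illustration.

def band(score, thresholds):
    """Return the index of the band a score falls into."""
    return sum(score >= t for t in thresholds)

broad = [25, 50, 75]                                 # 4 bands: BLW/WTS/SEC/GDS
fine  = [8, 17, 25, 33, 42, 50, 58, 67, 75, 83, 92]  # 12 sub-bands

autumn, summer = 55, 70  # the pupil's underlying attainment, autumn to summer
for name, scale in (("broad", broad), ("fine", fine)):
    print(name, "bands moved:", band(summer, scale) - band(autumn, scale))
# broad bands moved: 0  -> 'no progress', apparently
# fine bands moved: 2   -> same pupil, same learning, more 'movement'
```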

The primary purpose of tracking is to support pupils' learning, and any data provided to external agencies should be a by-product of that classroom-focussed approach. If your system works, it's right, and no one should be trying to cut it up into tiny pieces because they're still in denial over the death of levels. Everyone needs to understand that the 'measure more, more often' mantra is resulting in a toxic culture in schools. It is increasing workload, destroying morale and even affecting the curriculum that pupils experience. It is a massive irony lost on the people responsible that many of their so-called school improvement practices are having precisely the opposite effect; I've spoken to several teachers in the past year or so who have changed jobs or quit entirely because of the burden of accountability-driven assessment. Schools should not be wasting their time inventing data to keep people happy, nor wasting time training teachers in the complexities of 'byzantine number systems'; they should be using that time for CPD, for advancing teachers' curriculum knowledge, and for improving and embedding effective assessment strategies. That way improvement lies.

In short, we have to find a way to challenge undue demands for meaningless numbers, and resist those that seek to drive a wrecking ball through principled approaches to assessment.

It is reaching crisis point in too many schools.



Tuesday, 4 September 2018

2018 KS2 VA Calculator free to download

I've updated the VA calculator to take account of changes to methodology this year. This includes new standard deviations and estimated outcomes, and the capping of extreme negative progress scores. I have referred to this as adjusted and unadjusted progress, and the tool shows both for individual pupils and for the whole cohort. Note that extreme positive progress scores are not affected.
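For anyone wanting to sanity-check their own figures, here is a minimal sketch of the adjusted/unadjusted distinction. The cap value below is a placeholder, not the DfE's actual figure - in the real methodology the minimum (i.e. most negative) score varies by prior attainment group - but the mechanics are the same: only the negative extreme is floored.

```python
# Sketch of 'unadjusted' vs 'adjusted' KS2 progress, assuming a hypothetical
# cap of -12.0; the real 2018 methodology sets a different minimum score for
# each prior attainment group. Positive scores are never capped.

def progress_scores(actual, estimate, cap=-12.0):
    """Return (unadjusted, adjusted) progress for one pupil.

    unadjusted = actual scaled score minus the estimated score for the
    pupil's prior attainment group; adjusted floors extreme negative
    values at the cap.
    """
    unadjusted = actual - estimate
    adjusted = max(unadjusted, cap)  # only the negative end is affected
    return unadjusted, adjusted

print(progress_scores(actual=85.0, estimate=104.5))   # (-19.5, -12.0): capped
print(progress_scores(actual=110.0, estimate=104.5))  # (5.5, 5.5): unchanged
```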

You can use the tool to get up-to-date, accurate progress scores by removing pupils that will be discounted, and by adding on points for special consideration (these should already be accounted for in your tables checking data) and successful review outcomes, due back via NCA Tools on 12th September.

You can also use it to get an idea of estimated outcomes for the current Year 6, but please be aware of the usual warnings, namely that estimates change every year.

The tool can be downloaded here.

It will open in Excel Online. Please download it to your PC before using, by clicking on the three dots at the top right. Do not attempt to complete it online as it is locked for editing. Please let me know ASAP if you have any issues or find any discrepancies.

Enjoy!


Monday, 3 September 2018

New year, new direction

I've got a new job!

After much deliberation I have accepted a position with Equin Ltd, the lovely people behind Insight Tracking. I've got to know Sarah and Andrew (directors) very well over the last few years and it's no secret that I am a big fan of their system, which I've recommended to many schools (not on a commission basis, I hasten to add; I just like their system because it’s neat and intuitive and it ticks all the boxes I outlined here).

The job is a great opportunity to be part of a growing company and it seems like a good fit considering the direction I want to go in. Sig+ will continue much the same as it is now: I'll still be tweeting, blogging, speaking, ranting, visiting schools and running training courses. But I also want to make better use of technology - videos, podcasts, online training courses - to provide more efficient, cost-effective (and often free!) support for schools. Equin have the platform and expertise to make this happen.

I'm also keen to help develop the Insight system, which is already highly customisable, very easy to use, and fits well with my philosophy on tracking. I'm particularly excited about plans for 'Insight Essentials' - a stripped-down version of Insight for schools that want an even more simplified approach. Sometimes less is more.

And then there's Progress Bank, a system that will allow schools to upload, store and analyse standardised test scores from any provider, and will provide meaningful and familiar VA-style progress measures from any point to any point, in advance of statutory data. I've been talking about it for a year now; it's time to make that happen.

So there you have it: all change but no change. I'll still be here doing my thing but I'll be doing other stuff as well, working with people who can make those things happen.

It's exciting.




Monday, 9 July 2018

What I think a primary tracking system should do

I talk and write a lot about the issues with primary tracking systems: that many have reinvented levels, and are often inflexible and overly complicated. Quite rightly, this means I get challenged to define what a good system looks like. It's a tricky question to answer, but I think I'm getting there now.

I've already written a post on five golden rules of tracking, which summarised my talk at the inaugural Learning First conference. I still stand by all of these, but have since added a sixth rule: don't compromise your approach to fit the rules of a system. Essentially, whatever software you use, it needs to be flexible so you can adapt it to accommodate your approach to assessment as it develops. I hear too many teachers say "the system doesn't really work for us", yet they persevere: finding workarounds, ignoring vast swathes of data, focussing on the colours, navigating numerous reports to find something useful, and fuzzy-matching ill-fitting criteria that are out of alignment with their own curriculum. It's not necessarily the tracking systems I have a problem with, it's the approach within the system. If your system can't be adapted to be more meaningful and better suited to the needs of your school, don't struggle on with it; change the system.

Thankfully most systems now offer some level of customisation.

So this is what I think a primary tracking system needs to offer:

1) A flexible approach to tracking objectives
Some schools want to track against objectives, some schools don't. Some schools want a few KPIs, some schools want more. Some schools want something bespoke, some schools are happy with national curriculum objectives. Whatever your approach, ensure your system accommodates it and can be modified as and when you change your mind.

Personally, I think too many schools are tracking against far too many objectives and this needs paring back drastically. It is counter-productive to have teachers spending their weekends and evenings ticking boxes. Chances are it's not informing anything and is highly likely to be having a negative impact if it's sucking up teachers' time and eroding morale. It's important that you have a system that quickly allows you to reduce the objectives you track against. Or delete them entirely.

Whilst we're on the subject, think very carefully before extending this process into foundation subjects. Ask yourself: why do you need this data? Will it improve outcomes? Will it tell you anything you didn't already know? What impact will it have on workload?

Be honest!

2) Bespoke summative assessment descriptors and point-in-time assessment
Systems should be designed from the classroom upwards as tools for teaching and learning, not from the head's office downwards as tools for accountability. With this in mind, ensure your assessment descriptors reflect the language of the classroom. On track, secure, expected, achieved, at age-related - whatever you use on a day-to-day basis to summarise learning should be reflected in the system. This again means we need systems that are flexible.

And don't reinvent levels. I'm referring to those steps that pupils apparently progress through, where they're all emerging because it's autumn, developing by spring, and magically secure after Easter. This was never useful, never linked to reality, and was all about having a neat, linear point scale in order to measure progress. I believe that to get tracking right we need to put our obsession with progress measures to one side. It drives everything in the wrong direction.

If we don't reinvent levels, what should we do? More and more schools are adopting a simple 'point in time' assessment i.e. if a pupil is keeping pace with the demands of the curriculum, and gets what has been taught so far, then they are 'secure' or 'on-track' and are therefore making good progress. We don't need any point scores or arbitrary thresholds, we just need that simple overall descriptor. Yes, it means they are likely to be in the same 'band' all year, which means we can kiss goodbye to our flightpath and associated points, but honestly that's fine.

And finally, the overall assessment should be based purely on a teacher's judgement, not on some dubious algorithm linked to how many objectives have been ticked. For too long we have relied on systems for answers - an assessment by numbers approach - and it's time teachers were given back this responsibility and regained their confidence.

3) Assessment out of year group and tracking interventions
Tricky to do in many systems, and perhaps somewhat controversial, but I think it's important that teachers can easily track pupils against previous (or even next!) year's objectives (if the school is tracking against objectives, of course). I also think systems should allow users to create their own lists of objectives for specific, supported groups of children, rather than limiting tracking to national curriculum statements. In fact, this may be the only objective-level tracking a school chooses to do: just for those pupils that are working below their curriculum year. One thing's for sure: I don't see how it's useful to describe, say, a year 4 pupil that is working well below as Year 4 Emerging for the entire year. Greater system flexibility will allow that pupil to have a more appropriate assessment, and one school I visited recently used the term 'personal curriculum' instead of 'well below' or 'emerging'. I rather like that.

4) Handling test scores and other data
Many schools use tests, and systems need to be able to store and analyse that data, whether it be standardised scores, raw marks, percentages, or reading ages. It should be straightforward to enter this data onto the system and, if the school so chooses, easily integrate it into reports. It seems crazy to spend a lot of money on a system only to have to store test scores or other assessment data in a spreadsheet, where it can't be analysed alongside the teacher assessment data.

5) A few simple reports
I think there are only three reports that primary schools need:
  1. A quick overview of attainment showing percentages/numbers of pupils that are below, at, or above where you expected them to be in reading, writing and maths at a given point in time, based either on teacher assessment or on a test if desired. Users should be able to drill down to identify individual pupils in each category, and this will be enough to answer many of the questions likely to be asked by external agencies.
  2. A progress matrix (see the sketch after this list). I'm a fan of these because they are simple, easily understood by all, and summarise progress visually without quantifying it, so they get away from the need for points and levels. Essentially it's a grid with rows and columns, with the vertical axis usually used for a previous assessment and the horizontal axis used for the current assessment. We can then talk about those five pupils that were 'secure' but are now 'working towards', or those six pupils that were struggling last term but are now 'above expectations'. Rather than talking about abstract concepts of points and measures, we are talking about pupils, which is all teachers want to do anyway. And don't forget that matrices can also be used to compare other types of data, e.g. standardised test scores compared to teacher assessment at one assessment point; EYFS or KS1 prior attainment compared to latest teacher assessment; or results in one subject against another.
  3. A summary table that pulls all key data together in one place - prior attainment, teacher assessment, or test scores - and groups it by year group and/or pupil characteristic groups (if statistically meaningful!). Whatever the school deems necessary for the intended purpose, whether that be a governor meeting, SIA visit, or Ofsted inspection, the system should quickly provide it in an accessible, bespoke format. Many if not most schools produce such tables of data; unfortunately all too often this is an onerous manual exercise, which involves running numerous reports, noting down figures and transferring them to a template in Word or Excel. And the next term, they do it all again. A huge waste of time and something that needs to stop.
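As promised above, here is a minimal sketch of how a progress matrix can be built from a flat export of assessments. The band labels and pupils are invented for illustration; any export with a 'previous' and 'current' assessment column would do:

```python
# A progress matrix from a flat list of assessments, using pandas.
# Band names and pupil data are invented for illustration.
import pandas as pd

pupils = pd.DataFrame({
    "pupil":    ["Ava", "Ben", "Cara", "Dan", "Ed", "Fay"],
    "previous": ["WTS", "EXS", "EXS", "GDS", "WTS", "EXS"],
    "current":  ["EXS", "EXS", "WTS", "GDS", "WTS", "GDS"],
})

bands = ["WTS", "EXS", "GDS"]  # ordered lowest to highest

# Rows show the previous assessment; columns show the current one.
matrix = (pd.crosstab(pupils["previous"], pupils["current"])
            .reindex(index=bands, columns=bands, fill_value=0))
print(matrix)

# The drill-down: which pupils were 'secure' but are now 'working towards'?
slipped = pupils.query("previous == 'EXS' and current == 'WTS'")["pupil"]
print(list(slipped))  # -> ['Cara']
```

The drill-down is the important part: the matrix itself is only a starting point for a conversation about named pupils.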
These are only suggestions and many schools will have already gone beyond this. For example, I know plenty of schools that do not require teachers to record assessments against objectives; they simply make an overall assessment three times per year. Then there is the pupil group-level data that many schools spend a great deal of time producing. The usefulness of such data is certainly questionable (I think we've always known this) and it was encouraging to hear Amanda Spielman address this issue recently. Ultimately, the less the powers that be insist on abstract and low quality data, the less data schools will need to produce, the less complicated systems need to be, and the more we can focus on teaching and learning.

I think we are moving in the right direction.

Now we just need our systems to catch up. 





VA Calculator: Excel version free to download

With KS2 results about to be released on NCA Tools (10th July, 07.30), I thought I'd publish a link to the latest Excel version of my VA calculator. This version, with pupil group tables for progress and average scores, can be downloaded here.

To use the VA calculator, you will need to download it first (it will be read-only in your browser). To download, click on the three dots at the top right of the browser tab window and select 'download'. This will open the tool in full Excel mode on your laptop. I recommend reading the notes page first; it's also worth reading the primary accountability guidance for more information about nominal scores, P-scales and pre-key stage pupils. Progress measures were quite a bit more complicated in 2017, with more prior attainment groups (24 instead of 21), the closing of the loophole (pupils not scoring on the test receive a nominal score of 79), and nominal scores assigned to individual P-scales (rather than a blanket score of 70 assigned to BLW).
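To make that last change concrete, here is a minimal sketch of the nominal-score logic just described. The 79 and the replacement of the blanket 70 come from the methodology as summarised above; the individual P-scale values in the mapping are placeholders only, not the DfE's actual figures:

```python
# Nominal scores feeding the 2017 KS2 progress measure, sketched.
# The per-P-scale values below are placeholders, not official figures.

P_SCALE_NOMINAL = {"P4": 61, "P5": 63, "P6": 65, "P7": 67, "P8": 69}

def score_for_progress(scaled_score=None, p_scale=None):
    """Return the score a pupil carries into the progress calculation."""
    if scaled_score is not None:
        return scaled_score               # pupil achieved a scaled score
    if p_scale is not None:
        return P_SCALE_NOMINAL[p_scale]   # BLW: nominal score per P-scale
    return 79                             # took the test but did not score

print(score_for_progress(scaled_score=101))  # -> 101
print(score_for_progress(p_scale="P6"))      # -> placeholder nominal score
print(score_for_progress())                  # -> 79 (the closed loophole)
```

The pupil's progress score is then that score minus the national estimate for their prior attainment group, which is why the figures stop being comparable once the groups and nominal scores change.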

Which, in my opinion, means this year's progress data is not comparable with last year's, but hey ho.....

Hopefully we won't see too much change to methodology in autumn 2018 but we can't assume anything. 

And please, please note that this tool is for guidance only. The official progress data will benchmark pupils' scores against the national average score for pupils with the same prior attainment in the same year. Essentially, the estimated scores shown in the VA calculator WILL change.

And if you want the online tool, which is very neat, easy to use, and GDPR-compliant (i.e. it does not collect the data; the data is only stored in your browser's cache and can't be accessed by Insight or anyone else), then you can find it here:

https://www.insighttracking.com/va-calculator/

Enjoy!

And let me know ASAP if you find any errors.