Tuesday, 6 November 2018

Converting 2018 KS2 scaled scores to standardised scores

Many schools are using standardised tests from the likes of NFER, GL and Rising Stars to monitor attainment and progress of pupils, and to predict outcomes; and yet there is a lot of confusion about how standardised scores relate to scaled scores. The common assumption is that 100 on a standardised test (eg from NFER) is the same as 100 in a KS2 test, but it's not. Only 50% achieve 100 or more in a standardised test (100 represents the average, or the 50th percentile); yet 75% achieved 100+ in the KS2 reading test in 2018 (the average score in the 2018 KS2 reading test was 105). If we want a standardised score that better represents expected standards then we need one that captures the top 75%, i.e. around 90. However, to be on the safe side, I recommend going for 94 (top 66%), or maybe even 95 (top 63%) if you want to be really robust. Whatever you do, please bear in mind that standardised test scores are not a prophecy of future results; they are simply an indicator. Michael Tidd (@MichaelT1979) has written an excellent blog post on this subject, which I recommend you read if you are using standardised scores for tracking.

The purpose of this blog is to share a conversion table that will give you a rough idea of how scaled scores convert to standardised scores. It is based on the distribution of 2018 KS2 scores in reading and maths, taken from the national tables. Download the national, local and regional tables (3rd link down) and click on table N2b. The cumulative percentages in table N2b are converted to standardised scores via this lookup table.
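As a rough illustration of the arithmetic behind such a lookup, a cumulative percentage can be converted to a standardised score via the inverse normal distribution (standardised scores conventionally have a mean of 100 and a standard deviation of 15). This is a sketch of the principle only, not the actual lookup table used for the download:

```python
from statistics import NormalDist

def standardised_score(pct_at_or_above):
    """Convert the proportion of pupils achieving a given scaled score
    or higher into an approximate standardised score (mean 100, SD 15)."""
    z = NormalDist().inv_cdf(1 - pct_at_or_above)  # percentile rank as a z-score
    return round(100 + 15 * z)

# 75% achieved 100+ in 2018 KS2 reading, so a scaled score of 100
# sits at roughly 90 on a standardised scale
print(standardised_score(0.75))  # 90
print(standardised_score(0.66))  # 94
print(standardised_score(0.63))  # 95
```

This reproduces the figures quoted above: the top 75% corresponds to roughly 90, the top 66% to 94, and the top 63% to 95.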

The scaled score to standardised score conversion table can be downloaded here.

Please note: this is not definitive; it is a guide. It will also change next year, when 2019 national data is released, but hopefully it will demonstrate that one score does not directly convert into another.


Wednesday, 24 October 2018

2018 ASP summary template for primary schools

2018 version of ASP summary template free to download here.

A few tweaks to last year's template: it now takes account of the time series and 3-year averages.

Feel free to modify, copy and share. Just credit the source, and please download it before attempting to complete it (it will open in Word Online; to download, click on the 3 dots at the top right of the browser window).

If you are confused by the 'Impact scores' concept (and who can blame you? I made up that term, by the way), the idea is to find the minimum score required to improve an overall progress score from below average (orange or red) to average (yellow); or from average (yellow) to above average (green). The former is most critical, and often it is a case of just removing one pupil from the data.

Schools that are below average (orange) will have a negative progress score (e.g. -1.9) and a confidence interval that is entirely negative (e.g. -3.6 to -0.2). If the confidence interval does not include the national average of zero - i.e. it does not cross the zero line - then it is deemed to be significantly below average (as in the example given above).

It would be neat to find out if removing one pupil would improve our data from below average (orange) to average (yellow). Let's return to our example above. We take the upper limit of the confidence interval (the right-hand number, i.e. -0.2). This is how far the progress score is away from average; how far the confidence interval is away from the zero line (safety!). Essentially, if every pupil's progress score increased by 0.2, the overall score would be in line with average, but that doesn't really help.

A better approach is to take that figure of -0.2 and multiply by the number of pupils included in progress measures (clearly stated in ASP). Let's say that's 30 pupils:

-0.2 x 30 pupils = -6.

This means that by removing just one pupil with an individual progress score below -6, the 'below average' (orange) indicator will change to 'average' (yellow).

Note: if your progress scores are average (yellow) and you want to determine what it would take to make them above average (green), use the lower limit of the confidence interval (the left-hand figure) instead. The same applies: multiply that limit by the number of pupils, and removing a pupil with a progress score lower than the result should shift the overall score from average to above average.
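Sketched as code, the rule of thumb works the same in both directions. This is just the heuristic described in this post, not official DfE methodology:

```python
def impact_threshold(ci_limit, n_pupils):
    """Multiply the relevant confidence interval limit (the upper limit
    when moving from below average to average; the lower limit when
    moving from average to above average) by the number of pupils
    included in the progress measure."""
    return ci_limit * n_pupils

# The example above: upper CI limit of -0.2 with 30 pupils in the measure.
# Removing one pupil with an individual progress score below this
# threshold should shift the overall judgement up by one band.
print(impact_threshold(-0.2, 30))  # roughly -6
```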

Hope that makes some kind of sense. If it doesn't, tweet me and I'll do my best to explain it again.

Saturday, 20 October 2018

Making expected progress?

“’Expected progress’ was a DfE accountability measure until 2015. Inspectors must not use this term when referring to progress for 2016 or current pupils.” (Ofsted Inspection Update, March 2017)

It’s an odd phrase, expected progress, as it seems to have two meanings. First, there is the expectation of the teacher, which is based on an in-depth knowledge of the pupil; not only their start point but everything else: their specific needs, attitude to learning, the support and engagement of parents, and whether or not the pupil has breakfast. And then there is the definition we build into our systems, which is essentially a straight line drawn from pupils’ prior attainment at EYFS, KS1 or KS2. A one-size-fits-all approach, all for the sake of convenience - a simplistic algorithm and a neat metric. Needless to say, the two usually do not match, but all too often we wilfully ignore the former in favour of the latter, and plough on with blind faith in the straight line.

The problem is the assumption of ‘linearity’ - that all pupils learn in the same way, at the same rate, and follow the magic gradient. We know it’s not true, but we go along with it because we have to make pupils fit the system, even if it means shooting ourselves in the foot in the process.

The other problem with ‘expected progress’ - other than it not existing - is that it sounds, well, mediocre. Language is important, and if we choose to adopt the phrase ‘expected progress’ then we also need a definition for ‘above expected progress’ as well. And this is where things start to get messy. It wasn’t that long ago that I saw an Ofsted report state that ‘according to the school’s own tracking data, not enough pupils are making more than expected progress’. The school was hamstrung by the points system they used, which only really allowed those that were behind at the start of the year, and apparently caught up, to make more than expected progress. Everyone else had to settle for expected.

But putting aside the still-popular levels and points-style methods, we have a problem in those schools that are taking a ‘point in time’/age-related approach.


Quite simple really, and perfectly illustrated by a recent conversation in which I asked the headteacher, who was talking about percentages of pupils making expected progress, to define it. They gave me a puzzled look, as if it was a bizarre question:

“Well, that’s staying at expected. If they were expected before and still at expected now, then they’ve made expected progress, surely?”

Sounds logical. 

“And what about those at greater depth?”

“That’s sticking at greater depth of course.”

“So, how do ‘greater depth’ pupils make above expected progress?”

“They can’t.”

Problem number 1: in this system, pupils with high start points cannot be shown to have made 'above expected' progress. I asked another question: “what about those pupils that were working towards? What’s expected progress for them?”

“To stay at working towards,” was the reply.

Is it? Is that really our expectation for those pupils? To remain below? Obviously there are those that were working towards that probably will remain so; but there are also those pupils, such as EAL pupils, that accelerate through curriculum content. And then there is another group of low prior attaining pupils, that do not have SEND and are not EAL, but often do not catch up. These may well be disadvantaged pupils for whom pupil premium is intended to help close the gap. Our expectations for all these pupils may be different. They do not fit on a nice neat line.

Expected progress is many things. It is catching up, closing gaps, overcoming barriers and deepening understanding. It is anything but simple and linear. What we’re really trying to convey is whether or not pupils are making good progress from their particular start points, taking their specific needs into account.

That may not roll off the tongue quite as easily, but surely it’s more meaningful than ‘expected progress’.

Isn’t it?

Further reading: Why measuring pupil progress involves more than taking a straight line. Education Data Lab, March 2015 https://ffteducationdatalab.org.uk/2015/03/why-measuring-pupil-progress-involves-more-than-taking-a-straight-line/

*credit to Daisy Christodoulou, whose book title I've blatantly copied.

Thursday, 18 October 2018


Here's a thing. In conversations with senior leaders both online and in the real world, I often get asked about restricting access to data for teaching staff or even locking down tracking systems entirely. This seems to take two broad themes:

1) Limiting a teacher's access to data that relates only to those pupils for whom they are responsible.

2) Locking down the system after the 'data drop' or 'assessment window'.

Let's have a think about this for a minute. Why are some senior leaders wanting to do this? What are their concerns? Essentially it boils down to mistrust of teachers and fear that data will be manipulated. But what sort of culture exists in a school where such levels of mistrust have taken root? How did they get to this point? It's possible that such concerns are well founded, that manipulation of data has occurred; and I have certainly heard some horror stories, one of which came to light during inspection. That didn't end well, believe me. But often it's just suspicion: suspicion that teachers will change the data of another class to make their class look better, or will alter the end of previous year assessments for their current class to make the baseline lower, or will tweak data to ensure it fits the desired school narrative, or, most commonly, to ensure it matches their target.

Suspicion and mistrust. How desperately sad is that?

Golden Rule #1: separate teacher assessment from performance management. But how common is it for teachers to be set targets that are reviewed in the light of assessment data that the teacher is responsible for generating? I regularly hear of teachers being told that 'all pupils must make 3.5 points' progress per year' or that '85% must be at age-related expectations by the end of the year' and the final judgement is based on the data that teachers enter onto the system; on how many learning objectives they've ticked. It is a fallacy to think you can achieve high quality, accurate data under such a regime.

Teacher assessment should be focused on supporting children's learning, not on monitoring teacher performance. You cannot hope to have insightful data if teachers have one eye over their shoulder when assessing pupils, and are tempted to change data in order to make things look better than they really are. Perverse incentives are counterproductive and a risk to system integrity. They will cause data to be skewed to such an extent that it ceases to have any meaning or value, thus rendering it useless. Senior leaders need a warts and all picture of learning, not some rose-tinted, target-biased view that gets exposed when the SATs results turn up. Teachers need to be able to assess without fear, and that evidently requires a big culture shift in many schools.

The desire to lock down systems and restrict teacher access is indicative of how assessment data is viewed in many schools: as an instrument of accountability, rather than a tool for teaching and learning. If teachers are manipulating data, or are suspected of doing so, then senior leaders should take a long hard look at the regime and culture in their school rather than resorting to such drastic measures.

It is symptomatic of a much wider problem.

Friday, 21 September 2018

The Progress Delusion

I recently spoke to the headteacher of a primary school judged by Ofsted to be 'requiring improvement'. The school has been on an assessment journey in the last couple of years, ditching their old tracking system with its 'emerging-developing-secure' steps and expected progress of three points per year (i.e. levels), in favour of a simpler system and 'point in time assessment', which reflects pupils' security within the year's curriculum based on what has been taught so far. With their new approach, pupils may be assessed as 'secure' all year if they are keeping pace with the curriculum, and this is seen as making good progress. No levels, no points; just a straightforward assessment presented in progress matrices, which show those pupils that are where you expect them to be from particular start points, and those that aren't.

And then the inspection happened and the screw began to turn. Despite all the reassuring statements from the upper echelons of Ofsted, the decision to ditch the old system is evidently not popular with those now 'supporting' the school. Having pupils categorised as secure all year does not 'prove' progress, apparently; points prove progress. In order to 'prove' progress, the head has been told they need more categories so they can show more movement over shorter timescales. Rather than have a broad 'secure' band, which essentially identifies those pupils that are on track - and in which most pupils will sit all year - the school has been told to subdivide each band into three in order to demonstrate progress. This means having something along the lines of:


The utter wrongness of this is staggering for so many reasons:

1) Having more categories does not prove anything other than someone invented more categories. The amount of progress pupils make is not proportionate to the number of categories a school has in its tracking system. That's just stupid. It's like halving the length of an hour in order to get twice as much done.

2) It is made up nonsense. It is unlikely there will be a strict definition of these categories so teachers will be guessing where to place pupils. Unless of course they link it to the number of objectives achieved and that way lies an even deeper, darker hell.

3) Teacher assessment will be compromised. The main purpose of teacher assessment is to support pupils' learning and yet here we risk teachers making judgements with one eye over their shoulder. The temptation to start pupils low and move them through as many sub-bands as possible is huge. The data will then have no relation to reality.

4) It increases workload for no reason other than to satisfy the demands of external agencies. The sole reason for doing this is to keep the wolf from the door; it will in no way improve anything for any pupil in that school, and the teachers know it. Those teachers now have to track more, and more often, and make frequent decisions as to which category they are going to place each pupil into. How? Why? It's the assessment equivalent of pin the tail on the donkey.

5) It is contrary to recent Ofsted guidance. Amanda Spielman, in a recent speech, stated "We do not expect to see 6 week tracking of pupil progress and vast elaborate spreadsheets. What I want school leaders to discuss with our inspectors is what they expect pupils to know by certain points in their life, and how they know they know it. And crucially, what the school does when it finds out they don’t! These conversations are much more constructive than inventing byzantine number systems which, let’s be honest, can often be meaningless." Evidently there are many out there that are unaware of, or wilfully ignoring this.

The primary purpose of tracking is to support pupils' learning, and any data provided to external agencies should be a by-product of that classroom-focussed approach. If your system works, it's right, and no one should be trying to cut it up into tiny pieces because they're still in denial over the death of levels. Everyone needs to understand that the 'measure more, more often' mantra is resulting in a toxic culture in schools. It is increasing workload, destroying morale and even affecting the curriculum that pupils experience. It is a massive irony lost on the people responsible that many of their so-called school improvement practices are having precisely the opposite effect; and I've spoken to several teachers in the past year or so who have changed jobs or quit entirely because of the burden of accountability-driven assessment. Schools should not be wasting their time inventing data to keep people happy; they should not be wasting time training teachers in the complexities of 'byzantine number systems'; they should be using that time for CPD, for advancing teachers' curriculum knowledge and improving and embedding effective assessment strategies. That way improvement lies.

In short, we have to find a way to challenge undue demands for meaningless numbers, and resist those that seek to drive a wrecking ball through principled approaches to assessment.

It is reaching crisis point in too many schools.

Tuesday, 4 September 2018

2018 KS2 VA Calculator free to download

I've updated the VA calculator to take account of changes to methodology this year. This includes new standard deviations and estimated outcomes, and the capping of extreme negative progress scores. I have referred to this as adjusted and unadjusted progress; and the tool shows both for pupils and for the whole cohort. Note that extreme positive progress scores are not affected.

You can use the tool to get up-to-date, accurate progress scores by removing pupils that will be discounted, and adding on points for special consideration (this should already be accounted for in tables checking data) and successful review outcomes due back via NCA Tools on 12th Sept.

You can also use it to get an idea of estimated outcomes for the current Year 6, but please be aware of the usual warnings, namely that estimates change every year.

The tool can be downloaded here.

It will open in Excel Online. Please download it to your PC before using by clicking on the 3 dots top right. Do not attempt to complete online as it is locked for editing. Please let me know ASAP if you have any issues or find any discrepancies.


Monday, 3 September 2018

New year, new direction

I've got a new job!

After much deliberation I have accepted a position with Equin Ltd, the lovely people behind Insight Tracking. I've got to know Sarah and Andrew (directors) very well over the last few years and it's no secret that I am a big fan of their system, which I've recommended to many schools (not on a commission basis, I hasten to add; I just like their system because it’s neat and intuitive and it ticks all the boxes I outlined here).

The job is a great opportunity to be part of a growing company and it seems like a good fit considering the direction I want to go in. Sig+ will continue much the same as it is now: I'll still be tweeting, blogging, speaking, ranting, visiting schools and running training courses. But I also want to make better use of technology - videos, podcasts, online training courses - to provide more efficient, cost-effective (and often free!) support for schools. Equin have the platform and expertise to make this happen.

I'm also keen to help develop the Insight system, which is already highly customisable, very easy to use, and fits well with my philosophy on tracking. I'm particularly excited about plans for 'Insight Essentials' - a stripped-down version of Insight for schools that want an even more simplified approach. Sometimes less is more.

And then there's Progress Bank, a system that will allow schools to upload, store and analyse standardised test scores from any provider, and will provide meaningful and familiar VA-style progress measures from any point to any point, in advance of statutory data. I've been talking about it for a year now; it's time to make that happen.

So there you have it: all change but no change. I'll still be here doing my thing but I'll be doing other stuff as well, working with people who can make those things happen.

It's exciting.

Monday, 9 July 2018

What I think a primary tracking system should do

I talk and write a lot about the issues with primary tracking systems: that many have reinvented levels, and are often inflexible and overly complicated. Quite rightly this means I get challenged to define what a good system looks like, and it's a tricky question to answer, but I think I'm getting there now.

I've already written a post on five golden rules of tracking, which summarised my talk at the inaugural Learning First conference. I still stand by all of these, but have since added a sixth rule: don't compromise your approach to fit the rules of a system. Essentially, whatever software you use, it needs to be flexible so you can adapt it to accommodate your approach to assessment as it develops. I hear too many teachers say "the system doesn't really work for us" but they persevere, finding workarounds, ignoring vast swathes of data, focussing on the colours, navigating numerous reports to find something useful, and fuzzy-matching ill-fitting criteria that are out of alignment with their own curriculum. It's not necessarily the tracking systems I have a problem with, it's the approach within the system. If your system can't be adapted to be more meaningful and better suited to the needs of your school, don't struggle on with it; change the system.

Thankfully most systems now offer some level of customisation.

So this is what I think a primary tracking system needs to offer:

1) A flexible approach to tracking objectives
Some schools want to track against objectives, some schools don't. Some schools want a few KPIs, some schools want more. Some schools want something bespoke, some schools are happy with national curriculum objectives. Whatever your approach, ensure your system accommodates it and can be modified as and when you change your mind.

Personally, I think too many schools are tracking against far too many objectives and this needs paring back drastically. It is counter-productive to have teachers spending their weekends and evenings ticking boxes. Chances are it's not informing anything and is highly likely to be having a negative impact if it's sucking up teachers' time and eroding morale. It's important that you have a system that quickly allows you to reduce the objectives you track against. Or delete them entirely.

Whilst we're on the subject, think very carefully before extending this process into foundation subjects. Ask yourself: why do you need this data? Will it improve outcomes? Will it tell you anything you didn't already know? What impact will it have on workload?

Be honest!

2) Bespoke summative assessment descriptors and point-in-time assessment
Systems should be designed from the classroom upwards as tools for teaching and learning, not from the head's office downwards as tools for accountability. With this in mind, ensure those assessment descriptors reflect the language of the classroom. On-track, secure, expected, achieved, at age-related - whatever you use on a day-to-day basis to summarise learning should be reflected in the system. This again means we need systems that are flexible.

And don't reinvent levels. I'm referring to those steps that pupils apparently progress through, where they're all emerging because it's autumn, are apparently developing in the spring, and magically become secure after Easter. This was never useful, never linked to reality, and was all about having a neat, linear point scale in order to measure progress. I believe that to get tracking right we need to put our obsession with progress measures to one side. It drives everything in the wrong direction.

If we don't reinvent levels, what should we do? More and more schools are adopting a simple 'point in time' assessment i.e. if a pupil is keeping pace with the demands of the curriculum, and gets what has been taught so far, then they are 'secure' or 'on-track' and are therefore making good progress. We don't need any point scores or arbitrary thresholds, we just need that simple overall descriptor. Yes, it means they are likely to be in the same 'band' all year, which means we can kiss goodbye to our flightpath and associated points, but honestly that's fine.

And finally, the overall assessment should be based purely on a teacher's judgement, not on some dubious algorithm linked to how many objectives have been ticked. For too long we have relied on systems for answers - an assessment by numbers approach - and it's time teachers were given back this responsibility and regained their confidence.

3) Assessment out of year group and tracking interventions
Tricky to do in many systems, and perhaps somewhat controversial, but I think it's important that teachers can easily track pupils against previous (or even next!) year's objectives (if the school is tracking against objectives, of course). I also think systems should allow users to create their own lists of objectives for specific, supported groups of children, rather than limiting tracking to national curriculum statements. In fact, this may be the only objective-level tracking a school chooses to do: just for those pupils that are working below their curriculum year. One thing's for sure: I don't see how it's useful to describe, say, a year 4 pupil that is working well below as Year 4 Emerging for the entire year. Greater system flexibility will allow that pupil to have a more appropriate assessment, and one school I visited recently used the term 'personal curriculum' instead of 'well below' or 'emerging'. I rather like that.

4) Handling test scores and other data
Many schools use tests, and systems need to be able to store and analyse that data, whether it be standardised scores, raw marks, percentages, or reading ages. This should be straightforward to enter onto the system and, if the school so chooses, easily integrated into reports. It seems crazy to spend a lot of money on a system only to have to store test scores or other assessment data in a spreadsheet, where it can't be analysed alongside the teacher assessment data.

5) A few simple reports
I think there are only three reports that primary schools need:
  1. A quick overview of attainment showing percentages/numbers of pupils that are below, at, or above where you expected them to be in reading, writing and maths at a given point in time, based either on teacher assessment or a test if desired. Users should be able to drill down to identify individual pupils in each category, and this will be enough to answer many of the questions that are likely to be asked by external agencies.
  2. A progress matrix. I'm a fan of these because they are simple, easily understood by all, and summarise progress visually without quantifying it, so they get away from the need for points and levels. Essentially it's a grid with rows and columns, with the vertical axis usually used for a previous assessment and the horizontal axis used for the current assessment. We can then talk about those five pupils that were 'secure' but are now 'working towards'; or those 6 pupils that were struggling last term but are now 'above expectations'. Rather than talking about abstract concepts of points and measures, we are talking about pupils, which is all teachers want to do anyway. And don't forget that matrices can also be used to compare other types of data, eg a standardised test compared to teacher assessment at one assessment point; EYFS or KS1 prior attainment compared to the latest teacher assessment; or results in one subject against another.
  3. A summary table that pulls all key data together in one place - prior attainment, teacher assessment, or test scores - and groups it by year group and/or pupil characteristic groups (if statistically meaningful!). Whatever the school deems necessary for the intended purpose, whether that be a governor meeting, SIA visit, or Ofsted inspection, the system should quickly provide it in an accessible, bespoke format. Many if not most schools produce such tables of data; unfortunately all too often this is an onerous manual exercise, which involves running numerous reports, noting down figures and transferring them to a template in Word or Excel. And the next term, they do it all again. A huge waste of time and something that needs to stop.
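To illustrate point 2: a progress matrix is just a cross-tabulation of a previous assessment against the current one. Here is a minimal sketch in Python (the pupil data and band names are invented for illustration):

```python
from collections import Counter

# (previous assessment, current assessment) for each pupil - invented data
pupils = [
    ("Working towards", "Working towards"),
    ("Secure", "Working towards"),
    ("Secure", "Secure"),
    ("Secure", "Greater depth"),
    ("Greater depth", "Greater depth"),
]

# Each cell of the grid counts the pupils who moved from one band to another
matrix = Counter(pupils)

for (previous, current), count in sorted(matrix.items()):
    print(f"{previous} -> {current}: {count}")
```

The point of the matrix is that each cell is a list of real pupils, not a score: the one pupil in the ("Secure", "Working towards") cell is exactly the conversation a teacher wants to have.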
These are only suggestions and many schools will have already gone beyond this. For example, I know plenty of schools that do not require teachers to record assessments against objectives; they simply make an overall assessment three times per year. Then there is the pupil group-level data that many schools spend a great deal of time producing. The usefulness of such data is certainly questionable (I think we've always known this) and it was encouraging to hear Amanda Spielman address this issue recently. Ultimately, the less the powers that be insist on abstract and low quality data, the less data schools will need to produce, the less complicated systems need to be, and the more we can focus on teaching and learning.

I think we are moving in the right direction.

Now we just need our systems to catch up. 

VA Calculator: Excel version free to download

With KS2 results about to be released on NCA Tools (10th July, 07.30) I thought I'd publish a link to the latest Excel version of my VA calculator. This version, with pupil group tables for progress and average scores, can be downloaded here.

To use the VA calculator, you will need to download it first (it will be read-only in your browser). To download, click on the 3 dots found at the top right of the browser tab window and select 'download'. This will open the tool in full Excel mode on your laptop. I recommend reading the notes page first; it is also worth reading the primary accountability guidance for more information about nominal scores, p-scales and pre-key stage pupils. Progress measures were quite a bit more complicated in 2017, with more prior attainment groups (24 instead of 21), the closing of the loophole (pupils not scoring on the test receive a nominal score of 79), and nominal scores assigned to individual p-scales (rather than a blanket score of 70 assigned to BLW).

Which, in my opinion, means this year's progress data is not comparable with last year's, but hey ho.....

Hopefully we won't see too much change to methodology in autumn 2018 but we can't assume anything. 

and please, please note that this tool is for guidance only. The official progress data will benchmark pupils' scores against the national average score for pupils with the same prior attainment in the same year. Essentially, the estimated scores shown in the VA calculator WILL change. 

and if you want the online tool, which is very neat, easy to use, and GDPR compliant (i.e. it does not collect the data, the data is only stored in the cache memory of your browser and can't be accessed by Insight or anyone else), then you can find it here:



and let me know ASAP if you find any errors.

Tuesday, 12 June 2018

Arm yourself!

A headteacher recently told me she'd been informed by her LA advisor that "only 'good' schools can experiment and do what they like when it comes to assessment". This lie has been trotted out so many times now that it has become embedded in the collective consciousness and headteachers have come to accept it. Perhaps even believe it. But surely what's right for 'good' schools is right for all schools? It's an incredible irony that some schools are essentially being told that until they are 'good', they are going to have to persevere with ineffective practices. It's the ultimate education Catch 22: you can't start to improve things until you've improved.

And yet it is precisely these schools that have the most to gain from overhauling their approaches to assessment; by reducing tracking and marking and written feedback, not increasing it. Unfortunately schools are often being told the opposite: ramp it up, measure more and more often, track everything that moves.

With apparently little choice, headteachers wearily resign themselves to the drudgery, and often, quite understandably, rail against anyone that suggests a different path. I've been told numerous times "it's all very well for you, but you try being in this position". They are under intense scrutiny by people who think the way to improve outcomes is not to improve the curriculum, and teaching and learning, but to collect more data and increase workload. Clearly many of the processes put in place in the name of school improvement are having the opposite effect. 

Schools need to be brave. They need to be willing to make necessary changes but they also need reassurances that changes are justified, supported, and will not backfire when an inspector calls. To that end, I've compiled a list of key statements that schools can use to support (and defend) their position as they seek to build a more meaningful, and less onerous, approach to assessment. And there is plenty out there to arm themselves with.

  • We do not expect to see 6 week tracking of pupil progress and vast elaborate spreadsheets. What I want school leaders to discuss with our inspectors is what they expect pupils to know by certain points in their life, and how they know they know it. And crucially, what the school does when it finds out they don’t! These conversations are much more constructive than inventing byzantine number systems which, let’s be honest, can often be meaningless.
  • Nor do I believe there is merit in trying to look at every individual sub-group of pupils at the school level. It is very important that we monitor the progress of under-performing pupil groups. But often this is best done at a national level, or possibly even a MAT or local authority level, where meaningful trends may be identifiable, rather than at school level where apparent differences are often likely to be statistical noise.
  • Ofsted does not expect performance and pupil-tracking information to be presented in a particular format. Such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.
  • Inspectors will use lesson observations, pupils’ work, discussions with teachers and pupils and school records to judge the effectiveness of assessment and whether it is having an impact on pupils’ learning. They don’t need to see vast amounts of data, spreadsheets, charts or graphs. Nor are they looking for any specific frequency, type or volume of marking or feedback.
  • I want teachers to spend their working hours doing what’s right for children and reduce the amount of time spent on unnecessary tasks. Damian Hinds, Secretary of State for Education 
  • If the impact on pupil progress doesn’t match the hours spent then stop it! Amanda Spielman, HM Chief Inspector, Ofsted 
  • The origins of the audit culture are complex but we do know there’s no proven link between some time consuming tasks around planning, marking and data drops, and improved outcomes for pupils. Professor Becky Allen, UCL
  • No inspector should be asking for these things, and nobody else should be telling you that this is what inspectors will be looking for. Sean Harford, National Director of Education, Ofsted
  • I want you to know you do have the backing to stop doing the things that aren’t helping children to do better. Damian Hinds, Secretary of State for Education 
  • Ofsted does not expect any prediction by schools of a progress score, as they are aware that this information will not be possible to produce due to the way progress measures at both KS2 and KS4 are calculated. Inspectors should understand from all training and recent updates that there is no national expectation of any particular amount of progress from any starting point.
  • ‘Expected progress’ was a DfE accountability measure until 2015. Inspectors must not use this term when referring to progress for 2016 or current pupils. 
  • There is no point in collecting ‘data’ that provides no information about genuine learning
  • Recording summative data more frequently than three times a year is not likely to provide useful information
  • Tracking software, which has been used widely as a tool for measuring progress with levels, cannot, and should not, be adapted to assess understanding of a curriculum that recognises depth and breadth of understanding as of equal value to linear progression
  • It is very important that these systems do not reinvent levels
  • Ensure that the primary purpose of assessment is not distorted by using it for multiple purposes
  • Sometimes progress is simply about consolidation (Ed: how do you measure consolidation? You can't. And if we persist with coverage-based progress measures (i.e. levels) then we are relying on measures that are out of kilter with the principles of this curriculum, potentially risking pupils' learning by prioritising pace at the expense of depth.)
  • Be streamlined: eliminate duplication – ‘collect once, use many times’ 
  • Be ruthless: only collect what is needed to support outcomes for children. The amount of data collected should be proportionate to its usefulness. Always ask why the data is needed. 
  • Be prepared to stop activity: do not assume that collection or analysis must continue just because it always has 
  • Be aware of workload issues: consider not just how long it will take, but whether that time could be better spent on other tasks
  • A purportedly robust and numerical measure of pupil progress that can be tracked and used to draw a wide range of conclusions about pupil and teacher performance, and school policy, when in fact information collected in such a way is flawed. This approach is unclear on purpose, and demands burdensome processes.
  • The recent removal of ‘levels’ should be a positive step in terms of data management; schools should not feel any pressure to create elaborate tracking systems.
  • Focusing on key performance indicators reduces the burden of assessing every lesson objective. This also provides the basis of next steps: are pupils secure and can pupils move on, or do they need additional teaching?
  • I also believe that a focus on curriculum will help to tackle excessive and unsustainable workload. For me, a curricular focus moves inspection more towards being a conversation about what actually happens in the day-to-day life of schools. As opposed to school leaders feeling that they must justify their actions with endless progress and performance metrics. To that end, inspecting the curriculum will help to undo the ‘Pixlification’ of education in recent years, and make irrelevant the dreaded Mocksted consultants. Those who are bold and ambitious for their pupils will be rewarded as a result.
  • Inspectors are reminded that Ofsted has no expectation about how primary schools should be carrying out assessment or recording of pupils’ achievements in any subjects, including foundation subjects. Use of the word ‘tracking’ in inspection reports is problematic as it can suggest that some form of numerical data is required, when there is no such requirement, even in English and mathematics. Schools will not be marked down because they are not ‘tracking’ science and foundation subjects in the same ways they may be doing so in English and mathematics. This clarification will be added to our ‘Clarification for schools’ section of the ‘School inspection handbook’, effective from September 2018.

I will keep adding to this list as and when I find useful statements. Please let me know if you have any to add. Thanks. 

Saturday, 9 June 2018

In search of simplicity

I'm a climber. Or at least I'd like to be. Back in the day I got out on rock loads, climbing routes all over the UK. From sea cliffs to mountain crags; from deep-wooded valleys to wild moorland edges, I would revel in the light and the textures, the exposure and the fear; and the way everything seemed more vivid and alive when you'd pulled through that last hard move and the battle was won. It was ace. Like many British climbers I had a go at Scottish winter and alpine climbing but my heart wasn't in it. As much as I liked the idea, I just wasn't built for that level of suffering: the cold and the dread, and the battery-acid tang that fills your mouth when you realise the seriousness of the position you've put yourself in. It was not for me.

It was on the way back from the Alps that I first visited Fontainebleau, a vast forest south of Paris littered with boulders of every conceivable shape and size rising out of a fine sandy floor, and sheltered by the pines above. I had never seen anything like it; it was perfect. We wandered amongst the rocks, bewildered, achieving precisely nothing. Here was the world Mecca of bouldering, a form of climbing that was barely on my radar. No ropes, no hardware, no heavy loads, no planning, no suffering, no fear (well not much), this was climbing distilled to its purest form: the simple art of movement on rock. It suited my minimalist philosophy. I was transfixed. I was hooked.

After that trip, I knew the direction of travel. I sold most of my climbing gear. Ropes, climbing rack, ice axes, mountaineering boots, crampons - it all went. I was left with some climbing shoes, a chalk bag, and a bouldering mat. It felt good, like when you take stuff to a charity shop, or go to the tip, or freecycle that item of furniture that was getting in the way. Once it's gone, you can focus and breathe; and that's what I did: focus on bouldering.

Since then, motivation has waxed and waned. Injury, opportunity, work, family, and diversions into cycling and running have all taken their toll, but bouldering is always the thing I think about when I have time to breathe. In the last couple of years, as work has demanded more and more of my time, opportunities to go to the climbing wall, let alone get out on actual rock, have been extremely limited. Faced with the possibility of giving up, I decided to install a fingerboard, a simple device that does one job and does it well: trains finger strength. Basically, it's a piece of wood with an assortment of holes of varying widths, depths and angles machined into it. The idea is you build your finger strength by progressing through exercises of increasing difficulty, pulling up and hanging off smaller and smaller holds, two-handed and one-handed. It's very simple, it's very hard, and it's extremely effective. Installing it required finding a suitable substrate in our old house. I drilled pilot holes above every door frame, hitting lath and plaster every time, refilling the holes and driving Katy mad. Eventually I settled on a beam in the hallway that would take some sleeve anchors and (hopefully) take my weight. The board was up!

Jerry Moffatt - one of the greatest climbers of all time - had a great line: "if you don't let go, you can't fall off". Finger strength is the key to hard climbing and a finger board is the key to finger strength. My board means that I can train in my house, and even if I can't get to a climbing wall for two or three weeks, I can still train and not lose strength. Without that small, simple piece of wood bolted to a beam in my hallway I'd probably have quit climbing by now; and that's why it's one of my most treasured possessions.

Is there a point to this post, beyond the obvious climbing-related one? I suppose it's that, all too often, we seek complex solutions to problems. We invest in technology, in expensive hardware and software, believing that cutting-edge must be better. Our heads are turned by shiny things, eschewing the simple in favour of the elaborate. But sometimes those simple things work best: a book, a pen, a piece of paper, a game, some imagination.

And a piece of wood bolted to a beam in the hallway.

Thursday, 31 May 2018

The changes ahead of us, the changes behind

Stop the world, I'm getting off

In the four years since the removal of levels primary schools have had to contend with a staggering amount of change to almost every aspect of the accountability system. No longer are pupils achieving Level 4 or Level 5; now they are described as meeting 'expected standards' or achieving a 'high score' (in a test) or 'working at greater depth' (in writing). Those that don't make the grade in a test have 'not met expected standards', whilst in writing they're defined as 'working towards the expected standard'. P-scales cling on (for now) but sitting above those - and below the main curriculum assessments described above - sit a series of ‘interim’ pre-key stage assessments including 'foundations for the expected standard' (KS1 and KS2), 'early development of the expected standard' (KS2 only), and 'growing development of the expected standard' (KS2 only). Key measures include percentages achieving expected standards and high score/greater depth (KS1 and KS2), and average scaled scores (KS2 only - the DfE didn't collect the KS1 test scores).

Progress measures have also changed. The old 'levels of progress' measures are obviously dead, but value added remains. Now sensibly 'zero-centred', the 'new' progress measures involve a smaller number of prior attainment groups derived from a KS1 APS baseline in which maths is somewhat controversially double-weighted. The number of prior attainment groups has already changed from 21 in 2016 to 24 in 2017 and may or may not do so again. We also have a complicated system of nominal scores which are used to calculate the progress of those pupils below the standard of tests at KS2, and these scores also changed between 2016 and 2017. And very soon we’ll run out of levels. How progress will be measured from 2020 onwards, when the first cohort without KS1 levels reaches the end of KS2, is anyone's guess. It may well require nominal scores to be retrospectively assigned to the 'new' KS1 assessments.

The changes to progress measures also meant changes to floor and 'coasting' standards with value added thresholds replacing levels of progress medians, and a change to the rules so that now being below the attainment component and just one out of three progress thresholds spells trouble; previously schools would have to be below all four.

In the last four years primary schools have therefore had to cope with changes to the programmes of study, assessment frameworks, nomenclature, writing moderation, test scores, attainment and progress measures, coasting and floor standards; and of course, there was that failed attempt at implementing a reception baseline in 2015. It's a huge amount of upheaval in a short space of time.

And in the next four years, it's all set to change again.

The main change this year involves the assessment of writing at KS1 and KS2, which becomes more 'flexible' having been a supposedly 'secure fit' over the last 2 years. I say supposedly, because it could be argued that the assessment and moderation of writing has been fairly flexible up to now anyway. Increased flexibility and discretion is welcome but is likely to lead to even more confusion.

Another important change this year is the capping of extreme negative progress scores, limiting them to the score of the bottom 1% for each prior attainment group. This should help mitigate some of the issues relating to negative outliers in the current progress system. 

Things really kick off this year with some welcome and some not so welcome changes. First is the removal of statutory teacher assessment of reading and maths at KS2, a fairly pointless exercise in which teachers state whether or not pupils have met expected standards only to have their assessment usurped by the test score. There are plenty of pupils assessed by teachers as having not met expected standards who then go on to score 100 or more on the test, and vice versa. It's the test score that rules, so collecting the teacher assessment adds little. 2018/19 marks the end of that process.

Also this year we'll see the removal of the interim pre-key stage assessments and the start of a phased withdrawal of p-scales (starting with P5-8), to be replaced with a new system of numerical standards.  

This is probably the most controversial year of all with the rollout of the times tables check for year 4 pupils, and the start of a large scale voluntary pilot of the reception baseline assessment. There is concern about both assessments but it is the latter that is understandably getting most of the attention. It involves assessing pupils shortly after they start in reception in order to provide a baseline for future progress measures. The assessment designed by NFER will involve a series of activity-based, table-top tasks that will generate a standardised score; and this score will be used in much the same way as the KS1 scores in the current KS1-2 progress measure.

This is also the year that our first cohort of pupils (current Year 4) with new KS1 assessment data reaches the end of KS2, which means a new methodology, probably involving a new system of yet-to-be-invented KS1 point scores. Prepare to learn progress measures all over again.

And finally we have part 2 of the phased withdrawal of p-scales, with P1-4 being removed this year. These apply to non-subject-specific study and will be replaced by '7 aspects of engagement for cognition and learning'.

Following the pilot of the reception baseline in September 2019, this year will see the full national rollout to all schools with a reception year. This first cohort of 'baseliners' will reach the end of KS2 in 2027 and until then it's business as usual (sort of) with progress measured from KS1 (somehow). 

This could also be the year we see changes to the Early Years Foundation Stage profile, with descriptors underpinning early learning goals (ELG), moderation arrangements, and statutory assessment processes all in scope for an overhaul. There is particular focus on revising 'the mathematics and literacy ELGs to ensure that they support children to develop the right building blocks for learning at key stage 1' (pages 5-11).

I can't seem to find anything scheduled for this year. I must have missed something.

The year we could be waving goodbye to statutory assessment at key stage 1, but only if the reception baseline gets off the ground (because we can't have cohorts without a baseline for progress measures). With every silver lining...

This is the year the first cohort of reception baseliners reaches the end of KS2, which means another revision of progress measures, with new calculations and new prior attainment groups to get your head round. Unless you work in a junior or middle school, in which case this is the year you've possibly been waiting for. The recent announcement by the DfE that it does not intend to measure the progress of pupils in non-all-through primary schools (i.e. infant, first, junior, and middle schools) from 2027, instead making these schools responsible for 'evidencing progress based on their own assessment information', is welcome, but it does beg the question: why not all schools? There is also the fact that infant and first schools will have a statutory responsibility for administering a baseline in which they have no real stake. There are many questions to answer, but 9 years is a long time in education.

Other changes
The Secretary of State recently announced that floor and coasting measures will be scrapped in favour of a single measure aimed at identifying schools in need of support. A consultation will be carried out on future measures but needless to say this change can't come soon enough.

That's it: a rundown of the main changes we will face over the next few years. No doubt I've missed something vital so please let me know. In the meantime, don't let the system get you down.

Friday, 4 May 2018

Test score upload and progress analysis in Progress Bank/Insight

Andrew Davey at Insight (@insightHQ; www.insighttracking.com) has been busy building a very neat, intuitive interface for the quick uploading of standardised test scores into Progress Bank and Insight, and analysis of the data.

As stated previously, the aim of Progress Bank is to provide schools with a simple, online system that will capture any standardised test data from any provider and measure progress between any two points. The sort of data that could be uploaded and analysed includes:
  • NFER tests
  • GL progress tests
  • CAT4
  • STAR Assessment
  • KS1 scaled scores
  • KS2 practice SATS results
  • KS2 actual SATS results
  • Reception baseline scores
Ultimately, we want to be able to build up enough data to enable the calculation of VA between any two points. This will involve a DfE-style calculation whereby pupils are placed into prior attainment groups based on a previous assessment, and their score on a following assessment is compared to the average score of pupils in the same prior attainment group. This could be from reception to KS1, or from KS1 to KS2, or from Y1 autumn to Y5 spring, or Y3 entry to Y6 SATS (useful for junior schools). In theory, if we get enough data, we can measure progress between any two points. The progress scores will be shown for pupils, key groups and cohorts, for reading and maths (and possibly SPaG if you are testing that too). By measuring progress using standardised tests, it is hoped schools will stop reinventing levels and use teacher assessment purely in the classroom, for formative purposes.
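A minimal sketch of that DfE-style calculation, with invented data and far fewer prior attainment groups than the real 24, might look like this:

```python
from collections import defaultdict
from statistics import mean

# Sketch of the VA idea described above: place pupils into prior
# attainment groups (PAGs) based on an earlier assessment, then
# compare each pupil's later score with the average later score of
# their group. All data here is invented for illustration.

pupils = [
    {"name": "A", "pag": 1, "score": 98},
    {"name": "B", "pag": 1, "score": 104},
    {"name": "C", "pag": 2, "score": 106},
    {"name": "D", "pag": 2, "score": 110},
]

# Average later score per prior attainment group (the 'estimate')
group_scores = defaultdict(list)
for p in pupils:
    group_scores[p["pag"]].append(p["score"])
estimates = {pag: mean(scores) for pag, scores in group_scores.items()}

# Progress = pupil's score minus their group's estimate
for p in pupils:
    p["progress"] = p["score"] - estimates[p["pag"]]

print([p["progress"] for p in pupils])  # [-3, 3, -2, 2]
```

Note that in the real measure the estimates are national averages, not averages within one school, so a school cannot compute its official progress scores from its own data alone.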

Until we reach the point where we have enough data to calculate VA, we will instead track changes in standardised scores or percentile rank for cohorts, groups and pupils (bearing in mind that standardised scores do not always go up, and no change is often fine). 
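For that interim tracking, percentile rank can be derived from a standardised score, provided the test is scaled to the usual mean of 100 and standard deviation of 15 - an assumption worth checking against your provider's technical manual:

```python
from statistics import NormalDist

# Convert a standardised score to an approximate percentile rank,
# assuming the test is normally distributed with mean 100 and
# standard deviation 15 (typical, but provider-specific - check).

def percentile_rank(score, mean=100.0, sd=15.0):
    return round(NormalDist(mean, sd).cdf(score) * 100)

print(percentile_rank(100))  # 50 - the average pupil
print(percentile_rank(94))   # 34 - roughly the 'top 66%' threshold
```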

The system involves a three step process:
  1. Upload CTF (it is secure and GDPR compliant)
  2. Upload test scores
  3. Analyse data 
It is fairly quick to do. Once a CTF file has been uploaded, users can then upload test scores via a simple process that allows them to copy and paste data onto the screen.

Then paste the names and choose the order of surname and forename. This will enable the system to match pupils to those already in the system:

Then validate the data. Any pupils that don't match will be flagged and can be matched manually.

We can then select the assessment for which we want to upload scores for this particular cohort:

And add the scores on the next screen, again by copying and pasting from a spreadsheet:

That gets the data into the system (you can retrospectively upload data for previous years and terms, by the way) and all we need to do now is analyse it. This is done via a simple pivot table tool within the system. The following screen shows a summary of Year 5's NFER test scores for the autumn and summer terms, broken down by key group. There are various options to select cohorts, start and end points, assessments, column and row values, and cell calculations. Note that the progress column currently shows change in standardised score; the plan is to move to a VA measure when enough data is available.

And finally, by clicking on a cell, we can drill down to pupil level; and by clicking on a progress cell we can access a clickable scatter plot, too.

Red dots indicate those pupils whose scores have dropped, and green dots show those whose scores have gone up. Clicking on the dots will identify the pupil, their previous and current test scores, and progress score between the two points selected. The colours are not intended to be a judgement, more an easy way to explore the data.

That's a quick tour of the Progress Bank concept, as it currently stands. The upload tool is already available to Insight users, and the pivot table report will be rolled out very soon. Progress Bank, featuring data upload, pivot tables and scatter plots, will be launched as a standalone tool in the Autumn term, for those schools that just want to capture and analyse their standardised scores without the full tracking functionality of Insight. It will therefore complement existing systems, and provide a quick and simple way of generating progress scores for Ofsted, governors and others.

Prices to be announced. 

More info and register your interest at www.progressbank.co.uk

Thursday, 26 April 2018

5 Things primary governors should know about data. Part 5: pupil groups

This is the 5th and final part in a series of blog posts on data for primary governors. Part 1 covered statutory data collection, part 2 was on sources of data, part 3 explained progress measures, and part 4 dealt with headline measures. In this post we're going to discuss those all-important pupil groups.

When we look at school performance data in the performance tables, Analyse School Performance (ASP) system, the Ofsted Inspection Data Summary Report (IDSR), and FFT Aspire, we can see that all those headline figures are broken down by pupil characteristics. Keeping tabs on the performance of key groups is evidently vital; and senior leaders and governors have an important role to play in monitoring the progress of these groups and the attainment gaps between them. Broadly speaking we are dealing with four key types of data: threshold measures (percentages achieving expected or higher standards), average scores, progress scores, and absence figures. Officially, we only have average scores and progress scores at KS2, although your school's internal data may have other measures you can track, including data from standardised tests. Also note that Ofsted, in the IDSR, have a pseudo-progress measure for KS1 whereby attainment is broken down by start point based on Early Years (EYFSP) outcome. More on that later.

Before we push on to look at the main pupil groups and what the various sources of data show us, it is important to note that it is easy to read too much into analysis of data by group. If we take any two groups of pupils - eg those with last names beginning A-M vs those beginning N-Z - there will be an attainment gap between the two groups. What can we infer from this? Nothing.

The main pupil groups are: gender, disadvantaged, SEN (special educational needs), EAL (English additional language), mobile pupils, term of birth, and prior attainment. Some of these require more explanation.

Disadvantaged
This group includes pupils that have been eligible for free school meals (FSM) in the last 6 years, have been in care at any point, or have been adopted from care. It does not include Forces children. Previously this group was referred to as pupil premium (and still is in FFT reports). When we look at reports we may see reference to FSM6 (or Ever 6 FSM). These are pupils that have been eligible for FSM in the last 6 years, and usually this is the same as the disadvantaged group, although numbers may differ in some cases. We may also have data for the FSM group, which usually refers to those that are currently eligible for free school meals; numbers will therefore be smaller than the disadvantaged/FSM6 groups. 24% of primary pupils nationally are classified as disadvantaged.

SEN
SEN is split into two categories: SEN Support and EHCP (education, health and care plan). Note that the EHCP replaced statements of SEN, but your school may still have pupils with statements. Nationally, 12.2% of primary pupils have SEN Support whilst 1.3% have an EHCP/statement.

Mobile pupils
The DfE and FFT have quite a strict definition here: it relates to those that joined the school during years 5 or 6. If they joined before year 5 they are not counted in this mobile group. Your school's tracking may have other groupings (eg on roll since reception).

Term of birth
Quite simply, this refers to the term in which the pupil was born. Research shows that summer-born pupils tend to do less well than their older autumn- or spring-born peers, but that the gap narrows over time. ASP and the IDSR do not contain any data on these groups, but FFT reports do.

Prior attainment
This could be a blog post all on its own. Here we are talking about pupils categorised on the basis of prior attainment at the previous statutory assessment point (i.e. EYFS for KS1, or KS1 for KS2). Whilst there are 24 prior attainment groups used in the KS1-2 progress measure, for the purposes of reporting we are just dealing with three groups: low, middle and high. Unfortunately, it's not as simple as it seems.

At KS1, pupils' prior attainment is based on their level of development in the specific subject (reading, writing or maths) at foundation stage (EYFSP). The prior attainment groups are not referred to as low, middle and high; they are referred to as emerging, expected or exceeding (terms used for assessment in the reception year). The percentages achieving expected standards and greater depth at KS1 are then compared to the national figures for the same prior attainment group. This data is only shown in IDSR.

At KS2, pupils' prior attainment is based on their results at KS1, and the main method involves taking an average of KS1 results in reading, writing and maths, rather than just looking at prior attainment in the specific subject. Broadly speaking, if the pupil averaged a Level 1 or below at KS1, they go into the low group; if they averaged a Level 2 then they slot into the middle group; and if they averaged a Level 3 then they fall into the high group. However, please note that a pupil with two 2As and a L3 at KS1 will also be categorised as high prior attaining; they don't need L3 in all subjects. This is the main method used in ASP and the IDSR.

This means that at KS1, prior attainment relates to the specific subject at EYFS, whilst at KS2 it depends on an average across three subjects, known as overall prior attainment. But it doesn't end there. ASP, as well as offering us data for those overall prior attainment bands for KS2, also offers subject-specific prior attainment bands. Therefore, a pupil who was L1 in reading and writing and L3 in maths at KS1, and who is categorised as 'middle' based on the main method, will be low or high depending on subject using the second method.
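Using the old KS1 point scores (L1 = 9, 2C = 13, 2B = 15, 2A = 17, L3 = 21) and the commonly cited band thresholds (low below 12, high at 18 or above) - both worth verifying against the primary accountability guidance - the overall method looks roughly like this:

```python
# Sketch of the overall (averaged) prior attainment banding described
# above. Point scores and thresholds are commonly cited values, not
# guaranteed to match the current DfE methodology exactly.

POINTS = {"L1": 9, "2C": 13, "2B": 15, "2A": 17, "L3": 21}

def overall_band(reading, writing, maths):
    aps = sum(POINTS[level] for level in (reading, writing, maths)) / 3
    if aps < 12:
        return "low"
    if aps < 18:
        return "middle"
    return "high"

# Two 2As and a L3 averages above 18, so 'high' without L3 everywhere
print(overall_band("2A", "2A", "L3"))  # high
# L1 in reading and writing, L3 in maths averages 13, so 'middle'
print(overall_band("L1", "L1", "L3"))  # middle
```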

And then there's FFT who take a different approach again (and it's important we know the difference because it can cause problems). FFT use average prior attainment across subjects at EYFS (for KS1), or KS1 (for KS2), rank all pupils nationally by prior attainment score, and split the national pile into thirds. Pupils falling into the bottom third are referred to as lower, those in the middle are middle, and those in the top third are higher. Schools will have more lower and higher prior attainers in an FFT report than they will in ASP or IDSR.
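The FFT approach - rank everyone nationally on average prior attainment and cut the pile into thirds - can be sketched like this (invented scores):

```python
# Sketch of FFT-style banding: rank pupils by average prior attainment
# score and split the ranked list into thirds. Scores are invented.

def fft_bands(scores):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    bands = {}
    for rank, i in enumerate(order):
        if rank < n / 3:
            bands[i] = "lower"       # bottom third nationally
        elif rank < 2 * n / 3:
            bands[i] = "middle"      # middle third
        else:
            bands[i] = "higher"      # top third
    return bands

scores = [12.0, 15.0, 17.0, 13.0, 19.0, 16.0]
bands = fft_bands(scores)
print(bands[4])  # higher - 19.0 is in the top third
print(bands[0])  # lower - 12.0 is in the bottom third
```

Because the cut is an even three-way split of all pupils, a school typically shows more 'lower' and 'higher' pupils on an FFT report than the ASP/IDSR method produces.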

Sources of data and national comparators
Once we have results for our various groups, we need something to compare them to so we can ascertain how well they are doing. And again, this is not as straightforward as you might think. FFT simply compare the attainment of the group in the school against the result of the same group nationally. Seems fair enough. But what if we are comparing an underperforming group to an underperforming group? Is this going to give a false impression of performance, result in lowering of expectations and possibly a widening of the gap? This is why the DfE (in the ASP system) and Ofsted (in the IDSR) take different approaches.

In ASP, by clicking on an 'explore data in more detail' link, we can access a table that summarises data for numerous key groups and compares the results to national figures. If we look at the national benchmark column we will notice that it is not a fixed figure; it keeps changing. That's because the DfE use different benchmarks depending on the group. These benchmarks can be split into three different types: all, same, and other.
  • All: The group is compared to the overall national average (i.e. the result for all pupils nationally). This applies to the school's overall results and to EAL, non-EAL, and SEN groups. Comparing the SEN group's results to overall national figures is particularly problematic, and it is worth seeking out national figures for SEN pupils as a more suitable comparator. These can be found in DfE statistical releases, and in FFT.
  • Same: The group is compared to national figures for the same group. This applies to boys, girls, non-SEN, and prior attainment groups. The key issue here is that girls do better than boys in reading and maths at KS2, which means that girls are compared to a higher benchmark than boys. This is not likely to solve the gap problem.
  • Other: The group is compared to the national figure for the opposite group. This applies to disadvantaged/FSM pupils and to looked after children. The aim is to focus schools on closing the gap between low attaining groups and their peers. Note that the data compares the results of these groups in school to the results of other pupils nationally; it does not measure the 'in-school' gap. 
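To make the three benchmark types concrete, here is a short Python sketch of the lookup logic. The group-to-comparator mapping follows the description above, but the group names and national figures are illustrative, not an official DfE list.

```python
# Sketch of the three ASP benchmark types: 'all', 'same', 'other'.
# Mapping and figures are illustrative only, based on the groups named above.
COMPARATOR = {
    'EAL': 'all', 'non-EAL': 'all', 'SEN': 'all',
    'boys': 'same', 'girls': 'same', 'non-SEN': 'same',
    'low prior attainers': 'same', 'middle prior attainers': 'same',
    'high prior attainers': 'same',
    'disadvantaged': 'other', 'looked after': 'other',
}

def benchmark(group, national):
    """Pick the national figure a group is compared to in ASP.

    `national` maps each group (plus 'all pupils' and 'other pupils')
    to its national result, e.g. % reaching the expected standard.
    """
    kind = COMPARATOR[group]
    if kind == 'all':
        return national['all pupils']
    if kind == 'same':
        return national[group]
    return national['other pupils']  # 'other' = non-disadvantaged peers

# Made-up national figures for illustration
national = {'all pupils': 64, 'boys': 60, 'girls': 69, 'other pupils': 70}
print(benchmark('SEN', national))            # compared to all pupils
print(benchmark('girls', national))          # compared to girls nationally
print(benchmark('disadvantaged', national))  # compared to 'other' pupils
```

The point of the sketch is that the benchmark column in ASP is not one fixed figure: which national number appears depends on which of the three comparator types applies to the group.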
The problem with ASP, despite all the pupil group data on offer for EYFS, phonics, KS1 or KS2, is that the presentation is a bit bland. It provides no visual clues as to whether results are significantly above or below average or significantly improving or declining. It's just a load of numbers in a table. FFT's pupil groups' report is clearer. 

Unlike ASP, which contains data for numerous groups, IDSR just has four: disadvantaged, and low, middle and high prior attainers. Whilst schools certainly need to be able to talk about the performance of other groups, Ofsted have chosen not to provide data for them. Clearly tracking progress of disadvantaged pupils and the gaps between those pupils and others is essential. It is also important that schools are tracking the progress of pupils from start points, and it is recommended that tracking systems are set up for that purpose to enable quick identification of pupils in these key groups.

As in ASP, IDSR compares the results of low, middle and high prior attainers to the national figures for the same groups. There is however a difference in IDSR when it comes to disadvantaged pupils: they are not only compared to the national figures for 'other' (i.e. non-disadvantaged) pupils, but also to the overall national figure. The former is no doubt the more important benchmark.

FFT, like ASP, have numerous key groups but tend to do a better job of presenting the data. Bearing in mind the differences in FFT prior attainment groups, comparators and terminology (FFT use the term 'pupil premium' rather than 'disadvantaged') explained above, FFT reports are undeniably clearer and easier to understand. They provide three year trends, and indicators to show if results are significantly above or below average (green/red dots), and/or significantly improving or declining (up/down arrows). The report ranks groups in order of progress scores so it is quick to identify the lower and higher performing groups; and can show three year averages for each group, which is useful where numbers are small. In addition, the overview page of the FFT dashboard lists up to three lower and higher performing groups overall and in each subject. This is done for both KS1 and KS2. FFT also have a useful report on disadvantaged pupils; and, as mentioned above, provide data on pupils by term of birth.

A word about FFT and progress measures
The default setting in FFT is VA (value added). This means that progress is measured in the same way as it is in ASP and IDSR: each pupil's result is compared to the national average result for pupils with the same start point, and scores should match other sources. When we look at group level progress data in FFT and focus on, say, disadvantaged pupils, the scores are VA scores and will be the same as those calculated by the DfE. Using the VA measure in FFT, disadvantaged pupils' progress is not compared to disadvantaged pupils nationally; it is compared to any pupil nationally with the same prior attainment. A like-for-like comparison will only happen if you click the CVA button (which takes numerous factors into account to compare pupils with similar pupils in similar schools). Some people may be dismissive of FFT data because they mistakenly believe it to be contextualised. Progress data is only contextualised if the CVA button is clicked; otherwise it is no different to progress data found elsewhere. The difference - as explained above - is in the attainment comparisons, where results are compared to those of the same group nationally.
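As a rough sketch of the VA calculation described above: each pupil's KS2 result is compared to the national average result for pupils with the same start point, and a group's progress score is the mean of those pupil-level differences. The national averages and pupil scores below are made up purely for illustration.

```python
# Minimal sketch of a VA (value added) progress score, as described above.
# Hypothetical national average KS2 scaled scores by prior attainment band:
national_avg_by_start = {'low': 96.0, 'middle': 103.0, 'high': 110.0}

def va_score(pupil_ks2_score, start_point):
    """Pupil-level VA: actual result minus national average for that start point."""
    return pupil_ks2_score - national_avg_by_start[start_point]

# Group-level progress is the mean of pupil-level VA scores. Note that a
# disadvantaged group's VA compares each pupil to ALL pupils nationally
# with the same prior attainment, not to disadvantaged pupils only.
group = [(101, 'middle'), (98, 'low'), (112, 'high')]
scores = [va_score(s, sp) for s, sp in group]
print(sum(scores) / len(scores))  # mean VA for the group
```

A CVA calculation would replace the simple `national_avg_by_start` lookup with a benchmark adjusted for pupil and school context, which is exactly why VA and CVA scores differ.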

I hope this series has been useful. Feel free to print, share and copy. I just ask that you credit the source when doing so.

Many thanks.