Thursday, 26 April 2018

5 Things primary governors should know about data. Part 5: pupil groups

This is the 5th and final part in a series of blog posts on data for primary governors. Part 1 covered statutory data collection, part 2 was on sources of data, part 3 explained progress measures, and part 4 dealt with headline measures. In this post we're going to discuss those all-important pupil groups.

When we look at school performance data in the performance tables, Analyse School Performance (ASP) system, the Ofsted Inspection Data Summary Report (IDSR), and FFT Aspire, we can see that all those headline figures are broken down by pupil characteristics. Keeping tabs on the performance of key groups is evidently vital; and senior leaders and governors have an important role to play in monitoring the progress of these groups and the attainment gaps between them. Broadly speaking we are dealing with four key types of data: threshold measures (percentages achieving expected or higher standards), average scores, progress scores, and absence figures. Officially, we only have average scores and progress scores at KS2, although your school's internal data may have other measures you can track, including data from standardised tests. Also note that Ofsted, in the IDSR, have a pseudo-progress measure for KS1 whereby attainment is broken down by start point based on Early Years (EYFSP) outcome. More on that later.

Before we push on to look at the main pupil groups and what the various sources of data show us, it is important to note that it is easy to read too much into analysis of data by group. If we take any two groups of pupils - eg those with last names beginning A-M vs those beginning N-Z - there will be an attainment gap between the two groups. What can we infer from this? Nothing.
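This is easy to demonstrate with synthetic data: split any cohort by an arbitrary rule and a gap of some size appears by chance alone. A minimal sketch (the scores are randomly generated, purely for illustration):

```python
import random

random.seed(1)

# Invented scaled scores for a cohort of 30 pupils (illustration only)
scores = [random.gauss(104, 6) for _ in range(30)]

# Split by an arbitrary characteristic, e.g. surnames A-M vs N-Z
# (here simply the first and second half of the register)
group_a, group_b = scores[:15], scores[15:]

gap = sum(group_a) / 15 - sum(group_b) / 15
print(f"Gap between two arbitrary groups: {gap:+.2f} scaled score points")
```

The gap is essentially never exactly zero, and with small cohorts it can look sizeable; that alone tells us nothing about teaching.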

The main pupil groups are: gender, disadvantaged, SEN (special educational needs), EAL (English additional language), mobile pupils, term of birth, and prior attainment. Some of these require more explanation.

Disadvantaged pupils
This group includes pupils that have been eligible for free school meals (FSM) in the last 6 years, have been in care at any point, or have been adopted from care. It does not include Forces children. Previously this group was referred to as pupil premium (and still is in FFT reports). When we look at reports we may see reference to FSM6 (or Ever 6 FSM). These are pupils that have been eligible for FSM in the last 6 years, and this is usually the same as the disadvantaged group, although numbers may differ in some cases. We may also have data for the FSM group, which usually refers to those that are currently eligible for free school meals; numbers will therefore be smaller than those for the disadvantaged/FSM6 group. 24% of primary pupils nationally are classified as disadvantaged.

SEN
SEN is split into two categories: SEN Support and EHCP (education, health and care plan). Note that EHCPs replaced statements of SEN, but your school may still have pupils with statements. Nationally, 12.2% of primary pupils have SEN Support whilst 1.3% have an EHCP/statement.

Mobile pupils
The DfE and FFT have quite a strict definition here: it relates to those that joined the school during years 5 or 6. If they joined before year 5 they are not counted in this mobile group. Your school's tracking may have other groupings (eg on roll since reception).

Term of birth
Quite simply, this refers to the term in which the pupil was born. Research shows that summer-born pupils tend to do less well than their older autumn- or spring-born peers, but that the gap narrows over time. ASP and the IDSR do not contain any data on these groups, but FFT reports do.

Prior attainment
This could be a blog post all on its own. Here we are talking about pupils categorised on the basis of prior attainment at the previous statutory assessment point (i.e. EYFS for KS1, or KS1 for KS2). Whilst there are 24 prior attainment groups used in the KS1-2 progress measure, for the purposes of reporting we are just dealing with three groups: low, middle and high. Unfortunately, it's not as simple as it seems.

At KS1, pupils' prior attainment is based on their level of development in the specific subject (reading, writing or maths) at foundation stage (EYFSP). The prior attainment groups are not referred to as low, middle and high; they are referred to as emerging, expected or exceeding (terms used for assessment in the reception year). The percentages achieving expected standards and greater depth at KS1 are then compared to the national figures for the same prior attainment group. This data is only shown in IDSR.

At KS2, pupils' prior attainment is based on their results at KS1, and the main method involves taking an average of KS1 results in reading, writing and maths, rather than just looking at prior attainment in the specific subject. Broadly speaking, if the pupil averaged a Level 1 or below at KS1, they go into the low group; if they averaged a Level 2 then they slot into the middle group; and if they averaged a Level 3 then they fall into the high group. However, please note that a pupil with two 2As and a L3 at KS1 will also be categorised as high prior attaining; they don't need a L3 in all subjects. This is the main method used in ASP and IDSR.
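As a rough sketch of the averaging logic (the point values and band thresholds are illustrative, chosen to reproduce the examples above rather than taken from official DfE documentation):

```python
# Illustrative KS1 point values (hypothetical, for this sketch only)
POINTS = {"W": 3, "L1": 9, "2C": 13, "2B": 15, "2A": 17, "L3": 21}

def prior_attainment_band(reading, writing, maths):
    """Classify a pupil as low/middle/high from the average of their
    three KS1 results (illustrative thresholds, not DfE figures)."""
    avg = (POINTS[reading] + POINTS[writing] + POINTS[maths]) / 3
    if avg < 12:        # averaged Level 1 or below
        return "low"
    elif avg < 18:      # averaged Level 2
        return "middle"
    return "high"       # averaged Level 3

# A pupil with two 2As and a L3 averages into the 'high' band
print(prior_attainment_band("2A", "2A", "L3"))  # high
```

This shows why a pupil can be high prior attaining without a L3 in every subject: it is the average that matters.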

This means that at KS1, prior attainment relates to the specific subject at EYFS, whilst at KS2 it depends on an average across three subjects, known as overall prior attainment. But it doesn't end there. ASP, as well as offering us data for those overall prior attainment bands for KS2, also offers us subject-specific prior attainment bands. Therefore, a pupil that was L1 in reading and writing and L3 in maths at KS1, who is categorised as 'middle' based on the main method, will be low or high depending on the subject using the second method.

And then there's FFT who take a different approach again (and it's important we know the difference because it can cause problems). FFT use average prior attainment across subjects at EYFS (for KS1), or KS1 (for KS2), rank all pupils nationally by prior attainment score, and split the national pile into thirds. Pupils falling into the bottom third are referred to as lower, those in the middle are middle, and those in the top third are higher. Schools will have more lower and higher prior attainers in an FFT report than they will in ASP or IDSR.
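A rough sketch of the thirds approach (not FFT's actual implementation; the national scores here are invented):

```python
import statistics

def fft_style_bands(national_scores, school_scores):
    """Rank all pupils nationally by prior attainment score, split the
    national pile into thirds, and classify each school pupil against
    those cut points. A sketch of the approach only."""
    lower_cut, upper_cut = statistics.quantiles(national_scores, n=3)
    def band(score):
        if score <= lower_cut:
            return "lower"
        elif score <= upper_cut:
            return "middle"
        return "higher"
    return [band(s) for s in school_scores]

# Hypothetical national distribution and a small school cohort
national = list(range(60, 121))
print(fft_style_bands(national, [65, 90, 115]))  # ['lower', 'middle', 'higher']
```

Because the cut points split the national population into equal thirds, roughly a third of pupils nationally fall into each band, which is why FFT reports show more lower and higher prior attainers than ASP or IDSR.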

Sources of data and national comparators
Once we have results for our various groups, we need something to compare them to so we can ascertain how well they are doing. And again, this is not as straightforward as you might think. FFT simply compare the attainment of the group in the school against the result of the same group nationally. Seems fair enough. But what if we are comparing an underperforming group to an underperforming group? Is this going to give a false impression of performance, result in lowering of expectations and possibly a widening of the gap? This is why the DfE (in the ASP system) and Ofsted (in the IDSR) take different approaches.

In ASP, by clicking on an 'explore data in more detail' link, we can access a table that summarises data for numerous key groups and compares the results to national figures. If we look at the national benchmark column we will notice that it is not a fixed figure; it keeps changing. That's because the DfE use different benchmarks depending on the group. These benchmarks can be split into three different types: all, same, and other.
  • All: The group is compared to the overall national average (i.e. the result for all pupils nationally). This applies to the school's overall results and to the EAL, non-EAL, and SEN groups. The comparison of the SEN group's results to overall national figures is particularly problematic and it is worth seeking out national figures for SEN pupils as a more suitable comparator. These can be found in DfE statistical releases, and in FFT. 
  • Same: The group is compared to national figures for the same group. This applies to boys, girls, non-SEN, and prior attainment groups. The key issue here is that girls do better than boys in reading and maths at KS2, which means that girls are compared to a higher benchmark than boys. This is not likely to solve the gap problem.
  • Other: The group is compared to the national figure for the opposite group. This applies to disadvantaged/FSM pupils and to looked after children. The aim is to focus schools on closing the gap between low attaining groups and their peers. Note that the data compares the results of these groups in school to the results of other pupils nationally; it does not measure the 'in-school' gap. 
The problem with ASP, despite all the pupil group data on offer for EYFS, phonics, KS1 and KS2, is that the presentation is a bit bland. It provides no visual clues as to whether results are significantly above or below average, or significantly improving or declining. It's just a load of numbers in a table. FFT's pupil groups report is clearer. 

Unlike ASP, which contains data for numerous groups, IDSR just has four: disadvantaged, and low, middle and high prior attainers. Whilst schools certainly need to be able to talk about the performance of other groups, Ofsted have chosen not to provide data for them. Clearly tracking progress of disadvantaged pupils and the gaps between those pupils and others is essential. It is also important that schools are tracking the progress of pupils from start points, and it is recommended that tracking systems are set up for that purpose to enable quick identification of pupils in these key groups.

As in ASP, IDSR compares the results of low, middle and high prior attainers to the national figures for the same groups. There is however a difference in IDSR when it comes to disadvantaged pupils: they are not only compared to the national figures for 'other' (i.e. non-disadvantaged) pupils, but also to the overall national figure. The former is no doubt the more important benchmark.

FFT, like ASP, have numerous key groups but tend to do a better job of presenting the data. Bearing in mind the differences in FFT's prior attainment groups, comparators and terminology (FFT use the term 'pupil premium' rather than disadvantaged) explained above, FFT reports are undeniably clearer and easier to understand. They provide three-year trends, and indicators to show if results are significantly above or below average (green/red dots), and/or significantly improving or declining (up/down arrows). The report ranks groups in order of progress scores so it is quick to identify the lower and higher performing groups; and it can show three-year averages for each group, which is useful where numbers are small. In addition, the overview page of the FFT dashboard lists up to three lower and higher performing groups, overall and in each subject. This is done for both KS1 and KS2. FFT also have a useful report on disadvantaged pupils; and, as mentioned above, provide data on pupils by term of birth.

A word about FFT and progress measures
The default setting in FFT is VA (value added). This means that progress is measured in the same way as it is in ASP and IDSR: each pupil's result is simply compared to the national average result for pupils with the same start point, and scores should match other sources. When we look at group-level progress data in FFT and focus on, say, disadvantaged pupils, the scores are VA scores and will be the same as those calculated by the DfE. Using the VA measure in FFT, disadvantaged pupils' progress is not compared to disadvantaged pupils nationally; it is compared to any pupil nationally with the same prior attainment. A like-for-like comparison will only happen if you click the CVA button (which takes numerous factors into account to compare pupils with similar pupils in similar schools). Some people may be dismissive of FFT data because they mistakenly believe it to be contextualised. Progress data is only contextualised if the CVA button is clicked; otherwise it is no different to progress data found elsewhere. The difference - as explained above - is in the attainment comparisons, where results are compared to those of the same group nationally.
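A minimal sketch of the VA calculation itself (the national averages and PAG labels below are invented for illustration):

```python
# Hypothetical national average KS2 scores by prior attainment group
NATIONAL_AVG = {"PAG_A": 96.0, "PAG_B": 103.0, "PAG_C": 109.0}

# (prior attainment group, actual KS2 scaled score) for each pupil
pupils = [("PAG_A", 98), ("PAG_B", 101), ("PAG_C", 110)]

# Each pupil is compared to pupils nationally with the SAME start
# point -- not to their demographic group
va_scores = [score - NATIONAL_AVG[pag] for pag, score in pupils]

# The school's progress score is simply the mean of pupil VA scores
school_progress = sum(va_scores) / len(va_scores)
print(f"School progress score: {school_progress:+.2f}")
```

Note that group membership (disadvantaged, EAL, etc.) plays no part in the calculation; it only determines which pupils are averaged together when a group-level score is reported.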

I hope this series has been useful. Feel free to print, share and copy; I just ask that you credit the source when doing so. 

Many thanks. 

Sunday, 22 April 2018

Capped scores in 2018 progress measures

The DfE recently announced that they would cap the extremely negative progress scores (but not the extremely positive progress scores, by the way) in order to reduce the impact that such scores can have on schools' overall measures. This is a welcome move considering how damaging these scores can be, particularly for schools with high numbers of pupils with SEND or with small cohorts.

The guidance states that 'the limit will mean that there is a minimum progress score that is assigned to pupils within the prior attainment groups (PAGs) where extremely negative scores exist. Where a pupil’s score is more negative than the minimum score, the minimum score will replace the pupil’s original progress score when calculating a school’s progress average. The minimum score for each PAG will be determined based on the variation in pupil progress scores for pupils across the country within that PAG (as measured by the standard deviation). The minimum scores will be fixed at a set number of standard deviations below the mean so that approximately 1% of pupils are identified nationally (we anticipate this will normally be no more than 1 or 2 pupils in any school).'

Essentially, this means that the threshold at which progress scores will be capped will depend on the PAG the pupil is in; and the threshold will represent the progress scores of the bottom 1% within each PAG nationally. Whilst the guidance states that 'predicting which pupils will, and will not, have their score affected by this methodology change, in advance of progress scores being made available, will not be possible', we can get a rough idea of the capped score thresholds by using the 2017 standard deviations for each PAG (Ofsted make these available on their IDSR guidance website here).

To calculate the capped score thresholds, the 2017 standard deviation for each PAG has been multiplied by -2.3. This is because the bottom 1% of a bell curve sit approximately 2.3 standard deviations below the mean.
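A minimal sketch of that calculation (the standard deviations here are placeholders, not the published 2017 figures; the official thresholds will come from the DfE):

```python
# Hypothetical standard deviations of pupil progress scores per PAG
PAG_STDEV = {"PAG_1": 9.5, "PAG_2": 7.2, "PAG_3": 5.8}

Z_BOTTOM_1PC = -2.3  # approx. z-score cutting off the bottom 1%

# National mean progress is zero by construction, so each PAG's
# minimum (capped) score is simply -2.3 standard deviations
thresholds = {pag: sd * Z_BOTTOM_1PC for pag, sd in PAG_STDEV.items()}

def capped_score(score, pag):
    """Floor an extremely negative progress score at the PAG minimum."""
    return max(score, thresholds[pag])

print(capped_score(-30.0, "PAG_1"))  # floored at roughly -21.9
print(capped_score(-5.0, "PAG_1"))   # unchanged
```

Because each PAG has its own standard deviation, the cap kicks in at a different point for each prior attainment group, which is why the thresholds vary.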

A spreadsheet with indicative capped score thresholds for KS2 and KS4 is available to download here.

As always, please note that the thresholds in this spreadsheet are NOT definitive; they are just for guidance. They are intended to show the variation in scores between PAGs and also indicate how low the scores could be even after the cap is applied.

Further reading:

Key stage 2: See pages 8-9 of Primary Accountability Guidance 

Key Stage 4: See pages 12-13 of Progress 8 Technical Guidance

Saturday, 21 April 2018

Converting standardised scores to scaled scores

Many schools are using standardised tests from the likes of NFER, GL and Rising Stars to monitor attainment and progress of pupils, and to predict outcomes; and yet there is a lot of confusion about how standardised scores relate to scaled scores. The common assumption is that 100 on a standardised test (eg from NFER) is the same as 100 in a KS2 test, but it's not. Only 50% achieve 100 or more in a standardised test (100 represents the average, or the 50th percentile); yet 72% achieved 100+ in the KS2 reading test in 2017 (the average was 105 that year). If we want a standardised score that better represents the expected standard then we need one that captures the top 72%, i.e. around 92. However, to be on the safe side, I recommend going for 94 (top 66%), or maybe even 95 (top 63%) if you want to be really robust. Whatever you do, please bear in mind that standardised test scores are not a prophecy of future results; they are simply an indicator. Michael Tidd (@MichaelT1979) has written an excellent blog post on this subject, which I recommend you read if you are using standardised scores for tracking.

The purpose of this blog post is to share a conversion table that will give you a rough idea of how scaled scores convert to standardised scores. It is based on the distribution of 2017 KS2 scores in reading and maths, taken from the national tables (table N2b). The cumulative percentages in the KS2 national tables are converted to standardised scores via this lookup table.
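The underlying idea is percentile matching: take the proportion of pupils nationally at or above a given scaled score, then find the standardised score sitting at the same percentile. A sketch assuming standardised scores follow the usual Normal(100, 15) distribution (a simplification; the actual table uses published lookup values):

```python
from statistics import NormalDist

def scaled_to_standardised(pct_at_or_above):
    """Given the national % of pupils at or above a scaled score,
    return the standardised score at the same percentile.
    Assumes standardised scores ~ Normal(mean=100, sd=15)."""
    proportion_below = 1 - pct_at_or_above / 100
    return NormalDist(mu=100, sigma=15).inv_cdf(proportion_below)

# 72% scored 100+ in the 2017 KS2 reading test, so a scaled score
# of 100 maps to a standardised score in the low 90s, not 100
print(round(scaled_to_standardised(72)))
```

This is why a scaled score of 100 corresponds to a standardised score around the low 90s rather than 100: the two scales put their averages at different percentiles.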

The conversion table can be downloaded here.

Please note: this is not definitive; it is a guide. It will also change next year, when 2018 national data is released, but hopefully it will demonstrate that one score does not directly convert into another.


Wednesday, 18 April 2018

5 things primary governors should know about data. Part 4: headlines and trends

This is the fourth part in a series of five blog posts for primary governors. Part 1 covered statutory assessment, part 2 dealt with sources of data, and part 3 explained the progress measures. Here, we will look at the headline measures governors need to be aware of.

Inspection Data Summary Report (IDSR) Areas to investigate
This is an important place to start. The IDSR lists your school's strengths and weaknesses (under the banner of 'areas to investigate'), as well as information relating to floor standards and coasting, on its front page, and governors definitely need to have sight of this. The list of areas to investigate is not exhaustive - your school no doubt has more strengths than those listed (and possibly more weaknesses).

Early Years Foundation Stage
Key measure: % achieving a good level of development
As explained in part 1, pupils at the end of reception are assessed as 'emerging', 'expected' or 'exceeding' in each of the 17 early learning goals (ELGs). If a pupil reaches the expected level of development (i.e. assessed as expected or exceeding) in the 12 main ELGs, this is described as a 'good level of development' (GLD). Our first key measure is therefore:
  • % achieving a good level of development at end of reception
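The GLD rule can be sketched in code (a minimal illustration; the ELG names are invented placeholders):

```python
def good_level_of_development(elg_outcomes):
    """elg_outcomes: dict mapping the 12 'main' ELG names to
    'emerging', 'expected' or 'exceeding'. GLD means reaching the
    expected level of development in all 12 main ELGs."""
    return all(v in ("expected", "exceeding") for v in elg_outcomes.values())

# Hypothetical pupil: expected in 11 main ELGs, emerging in one
pupil = {f"ELG_{i}": "expected" for i in range(1, 12)}
pupil["ELG_12"] = "emerging"
print(good_level_of_development(pupil))  # False
```

One 'emerging' in any of the 12 main ELGs is enough to miss GLD, however strong the other outcomes.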
This data is not available in the performance tables (i.e. it is not in the public domain) but can be found in Analyse School Performance (ASP), where the school's result is shown against both LA and national figures; and in Ofsted's IDSR, which shows a 3 year trend against national figures. Pay attention to the trend: is it going up or down, and how does the school compare to national? Always consider the context when comparing the results of different cohorts.

Phonics in year 1 (with possible retake in year 2)
Key measure: % attaining the expected standard
The phonics check is carried out in year 1 and if a pupil does not achieve the pass mark - which, since its inception in 2012, has been 32 words correctly decoded out of 40 - then they take it again in year 2. The key measures that governors should be aware of are:
  • % attaining expected standard in year 1
  • % attaining expected standard by end of year 2
Note: % attaining expected standard by end year 2 takes the whole cohort into account, not just those that retake in year 2. 
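The distinction matters when calculating the figures. A minimal sketch with an invented five-pupil cohort:

```python
# Each tuple: (passed in year 1, result of year 2 retake or None)
# Invented cohort of 5 pupils, purely for illustration
cohort = [
    (True, None),    # passed first time
    (True, None),
    (False, True),   # failed Y1, passed the Y2 retake
    (False, False),  # failed both attempts
    (True, None),
]

pct_y1 = 100 * sum(p1 for p1, _ in cohort) / len(cohort)

# End-of-Y2 measure: passed at either attempt, over the WHOLE cohort,
# not just the retakers
pct_by_y2 = 100 * sum(p1 or bool(p2) for p1, p2 in cohort) / len(cohort)

print(f"Year 1: {pct_y1:.0f}%, by end of year 2: {pct_by_y2:.0f}%")
```

Here the year 1 figure is 60% and the end-of-year-2 figure is 80%: one of the two retakers passed, lifting the whole-cohort measure.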

Again, this data is not in the public domain. ASP provides a comparison against LA and national figures for year 1 results only (no 'end of year 2' measure) and does not provide a trend; IDSR shows a 3 year trend against national figures. Again, note how the school compares to national, and whether or not standards are improving. Again, always take context into account when looking at trends.

Key Stage 1
Key measures: % attaining expected standard, % attaining greater depth
KS1 assessment, made at the end of year 2, is mainly focussed on reading, writing and maths (but don't completely ignore science!). Pupils can be assessed as 'below' or 'pre-key stage' if they are below the standard of the curriculum, but the vast majority of pupils are either working towards the expected standard, working at expected standards, or working at greater depth. The key measures that governors should be aware of are as follows:
  • % attaining expected standards or above in reading, writing and maths (3 separate measures)
  • % attaining greater depth in reading, writing and maths (3 separate measures)
Unlike at KS2, where the DfE produce a single, combined result for reading, writing and maths (see below), here they are kept separate. However, if your school uses FFT you can get a combined result for KS1 (i.e. the dashboards show the % of pupils attaining expected standards in all three subjects). 

ASP provides us with percentages attaining expected standards in each subject (3 measures) and the same for greater depth; and school results are compared against LA and National figures. Note if your school is above or below these comparators, but make sure you consider prior attainment of pupils (based on EYFS outcomes) when you do this. For this reason, IDSR is more useful because it breaks the results down by prior attainment, namely emerging (low), expected (middle), and exceeding (high), thus providing useful context.

Governors should at least be aware of percentages attaining expected standards and greater depth in reading, writing and maths at KS1, how those results compare to national figures (note that IDSR will indicate if results are in the top or bottom 10% nationally), and whether or not they have improved on the previous year. Neither IDSR nor ASP currently provides trend data for KS1, due to there being only two years of comparable data, but you can view and download the previous year's data from ASP if you have access. We can compare 2017 to 2016 results but please ensure you consider the context of cohorts (e.g. prior attainment, SEND, EAL etc) when doing this. 

The FFT KS1 dashboard does provide the previous year's data, and the overview page can be particularly insightful. It provides a combined reading, writing and maths result for both expected standards and greater depth, displayed as a neat, easy-to-understand speed dial. Unlike ASP and IDSR, the data will indicate if results are significantly above or below the national average (green or red dot), and will also show if results are significantly improving or declining (up or down arrow). The right-hand side of the report shows how the school's KS1 results compare to estimated outcomes based on pupils' start points (using EYFS data). This is a form of progress measure, and it will reveal if results are above 'expected' despite being below national, or below 'expected' despite being above national, depending on pupils' development at the end of the foundation stage. 

Key Stage 2
Key measures: % attaining expected and high standards, average scaled scores, progress scores, floor and coasting standards
Let's face it: there are a lot of measures at KS2. The key measures that schools have to display on their websites (and that are shown in the public performance tables) are a good place to start:
  • % attaining expected standard in reading, writing and maths combined*
  • % attaining the higher standard in reading, writing and maths combined**
  • Average progress in reading
  • Average progress in writing
  • Average progress in maths
  • Average scaled score in reading
  • Average scaled score in maths
* score of 100+ in reading and maths test and expected standard in writing
** score of 110+ in reading and maths test and greater depth in writing

Unlike EYFS, phonics, and KS1 data, which is not in the public domain, the KS2 data listed above is neatly presented on the main page of the performance tables for each school, and governors are advised to be aware of this. The school results are shown alongside LA and national figures, and previous years' results are now available (just 2016 at time of writing) for comparison. Again, context of cohorts needs to be taken into account when evaluating performance over time. 

The DfE does not categorise attainment data (i.e. does not indicate if it is significantly above or below average - you'll need FFT reports for that information), but the IDSR will show if results are in the top or bottom 20% nationally (this will be stated on the front page as an 'area to investigate'). Progress scores, however, are categorised (in both ASP and the performance tables) as follows:
  • Well above average (dark green): progress is significantly above average and in top 10% nationally
  • Above average (light green): progress is significantly above average but not in top 10%
  • Average (yellow): progress is broadly in line with national average
  • Below average (orange): progress is significantly below average but not in bottom 10%
  • Well below average (red): progress is significantly below average and in bottom 10%
It is vital that governors are aware of their school's progress category for each subject, and most importantly are able to discuss progress in broad terms, particularly issues that have resulted in low progress scores, or what has led to high scores.

As at KS1, FFT's KS2 reports show if attainment and progress is significantly above or below average (green and red dots) and indicate if standards have significantly improved or declined (up and down arrows). Again, the overview page of the FFT governor dashboard is incredibly useful for quick reference. 

Other KS2 headlines that governors should know about are:
  • Floor standards: is your school below floor or has it been in the past? (see IDSR front page)
  • Coasting: similar to floor standards but over 3 years. Is your school defined as coasting? (again, see IDSR front page)
  • Absence: it's not an academic measure but it's vital we know the school's overall and persistent absence figures, how they compare to national, and whether they are going up or down
There are of course other KS2 measures including percentages attaining expected and higher standards in individual subjects. It may be that the combined result is low due to underperformance in all subjects or just one, and it's important that we investigate this. Also, don't ignore science and grammar, punctuation and spelling (GPS/SPaG), results for which can be found in IDSR. Also note that FFT dashboards show progress data for grammar, punctuation and spelling, and - as for other subjects - indicate if results are improving or declining.

This is a lot to take in and governors cannot be expected to carry all this information around in their heads. Focus on the headline measures in the bullet point lists above; be aware of how results relate to national figures, and whether or not those results are improving.

And, of course, we also need to know the performance of key groups of pupils, and that's the subject of the last post in this series. 

Saturday, 14 April 2018

The Progress Horizon

Following the release of further details on the proposed reception baseline and future progress measures this week, and the inevitable battle for the soul of primary education already in full swing, I find myself distracted by the possible mechanics of these measures, specifically what will happen to those pupils that change schools.

The issue of measuring the progress of 'mobile' pupils is a murky and complex one. Currently, it is straightforward in design and yet deemed by many to be extremely unfair. When a pupil moves schools they take their baseline - their KS1 or KS2 results - with them (if they have KS1 or KS2 results of course), and they are included in the new school's progress measures. The new school is solely responsible for the progress that pupil makes, even if they arrive late in Year 6 or Year 11. Of course, in some cases a school may benefit by admitting a pupil that does very well in relation to their specific start point, but often pupils that change schools do less well than their more rooted peers.

But what of this new reception baseline to KS2 progress measure? How will it deal with mobile pupils? Will it include them or not? Over the last year or so a number of people have told me that the new progress measure would be 'a cohort-level measure'; that it would not involve progress of individual pupils and would not take account of movement of pupils in and out of the school. If this were true then it would be a radical departure from the current measure which does just that. I assumed that this resulted from a misinterpretation of information in the primary assessment consultation, which states that reception baseline data will be used 'to calculate their school’s cohort-level progress measures.'; that it would not be used to evaluate the progress of individual pupils.

This is no different to the guidance on the current progress measure, which is 'a school-level accountability measure. Progress is calculated for individual pupils solely in order to calculate the school’s overall progress scores. There is no need for schools to share individual pupil progress scores with their pupils or parents.'

On the subject of whether or not so-called mobile pupils will be included in future progress measures, we are getting mixed messages even from the experts. In the TES (16th March 2018, p14), Greg Watson, chief executive of GL Assessment, lists 'three key challenges: matching the pupil data accurately in the first place, keeping track of the data as pupils move between schools, and, in the cases where pupils have moved, deciding how much credit each school gets for progress.' This suggests that the issue of mobility is high on the agenda, and the last point - apportioning credit for progress between schools in the case where a pupil moves - is an interesting and new development that deserves some serious consideration.

And yet Professor Robert Coe, director of the Centre for Evaluation and Monitoring at Durham University, is quoted in the same article as saying "Does it make sense to wait seven years from the time children start school to make a punitive judgement about the school based on the performance of whatever proportion of that small number of children are still at the same school? Not remotely."

Putting aside the main point - which I agree with - this implies that the measure will only involve those pupils retained since the start of reception. I had assumed that the current methodology would continue, whereby individual pupils' progress scores are calculated and aggregated to generate the school's progress score; and that any pupil that changes school would be matched back to their baseline score and included in the new school's measures - no matter how unfair that seems.

But there is an interesting sentence in the primary assessment consultation response* which is maybe the source of much of the confusion:

In addition, we will work with analytical experts to develop the rules around the new progress measures, for example the minimum cohort size required and the minimum proportion of pupils that need to have been in the same school between reception and key stage 2.

But this can be interpreted in two ways:
  1. Mobile pupils ARE included in the progress measures. The DfE calculate the percentage of pupils retained since reception and do not publish progress data if retention falls below a certain threshold.
  2. Mobile pupils are NOT included in the progress measures. The DfE calculate the percentage of pupils retained since reception and do not publish progress data if retention falls below a certain threshold. 
A measure of retention would certainly provide useful contextual information, and perhaps progress measures should be withheld for those schools with high mobility, but I'm not sure I want to see mobile pupils omitted from measures full stop. What percentage of pupils actually remain in a school from reception to KS2 anyway? We could see a lot of pupils excluded from progress measures. 

The way I see it, we have four choices:
  1. Simply compare average attainment at the start of reception to the average attainment at the end of KS2, and ignore any movement in between. This would be a crude and meaningless measure. You only need look at the difference between the KS1 prior attainment of the current year 6 in a school and the KS1 results four years ago to see that such an approach would not work. This measure takes no account of movement in and out of the school.
  2. Measure only the progress for those pupils retained since reception, and not include any new arrivals. Many schools will therefore have small numbers of matched pupils, and, according to the statement above, could end up having no published progress measures if retention falls below a certain threshold. This measure removes those that leave but does not add those that arrive.
  3. Carry on as now, including all pupils with a baseline in the school's progress measure, regardless of where that baseline was administered and how long the pupil has been in the school. This measure takes account of those that leave and arrive.
  4. As above but apportioning progress between schools in the cases where pupils have moved. This measure takes account of those that leave and arrive but is proportional (but no doubt complicated).
I know all of this is years away - the first cohort of reception baseliners reach the end of KS2 in 2027 - but I dwell on these things and some clarity, or at least some vague proposals, would be welcome. Otherwise we'll all continue to speculate and worry. I have tweeted the DfE for an answer.

I eagerly await their response.

*Many thanks to Kate Barker (@K8ebarker) for bringing this to my attention.

Tuesday, 10 April 2018

5 things primary governors should know about data. Part 3: progress measures

The key stage 1-2 (KS1-2) progress measure is a value added (VA) measure, and this is nothing new. We have had VA measures for years, both at KS2 and at KS4. But previously these VA measures - which took up pages of the old RAISE reports - played second fiddle to the levels of progress measure. This was for a number of reasons:
  1. Levels of progress was a key measure with floor standards attached
  2. It was in the same language as everyday assessment
  3. It made target setting easy (just add 2 levels to KS1 result)
  4. It was simple and everyone understood it
But levels have gone, and for good reason: they labelled children, they were best-fit so pupils could have serious gaps in learning but still be placed within a level, and progress became synonymous with moving on to the next level rather than consolidating learning and developing deeper understanding. Plus, they were never designed to be split into sublevels or points and used for progress measures anyway.

Most confusing of all, the two progress measures - VA and levels of progress - often contradicted one another. It was possible, for example, for a school to have all pupils make 'expected' progress of 2 levels, and yet have a VA score that was significantly below average. This was because - contrary to popular belief - the VA measure had nothing to do with levels; it was all to do with average KS2 scores from KS1 start points. 2 levels might be enough from one start point but nowhere near enough from another. 

But this is all rather academic now because levels have gone and we are left with a single progress measure: VA.

So, what is VA? 

VA involves comparing a pupil's attainment score at KS2 to the average score for pupils with similar prior attainment. There are a few myths we need to bust first, before we continue:
  1. We do not need data in the same format at either end of the measure to calculate VA. Currently we have KS1 (sub)levels at the beginning and KS2 scaled scores at the end. These data are not in the same format. We needed compatible data for the levels of progress measure but not for VA. This misconception is a hangover from levels, and it's something that is better understood in secondary schools where they have KS2 scores at one end and GCSE results at the other.
  2. We do not even need the same subjects at either end. Again, this is better understood in secondary schools, where the baseline comprises KS2 scores in reading and maths (note: no writing) and the end point is any GCSE the student sits. VA can be measured from KS2 test scores in reading and maths to GCSE result in Russian or Art, for example. 
  3. KS1-2 VA has nothing to do with that magic expected standard score of 100. Plenty of pupils get positive progress scores at KS2 without achieving a score of 100 in KS2 tests. They just need to exceed the national average score of pupils with the same prior attainment, and scoring 92 might be enough, depending on start point. And pupils that achieved 2b at KS1 (often referred to as 'expected' in old money) do not have to achieve 100 to make 'good' progress; in 2017 they had to exceed 102!
Each pupil's KS1 result - their prior attainment or start point - is therefore crucial to this process. Each p-scale, level and sublevel in reading, writing and maths at KS1 has a point value, which enables the DfE to calculate a KS1 average point score (APS) across the three subjects for every child that has a KS1 result (note: pupils without a KS1 result are excluded from progress measures). Their KS1 APS is then used to place pupils into a prior attainment group (PAG), of which there are currently 24, ranging from pupils that were on p-scales at KS1 (pupils with SEND) up to pupils that were Level 3 in all subjects. There is even a PAG for pupils that were Level 4 at KS1, but there aren't many pupils in that group.
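The APS-to-PAG placement described above can be sketched in a few lines of Python. Note that the point values used here are illustrative approximations, not the DfE's official table:

```python
# Sketch of how a KS1 average point score (APS) is computed. The point values
# below are illustrative, not the DfE's official figures.
KS1_POINTS = {"L1": 9, "2c": 13, "2b": 15, "2a": 17, "L3": 21}

def ks1_aps(reading, writing, maths):
    """Average the point values for the three KS1 subjects."""
    return (KS1_POINTS[reading] + KS1_POINTS[writing] + KS1_POINTS[maths]) / 3

# A pupil with 2b in reading and writing and 2a in maths:
ks1_aps("2b", "2b", "2a")  # roughly 15.7 with these illustrative values
```

The resulting APS is then used to slot the pupil into one of the 24 prior attainment groups.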

All pupils with KS1 results are therefore slotted into PAGs alongside thousands of other pupils nationally. The DfE then take in all the KS2 test scores and calculate the average KS2 score for each PAG. Let's look at two examples for reading at KS2 (the process is the same for maths):
  • We have two pupils in a class that have KS1 prior attainment of 16 APS (2b in reading and writing and 2a in maths at KS1). They are placed into the same PAG as thousands of other children nationally with 16 APS at KS1. The DfE take in all the thousands of reading test scores for all the pupils in this PAG and calculate the average score, which for this PAG is 105 (note: in reality benchmarks are to 2 decimal places e.g. 104.08). 105 therefore becomes the benchmark for this group. Our two pupils scored 108 and 101 in their KS2 tests and both have met the expected standard. However, only one pupil has a positive progress score. The pupil scoring 108 has beaten the national benchmark by 3 whilst the other has fallen short by 4. These pupils' VA scores are therefore +3 and -4 respectively.
  • We have two other pupils in our class who have KS1 prior attainment of 10 APS (2c in reading and Level 1 in writing and maths). They are in the same PAG as thousands of other children nationally with 10 APS at KS1. The DfE collect the reading test scores for all pupils in the group nationally and calculate the average KS2 score, which in this case is 94 (again, in reality this would be to 2 decimal places). 94 therefore becomes the benchmark for this group. Our two pupils scored 98 and 88 in their KS2 tests. Neither has met the expected standard but the first pupil has beaten the national benchmark by 4 whilst the other has fallen short by 6. These pupils' VA scores are therefore +4 and -6 respectively.
This process is repeated for each pupil that has a KS1 result. All pupils are placed into PAGs and their scores in KS2 tests are compared to the national average score (the benchmark) for pupils in the same PAG. If a pupil beats the benchmark, they have a positive progress score; if they fall short, their progress score is negative. Page 17-18 of the primary accountability guidance has a table of all PAGs with their corresponding KS2 benchmarks in reading, writing and maths. 
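The per-pupil calculation described above can be sketched as follows. The two benchmarks are the illustrative ones from the worked examples; real benchmarks run to two decimal places and cover all 24 PAGs:

```python
# Minimal sketch of the per-pupil VA calculation: a pupil's KS2 score is
# compared to the national benchmark for their prior attainment group (PAG).
BENCHMARKS = {16: 105.0, 10: 94.0}  # KS1 APS group -> national average KS2 score

def progress_score(aps_group, ks2_score):
    """VA = pupil's KS2 score minus the benchmark for their start point."""
    return ks2_score - BENCHMARKS[aps_group]

progress_score(16, 108)  # +3.0: beat the benchmark
progress_score(16, 101)  # -4.0: met the expected standard, negative progress
progress_score(10, 98)   # +4.0: missed the expected standard, positive progress
```

Note how the expected standard score of 100 plays no part in the calculation: only the benchmark matters.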

What happens next?

The DfE take all progress scores for all pupils in the year 6 cohort in your school, and calculate the average. In our example above we have four pupils (two with prior attainment of 16 APS and two with 10 APS). Let's imagine that is our entire Y6 cohort (it's a small school!). We add up the progress scores (3 + -4 + 4 + -6 = -3) and calculate the average (-3 / 4 pupils = -0.75). This school's VA score is therefore -0.75, and you will see these aggregated progress scores presented in the performance tables and ASP (where they are colour coded and categorised), in Ofsted's IDSR (where they inform the areas to investigate), and in FFT reports (where they are shown to be in line with, or significantly above or below average).

And what does -0.75 mean? Putting it crudely, it tells us that, on average, pupils scored 0.75 fewer points in their tests than similar children nationally. And when the DfE say 'similar children', they are basing this on prior attainment alone, not contextual factors. A progress measure that takes context into account is called Contextual Value Added (CVA), which the DfE scrapped in 2011, but which FFT still offer. CVA is an attempt to create a like-for-like progress measure but is not favoured by government.
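The aggregation step itself is simply a mean of the pupil-level scores. A minimal sketch, using an invented six-pupil cohort rather than the worked example:

```python
# The school-level VA figure is just the mean of the individual pupil progress
# scores. The cohort below is hypothetical, purely for illustration.
pupil_scores = [3.0, -4.0, 1.5, 0.0, -2.5, 2.0]
school_va = sum(pupil_scores) / len(pupil_scores)  # 0.0: in line with national average
```

A score of 0 means the cohort, on average, matched the national average for pupils with the same start points.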

Are there any issues with KS1-2 progress measures? Whilst VA is preferable to levels of progress, there are numerous problems:
  1. Writing! There is no test for writing at KS2 but there is still a progress measure. As in reading and maths, pupils are set benchmarks in writing that are fine-graded to decimal places (see pages 17-18 of the primary accountability guidance), but because pupils do not have a test score, these benchmarks are essentially unachievable. Instead, the DfE have assigned 'nominal' scores to teacher assessments for writing, which makes for a very clunky measure. The vast majority of pupils are assessed as either working towards the expected standard, working at the expected standard, or working at greater depth. These attract values of 91, 103, and 113 respectively. In reading and maths, pupils can achieve test scores in the range of 80-120; in writing, they get 91, 103 or 113. It doesn't work.
  2. Pupils below the standard of the tests/curriculum are also assigned nominal scores, which range from 59 for the lowest p-scales, up to 79 for the highest of the pre-key stage assessments. These pupils often have SEND and tend to end up with big negative progress scores, which can have a detrimental impact on a school's overall progress scores. The system is therefore punitive towards those schools that have large groups of pupils with SEND (or towards small schools with just one such pupil). The DfE plan to mitigate this issue by capping negative scores this year. 
  3. It can't be predicted. The benchmarks change every year (they are the national average scores for each PAG that year), and we don't know what they are until after pupils have left. This is a headache for many headteachers and senior leaders.
  4. It relies on the accuracy of KS1 results. I’ll say no more about that. 
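The clunkiness of the writing measure (point 1 above) is easy to demonstrate: benchmarks are fine-graded, yet only three nominal scores are possible. A sketch, with a hypothetical benchmark:

```python
# Why the writing measure is clunky: pupils can only receive one of three
# nominal scores (the 91/103/113 values quoted above), yet benchmarks are
# fine-graded. The benchmark below is hypothetical.
WRITING_NOMINAL = {"WTS": 91, "EXS": 103, "GDS": 113}

benchmark = 97.5  # a hypothetical fine-graded writing benchmark for one PAG
gaps = {ta: score - benchmark for ta, score in WRITING_NOMINAL.items()}
# Only three progress scores are possible: -6.5, +5.5 or +15.5 - nothing between.
```

Against that benchmark, no pupil can land close to zero: every pupil's writing progress score is a big jump one way or the other.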
Now you know how these progress measures are calculated, and what the issues are. But what do they mean in terms of school accountability? 

That's the subject of the next post in this series: headlines and trends.

Saturday, 7 April 2018

5 things primary governors should know about data. Part 2: data sources

In part 1 we dealt with statutory assessment - the data that the DfE collects from primary schools. In part 2 we're going to look at the main sources of data that governors should be aware of: the key reports, when they're made available, who has access, and what they contain. Here we will focus on four main sources: the Performance Tables, Analyse School Performance (ASP) system, Ofsted's Inspection Data Summary Report (IDSR), and Fischer Family Trust (FFT) dashboards, plus a couple of other important points in the calendar.

1) Results day
Published first week in July
Availability: Secure NCA Tools website. Login required. No access for governors.
Schools receive pupils' KS2 test scores and a summary of results via the NCA Tools website. Not in the public domain.

2) Checking exercise
Published 31st August/1st September
Availability: Secure Data Checking website. Login required. No access for governors.
Data checking website opens for schools to check and query results. Also contains pupils' progress scores and summary sheet of all results to be shown in the performance tables. Not in public domain.

3) The Performance Tables
Published annually in December
Availability: Public domain. Requires no login.
Often referred to as the league tables, it is important to note that this is the only publicly available data source on the list, and it contains information on most schools in England.

For secondary school data it is a fantastic resource, and contains nearly as much as ASP. For primary schools it is more limited and only contains data for key stage 2 (KS2). There is no school level data for early years foundation stage (EYFS), phonics (PSC), or key stage 1 (KS1) contained in the performance tables. 

It is also important to note that data is only released in the performance tables once it has been validated. After it has been collected, school data goes through a checking exercise in the autumn term to ensure it is a true reflection of a school's results. Pupils may be discounted from results if they are recent arrivals from overseas for example, and these will be removed during the checking exercise. Also, a school may have had some test scripts successfully re-marked. All of these changes will be taken account of in the validated data, and this is why the DfE do not publish school data in the public domain until it is deemed 'clean'. Even after this process, some schools may still find errors which can be corrected before data is finalised. The other (non-public) sources, detailed below, publish data prior to validation, during the autumn term (referred to as unvalidated data), and governors need to be aware that these reports may contain errors (which will be corrected in future releases).

The performance tables contain the following data for KS2
  • % attaining expected standard in reading, writing and maths (single combined measure)
  • % attaining higher standard in reading, writing and maths (single combined measure)
  • Average scaled scores (two separate measures)
  • Average progress scores in reading, writing and maths (three separate measures)
The school results are shown alongside the national and local authority figures in all cases except for progress (note: progress is a relative measure and schools are compared to 0. This will be explained in the next blog). The performance tables now contain the past two years' results (2016 and 2017) for all key measures - scroll down to the dropdown links below the main results.

The performance tables also provide data for certain pupil groups: disadvantaged (pupils on free school meals in past 6 years (FSM6), looked after children (CLA/LAC), or pupils adopted from care), gender, prior attainment (low, middle, high prior attainment groups defined by pupils' level at KS1), English additional language (EAL), and mobility (pupils that joined during years 5 or 6). There is no data for SEN pupils in the performance tables - they are included in overall figures but results are not shown for that group specifically. It is important that governors have an awareness of data for particular pupil groups, especially disadvantaged pupils, and this will be the subject of the final blogpost in this series.

The main page of performance tables is essential information for governors and it's worth printing it out.

4) Analyse School Performance (ASP)
Published annually in October/November and updated with later released data (eg EYFSP and attendance) and validated data (December)
Availability: Secure site requiring login.
ASP replaced RAISEonline and is not publicly accessible. Governors can be granted access but - please take note - only to the anonymised version of the system, which contains no pupil-level data. It is a fairly simple system and, apart from extended data on pupil groups and individual subjects, it does not provide much more than the performance tables in terms of KS2 data. It does however contain results for EYFSP, phonics and KS1, and presents them in the familiar, performance tables-style format with national and LA comparators. From a governor's point of view, ASP does not have a great deal to offer and governors are advised to focus more on the Ofsted Inspection Data Summary Report (IDSR).

5) Inspection Data Summary Report
Published annually in October/November
Availability: Download via ASP. Login required.
The IDSR, which replaced the Ofsted Dashboard, is an inspector's key source of school data and is therefore essential reading for governors. It is a PDF document downloaded from ASP and is not in the public domain. The front page of the report lists areas to investigate - which rather confusingly may be positive or negative statements - and shows if the school is below floor standards or deemed to be 'coasting'. The following pages contain contextual information about the school including absence, exclusions, deprivation, numbers of pupils in certain key groups, and prior attainment of cohorts in reading, writing and maths.

The report shows a breakdown of progress and attainment (% attaining expected standards and higher standards) in reading, writing and maths at KS2. Scatter plots are used to reveal outliers. Results in grammar, punctuation and spelling tests, and science assessments at KS2 are also provided. Green and red boxes are drawn around data that is significantly above or below national average, although there is inconsistency here and governors should be aware that in some parts of the report this highlighting indicates that the school is in the top or bottom 10%, whereas elsewhere it does not (hint: read the small print). Statistical significance indicators are only used for KS2 progress data. A 3 year trend is provided for progress in each subject (based on school's national ranking each year) but no trends are provided for KS1 or KS2 attainment (due to there only being 2 years of comparable data).

KS1 results (% attaining expected standards and greater depth) are presented in the same format as KS2. Note: KS1 data is attainment only; there are no progress measures for KS1 (although the report does show a breakdown of KS1 attainment by start point (taken from EYFSP), which is a sort of pseudo-progress measure).

Phonics results (% attaining expected standard in Y1, and by the end of Y2) are shown as three year trends against national figures. IDSR also provides 2 pages of EYFSP outcomes and governors should pay particular attention to the percentage reaching a good level of development (GLD), which is also presented as a three year trend against national figures. One little quirk: on other pages of the IDSR, light blue bars indicate attainment of expected standards, and dark blue bars show attainment of higher standards. For EYFSP, light blue bars indicate the result for the whole cohort whilst dark blue bars are used for FSM pupils. 

In all cases, comparisons are made against national figures. Unlike in ASP and performance tables, local authority comparators do not feature in IDSR.

Data on performance of pupil groups is very limited in the IDSR. The report focusses on four key groups: disadvantaged pupils; and low, middle and high prior attainment pupils. At KS2, prior attainment is based on levels achieved at KS1 (low = below L2, middle = broadly L2, high = broadly L3). At KS1, prior attainment is based on development in the EYFSP and is defined as emerging, expected, or exceeding, but these can be viewed as low, middle and high prior attaining.

To sum up, the IDSR is not the most user friendly report, but it is essential that governors are familiar with it. Schools need to devote time to IDSR training for governors.

6) FFT Dashboards
Published September and updated with validated data.
Availability: Secure site. Subscription to FFT required. Governor login available. 
FFT data is published earlier than other reports and is presented in a clear, accessible format, which makes it an attractive option for many schools. FFT dashboards provide analysis of KS1 and KS2 data; they do not provide analysis of phonics or EYFSP. FFT compare results to national figures (attainment) but they also compare them to an estimated outcome based on pupils' start points (progress). This means that a low prior attaining cohort may have results that are below national average, but above the estimated outcome (low attainment, high progress). Equally, a high prior attaining cohort may have results that are above national average but below the estimated outcome (high attainment, low progress). FFT provide trends for both progress and attainment (they have converted pre-2016 results into 'new money'), and indicate where results are significantly improving or declining, and where data is significantly above or below national average. Reports also show where the school ranks nationally for both attainment and progress. The overview page is particularly useful for governors, with its clear 'speed dial' format and table showing higher and lower performing groups in each subject. The pupil groups page is also extremely valuable in that it ranks groups in order of progress from lowest to highest - using a combined reading and maths progress measure - and provides red and green indicators to show if any group's progress is significantly below or above average. This clarity is something that is missing from ASP. If your school has an FFT subscription, it's definitely worth taking a look at FFT dashboards.

Coming up in the next blog.....

KS1-2 progress measures in reading, writing and maths are a critical part of the accountability machine, and it's important that governors have some understanding of how they are calculated, what they mean, and what the flaws are in these measures. This is the subject of the next blog post. 

Friday, 6 April 2018

5 things primary governors should know about data. Part 1: statutory assessment

Data is a minefield. It’s hard enough for head teachers and senior leaders to find their way through it all, but for many governors it is a minefield in a swamp, with ditches filled with spikes, and pits of burning tar. And crocodiles.

It is vital that governors have a sound working knowledge of school data. They need to know what data is collected and when, where they can find data for their school, and what - if anything - it tells them. They need to be able to separate the important stuff from the noise, pull out the key messages, ask the right questions, and understand the limitations of data. They need to understand how their school tracks the progress of its pupils but also understand that they must not seek to influence this process. They also pretty much need to learn a new language in order to get by, and this means knowing 1001 acronyms, which we'll deal with as we go along. 

Data is a minefield and governors need to be able to navigate it. So let’s begin our 5 things governors need to know about data.

I originally intended this to be a single blog post, by the way, but it would have been huge, so I've decided to make it bitesize. Hope it's useful.

1) Know what data is collected (statutory assessment)
Before we move on to those all important sources of data, we first need to know what data the DfE collect. This data is derived from statutory assessment, and there are four statutory assessment points in the primary phase: Early Years Foundation Stage Profile, Phonics, Key Stage 1, and Key Stage 2.

Early Years Foundation Stage Profile (EYFSP) is carried out at the end of the reception year (YR). Pupils' development is assessed against 17 Early Learning Goals (ELG) spread across 7 areas of learning. In each of these ELGs, a pupil's development is assessed as either emerging, expected or exceeding. If a pupil reaches the expected level of development in the 12 key ELGs - those that make up the prime areas of communication and language, physical development, and personal, social and emotional development; and the specific areas of numeracy and literacy - then they are deemed to have made a 'Good Level of Development' (GLD). The percentage reaching GLD at the end of reception is a key measure and it's worth governors being aware of it. This data is collected by the DfE but is not available in the public domain (i.e. in performance tables).

Phonics Screening Check (PSC) is carried out at the end of Year 1 (Y1). The assessment involves pupils attempting to decode 40 words, half of which are real, the other half made-up. The pass mark has been 32/40 since the assessment was introduced in 2012 but this may change at some point. If pupils do not achieve the pass mark in Y1 then they are assessed again in Y2. The percentage achieving the phonics check a) at the end of Y1 and b) by the end of Y2 are key measures, and again governors are advised to know - or be aware of - these figures. This data is collected by the DfE but is not available in the public domain. 

Key Stage 1 (KS1) assessments are made at the end of year 2 (Y2). Pupils sit tests in reading and maths (there is no test for writing). These tests are marked internally and marks are converted into a scaled score in the range 85-115. A score of 100 or above equates to the expected standard (unlike at KS2 there is no definition of a high score). KS1 scaled scores are only used to inform the final teacher assessment; the scores are NOT collected by the DfE and are therefore not used in accountability measures. Pupils receive a teacher assessment in reading, writing, maths and science. In science, pupils are simply assessed as having met or not met expected standards (science does not form part of key measures). In reading, writing and maths the vast majority of pupils are assessed as either working towards the expected standard (WTS), working at the expected standard (EXS), or working at greater depth within the expected standard (GDS). These are all pupils that are working within the KS1 curriculum. Your school may also have a small number of pupils assessed as pre-key stage (PKS) or working below (BLW). These are usually pupils with special educational needs (SEN) but may also be pupils for whom English is an additional language (EAL). The percentage of pupils attaining expected standards and greater depth in reading, writing and maths are key measures that governors should be aware of. Again, this data is collected by the DfE but is not available in the public domain. 

Key stage 2 (KS2) assessment is made at the end of year 6 (Y6). With the exception of writing, in which there is only teacher assessment, at KS2 the test is king. There are tests in reading, maths, and grammar, punctuation and spelling (GPS, commonly referred to as SPaG). These tests are externally marked and scores are in the range of 80-120, with a score of 100 or above indicating that the pupil has attained the expected standard, and a score of 110+ classified as a high score. If pupils score below 100, they are deemed to have not met the expected standard (schools may challenge this if the pupil misses the standard by a mark). Note that a 110+ score is defined as a 'high score', not 'greater depth' (greater depth is for writing only). Pupils below the standard of the reading and maths tests - due to SEN or EAL - will be assessed as pre-key stage (PKS) or working below (BLW). Note: BLW/PKS assessments do not apply to GPS.

In writing, where there is no test, pupils are assessed by the teacher. Assessments are in the same format as KS1, with pupils assessed as either working towards the expected standard (WTS), working at the expected standard (EXS), or working at greater depth within the expected standard (GDS). As in reading and maths, pupils can also be defined as working below (BLW) or pre-key stage (PKS) if they are below the KS2 assessment criteria.

In science, pupils are assessed as either meeting or not meeting expected standards. There are no pre-key stage assessments for science.

KS2 results are collected by the DfE and are published in the public domain via the performance tables. The key measures are as follows:
  • Percentage of pupils attaining the expected standard in reading, writing, and maths combined
  • Percentage of pupils attaining a high standard in reading, writing and maths combined
  • Average scaled score in reading and maths tests
  • Average progress in reading, writing and maths
These are the measures that schools must publish on their websites and governors should certainly know about these figures. There is of course a lot more detail in the various sources of data, which governors should be aware of.
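The combined measures above can be sketched as follows: a pupil only counts towards the combined figure if they reach the standard in reading, writing AND maths. The cohort data here is invented for illustration:

```python
# Sketch of the combined expected-standard measure: 100+ in the reading and
# maths tests, and EXS or GDS in the writing teacher assessment.
def meets_expected(reading, maths, writing_ta):
    return reading >= 100 and maths >= 100 and writing_ta in ("EXS", "GDS")

# Invented three-pupil cohort: (reading score, maths score, writing TA)
cohort = [(103, 99, "EXS"), (110, 112, "GDS"), (101, 100, "WTS")]
pct_expected = 100 * sum(meets_expected(*p) for p in cohort) / len(cohort)
```

This is why the combined figure is always lower than (or at best equal to) the figure for any individual subject: one near-miss in one subject removes the pupil from the combined measure.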

And it's those sources of data that we'll deal with in part 2.