Tuesday, 7 November 2017

Analyse School Performance summary template (primary)

Many of you will have downloaded this already but I thought it'd be useful to put it on my blog. For those who don't already have it, it's a rather low tech and unexciting Word document designed to guide you through ASP and pull out the useful data. The aim is to summarise the system in a few pages.

You can download it here

The file should open in Word Online. Please click on the 3 dots top right to access the download option. Please don't attempt to edit online (it should be view only anyway). Also, chances are it will be blocked by school computers (schools always block my stuff).

A couple of points about the template:

1) Making sense of confidence intervals
Only worry about this if progress is significantly below average, or if data is in line but high and close to being significantly above.

If your data is significantly below average, take the upper limit of the confidence interval (it will be negative, e.g. -0.25). This shows how much each pupil's score needs to increase by for your data to be in line (0.25 points per pupil, or 1 point for every 4th pupil). Tip: multiply this figure by the number of pupils in the cohort (e.g. -0.25 x 20 pupils = -5). If you have a pupil - for whom you have a solid case study - with a score at least equal to the result (i.e. -5 in this case), removing that pupil from the data should make your data in line with national average.

If your data is in line and you are interested to know how far it would need to shift to be significantly above, note the lower limit of the confidence interval (it will be negative, e.g. -0.19). This again shows how much your data needs to shift up by, but in this case to be significantly above. Here, each child's score needs to increase by 0.2 points for the overall progress to be significantly above (we need to get the lower limit of the confidence interval above 0, so it needs to rise by slightly more than the lower confidence limit). Obviously pupils cannot increase their scores by 0.2, so it's best to think of it as 1 point for every 5th child. Or, as above, multiply the lower confidence limit by the number of pupils in the cohort (e.g. -0.2 x 30 pupils = -6). If you have a pupil with a score at least equal to this result (i.e. -6) then removing them from the data should make the data significantly above average.
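To make the arithmetic above concrete, here's a small Python sketch of the same calculation. The figures are the hypothetical examples from the text, not real school data.

```python
def required_shift(limit, n_pupils):
    """Total score shift across the cohort needed to move the
    relevant confidence limit up to zero (limit is negative in
    both scenarios described above)."""
    return limit * n_pupils

# Significantly below: upper limit -0.25, cohort of 20 pupils
assert required_shift(-0.25, 20) == -5.0

# In line but close to significantly above: lower limit -0.2, 30 pupils
assert required_shift(-0.2, 30) == -6.0
```

A pupil with an individual progress score at or below that result is the one whose removal should tip the overall figure over the relevant threshold.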

Easiest thing to do is model it using the VA calculator, which you can download from my blog (see August) or use the online version www.insighttracking.com/va-calculator

2) Difference no. pupils
This has caused some confusion. It's the same concept as applied in last year's RAISE and dashboards. Simply take the percentage gap between your result and national average (e.g. -12%), turn it into a decimal (e.g. -0.12) and multiply that by the number of pupils in the cohort (e.g. 30). In this case we work out 'diff no. pupils' as follows: -0.12 x 30 = -3.6. This means the school's result equates to 3 pupils below average. If the school result is above national then it works in the same way; it's just that the decimal multiplier is positive.

If you are calculating this for key groups, then multiply by the number in the group, not the cohort. For example, say 80% of the group achieved the result against a national group result of 62%, which means the group's result is 18% above national. There are 15 pupils in the group so we calculate 'diff no. pupils' as follows: 0.18 x 15 = 2.7. The group result therefore equates to 2 pupils above national.
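The same calculation can be sketched in a few lines of Python, using the figures from the examples above (the school percentages here are invented to produce the gaps in the text):

```python
def diff_no_pupils(school_pct, national_pct, group_size):
    """'Diff no. pupils': the percentage-point gap as a decimal,
    multiplied by the number of pupils in the cohort or group."""
    return (school_pct - national_pct) / 100 * group_size

# Whole cohort: e.g. school at 50% vs national 62% gives the -12% gap
assert round(diff_no_pupils(50, 62, 30), 1) == -3.6

# Key group: 80% against a national group result of 62%, 15 pupils
assert round(diff_no_pupils(80, 62, 15), 2) == 2.7
```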

I hope that all makes sense.

Happy analysing.

Wednesday, 25 October 2017

MATs: monitoring standards and comparing schools

A primary school I work with has been on the same journey through assessment land as many other schools up and down the country. Around two years ago they began to have doubts about the tracking system they were using - it was complex and inflexible, and the data it generated had little or no impact on learning. After much deliberation, they ditched it and bought in a more simple, customisable tool that could be set up and adapted to suit their needs. A year later and they have an effective system that teachers value, that provides all staff with useful information, and is set up to reflect their curriculum. A step forward.

Then they joined a MAT.

The organisation they are now part of is leaning on them heavily to scrap what they are doing and adopt a new system that will put them back at square one. It's one of those best-fit systems in which all pupils are 'emerging' (or 'beginning') in autumn, mastery is a thing that magically happens after Easter, and everyone is 'expected' to make one point per term. In other words, it's going back to levels with all their inherent flaws, risks and illusions. The school tries to resist the change in a bid to keep their system but the MAT sends data requests in their desired format, and it is only a matter of time before the school gives in.

It is, of course, important to point out that not all MATs are taking such a remote, top down, accountability driven approach, but some are still stuck in a world of (pseudo-) levels and are labouring under the illusion that you can use teacher assessment to monitor standards and compare schools, which is why I recently tweeted the following:


This resulted in a lengthy discussion about the reliability of various tests, and the intentions driving data collection in MATs. Many stated that assessment should only be used to identify areas of need in schools, in order to direct support to the pupils that need it; data should not be used to rank and punish. Of course I completely agree, and this should be a strength of the MAT system - they can share and target resources. But whatever the reasons for collecting data - and let's hope that it's done for positive rather than punitive reasons - let's face it: MATs are going to monitor and compare schools, and usually this involves data. This brings me back to the tweet: if you want to compare schools, don't use teacher assessment, use standardised tests. Yes, there may be concerns about the validity of some tests on the market - and it is vital that schools thoroughly investigate the various products on offer and choose the one that is most robust, best aligned with their curriculum, and will provide them with the most useful information - but surely a standardised test will afford greater comparability than teacher assessment.

I am not saying that teacher assessment is always unreliable; I am saying that teacher assessment can be seriously distorted when it is used for multiple purposes (as stated in the final report of the Commission on Assessment without Levels). We need only look at the issues with writing at key stage 2, and the use of key stage 1 assessments in the baseline for progress measures to understand how warped things can get. And the distortion effect of high stakes accountability on teacher assessment is not restricted to statutory assessment; it is clearly an issue in schools' tracking systems when that data is not only used for formative purposes, but also to report to governors, LAs, Ofsted, RSCs, and senior managers in MATs. Teacher assessment is even used to set and monitor teachers' performance management targets, which is not only worrying but utterly bizarre.

Essentially, using teacher assessment to monitor standards is counter productive. It is likely to result in unreliable data, which then hides the very things that these procedures were put in place to reveal. And even if no one is deliberately massaging the numbers, there is still this issue of subjectivity: one teacher's 'secure' is another teacher's 'greater depth'. We could have two schools with very different in-year data: school A has 53% of pupils working 'at expected' whereas school B has 73%. Is this because school B has higher attaining pupils than school A? Or is it because school A has a far more rigorous definition of 'expected'?

MATs - and other organisations - have a choice: either use standardised assessment to compare schools or don't compare schools. In short, if you really want to compare things, make sure the things you're comparing are comparable.


Tuesday, 3 October 2017

Thoughts on new Ofsted inspection data summary report (primary)

Yesterday Ofsted released a 'prototype' of its new Inspection Data Summary Report and it's a major departure from the Ofsted Inspection Dashboard that we've become accustomed to over the past two years. On the whole it's a step in the right direction, with more positives than negatives, and it's good to see that Ofsted have listened to feedback and acted upon it. Here's a rundown of changes.

Positives
Areas for investigation. This is a welcome change. The new areas for investigation are clearer - and therefore more informative - than the 'written by robot' strengths and weaknesses that preceded them, many of which were indecipherable. They read more like the start point for a conversation and hopefully this will result in more productive, equitable relationship between inspectors and senior leaders. 

Context has moved to the front. Good. That's where it should be. It was worrying when context was shoved to the back in RAISE reports. This is hopefully a sign that school context will be taken into account when considering standards. As it should be. 

Sorted out the prior attainment confusion at KS2. Previous versions of the dashboard were confusing: progress measures based prior attainment on KS1 APS thresholds (low: <12, mid: 12-17.5, high: 18+ (note: maths is double weighted)); attainment measures based prior attainment on the pupil's level in the specific subject (low: L1 or below, mid: L2, high: L3). This has now been sorted out and prior attainment now refers to pupils' KS1 APS in all cases. Unfortunately this is not the case for prior attainment of KS1 pupils - more on that below. 

Toning down the colour palette. Previous versions were getting out of hand with a riot of colour. The page of data for boys and girls at KS2 looked like a carnival. Thankfully, we now just have simple shades of blue so sunglasses are no longer required; and nowhere in the new report is % expected standard and % greater depth merged into a single bar with darker portions indicating the higher standard. These are now always presented in separate bars, thankfully. That page was always an issue when it came to governor training. 

Progress in percentiles. Progress over time is now shown using percentiles, which makes a lot of sense and is easy to understand. Furthermore, the percentiles are linked to progress scores, so it shows improvement in terms of progress not attainment. Percentiles show small steps of improvement over time, which means that schools can now put changes in progress scores into context, rather than guessing what changes mean until they move up a quintile. In addition, an indicator of statistical significance is provided, which may show that progress is in the bottom 20% but is not significantly below, or perhaps is in the top 20% but is not significantly above, which adds some clarity. And finally, the percentiles for 2015 are based on VA data, rather than levels. Those responsible for the 'coasting' measure take note. 

Scatter plots. Whilst an interactive scatter plot (i.e. an online clickable version) is preferable, these are still welcome because they instantly identify those outliers that have had a significant impact on data. In primary schools, these are often pupils with SEND who are assessed as pre-key stage, and who end up with huge negative scores that in no way reflect the true progress they made. One quick glance at a scatter plot reveals that all pupils are clustered around the average, with the exception of those two low prior attaining pupils who have progress scores of -18. 

Confidence intervals are shown. I was concerned that they'd stop doing this - showing the confidence interval as a line through the progress score - but thankfully this aspect has been retained. It's useful because schools can show how close they are to not being significantly below, or to being significantly above. Inspectors will be able to see that if that pre-key stage pupil with an individual progress score of -18 was removed from the data, the overall score would shift enough to remove that red box. Statistical significance is, after all, just a threshold. 

Negatives
Prior attainment of KS1 pupils. I'm not against the idea of giving some indication of prior attainment - it provides useful context after all - but I have a bit of a problem here. Unlike at KS2, where prior attainment bands are based on the pupil's APS at KS1, at KS1 prior attainment is based on the pupils' development in specific early learning goals (ELG) at EYFS. Pupils are defined as emerging, expected or exceeding on the basis of their development in reading, or writing, or maths (for the latter they take the lower of the two maths ELGs to define the pupil's prior attainment band). This approach to prior attainment therefore takes no account of pupils' development in other areas, just the one that links to that specific subject. The problem with this approach is that you can have a wide variety of pupils in a single band. For example, the middle band (those categorised as expected) will contain pupils that have met all ELGs (i.e. made a good level of development) alongside pupils that have met the ELG in reading but are emerging in other areas, and pupils that have met the ELG in reading and exceeded others. These are very different pupils. Data in RAISE showed us that pupils that made a good level of development are twice as likely to achieve expected standards at KS1 as those that didn't, so it seems sensible that any attempt to define prior attainment should take account of wider development across the EYFSP, and not just take subjects in isolation. Perhaps consider using an average score for EYFS prime and specific ELGs to define prior attainment instead. 

Prior attainment of Y1-2 in the context page. Currently this is based on how much the percentage achieving specific ELGs differs from national average, whilst prior attainment for years 3-6 involves APS. As above, perhaps Ofsted should consider using an EYFS average score across the prime and specific ELGs instead. 

I am, by the way, rather intrigued by the mention of APS for current years 3 and 4. Does this mean Ofsted have developed some kind of scoring system for new KS1 assessments? This surely has to happen at some point anyway, in order to place pupils into prior attainment groups for future progress measures. 

Lack of tables. There's nothing wrong with a table; you can show a lot in a table. In the absence of tables to show information for key groups, the scatter plots are perhaps trying to do too much. Squares for boys, triangles for girls, pink for disadvantaged, grey for non-disadvantaged, and a bold border to indicate SEN. It's just a bit busy. But then again, we can see those pupils that are disadvantaged and SEN, so it can be useful. It's not a major gripe and time will tell if it works, but sometimes a good old table is just fine.

And finally a few minor niggles:

There is no such thing as greater depth in Grammar, Punctuation and Spelling at KS2. Mind you, yesterday it had greater depth for all subjects at KS2 and that's changed already so it's obviously just a typo.

And many of the national comparator indicators on the bar graphs are wonky and don't line up. They look more like backslashes. 

But overall this is a big improvement on the previous versions and will no doubt be welcomed by head teachers, senior leaders, governors and anyone else involved in school improvement. This, alongside ASP and the Compare Schools website, shows the direction of travel of school data: it's becoming simpler and more accessible. 

And that's a good thing. 


Thursday, 7 September 2017

KS2 progress measures 2017: a guide to what has and hasn't changed

At the end of last term I wrote this blog post. It was my attempt to a) predict what changes the DfE would make to the KS2 progress methodology this year, and b) get my excuses in early about why my 2016 VA Calculator could not be relied upon for predicting VA for 2017. For what it's worth, I reckon the 2017 Calculator will be better for predicting 2018 VA, but 2016 data was all over the shop and provided no basis for predicting anything.

Anyway, no doubt you've all now downloaded your data from the tables checking website (and if you haven't, please do so now. Guidance is here) and have spent the last week trying to make sense of it, working out what -1.8 means and how those confidence intervals work. Perhaps you've used my latest VA calculator to recalculate data with certain pupils removed, to update results in light of review outcomes, or to change results to those 'what if' outcomes. 

This is all good fun (or not depending on your data) and a useful exercise, especially if you are expecting a visit, but it's important to understand that the DfE has made changes to the methodology this year - some of which I predicted and some of which I didn't - and, of course, the better we understand how VA works, the better we can fight our corner.

So what's changed?

Actually let's start with what hasn't changed:

1) National average is still 0
VA is a relative measure. It involves comparing a pupil's attainment score to the national average score for all pupils with the same start point (i.e. the average KS2 score for the prior attainment group (PAG)). The difference between the actual and the estimated score is the pupil's VA score. Adding up all the differences and dividing by the number of pupils included in the progress measure gives us the school's VA score. If you calculate the national average difference the result will be 0. Always.

School VA scores can be interpreted as follows:
  • Negative: progress is below average 
  • Positive: progress is above average
  • Zero: progress is average
Note that a positive score does not necessarily mean all pupils made above average progress, and a negative score does not indicate that all pupils made below average progress. It's worth investigating the impact that individual pupils have on overall progress scores and taking them out if necessary (I don't mean in a mafia way, obviously). 
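As a rough illustration of the method described above, here's a minimal sketch in Python. The scores and estimates are made up; the real estimates come from the DfE's tables.

```python
def school_va(actual_scores, estimates):
    """Mean of pupil-level differences between actual KS2 scores and
    the national average score for each pupil's prior attainment
    group (PAG)."""
    diffs = [a - e for a, e in zip(actual_scores, estimates)]
    return sum(diffs) / len(diffs)

actual = [104, 99, 110, 95]      # pupils' KS2 scaled scores (invented)
estimates = [103, 101, 107, 98]  # average score for each pupil's PAG
assert school_va(actual, estimates) == -0.25  # slightly below average
```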

2) The latest year's data is used to generate estimates 
Pupils are compared against the average score for pupils with same start point in the same year. This is why estimates based on the previous year's methodology should be treated with caution and used for guidance only. So, the latest VA calculator is fine for analysing 2017 data, but is not going to provide you with bombproof estimates for 2018. Same goes for FFT. 

3) KS1 prior attainment still involves double weighting maths
KS1 APS is used to define prior attainment groups (PAGs) for the KS2 progress measure. It used to be a straight up mean average, but since 2016 has involved double weighting maths, and is calculated as follows:

(R+W+M+M)/4

If that fills you with rage and despair, try this:

(((R+W)/2)+M)/2

Bands are as follows:

Low PA: KS1 APS <12
Mid PA: KS1 APS 12-17.99
High PA: KS1 APS 18+
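In code, the APS calculation and banding look something like this (a sketch using the formulae and thresholds above):

```python
def ks1_aps(reading, writing, maths):
    """KS1 APS with maths double weighted: (R+W+M+M)/4."""
    return (reading + writing + maths + maths) / 4

def pa_band(aps):
    """Prior attainment band from KS1 APS."""
    if aps < 12:
        return "Low"
    elif aps < 18:
        return "Mid"
    return "High"

# The two formulae in the post are equivalent:
r, w, m = 15, 13, 17
assert ks1_aps(r, w, m) == (((r + w) / 2) + m) / 2  # both give 15.5

assert pa_band(ks1_aps(9, 9, 9)) == "Low"
assert pa_band(ks1_aps(15, 13, 17)) == "Mid"
assert pa_band(ks1_aps(21, 17, 19)) == "High"
```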

4) Writing nominal scores stay the same
The crazy world of writing progress continues. I thought the nominal scores for writing assessments might change but that's not the case, i.e. 

WTS: 91
EXS: 103
GDS: 113

This means that we'll continue to see wild swings in progress scores as pupils lurch 10 points in either direction depending on the assessment they get, and any pupil with a KS1 APS of 16.5 or higher has to get GDS to get a positive score, but GDS assessments are kept in a remote castle under armed guard. I love this measure.

5) As do pre-key stage nominal scores
No change here either, which means the problems continue. Scores assigned to pre-key stage pupils in reading, writing and maths are as follows:

PKF: 73
PKE: 76
PKG: 79

Despite reforms (see changes below) these generally result in negative scores (definitely if the pupil was P8 or above at KS1). It's little wonder so many schools are hedging their bets and entering pre-key stage pupils for tests in the hope they score the minimum of 80. 

6) Confidence intervals still define those red and green boxes
These can go on both the changed and not changed piles. Confidence intervals change each year due to annual changes in standard deviations and numbers of pupils in the cohort, but the way in which they are used to define statistical significance doesn't. Schools have confidence intervals constructed around their progress scores, which involves an upper and a lower limit. These indicate statistical significance as follows:

Both upper and lower limit are positive (e.g. 0.7 to 3.9): progress is significantly above average
Both upper and lower limit are negative (e.g. -4.6 to -1.1): progress is significantly below average
Confidence interval straddles 0 (e.g. -1.6 to 2.2): progress is in line with average
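That mapping is simple enough to sketch in code, using the example intervals above:

```python
def significance(lower, upper):
    """Classify a progress score from its confidence interval."""
    if lower > 0:          # both limits positive
        return "significantly above"
    if upper < 0:          # both limits negative
        return "significantly below"
    return "in line with average"  # interval straddles 0

assert significance(0.7, 3.9) == "significantly above"
assert significance(-4.6, -1.1) == "significantly below"
assert significance(-1.6, 2.2) == "in line with average"
```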

7) Floor standards don't move
This shocked me. If I had to pick one data thing that I thought was certain to change it would be the floor standard thresholds. But no, they remain as follows:

Reading: -5
Writing: -7
Maths: -5

Schools are below floor if they fall below 65% achieving the expected standard in reading, writing and maths combined, and fall below any one of the above progress thresholds (caveat: if just below one measure then it needs to be sig- (significantly below). Hint: it will be). Oh, and floor standards only apply to cohorts of 11 or more pupils.
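A rough sketch of that logic in Python. Note this simplifies away the 'needs to be sig-' caveat for a single below-threshold measure, so treat it as illustrative only.

```python
# Progress floor thresholds from the post
THRESHOLDS = {"reading": -5, "writing": -7, "maths": -5}

def below_floor(combined_pct, progress, cohort_size):
    """combined_pct: % achieving expected standard in RWM combined.
    progress: dict of subject progress scores. Simplified sketch:
    ignores the statistical significance caveat."""
    if cohort_size < 11:
        return False  # floor standards don't apply to small cohorts
    below_attainment = combined_pct < 65
    below_progress = any(progress[s] < t for s, t in THRESHOLDS.items())
    return below_attainment and below_progress

assert below_floor(60, {"reading": -6, "writing": -2, "maths": 0}, 30)
assert not below_floor(70, {"reading": -6, "writing": -2, "maths": 0}, 30)
assert not below_floor(60, {"reading": -6, "writing": -2, "maths": 0}, 10)
```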

And now for what has changed

1) Estimates - most go up but some go down
The estimates - those benchmarks representing average attainment for each PAG against which each pupil's KS2 score is compared - change every year. This year most have gone up (as expected) but some, for lower PAGs, have gone down. This is due to the inclusion of data from special schools, which was introduced to mitigate the issue of whopping negative scores for pre-key stage pupils.

Click here to view how the estimates have changed for each comparable PAG. Note that due to new, lower PAGs introduced for 2017, not all are comparable with 2016.

2) Four new KS1 PAGs
The lowest PAG in 2016 (PAG1) spanned the KS1 APS range from 0 to <2.5, which includes pupils that were P1 up to P6 at KS1. Introducing data from special schools in 2017 has enabled this to be split into 4 new PAGs, which better differentiates these pupils. The use of special school data has also had the effect of lowering progress estimates for low prior attainment pupils, which goes some way to mitigating the issue described here. However, despite these reforms, if the pupil has a KS1 APS of 2.75 or above (P8 upwards) a pre-key stage assessment at KS2 is going to result in a negative score.

3) New nominal scores for lowest attaining pupils at KS2
In 2016, all pupils that were below the standards of the pre-key stage at KS2 were assigned a blanket score of 70. This has changed this year, with a new series of nominal scores assigned to individual p-scales at KS2, i.e.:

P1-3: 59 points
P4: 61 points
P5: 63 points
P6: 65 points
P7: 67 points
P8: 69 points
BLW but no p-scale: 71 points

I'm not sure how much this helps mainstream primary schools. If you have a pupil that was assessed in p-scales they would have been better off under the 2016 scoring regime (they would have received 70 points); as it stands they can get a maximum of 69. Great.

Please note: these nominal scores are used for progress measures only. They are not included in average scaled scores. 

4) Closing the progress loophole of despair
Remember this? In 2016, if a pupil was entered for KS2 tests and did not achieve enough marks to gain a scaled score, then they were excluded from progress measures, which was a bonus (unless they also had a PKS assessment, in which case they ended up with a nominal score that put a huge dent in the school's progress score). This year the DfE have closed this particular issue by assigning these pupils a nominal score of 79, which puts them on a par with PKG pupils (no surprise there). In the VA calculator, such pupils should be coded as N.

The loophole is still partly open, by the way. Pupils with missing results, or who were absent from tests, are not included in progress measures, and I find that rather worrying.
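Pulling the 2017 nominal scores scattered through this post together into one lookup (a sketch; check the primary accountability guidance for the definitive list):

```python
NOMINAL_2017 = {
    "P1-3": 59, "P4": 61, "P5": 63, "P6": 65, "P7": 67, "P8": 69,
    "BLW": 71,    # below standard but no p-scale reported
    "PKF": 73, "PKE": 76, "PKG": 79,
    "N": 79,      # took the test but didn't achieve a scaled score
    "WTS": 91, "EXS": 103, "GDS": 113,  # writing teacher assessment
}

assert NOMINAL_2017["P8"] == 69  # one point below the 2016 blanket 70
assert NOMINAL_2017["N"] == NOMINAL_2017["PKG"]  # on a par with PKG
```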

5) Standard deviations change
These show how much, on average, pupils' scores deviate from the national average score; and they are used to construct the confidence intervals, which dictate statistical significance. This is another reason why we can't accurately predict progress in advance.
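For those who like to see the mechanics, the 95% confidence interval is built roughly like this. This is my understanding of the published approach; the exact details are in the DfE's primary accountability technical guidance, and the figures below are made up.

```python
import math

def confidence_interval(progress_score, national_sd, n_pupils):
    """95% CI: score +/- 1.96 x (national standard deviation / sqrt(n))."""
    margin = 1.96 * national_sd / math.sqrt(n_pupils)
    return (progress_score - margin, progress_score + margin)

lower, upper = confidence_interval(-1.8, 5.5, 30)   # made-up figures
assert lower < -1.8 < upper

# Bigger cohorts give narrower intervals, so significance is easier
# to reach with more pupils:
l2, u2 = confidence_interval(-1.8, 5.5, 120)
assert (u2 - l2) < (upper - lower)
```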

-----

So, there you go: quite a lot of change to get your head round. It has to be said that unless the DfE recalculate 2016 progress scores using this updated methodology (which they won't), I really can't see how last year's data can be compared to this year's.

But it will be, obviously. 


Thursday, 31 August 2017

KS2 VA Calculator 2017

Updated basic version of KS2 VA calculator can be downloaded here.

and new version with pupil groups tables (progress and average scores) can be downloaded here

Please download before use (i.e. don't attempt to edit the online version). You should see 3 dots top right of screen - click on these to download to your desktop. I recommend reading the notes page first; it's also worth reading the primary accountability guidance for more information about nominal scores, p-scales and pre-key stage pupils. It's quite a bit more complicated this year, with more prior attainment groups (24 instead of 21), the closing of the loophole (pupils not scoring on the test receive a score of 79), and nominal scores assigned to individual p-scales (rather than a blanket score of 70 assigned to BLW).

Which, in my opinion, means this year's progress data is not comparable with last year's, but hey ho.....

Enjoy!

and let me know asap if you find any errors

Friday, 18 August 2017

The Progress Bank

For primary schools, there are two main issues with the current system of measuring progress: 1) it is high stakes, and 2) it involves teacher assessment. Whether we are talking about our own internal tracking systems, or those official end of key stage DfE measures, the former clearly influences the latter. Imagine you are told that you have to run a marathon in under 3 hours (yes, I do like a running analogy) and that if you fail to do so, the consequences will be dire. Of course, there are some good marathon runners out there for whom this is feasible, and a handful of really good ones for whom this is no problem at all, but for the majority of us it is near impossible. Then you are told that no one will be monitoring your efforts; they will just be basing their judgement on your time, which you alone are responsible for keeping. Now how many sub-3 hour marathon times will we see?

This is the weird situation we find ourselves in: an incompatible mix of high stakes and self-evaluation. Like fire and duvets, they make for strange and dangerous bedfellows, and clearly it's not working. We all know that really.

Consider that weird, near vertical cliff that occurs at 32 marks every year in the national phonics screening check results; or the impact that current KS1 measures will have on the EYFSP now that the profile is used to establish prior attainment groups for those pseudo-progress measures in the dashboard. Then there are those well documented problems with using KS1 results as the baseline for progress measures. The DfE are attempting to solve this by implementing a reception baseline, but this is highly contentious and unpopular; and even if it's implemented in 2019 as planned, we won't see the results until 2026. Then there is the thorny Infant/junior/middle school problem that the reception baseline won't adequately solve; it will just blur the issue so no one really knows where the problems lie, or if there are any problems anyway.

And then, of course, there are those highly variable and suspiciously high KS2 writing results, and associated issues around moderation. Why do you think that KS2 writing was ditched from the baseline for the Progress 8 measure the minute we ran out of secondary pupils with KS2 writing test results? (That was last year by the way; the 2017 GCSE cohort were the first to have writing TA at KS2 and that won't be used in their progress 8 measures).

These are some of the well known issues relating to statutory teacher assessment, assessments that are done almost entirely for the purposes of accountability. It is little wonder that no one seems to have much faith in progress measures that rely on such data. But these issues are not restricted to statutory assessment; they also exist in the various tracking systems schools use, tracking systems which still rely on teacher assessment for measuring progress.

The final report of the Commission on Assessment without Levels warns about the risks of using teacher assessment for multiple purposes, yet this is still the norm in many schools. Teacher assessment is commonly used not only for formative purposes, but also for measuring pupil progress, monitoring standards, reporting to Governors, evaluating teacher performance, and even comparing schools. These multiple purposes exert conflicting pressures on the data that can and will lead to its distortion; and let's face it, teacher assessment is too subjective anyway. One teacher's 'secure' is another teacher's 'greater depth', so even if there were no high stakes attached, we still wouldn't have an accurate picture. This is made even more complicated by the various methods of the tracking systems themselves: different steps, bands and point scores; varying lists of key objectives; and contrasting definitions of 'age related expectations' based on spurious algorithms and arbitrary thresholds. And let's be honest, progress measures based on teacher assessment pretty much always involve reinventing levels anyway. No one is talking the same language; no one knows what's going on. No one can measure and compare progress.

That's the point of Progress Bank.

This is something I've been thinking about for a long time: a system that wouldn't rely on teacher assessment to measure progress and wouldn't rely on end of key stage results either. A system that is powered by whatever standardised tests the school chooses to use, and can measure progress from any point to any point, benchmarked against other pupils with similar start points.

This is obviously a rather ambitious project and not something I'm capable of building, which is why I approached the people at Insight Tracking. I like their current system - it's intuitive and highly customisable - and they tend to build things quickly. I needed their expertise and thankfully they agreed.

The system will work like this: schools upload standardised scores from the various tests they use, pick a start point (i.e. previously uploaded data, say at the start of Y1 or Y3) and an end point (probably the most recent upload), and they receive zero-centred VA scores for cohorts, key groups and individual pupils in whatever subjects are tested. The methodology is essentially the same as that used by the DfE to measure progress, so the data is in a common format, but the system is far more flexible in terms of start and end points, and is based on more regular, lower stakes testing. Schools will be able to interrogate their data using simple, interactive reports, which will focus not only on progress, but on attainment gaps, too.
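The core idea described above can be sketched in a few lines: group pupils nationally by start-point score, and compare each pupil's end score with the average for their group. Everything here is invented for illustration; the real Progress Bank methodology may differ in its grouping and benchmarking details.

```python
from collections import defaultdict
from statistics import mean

def zero_centred_va(national, school):
    """national/school: lists of (start_score, end_score) pairs.
    Returns the school's mean difference from the national average
    end score for each start point (zero-centred by construction)."""
    groups = defaultdict(list)
    for start, end in national:
        groups[start].append(end)
    estimates = {start: mean(ends) for start, ends in groups.items()}
    diffs = [end - estimates[start] for start, end in school]
    return mean(diffs)

# Tiny invented example: national estimates are 100 -> 100, 110 -> 111
national = [(100, 102), (100, 98), (110, 112), (110, 110)]
school = [(100, 101), (110, 113)]
assert zero_centred_va(national, school) == 1.5  # above average
```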

The neat thing is it doesn't matter what tests you use, as long as they're standardised; and if results aren't standardised, the Progress Bank can standardise them if enough data is uploaded by enough schools to provide a suitable sample. If you use tests from multiple providers, that's fine; and if you change test provider, your old data will be stored in Progress Bank, so you won't lose it and can continue to use it. We can, if permission is given, even transfer test results when a pupil changes school. And if you decide to leave, the data is deleted. You own it.

The Progress Bank will be especially useful to junior and middle schools, which have a particular issue when it comes to progress measures, and this project has been expedited by the JUSCO Conference and conversations with members including Chris McDonald, the chair of the group. Obviously, the ideal solution for junior schools is to enable them to measure progress from a Year 3 on-entry baseline, not from KS1 results or from a reception baseline as proposed. Progress Bank will allow them to do that.

But it's not just for junior schools; it is aimed at any school that is interested in alternative, benchmarked progress measures. The system can even measure progress from KS1 scaled scores instead of the KS1 teacher assessment used in official measures, or from any current standardised reception baseline assessment. We hope that the data will help schools challenge the flawed measures that are currently used to hold them to account. And by using standardised scores to measure progress, hopefully we can protect the integrity of teacher assessment by ensuring it is used solely for formative purposes, and perhaps reduce workload in terms of tracking and analysis too.

Now we just need lots of schools to get on board.

Find out more about the project and register your interest here:



And follow us on Twitter @theprogressbank

Thursday, 20 July 2017

Making the cut

Assessment. It should be simple really: just checking what pupils do and don't know. But assessment appears to have turned into some kind of war, with the legions of accountability amassed on one side, and the special forces of teaching and learning besieged upon a slippery slope with nowhere to go. We become so focussed on outcomes - on floor standards, and coasting thresholds, and league tables - that we risk losing sight of what's important: the here and now. But it is focussing on the here and now that makes all the difference. The irony is that concentrating on accountability - on those distant and unpredictable performance measures - can jeopardise the very results you are striving for. In short, focus on teaching and learning, and the results take care of themselves.

This is, of course, easier said than done but something has to change. In too many schools assessment has become a burden: a top-down directive disconnected from learning; an interminable, box-ticking, data-collecting drain on teachers' time. The risks are clear: morale nosedives and pupils' learning is put at risk. We therefore need to ditch some of our assessment baggage - aim to do more with less - and this requires some serious rationalisation of our processes. It all comes down to one simple question:

Does this have a positive impact on learning?

We need to go through everything we do in the name of assessment and school improvement and ask that question, and we need to be ruthless and honest. What is the benefit and what is the cost? How long does this take? Does it tell us anything we don't already know? Is it having a negative impact? Is it taking teachers' time away from teaching? Ultimately, the only way to improve a school is to teach children well, and anything that distracts from that purpose is a risk.

So let's deconstruct our entire approach to assessment and lay it all out on the hall floor: the various tests you use, your marking policy, target setting (both for pupils and performance management), those lists of learning objectives stapled into pupils' books, and the component parts of your tracking system (yes! every single measure, category, grid, table, graph, chart and report). We now separate these into two piles: those that have a demonstrable, positive impact on teaching and learning, and those that are purely done for the purposes of accountability.

We keep the first pile and ditch the rest.

We now have a stripped down system that is fit for purpose, that is focussed on the right things. From now on, the information we provide to governors and external agencies is a byproduct of our assessment system, which exists to serve teaching and learning alone. If it works, it's right, no matter what others may say. Many will try to convince you that you're mad but deep down they probably just wish they could do the same. If you think this is all too radical, it's really not. There are many schools with extremely minimalist approaches to assessment that have had very successful inspections. Just as long as your approach is informative and has impact, then it's fine. If anything, the simpler the better. And Ofsted are not asking you to generate data purely for their benefit anyway. The Handbook states:

Ofsted does not expect performance- and pupil-tracking information to be presented in a particular format. Such information should be provided to inspectors in the format that the school would ordinarily use to track and monitor the progress of pupils in that school

And the workload review group report on data management had this to say:

Be ruthless: only collect what is needed to support outcomes for children. The amount of data collected should be proportionate to its usefulness. Always ask why the data is needed.

In alpine climbing there are two popular adages: 'if in doubt, leave it out', and 'if you haven't got it, you can't use it'. The first one is obvious, and it's what I'm trying to get schools to think about when they go about rationalising what they do. The second one links to it and recognises that if we do decide to carry something we'll most likely try to use it, in which case it becomes a potential distraction that can slow us down. It is common to hear headteachers and senior leaders say "we don't use all those bits of our system, we just use this grid". But the problem is that whilst all those other bits exist there is a temptation to use them, to waste your evenings and weekends wading through various reports and charts, and for governors to ask for them. Even worse, there is the potential for a 'visitor' to say "Oh, you use that system! Can you run this report for me please?"

Ditch it. If you haven't got it, you can't use it, and so it ceases to be an issue.

And when inevitably you do come up against someone asking for something they shouldn't be asking for, this should be your response:

"We don't do that in this school. It has no impact on learning"

Have a great summer.