As promised, here is my template that attempts to summarise IDSR and FFT data into 3-4 pages. Obviously you'll need your IDSR and FFT dashboards, and probably a spare couple of hours. Rather than write a lengthy blog on how to complete it, I've supplied an example (see links below).

'Difference no. pupils' is provided by the IDSR in some cases, but where it isn't, it's calculated in the usual way:

1. Work out the % gap between the result and the national figure, e.g. school = 56%, national = 72%, gap = -16%

2. Convert that to a decimal, i.e. -0.16

3. Multiply that by the number of pupils in the group or cohort (e.g. 28): 28 x -0.16 = -4.48

The gap therefore equates to 4 pupils (in this case, 4 pupils below national).
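The steps above can be sketched in a few lines of Python (the function name is my own, not from the IDSR):

```python
def diff_pupils(school_pct, national_pct, group_size):
    """Convert a percentage gap vs national into an equivalent number of pupils."""
    gap = (school_pct - national_pct) / 100.0  # e.g. (56 - 72) / 100 = -0.16
    return gap * group_size                    # e.g. -0.16 x 28 = -4.48

print(round(diff_pupils(56, 72, 28), 2))  # -4.48, i.e. roughly 4 pupils below national
```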

See notes below the tables for explanations. And tweet me if you get stuck.

Link to blank template is here

Link to completed example is here

## Friday, 24 November 2017

## Tuesday, 21 November 2017

### Using standardised scores in progress matrices

Schools are always looking for ways to measure and present progress. Most primary schools have tracking systems that offer
some sort of progress measure, but these are almost always based on teacher
assessment and involve some sort of level substitute: a best-fit band linked to
coverage of the curriculum with a point score attached. Increasingly schools
are looking beyond these methods in search of something more robust, and this
has led them to standardised tests.

One of the benefits of standardised tests is that
they are – as the name suggests – standardised, so schools can be confident
that they are comparing the performance of their pupils against a large sample
of pupils nationally. Another benefit is that schools will be less reliant on teacher assessment for monitoring of standards - one of the key points made in the
final report of the Commission on Assessment without Levels was that teacher
assessment is easily distorted when it’s used for multiple purposes (i.e. accountability as well as learning).
Standardised tests can also help inform teacher assessment so we can have more
confidence when we describe a pupil as ‘meeting expectations’ or ‘on track’.

And finally, standardised tests can provide a more reliable measure of progress across a year, key stage or longer. However, schools
often struggle to present the data in a useful and meaningful way. Scatter
plots – plotting previous test scores against latest - are useful because they
enable us to identify outliers. A line graph could also be used to plot change
in average score over time, or show the change in gap between key groups such
as pupil premium and others. But here I want to concentrate on the
humble progress matrix, which plots pupil names into cells on a grid based on
certain criteria. These are easily understood by all, enable us to spot pupils
that are making good progress and those that are falling behind, and they do
not fall into the trap of trying to quantify the distance travelled. They can
also help validate teacher assessment and compare outcomes in one subject
against another. In fact, referring to them as progress matrices is doing them
a disservice because they are far more versatile than that.

But before we can transfer our data into a matrix, we first
need to group pupils together on the basis of their standardised scores.
Commonly we see pupils defined as below, average and above using the 85 and 115
thresholds (i.e. one standard deviation from the mean) but this does not
provide a great deal of refinement and means that the average band contains 68%
of pupils nationally. It therefore makes sense to further subdivide the data and I think
the following thresholds are useful:

<70: well below average

70-84: below average

85-94: low average

95-105: average

106-115: high average

116-130: above average

>130: well above average

Or if you want something that resembles current assessment (controversial!):

<70: well below

70-84: below

85-94: working towards

95-115: expected

116-130: above

>130: well above

By banding pupils using the above thresholds, we can then
use the data in the following ways:
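Using the first set of labels above, the banding can be implemented as a simple threshold check (a sketch; the function name is mine):

```python
def band(score):
    """Map a standardised score (mean 100, SD 15) to one of the seven bands."""
    if score < 70:
        return "well below average"
    elif score <= 84:
        return "below average"
    elif score <= 94:
        return "low average"
    elif score <= 105:
        return "average"
    elif score <= 115:
        return "high average"
    elif score <= 130:
        return "above average"
    else:
        return "well above average"

print(band(107))  # high average
```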

**1) To show progress**

Plot pupils’ current category (see above) against the
category they were in previously. The start point could be based on a previous
standardised test taken, say, at the end of last year; or on the key stage 1
result; or an on-entry teacher assessment. Pupils’ names will plot into cells,
making it easy to spot anomalies.
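As a sketch of the idea, with made-up pupil data, the matrix can be built by grouping names on (previous band, current band) pairs:

```python
from collections import defaultdict

# Hypothetical data: (name, previous band, current band) for each pupil
pupils = [
    ("Amy",   "average",      "high average"),
    ("Ben",   "average",      "average"),
    ("Chloe", "high average", "average"),
]

matrix = defaultdict(list)  # keyed by (previous band, current band)
for name, previous, current in pupils:
    matrix[(previous, current)].append(name)

# Cells where the current band is lower than the previous one flag
# pupils who may be falling behind
print(matrix[("high average", "average")])  # ['Chloe']
```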

**2) To compare subjects**

As above but here we are plotting the pupils’ category (again,
based on the thresholds described above) in one subject against another. We can
then quickly spot those pupils that are high attaining in one subject and low
in another.

**3) To validate and inform teacher assessment**

By plotting pupils’ score category against the latest
teacher assessment in the same subject, we can spot anomalies – those cases
where pupils are low in one assessment but high in the other. Often there are
good reasons for these anomalies but if it’s happening en masse – i.e. pupils
are assessed low by the teachers but have high test scores – then this may
suggest teachers are being too harsh in their assessments. It is worth noting
that this only really works if schools are using what is becoming known as a
‘point in time’ assessment, where the teacher’s assessment reflects the pupil’s
security in what has been taught so far rather than how much of the year’s
content they’ve covered and secured. In a point in time assessment, pupils may
be ‘secure’ or ‘above’ at any point during the year, not just in the summer
term.

**But what will Ofsted think?**

The myth-busting section of the Ofsted handbook has this to say
about tracking pupil progress:

> Ofsted **does not** expect performance and pupil-tracking information to be presented in a particular format. Such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.

Matrices provide a neat and simple solution: they are easily
understood by all, and they allow us to effectively monitor pupil progress without
resorting to measuring it.

Definitely worth considering.

## Tuesday, 7 November 2017

### Analyse School Performance summary template (primary)

Many of you will have downloaded this already but I thought it'd be useful to put it on my blog. For those who don't already have it, it's a rather low-tech and unexciting Word document designed to guide you through ASP and pull out the useful data. The aim is to summarise the system down to a few pages.

You can download it here

The file should open in Word Online. Please click on the 3 dots (top right) to access the download option. Please don't attempt to edit online (it should be view only anyway). Also, chances are it will be blocked by school computers (schools always block my stuff).

A couple of points about the template:

**1) Making sense of confidence intervals**

Only worry about this if progress is significantly below average, or if data is in line but high and close to being significantly above.

If your data is significantly below average, take the upper limit of the confidence interval (it will be negative, e.g. -0.25). This shows how much each pupil's score needs to increase by for your data to be in line (0.25 points per pupil, or 1 point for every 4th pupil). Tip: multiply this figure by the number of pupils in the cohort (e.g. -0.25 x 20 pupils = -5). If you have a pupil – for whom you have a solid case study – with a score at least as low as that result (i.e. -5 in this case), removing that pupil from the data should bring your data in line with the national average.

If your data is in line and you want to know how far it would need to shift to be significantly above, note the lower limit of the confidence interval (it will be negative, e.g. -0.19). This again shows how much your data needs to shift up by, but in this case to be significantly above. Here, each child's score needs to increase by 0.2 points for overall progress to be significantly above (we need to get the lower limit of the confidence interval above 0, so it needs to rise by slightly more than the lower confidence limit). Obviously pupils cannot increase their scores by 0.2, so it's best to think of it as 1 point for every 5th child. Or, as above, multiply the lower confidence limit by the number of pupils in the cohort (e.g. -0.2 x 30 pupils = -6). If you have a pupil with a score at least as low as this result (i.e. -6), then removing them from the data should make the data significantly above average.

The easiest thing to do is to model it using the VA calculator, which you can download from my blog (see August), or use the online version: www.insighttracking.com/va-calculator
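The arithmetic in both worked examples above is the same; here is a minimal Python sketch (the function name is mine, not from the calculator):

```python
def points_needed(ci_limit, cohort_size):
    """Total VA points the cohort must gain to pull the chosen confidence
    limit up to zero.

    ci_limit: the upper limit (if significantly below, to get in line) or the
    lower limit (if in line, to get significantly above); negative in both cases.
    """
    return abs(ci_limit) * cohort_size

print(points_needed(-0.25, 20))  # 5.0 -> one pupil scoring -5 accounts for the whole gap
print(points_needed(-0.2, 30))   # 6 points across the cohort
```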

**2) Difference no. pupils**

This has caused some confusion. It's the same concept as applied in last year's RAISE and dashboards. Simply take the percentage gap between your result and the national average (e.g. -12%), turn it into a decimal (e.g. -0.12) and multiply that by the number of pupils in the cohort (e.g. 30). In this case we work out 'diff no. pupils' as follows: -0.12 x 30 = -3.6. This means the school's result equates to 3 pupils below average. If the school's result is above national then it works in the same way; it's just that the decimal multiplier is positive.

If you are calculating this for key groups, multiply by the number in the group, not the cohort. For example, 80% of the group achieved the result against a national group result of 62%, which means the group's result is 18% above national. There are 15 pupils in the group, so we calculate 'diff no. pupils' as follows: 0.18 x 15 = 2.7. The group's result therefore equates to 2 pupils above national.

I hope that all makes sense.

Happy analysing.

