Wednesday, 8 October 2014

The Good of Small Things

I recently sent out a tweet asking for thoughts and experiences relating to small schools, particularly regarding data. Now, I could have written a lengthy, and probably rather dull, blog on the subject of data and small schools but instead I thought I'd just publish the following email, which perfectly and succinctly summarises the problems facing these settings when it comes to accountability measures and inspection. Regrettably, the author wishes to remain anonymous but you know who you are. Thank you so much for the following contribution:


Hi James,

I saw your tweet about wanting to hear about the experiences of small schools on specific data issues and thought I’d rather email than tweet.

I write from the perspective of being a governor and volunteer data handler for a small school (96 pupils).


Whilst small pupil numbers are a real benefit in terms of the school being able to concentrate on each pupil as an individual, noting each child's attainment and progress and the next steps required, I think small schools have particular problems when it comes to handling cohort and group data. No doubt you'll be familiar with these, though perhaps no. 4 is only just becoming an issue with more focus on this in inspections.

1.       Small pupil numbers mean that we don’t have sufficient sample sizes to be confident that our collective data is telling us much at all. Our results have to be very good (or very bad!) to be classed as statistically significant on RAISEonline. Even worse if trying to analyse groups.

2.       It’s very hard to spot trends because we need results over several years to collect sufficient sample sizes. By the time we can be confident of a trend through data analysis, things will have moved on anyway.

3.       I have no confidence that the people who judge us by our data will necessarily be competent statisticians. In my experience, some people start off their analysis with a general rider about small cohort numbers requiring caution, then throw caution to the wind and proceed to read far more into the data than it can possibly support.

4.       I particularly worry about proving to Ofsted that “From each different starting point, the proportions of pupils ... exceeding expected progress in English and in mathematics are high compared with national figures”. I hope that any inspector who visits us understands that if we only have 1 pupil at a particular starting point, say 2b in reading, and that child fails to reach a Level 5, our score of 0% making more than expected progress from that starting point is actually in line with a national percentage of 29%. I do hope inspectors are being briefed to read the data properly, and would love access to an authoritatively written statement that properly explains the situation.

5.       Small schools (often in underfunded ‘leafy’ areas with little or no pupil premium) often do not have the budget to be able to afford or justify what seem like expensive data handling packages – especially given the question of how much use they will really be when sample sizes are so small. Even FFT is hard to justify now the LA no longer subscribe.

6.       In small schools where the head is also a class teacher and probably the premises & catering manager and there is no deputy head and perhaps just a single school secretary/administrator, he/she will have to balance data management along with all their other tasks. Where do they get time in the day to really master this subject let alone source useful software and keep abreast of new government initiatives?

7.       Headteachers find it hard to discuss the progress of disadvantaged pupils with governors if there are so few pupils receiving the pupil premium that discussion will lead to identification.


Hope the above is of some help and thanks for all your work shared over Twitter,
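The correspondent's point 4 is worth making concrete. As a rough sketch (my own illustration, using the 29% national figure quoted above and the simplifying assumption that each pupil independently matches the national rate), here is how likely a school's 0% figure is to arise purely by chance at various cohort sizes:

```python
# If the national proportion making more than expected progress from a
# given starting point is 29%, how often would a school score 0% from
# that starting point purely by chance?

def prob_zero_make_progress(n_pupils, national_rate=0.29):
    """Probability that none of n pupils make more than expected
    progress, assuming each independently matches the national rate."""
    return (1 - national_rate) ** n_pupils

for n in (1, 2, 3, 5):
    chance = prob_zero_make_progress(n)
    print(f"{n} pupil(s): a 0% score occurs by chance {chance:.0%} of the time")
```

With a single pupil, a 0% score happens 71% of the time even at a school performing exactly in line with national figures — which is precisely why reading anything into it is statistically indefensible.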


As stated above, I think this email perfectly encapsulates the issues experienced by small schools when it comes to data, particularly that presented in RAISE, the Ofsted Dashboard and Performance Tables. There is some provision for this in the Ofsted Handbook, which headteachers of small schools should be aware of:

'Where numbers of pupils are small and achievement fluctuates considerably from year to year, inspectors should take into account individual circumstances when comparing with national figures, and should consider any available data of aggregate performance for consecutive cohorts' (Ofsted Inspection Handbook, p.20, section 60).

This gives small schools the go-ahead to merge cohort data and produce aggregated attainment and progress figures for, say, the last three Y6 cohorts, resulting in a more viable data set and more meaningful analyses. On a number of occasions I've calculated 3 year aggregated VA (using my VA calculator - see other blog), which has proved very useful indeed and made a positive contribution to the overall outcome of inspections.
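The key principle when aggregating is to pool the raw pupil counts across cohorts and divide once, rather than averaging three volatile single-year percentages. A minimal sketch with invented numbers (a full VA calculation works at pupil level against estimates, so this only illustrates the pooling step):

```python
# Hypothetical attainment figures for three consecutive Y6 cohorts.
cohorts = [
    {"year": 2012, "pupils": 12, "level4_plus": 9},
    {"year": 2013, "pupils": 15, "level4_plus": 14},
    {"year": 2014, "pupils": 11, "level4_plus": 8},
]

# Pool the raw counts across all three cohorts...
total_pupils = sum(c["pupils"] for c in cohorts)
total_l4 = sum(c["level4_plus"] for c in cohorts)

# ...then calculate a single percentage from the pooled counts.
aggregate_pct = 100 * total_l4 / total_pupils
print(f"3-year aggregate: {total_l4}/{total_pupils} pupils = {aggregate_pct:.0f}%")
```

A 38-pupil pooled sample is still small, but it is far less vulnerable to the swings a single pupil causes in a cohort of a dozen.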

Also, small schools should access their FFT data, which provides 3 year average VA and CVA, and trend indicators. In addition, the new FFT Aspire system will allow schools to toggle between single year and 3 year averages: a potentially very useful feature, especially when dealing with small cohorts or groups. If you are wondering whether or not inspectors will be interested in or accepting of FFT data, take note of the following:

'Evidence gathered by inspectors during the course of the inspection should include: any analysis of robust progress data presented by the school, including information provided by external organisations' (Ofsted Inspection Handbook, p.66, section 195).

I'm sure the issues outlined above resonate with staff in many, if not all, small schools up and down the country, and there are clearly specific challenges facing these settings. But there are ways round them, and I hope that this has given you some ideas. Please get in touch if you have any questions or comments.

1 comment:

  1. An excellent summary with which I, as a head of a school with 104 pupils, can entirely identify. I put a huge amount of time into processing the data and presenting it in different ways for a variety of audiences, even if the conclusion is that the results are inconclusive. The focus on the individual becomes all-important, as it should be. I don't currently subscribe to a tracking programme for the reasons given above, APP and the points system being sufficiently well established and clear-cut to allow for a multitude of data processing tasks to take place. The one reason I might in future is the point missing from the above: the complexity of Assessment Without Levels. Colleague heads and I are having excellent discussions about this, and slowly edging towards some conclusions about the ways in which in-class assessment might lead to individual and cohort profiles which will stand up to external (and internal) scrutiny. I will share when ready!