Monday, 20 February 2017

Blueprint for a tracking system

In almost every meeting I have in schools someone will at some point say one of the following:

"Our tracking system isn't really working out for us anymore"

"Our tracking system doesn't do what we need it to do"

"It's just levels really, isn't it?"

And this is often the main reason for my visit: schools want to explore other options. They want a simple, flexible system that works for them, grows with them, adapts to fit their needs, is easy to use, and reduces workload. That's exactly what all systems should do, but sadly this is quite far from the reality. It's that whole master and servant thing; and sometimes, when it comes to the school-system relationship, it's quite difficult to tell which one's which. Schools are evidently getting fed up with systems that are bloated, confusing and inflexible. They want a change, and who can blame them? Too much precious time is wasted navigating systems, finding workarounds, trying to map the curriculum to the objectives, and finding the data you need from numerous reports to populate the spreadsheet that someone has sent you because the system doesn't provide everything in one place. Systems need to be more like Connect 4 and less like chess. More of an asset and less of, well, an ass.

Almost a year ago I wrote this blog post on the golden rules of tracking. It outlined a philosophy on tracking systems that I presented at the original Learning First conference in Sheffield. Those golden rules are:
  1. Separate assessment and tracking from performance management
  2. Ensure your system is a tool for teaching and learning rather than accountability
  3. Reduce the complexity
  4. Do not recreate levels
  5. Stop obsessing about measuring progress
Since then I've added another point: do not compromise your approach to assessment to fit the rules of a system. I've also changed point 3 to 'Reduce the complexity and the workload'. The post seemed to strike a chord and tied in with what many were thinking anyway. But in order to adopt these principles we need a complete overhaul of the way we track pupil progress in schools. With this in mind, I thought I'd write a post on what I think tracking systems should offer, which is well overdue considering how much time I spend ranting about what they get wrong.

So here are my golden practical rules of tracking. In my opinion, tracking systems should allow schools to:

1) Define and input their own objectives
Yes, all schools are guided by the programmes of study of the national curriculum, but they don't all teach the same things at the same time in the same order in the same way. Many schools have gone to great lengths to design a curriculum that ensures the right level of challenge for their diverse range of pupils, taking considerable care over the wording of key mastery indicators and 'non-negotiables'. Some schools are happy tracking against a broad framework of a few key objectives whilst others want tighter criteria and therefore more statements. And many schools are using assessment materials from various third parties such as Ros Wilson, Assertive Mentoring, Rising Stars, Kangaroo Maths, or from the LA. Often these assessment materials are in paper form, stapled into books and highlighted to show pupils' security and progression. The problem is that many systems do not allow schools to input their own objectives or edit the existing ones, and so the system is not aligned to what is being taught. Teachers are then confronted with a time-consuming and frustrating mapping exercise to attempt to link assessment materials to whatever national curriculum objectives are in the system. Yes, pupils need to secure the national curriculum objectives, but schools are finding their own route towards those and would welcome the flexibility to set up their systems to mirror what happens in the classroom. Essentially, the core purpose of a tracking system is, in my view, to work as an electronic markbook. That's the important bit - that's the bit that has impact. Get that right and the rest will follow.

Oh, and please keep those lists of objectives to a minimum. Too much tracking is unnecessary, demoralising and counterproductive. 

2) Define their own summative assessment descriptors
Schools use different terms to describe where pupils are in their learning: below, meeting, above; emerging, developing, secure; beginning, advancing, deepening; supported, independent, greater depth. Yes, we are often talking about the same thing but schools should be able to adapt their system to reflect the language used by teachers across the school. Also, many systems use terms such as emerging, developing, secure to reflect how much of the curriculum the pupil has covered and achieved. Increasingly schools are turning away from this approach and instead using these terms to denote the pupil's competence in what has been taught so far. So rather than secure being something that happens after Easter when they've covered a certain percentage of the curriculum, it is instead used to indicate that the pupil is working securely within the curriculum at that point in time. If schools choose to apply terms in this way then the system should be flexible enough to accommodate that approach. Furthermore, some schools are starting to use teacher assessments of progress as well as attainment to get around the inadequacy of linear metrics commonly used in systems. If schools need more than one set of descriptors to account for the achievement of all pupils, they should have that option.

3) Record depth of understanding
In a curriculum where pupils do not move on to next year's content early, how do we show 'above expected progress' for pupils that start the year 'at age related expectations'?* All fine for pupils that are below and catch up - they make progress in the traditional sense, through rapid and extensive coverage of the curriculum - but those secure learners present a problem. This is why it's useful to be able to differentiate on the basis of depth at the objective level. It means that a) teachers can record and identify pupils on the basis of their competency in key areas, and b) schools can have some measure of progress for those pupils constrained within the parameters of their age-appropriate curriculum, should they need it. Most systems do this to a degree, using a RAG rating system to show the pupil's security within each objective, but an associated numerical system is useful for aggregated cohort and group reporting, and possibly for individual pupil progress. Admittedly, this is probably more about external scrutiny than classroom practice but it's an easy win and does not add to workload.

* I used inverted commas because I dislike both those terms but recognise they are commonly used.

4) Enter whatever data they like
A system should be like a scrapbook. It should allow schools to collect whatever assessment data they need, in whatever format it's in, and analyse it in any way they see fit. If they want to track the changes in standardised scores and percentile ranks, let them. If they want to track progress from two separate baselines, they should be able to do that too. I guess this means that they should also be allowed to define their own pseudo-sublevels and points-based progress measures. I'd really rather they didn't, and I hope they change their mind in time, but if that's what they feel is needed right now, then it's better the system does it for them rather than forcing them down the route of building their own spreadsheet and adding to workload.

5) Track pupils out of year group
Not all pupils are accessing the curriculum for their year group and this causes problems in many systems. Either the systems don't allow tracking outside of year group and simply classify such pupils as 'emerging' along with everyone else; or they do allow it but always highlight the pupils with a red flag, define them as well below, and categorise them as making below expected progress. It would be welcome if schools could track the progress of pupils working below the curriculum for their year group without being penalised by the algorithms of the system. Why can't SEN pupils be shown to have made great progress?

6) Design their own reports
The holy grail of tracking: data on a page. A single report that shows all key data for each subject broken down by cohort and key group. Schools need the facility to design and build their own reports to satisfy the needs of governors, LA monitoring visits and Ofsted. These might show the percentage of objectives secured and the percentage of pupils on track alongside the average test score for each key group, at the start of the year, the midpoint and the end. If the school then decides to implement a new form of assessment, they should be able to input that data into the system and add a column to their report. There are too many teachers and senior leaders working around their systems, exporting data into Excel, or running reports in order to complete a form or table to meet a particular requirement, perhaps sent by the LA or MAT. I've even seen headteachers reduced to running countless reports and writing key bits of data down on a pad of paper in order to transfer to the table they've been asked to fill in. Now, much of this data may be pointless and increased workload should be resisted, but it is a fact of school life that people ask for numbers, and systems that allow report templates to be designed to fulfil these needs would be one less headache. And on the subject of reports, we really don't need that many. In addition to a custom-built table, probably something like a pie chart that shows the percentage of pupils that are below, at and above where you expect them to be at any point in time. Maybe a progress matrix. I'm struggling to think of anything else that is actually useful.

7) Input and access their data on any device
It is 2017 after all!
The ultimate aim is to have neat, intuitive, flexible systems that save time and stress. Systems that are useful tools for teaching and learning, that do not influence a school's approach to assessment in any way. They should be easy to use with minimal training; and whilst they shouldn't require much support, if a school does need help, someone on the end of a phone is always very welcome.

So, there's my pie-in-the-sky utopian daydream.

Any takers?

Monday, 6 February 2017

Mitigation in writing

The purpose of this post is not to explain the mechanics and issues of the KS2 writing progress measure - I've already covered that in enough detail here - but I do want to offer a method to help schools counter the nonsense. Suffice to say, the measure is badly designed, imprecise, and has a habit of making things look a lot worse than they are. In the absence of fine-graded test scores, we only have the teacher assessments to work with. Rather than decide that this is insufficient data for an accurate progress measure (is there any such thing?), the decision was taken to assign nominal scores to the teacher assessments as follows:

Working towards the expected standard:  91
Working at the expected standard: 103
Working at greater depth: 113

These nominal scores for writing are then compared against fine-graded scaled score estimates - e.g. 96.69 or 105.77 - just as they are in reading and maths. Unfortunately, unlike reading and maths where pupils actually have test scores, in writing there is no such thing. The benchmarks that pupils' results are compared against are therefore unachievable, and the big leaps between nominal scores result in some huge negative and positive differences.

So if schools have lots of pupils assessed as 'working towards' they tend to have big negative progress scores; if they have lots assessed as 'working at the expected standard', progress scores tend to be positive. The odd thing is that many, if not most, pupils assessed as working towards who have negative progress scores have benchmarks greater than 91 but less than 103. This means that if they achieved the expected standard (and were assigned a nominal score of 103) their progress would be positive. And this got me thinking: surely all such pupils are actually in line, aren't they? They achieve working towards, they have a score of 91, their benchmark is 97.26 - which they can't achieve - and the next available score is 103. That, in my mind, means their progress is broadly in line with average. They've done what you'd expect them to do from their particular start point. They can't hit the benchmark, they can only fall short or exceed it.
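The arithmetic is easy to check. Here's a quick sketch, using the 97.26 benchmark above as the illustrative figure:

```python
# Nominal scores assigned to the KS2 writing teacher assessments
wts, exs = 91, 103

# An illustrative fine-graded benchmark for a 'working towards' pupil
estimate = 97.26

# Progress score = score achieved minus benchmark
print(round(wts - estimate, 2))  # -6.26: negative, yet 97.26 is unachievable
print(round(exs - estimate, 2))  # 5.74: the next available score overshoots it
```

The pupil can only land 6.26 below the benchmark or 5.74 above it; there is nothing in between.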

To counter the negative (and significantly negative) scores many schools have found themselves saddled with for KS2 writing, I propose the following approach for individual pupils: 

Above: positive progress score
In line: negative progress score but estimate is lower than next nominal score threshold
Below: negative progress score and estimate is higher than next nominal score threshold

For pupils assessed as working towards, this works out as:

Above: positive progress score
In line: negative progress score but estimate between 91 and 103
Below: negative progress score and estimate above 103

In one school recently I suggested a slight variation, which acts as a compromise with the DfE methodology:

Above: positive progress score
In line: negative progress score and estimate 91-100
Below: negative progress score and estimate 100-103
Well below: negative progress score and estimate >103

Pupils were then colour coded dark green, light green, orange, and red respectively. They only had one 'red' pupil.
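For anyone wanting to automate this, here is a minimal sketch of the main three-category rule in Python. The short TA codes (WTS, EXS, GDS) and the function name are my own labels, and I've assumed that a greater depth pupil with a negative score also counts as in line, since there is no higher nominal score for them to reach:

```python
# Nominal scores assigned to KS2 writing teacher assessments.
# The short codes are my own labels for the three outcomes.
NOMINAL = {"WTS": 91, "EXS": 103, "GDS": 113}

def classify(ta, estimate):
    """Categorise a pupil's KS2 writing progress.

    ta       -- teacher assessment code ("WTS", "EXS" or "GDS")
    estimate -- the pupil's fine-graded benchmark score
    """
    score = NOMINAL[ta]
    if score - estimate > 0:
        return "above"
    # Negative progress score: 'in line' if the benchmark sits below the
    # next achievable nominal score, i.e. the pupil could not have reached
    # it without jumping a whole assessment band. A GDS pupil has no
    # higher band, so a negative score is treated as in line (my assumption).
    next_score = min((s for s in NOMINAL.values() if s > score),
                     default=float("inf"))
    return "in line" if estimate < next_score else "below"

# The WTS pupil with the benchmark from the post: negative score, but in line
print(classify("WTS", 97.26))   # in line
print(classify("WTS", 105.0))   # below
print(classify("EXS", 96.69))   # above
```

Summarising a cohort is then just counting, e.g. the percentage in line is `sum(classify(ta, est) == "in line" for ta, est in pupils) / len(pupils) * 100`.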

Once you've chosen your approach, simply state the percentage of pupils that fall into each category. If you like, you could present it in a progress matrix from the KS1 writing level start point.

I may add this approach into my VA calculator at some point, but in the meantime you can do it manually from the KS2 pupil list, which you can download from RAISE. I definitely think it's worth exploring. Chances are it'll make your writing data look better.

It'll be fairer too.