So, as we all know, the findings of the STA's comparability study have been reported and the three remaining baseline assessments (EEx, NFER, CEM) have been deemed to be incomparable with one another. Wow! Really? The DfE attempted to convince us that they had intended to carry out this study all along, whereas I'm fairly sure they stated they'd wait until the 2015 cohort reached the end of key stage 1 in 2018 before testing the reliability and validity of the data. I suspect they simply realised the mess that had been created and wanted a quick way out of the mire. It's not going away, of course. SchoolsWeek, which ran a front-page story on 26th February under the headline 'Future of baseline tests in doubt', noted that 'a mooted alternative to tests is the introduction of "school readiness" checks - an option said to be preferred by number 10'. But it does mean that all current primary school cohorts will have their progress measured from key stage 1 to key stage 2, and there probably won't be an entry (i.e. reception) to key stage 2 value added measure until 2024, when the 2017/18 cohort reach the end of year 6.
And today SchoolsWeek has run another story: Numbers for baseline tests begin to thin out. Now, there doesn't appear to have been the collapse that many anticipated, because some schools evidently still see value in having a baseline assessment; but in the absence of an official accountability measure, i.e. VA, it remains to be seen what schools will do with this data. Many will run further standardised tests from other providers and hope the numbers go up. This will probably mean subtracting the baseline score from a later standardised score in the hope that the result is positive. Some may assume this to be a form of value added (it isn't). Many schools will seek to group pupils based on past and present standardised scores in order to construct progress matrices. The score thresholds used to define these groups will, in most cases, be entirely arbitrary rather than based on any statistical process. Over the next few years we will no doubt see some rather dubious and unscientific practices going on, which will be used to great effect in improvement conversations and inspections; and in many cases these will go unchallenged because they are based on standardised tests rather than teacher assessment.
It is entirely understandable why schools will adopt the practices they are about to adopt: they are desperate for a supposedly robust and reliable progress measure, and if it's based on tests it must be right, right? Well, anyone who has read any of my blogs in the past year or so will know my thoughts on the progress measures offered by most tracking systems. We have become so desperate for a numerical scale of progress that we are willing to overlook any deficiency in terms of accuracy and meaning. We will readily sacrifice our principles at this crossroads of assessment for a simple number because there are those out there - from various external bodies - who demand it. We are driven by number lust even though the numbers - the pseudo-APS - we generate tell us little or nothing about what pupils can and cannot do, and are therefore of no use to teachers trying to do the job of teaching. I wish we'd just give up trying to quantify distance travelled by pinning a point score onto teacher assessments.
So, yes, if we want to measure progress properly, and get away from the pseudo levels and points of our tracking systems, then tests most certainly have a part to play. But assigning pupils to groups based on arbitrary test score thresholds, or subtracting one test score from another, isn't the way forward. What I'd like to see is the baseline test providers developing robust interim VA measures. Pupils do a baseline assessment, the scores from which are collected by the provider. Then, perhaps two or three years later, the same pupils take another test (it doesn't need to be from the same provider, but the provider would need to collect these scores as well). Assuming a large enough population of pupils have taken the tests (i.e. thousands of pupils), VA analysis can then be carried out. Each pupil's score in the later test is compared against the average score for pupils with the same baseline score, and the average difference is calculated to give the cohort's overall VA score. This would be a robust, valid and meaningful progress measure in which each pupil's attainment is compared against potentially thousands of pupils with similar starting points across hundreds of schools. Again, the baseline providers would not necessarily have to design and administer the later tests; they could instead offer to collect and analyse data from a number of different providers. They could even calculate VA from their own baseline assessment to key stage 1 tests.
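To make the arithmetic concrete, here's a minimal sketch of that calculation. All the scores below are invented for illustration, and real VA analyses (such as the DfE's) involve far larger samples and more sophisticated statistical modelling, but the basic logic - expected score is the average for pupils with the same baseline, VA is the average gap between actual and expected - is just this:

```python
# Illustrative value added (VA) sketch. All scores are made up.
from collections import defaultdict
from statistics import mean

# (baseline_score, later_score) pairs for the whole national sample,
# collected by the test provider across many schools
population = [
    (85, 90), (85, 96), (85, 88),
    (100, 104), (100, 100), (100, 108),
    (115, 118), (115, 112), (115, 120),
]

# Expected later score for each baseline score = the average later score
# of all pupils nationally who had that same baseline score
by_baseline = defaultdict(list)
for baseline, later in population:
    by_baseline[baseline].append(later)
expected = {b: mean(scores) for b, scores in by_baseline.items()}

# One school's cohort, who sat the same two tests
cohort = [(85, 95), (100, 99), (115, 121)]

# Each pupil's VA = actual later score minus the expected score for
# their baseline; the cohort's VA is the average of those differences
pupil_va = [later - expected[baseline] for baseline, later in cohort]
cohort_va = mean(pupil_va)
print(round(cohort_va, 2))  # prints 1.0: on average, a point above expectation
```

Note that this is quite different from simply subtracting the baseline score from the later score: the comparison is against similar-starting-point peers, not against the pupil's own earlier number.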
And if they offered this then I suspect the numbers taking baseline tests, rather than thinning out, would actually increase substantially, such is the need for robust progress measures.
I hope the baseline providers are giving this some thought.