I like accountability and, as I have said many, many times, the system was a disaster for poor kids before we had proper accountability for governing bodies, Headteachers and schools.
There are lots of streams of accountability in education, including the accountability of policymakers. We need ways of evaluating the success of education policy to ensure that such vital investment in our nation’s future is not purely at the whim of “here today and gone tomorrow politicians”, sometimes with personal axes to grind and even messianic delusions.

Then there is the accountability of those who govern our schools – the LAs, the chains and federations and the local governing bodies. We need ways of evaluating their understanding of what is happening to the schools in their care, and their ability to manage the conflicting pressures and demands on their leaders, managers and staff, so that the students in their care benefit from the huge investment for which they are ultimately responsible.

Headteachers and other leaders need to be accountable for the outputs from their schools, and for the myriad decisions and strategies they implement which enhance or hinder the ability of staff to be properly effective and the life chances of individual students.
Our accountability mechanisms don’t do all these things for us.
I know that the argument goes “government are the most accountable because of elections”, but that is actually rubbish, because elections are never solely about our schools. It seems to me that government adopts a “heads we win / tails you lose” strategy which works like this:
Stage One – past improvements in outputs were due to –
- measuring the wrong things
- “gaming behaviour” / cheating
- erosion of standards
Stage Two – change what is to be measured in order to –
- demonstrate that standards did not improve under the last lot
- induce a change in the behaviour of schools.
Stage Three – Heads we win / tails you lose
- measured outputs go down: we’ve made the system more rigorous
- measured outputs go up: now it is genuine improvement
I don’t understand why, when accountability measures are subject to sudden changes at the whim of politicians (as a clear substitute for reasoned policy change based on research), school leaders jump to change their decisions and strategies to ones they don’t think are in the students’ best interests.
Oh… actually, that last one maybe I do understand, even though it depresses and saddens me the most. I understand because the stakes are so high for heads and school leaders. And, sadly, the “middle tier” (both chains and LAs) are often not effective at understanding and managing the conflicting pressures and demands on their leaders and schools in order to best support them in serving the students.
Which is why I don’t understand why accountability mechanisms do not hold LAs properly to account, or chains to account at all!
It is not a popular view, but I do think there has been an erosion of academic standards over many years
I helped teach GCSE Sociology (my original degree subject) a couple of years ago, and I was pretty disappointed by how little academic sociological content is expected now compared to when I did O-level. It took me three attempts to get a Maths O-level (which I finally did aged 21, at the same time as my degree!). I suspect I would have sailed through the GCSE.
We are an IB school at DYCA because I don’t believe A-level helps our young people adopt a properly academic approach to studies, or prepares them for success at university. Our first IB Diploma cohort spent Year 11 learning “proper subjects”, as we had banked their GCSEs in Year 10 at Cs and Bs. I taught them philosophy, politics and economics, Lynne taught them Science and ethical decisions for 21st century leaders, Al taught Theory of Knowledge and Sean taught Art Appreciation. OFSTED weren’t too impressed with their levels of progress, but when inner-city teenagers were stopping me in corridors asking if I had read John Stuart Mill, asking me to arrange work experience in the House of Commons with the local MP, applying to Russell Group universities and not being fazed by the interviews, I knew I had been quite right to judge education to be about more than GCSE! The fact I had some hard conversations with an HMI who lacked such understanding was the system’s problem, not mine!
I’m not sure about this “grade inflation” argument though. The fact I find GCSE and A-level limiting is about my vision as someone who believes in education: grade inflation is the territory of those who don’t. That last sentence is a bit arrogant so I need to try and explain.
Grade inflation is frankly not the issue
I’ve seen the charts showing how huge the national increase in pass rates for English and Maths GCSE has been, and been present at debates about whether this can possibly represent “genuine improvement”. I’ve also listened to the government state that every student should keep re-taking English and Maths until they get a C grade, up to the age of 19. And then today the government were respectful enough to announce, through a press leak, that only a student’s first grade will count in a school’s league table.
So… only the first time counts, but you have to keep doing it until you get to the standard we require. Confused? You should be. The confusion is because the premise of all this is completely spurious.
If you make a high-stakes accountability system in which heads literally lose their jobs, and create “cliff edges”, don’t be surprised if every single drop of energy goes into that cliff edge (particularly so with this measure, which the vast majority of heads actually thought sensible)! It would be far more astonishing if the C grades hadn’t risen dramatically over the last few years. Anything which gets 100% attention and effort will show dramatic improvement. The better question is: should we have given this cliff edge 100% attention and effort?
But no, it was important for “Stage One” (as mentioned above) to concentrate the mind on grade inflation and gain a consensus about the need to do something. The “something” turned out to be “comparable outcomes”. Never in educational history has it been so obvious that the wrong answer is being given to the wrong question. I have struggled over many months to fully understand this and I am pretty sure that I do. I also think comparable outcomes was developed as the right solution to a real problem in the past, but the way it has been used over the last two years is something which shames us all.
The real issue is: what do we have exams for?
You see, I think there are two equally valid interpretations of what we have exams for. Over recent years you will have heard colleagues across the system discussing “criterion referencing” and “norm referencing”. I’ve listened and read a lot and, at first, I really thought a lot of these people were cleverer than me (remember that Maths O-level!), but actually this is an issue that requires a proper examination of basic principles. It is about criterion and norm referencing, but let’s put it back to basic principles – what do we want our exams to do?
- Should a grade awarded in an examination say “this student can understand / knows / is able to apply subject content at a set and understood level”?
- Should a grade awarded in an examination say “this student is in the x centile of their year cohort in knowing / understanding / applying this subject content”?
These are both valid purposes of exams BUT THEY ARE NOT THE SAME. Furthermore, I am now sure that an exam cannot do both no matter how clever these assessor bods think they are!
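The gap between the two interpretations can be made concrete with a toy sketch (every name, mark and threshold below is invented purely for illustration, and nothing here describes how real awarding bodies set boundaries): the same raw marks produce different grades depending on which purpose the exam serves, and if the whole cohort genuinely improves, only the criterion-referenced grades move.

```python
# Toy sketch: the same raw marks graded two ways.
# All names, marks, thresholds and bands are invented for illustration.

scores = {"Amy": 82, "Ben": 74, "Cal": 68, "Dee": 55, "Ed": 41}

def criterion_grade(mark):
    """Criterion referencing: a grade certifies a fixed, known standard."""
    for cutoff, grade in ((80, "A"), (65, "B"), (50, "C")):
        if mark >= cutoff:
            return grade
    return "U"

def norm_grades(marks):
    """Norm referencing: a grade records position within the cohort."""
    bands = ((0.2, "A"), (0.5, "B"), (0.8, "C"))  # e.g. top 20% get an A
    ranked = sorted(marks, key=marks.get, reverse=True)
    graded = {}
    for i, name in enumerate(ranked):
        centile = (i + 1) / len(ranked)  # best student has the lowest centile
        graded[name] = next((g for frac, g in bands if centile <= frac), "U")
    return graded

print({n: criterion_grade(m) for n, m in scores.items()})
print(norm_grades(scores))

# If every student improves by 10 marks, criterion-referenced grades rise,
# but norm-referenced grades are completely unchanged: the zero-sum problem.
improved = {n: m + 10 for n, m in scores.items()}
print({n: criterion_grade(m) for n, m in improved.items()})
print(norm_grades(improved))
```

The point of the sketch is only that the two grading rules answer different questions about the same marks – one certifies a standard, the other ranks a cohort – and that under the second rule no amount of genuine, cohort-wide improvement can show up in the grades.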
If we have exams which are designed around the first principle, then we have a good mechanism for measuring school and system improvement, but unless we get the accountability measures right, we will get behaviour which leads to improvements we don’t believe in. This is where we were. There are solutions to this problem. I respectfully suggest two: firstly, government should take advice from the system (@headsroundtable?) about what accountability measures should be used to mitigate undesirable behaviours; secondly, it should ensure proper criterion-based standards in examinations through a properly accredited professional body of Chartered Assessors.
If we have exams which are designed around the second principle, we can always “stratify” a cohort (understand that an “A” means “A” and an “E” means an “E” – useful to universities and employers perhaps?), and it might be useful in judging system improvement: if you need to score 10% more to get an “A” this year than last year, then this cohort has improved on last year’s. This might be analysed further: e.g. was there an initiative / policy introduced when they were in KS1 which we can see made a difference? So, you see, I am not totally against norm referencing: indeed, as long as we understand that is what we are doing, it could be useful. What it cannot do, however, is be used in accountability measures at school level.
If we are using examinations around this second principle we cannot measure school improvement and we cannot have schools collaborating if we try to do so. We have created a zero sum game. If this is what our exams are to be about, we have to introduce a different measure of data-based accountability, which is, of course, another challenge.
The current mess we are in is because (amongst a number of other nonsenses) we have no agreement about what our examinations are there to do.
I have come to the inevitable conclusion that we need to face the question and we need to face the answer: what do we want our examination system to do? To be honest, I can live with either answer because there are plenty of arguments on both sides. But this does not mean we can do both, and we certainly cannot answer the accountability challenges while we are attempting to do so.
Data isn’t the only form of accountability
We need proper judgement-based accountability too (see my earlier description of a Year 11 curriculum and why). Part of the reason for the complete and utter mess we have got into is that judgement-based accountability now seems to rely totally on data-based accountability. We need to acknowledge that we need both, and that they need to have a clear relationship. At the moment, however, we have HMI (who really should know better) stuck in a morass of data which is rapidly losing credibility.
Phew… I feel a lot better for that!
Thanks for getting to the end!