Drumbeat to Ditch NAPLAN is Growing Louder

11 May 2018

This article was first published in the Sydney Morning Herald on 6 May 2018.

There is a tendency in education for apparently logical solutions to backfire. The NAPLAN system is a case in point. It was launched with great fanfare a decade ago as a tool to "drive improvements in student outcomes and provide increased accountability to the community", according to its website. This sounds fair enough, but after 10 costly years virtually everyone who actually understands NAPLAN opposes it.
Federal Education Minister Simon Birmingham on Friday rejected the call of his NSW colleague Rob Stokes to abandon NAPLAN, but in light of the release of the Gonski 2.0 report this week, and a review of aspects of the test later this year, there is an opportunity to re-examine it and the consequences it has wrought on education.

For starters, NAPLAN was poorly designed. International expert Les Perelman, who reviewed NAPLAN for the NSW Teachers Federation, has called it "one of the strangest writing tests I’ve ever seen ... It’s measuring all the wrong things". For those interested, his report is detailed, and it is far from the first critique of the test.
Our teachers have observed other problems, such as students being tested on material not yet due to be taught, and distinctly odd marking criteria. For instance, one quarter of the marking criteria for writing in 2017 were not met by any Year 3 student in NSW, and one third were met by fewer than 5 per cent. Year 5 results were similar. It is hard to believe that a well-designed test would set standards that could not be met by a single student in NSW, regardless of talent, schooling or background.

The limits of statistical analysis are another major problem. Accurate statistics require large numbers. If NAPLAN were well designed, we might use it to draw conclusions about education in NSW or Australia as a whole. But at the level of an individual student, a class or a school, its results are so uncertain as to be almost useless.
As Melbourne University academic Margaret Wu explains: "[A] student’s NAPLAN score on a test could fluctuate by about plus or minus 12 per cent. For an individual student, the growth measures ... have an error margin greater than one year’s growth." Because growth is calculated as the difference between two already-uncertain scores, the errors compound. It is entirely possible for a student who has gone backwards to appear to have made two full years’ growth – or vice versa.

For a class of up to 30 students, the class average is expected to fluctuate randomly by about 10 per cent each year. This is before taking into account common complicating factors, such as student movement between schools or events outside the teacher’s control affecting student performance – of which there are many.

No one who understands the data would attempt to draw substantive conclusions from it about individual student or school performance. Teachers know this. Education experts know this. Yet parents are positively encouraged to use NAPLAN to compare schools via MySchool data and the media’s league tables. Even if parents are sufficiently reflective to focus on rates of progress rather than raw achievement, the statistical noise in the results will still mislead them.

Perversely, the promotion of NAPLAN as a tool for parent choice and community accountability actually undermines quality teaching. The publication and misguided application of results have transformed NAPLAN into a high-stakes exam for both students and schools, and that alters behaviour. Queenwood is fortunate to have an academic record so strong that we can largely ignore the pressure to drive up NAPLAN results, but many schools cannot.

The effects of high-stakes testing have been pernicious in countries such as Britain and the United States. Australian teachers have not (yet) been driven to the same extremes, but investing time in practice tests and specific NAPLAN techniques is a rational and common response.

Teachers know that teaching to the test is poor practice, but who can blame them when they are held accountable on the basis of flawed data? There is a small NAPLAN industry – resources, student coaching and training courses for teachers – that benefits from the test. Meanwhile, breadth of curriculum is being sacrificed: evaluating schools only on literacy and numeracy sidelines the sciences, humanities and the arts.
And we haven’t even begun to address the implications of online testing or robot-marking, the impact of NAPLAN testing on children’s mental health, the serious concerns over privatisation of the tests, or the enormous cost ($100 million every year) of running them.

The misuse of NAPLAN must stop. Stokes’ acknowledgment that NAPLAN is "highly defective" is welcome. Awareness is growing of how little support the test enjoys amongst teachers and education experts, and the release of Gonski 2.0 seems to be adding momentum to the criticism.
As a school principal, I want to be able to give children and parents confidence, but right now, when parents ask whether they should withdraw their children from NAPLAN, I find it hard to argue why they shouldn’t. We can, in theory, test children to support systemic improvement, school improvement or individual diagnosis – but NAPLAN misses the mark on all three. Instead, it is confusing the debate and driving short-term, cosmetic teaching adjustments at the expense of real learning.

The drumbeat for the removal of NAPLAN is growing louder, but the release of the Gonski 2.0 report only makes it more critical that we understand past mistakes. Given its vast scope, the Gonski report is necessarily brief on each topic, and its recommendations are fairly familiar. There is much to be welcomed, but the real test will be detailed implementation, and there are points of concern, including the emphasis on testing and data for tracking student progress and for quality assurance. This is exactly what NAPLAN was supposed to do, but poor design and the misuse of data have ultimately undermined quality education. Gonski 2.0 must be deftly handled to avoid the same result.

Ms Elizabeth Stone
Principal