It is a basic fact of schooling worldwide that children from advantaged homes arrive at school education-ready, while children from disadvantaged homes do not. Children from advantaged backgrounds are often able to read and calculate, hold complex conversations and have a grasp of current events.
Many children from disadvantaged backgrounds may not even know how to hold a book. Good early childhood education can inject a level of school-readiness but cannot entirely overcome the disadvantage. The best estimate of the average learning gap between the advantaged and disadvantaged groups, top to bottom, is about two years of learning at school entry.
Since the school reforms of 1989, school operational funding has included an element measuring disadvantage, based on census data, to provide additional support for schools and hopefully improve learning outcomes. The model was very simple. Find out where the children from a given school live (in census terms, the ‘mesh blocks’), examine the social characteristics (income, benefit receipt, household crowding etc.) of those mesh blocks, calculate the level of disadvantage of that school and provide funding on that basis.
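The mesh-block model described above can be sketched in a few lines. The indicator names, weights and figures here are invented for illustration; the actual Ministry formula uses more census variables and a more elaborate weighting.

```python
# Hypothetical sketch of the mesh-block disadvantage model: each school's
# score is a student-weighted average of its mesh blocks' indicators.
# All names and values below are illustrative, not real census data.

meshblocks = {
    "MB001": {"low_income": 0.7, "benefit": 0.4, "crowding": 0.3},
    "MB002": {"low_income": 0.2, "benefit": 0.1, "crowding": 0.1},
}

# Where each school's students live: school -> {mesh block: number of students}
enrolments = {"School A": {"MB001": 80, "MB002": 20}}

def disadvantage_score(school):
    """Student-weighted mean of mesh-block indicator averages for one school."""
    blocks = enrolments[school]
    total_students = sum(blocks.values())
    return sum(
        n * (sum(meshblocks[mb].values()) / len(meshblocks[mb]))
        for mb, n in blocks.items()
    ) / total_students
```

A school drawing 80 of its 100 students from the more deprived block ends up with a score close to that block's average, which is the intended behaviour of the weighting.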
The decile system
The much-maligned “decile system” came about because, in order to simplify funding arrangements, funding was allocated not according to each school’s individual situation but according to its grouped ranking against other schools. Decile one, for example, contained the ten per cent of schools with the most disadvantaged students.
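The banding step itself is just a rank-and-split. A minimal sketch, assuming a disadvantage score is already available for each school (higher meaning more disadvantaged):

```python
def assign_deciles(scores):
    """Rank schools from most to least disadvantaged and split into ten bands.

    scores: dict of school name -> disadvantage score (higher = more
    disadvantaged). Returns dict of school name -> decile, where decile 1
    holds the most disadvantaged tenth of schools.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    # Integer division places the first n/10 schools in decile 1, and so on.
    return {school: (i * 10) // n + 1 for i, school in enumerate(ranked)}
```

The point to notice is that deciles carry no information beyond the underlying score: two schools with almost identical scores can land in different deciles if they straddle a band boundary.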
This system has endured because it is relatively simple, data-driven and easily updated every five years. It is hated by the sector because decile has become associated, in the mind of the public, with school quality. This was foreseeable and inevitable, as every single piece of research carried out on the reasons for school choice highlights social characteristics as the main factor influencing choice. Thus, in the mind of ‘choosers’, higher decile equates with better children, and therefore better quality.
And how could it be otherwise, really, when all our teachers are taught in the same institutions, school upkeep is relatively even, there is a national curriculum and the only significant variation in schools is the children populating the classrooms? As my research found in 2015, there has been massive white flight from the lowest-decile schools over 20 years, which means that decile one schools are now, on average, 2.5 times smaller than decile 10 schools. This, of course, is a problem that abolishing deciles will not fix; it will simply become invisible and unmeasurable.
The myth is that, in getting rid of deciles, the flight from disadvantaged schools would be halted. But it is the school choice system that has facilitated the ethnic/class flight, not the decile labels. In the absence of deciles, parents find other labels to put on schools, such as “gang”, “brown”, “violent”, “not children like ours”. We know this because other countries with choice and no convenient decile labels experience the same population movements.
New funding model
To get rid of the perceived decile problem, the MOE could simply fund each school on its census characteristics without doing the ranking and decile-making process. This would involve considerably more work, since each school’s entitlement would have to be considered on its own merits and in relation to other schools. It would increase bureaucracy without changing much in terms of actual funding. There would, as ever, be winners and losers in a zero-sum funding system.
However, MOE eyes are now set on a richer prize. The census is old technology: it happens only every five years and is based on paper and pencil. In the new technological world, there must be a better way!
Data sharing – funding children on benefit status of parent
And there is. The generic term is data-sharing. It comes in two types. The first would be a direct comparison between another agency’s records (in the current budget proposal, Ministry of Social Development benefit records) and school enrolments. As far as I can tell, no such data-sharing agreement exists, and it would arguably constitute a major potential breach of privacy to allow such databases to be matched. This is probably not the route intended by the budget announcement.
The second is the relatively new ability to match data anonymously across different administrative systems – for example tax records, educational enrolments and outcomes, benefit records, student loans, ACC and health – through a personal unique identifier (UID). The system, called the Integrated Data Infrastructure (IDI), is administered by Statistics New Zealand and provides exciting opportunities for researchers and others to answer key population-based questions.
But, and it is a huge but, the wonderful indicators able to be compared for research purposes lie under an immovable blanket of confidentiality. Were the data to be identifiable, it would be Orwell’s ‘Big Brother’ come to life. The question is whether using the IDI for funding purposes is a bridge too far in terms of preserving the utter confidentiality of the system. There is also a second question: given that many disadvantaged children are cared for not by their own parents but by grandparents and other carers, is the IDI up to the challenge? However, we will put that aside for the moment. People who want to read up on the use of IDI data to identify disadvantage should refer to Treasury report 16/1.
The data that would need to be matched sit in (at least) three databases – parent to child (such as birth data, though this would exclude children born outside NZ), school attendance for the children (by school name) and length of time on benefit for the parent. In statistical terms it is a pretty simple match. The MOE would not know exactly who would be receiving the funding, so basic confidentiality could be maintained.
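The three-way match can be sketched as follows. Everything here is illustrative: the table layouts, the UIDs and the three-year ‘long-term’ threshold are assumptions, not the actual IDI schema or any official definition.

```python
from collections import Counter

# Hypothetical anonymised tables keyed on unique identifiers (UIDs),
# standing in for the three databases described above.
parent_of = {           # child UID -> parent UID (e.g. from birth records)
    "c1": "p1", "c2": "p1", "c3": "p2", "c4": "p3",
}
school_of = {           # child UID -> school name (enrolment records)
    "c1": "School A", "c2": "School A", "c3": "School B", "c4": "School B",
}
benefit_years = {       # parent UID -> years on benefit (MSD records)
    "p1": 6.0, "p2": 0.5, "p3": 4.2,
}

LONG_TERM = 3.0  # assumed threshold, in years, for "long-term"

# Per-school counts of students whose parent meets the threshold.
# The funder sees only school-level counts, never identities.
counts = Counter(
    school_of[child]
    for child, parent in parent_of.items()
    if child in school_of and benefit_years.get(parent, 0.0) >= LONG_TERM
)
```

The simplicity is the point: once the UIDs line up, the whole exercise is a pair of joins and a group-by. The hard problems are at the edges, as the next paragraphs set out.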
But at the margins, two very worrying elements emerge. The first is the inaccuracy introduced by post-birth migrants, unusual family formations, foster families and so on, who probably make up 10 per cent of all students and a larger share of the disadvantaged. It would take a lot more work to count them (you would need to also look at immigration data and CYF data, for example).
The second concern is that there would be plenty of schools in the higher deciles where only a handful of children come from long-term benefit-dependent families. If funding were received for, say, five children in a school, you might as well put a rubber stamp on their heads reading, “I am from a long-term benefit-dependent family”. Also, as the IDI scheme does not release data for fewer than three cases (for obvious reasons), there would be a necessary marginal error in smaller groups.
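The small-count rule mentioned above amounts to suppressing any cell below a threshold. A minimal sketch, assuming a threshold of three; Statistics New Zealand’s actual confidentiality rules (random rounding and the like) are more involved than this:

```python
def suppress_small_counts(counts, threshold=3):
    """Illustrative small-cell suppression: any school whose count of
    qualifying students is below the threshold is reported as None
    (suppressed) rather than its true value."""
    return {school: (n if n >= threshold else None)
            for school, n in counts.items()}
```

This is exactly where the funding tension lies: the schools with a suppressed (None) count are the ones where per-child funding cannot be allocated accurately without effectively identifying the children.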
My first concern as a researcher on school funding is to try to find out exactly how the scheme is going to work. I suspect that it has essentially been designed as a test case or pilot scheme in using administrative data for funding purposes, and I am sure there will be widespread interest in how it works and how much it will cost to implement. The Ministry will then need to work through the ethical implications of such models. I have begun by putting a series of OIA questions to the MOE. These are below.
A price on every head?
There are also some policy issues to be sorted out. For example, the IDI provides the possibility that each child could become a walking voucher offering schools a certain amount of funding for education based on personal and familial characteristics. There is certainly ongoing interest in school voucher systems by some groups, and the IDI would provide a finely tuned ability to cost out each person according to their individual disadvantage. But the social and ethical questions this would raise hopefully put it beyond any serious scope.
The important implication would be that a ranking of school characteristics for funding purposes would be replaced with a ranking of individual characteristics.
Official information request
I have sent the following OIA request to the MOE to attempt to better understand the scheme as announced.
Please provide the following information under the OIA 1982. In the Minister’s published speech to the National Cross-Sectional forum on 27 May this year, she noted:
To this end, Budget 2016 targets an additional $43.2 million over four years to state and state-integrated schools educating up to 150,000 students from long-term welfare-dependent families.
These students are one of the largest identifiable groups within our education system that is most at risk of educational underachievement.
Please answer the following questions related to this announcement:
- Please provide copies of any briefing papers, policy papers or cabinet papers related to this announcement.
- What data matching approach will be used to discover how many students from long term welfare dependent families attend each school, so that the funding can be allocated?
- How is ‘long term welfare dependent families’ to be defined?
- What legal basis allows for data-matching for such a purpose?
- We gather from the minister’s statement that the $42.1 million (as it shows later in the minister’s speech) includes: “$15.3 million for an extra 1250 students to access in-class support.”
- This leaves a net $26.8 million for allocation to the long term welfare dependent families over four years. Is that figure roughly correct?
- This then indicates an annual sum of around $6.7 million available for allocation. Is that figure roughly correct?
- This appears to translate to an annual sum per long term welfare dependent student (if there are 150,000) of just under $45. Is that figure roughly correct?
- What is the total estimated cost to the Ministry of Education in developing, testing, implementing and administering this scheme over the four years of its life?
- What are the next steps in developing and implementing the programme?
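The arithmetic behind the dollar-figure questions above can be checked in a few lines, using the figures as quoted (the $42.1 million total is the figure given later in the minister’s speech, rather than the $43.2 million headline):

```python
# Checking the budget arithmetic in the OIA questions above.
total = 42.1          # $ millions over four years, per the minister's speech
in_class = 15.3       # $ millions earmarked for in-class support
students = 150_000    # long-term welfare-dependent students, per the speech
years = 4

net = total - in_class                    # left for allocation over four years
per_year = net / years                    # annual allocation
per_student = per_year * 1e6 / students   # annual dollars per student

assert abs(net - 26.8) < 1e-6             # ~$26.8m over four years
```

The per-student figure comes out at roughly $44.67 a year, which is the “just under $45” in the question.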