CHAPTER 13
From Purse to Wallet
It was 11 p.m. on the evening of the UK’s 2017 general election. The polls had been closed for one hour, and a rumour had started doing the rounds on social media. Youth turnout had gone up. A lot. People were pretty excited about it. ‘My contacts are telling me that the turnout from 18-24 year olds will be around 72/73%! Finally the Youth have turnedddd out!! #GE2017’ tweeted1 Alex Cairns, CEO and founder of The Youth Vote – a campaign to engage young people in UK politics. A couple of hours later, Malia Bouattia, then president of the National Union of Students, put out the same statistic in a tweet that went on to be retweeted over 7,000 times.2 The following morning David Lammy, Labour MP for the London borough of Tottenham, tweeted his congratulations: ‘72% turnout for 18-25 year olds. Big up yourselves #GE2017’.3 His tweet received over 29,000 retweets and over 49,000 likes.
There was just one problem: no one seemed to have the data to back any of this up. Not that this stopped news outlets from repeating the claims, all citing either unverified tweets or each other as sources.4 By Christmas, Oxford Dictionaries had named ‘youthquake’ as its word of the year, citing the moment ‘young voters almost carried the Labour Party to an unlikely victory’.5 We were witnessing the birth of a zombie stat.
A zombie stat is a spurious statistic that just won’t die – in part because it feels intuitively right. In the case of the UK’s 2017 general election we needed an explanation for why, contrary to nearly all polling predictions, the Labour Party did so well. An unprecedented increase in youth turnout fitted the bill: Labour had courted the youth vote, the story went, and it had almost won. But then, in January 2018, new data emerged from the British Election Study.6 There was some debate over how definitive the data was,7 but the famous youthquake was downgraded to more of a youth-tremor at best. By March no one credible was talking about a ‘youth surge’ without substantial caveats, and the 72% statistic was firmly on life support.8
The British youthquake that never was had a fairly short life for a zombie stat. This is partly because, while secret ballots preclude the possibility of absolutely conclusive polling data, we do at least collect data on voting. A lot of data, in fact: elections are hardly an under-researched topic. But when a zombie stat emerges in an area where data is scarce, the stat becomes much harder to explode.
Take the claim that ‘70% of those living in poverty are women.’ No one is quite sure where this statistic originated, but it’s usually traced to a 1995 UN Human Development Report, which included no citation for the claim.9 And it pops up everywhere, from newspaper articles, to charity and activist websites and press releases, to statements and reports from official bodies like the ILO and the OECD.10
There have been efforts to kill it off. Duncan Green, author of From Poverty to Power, brands the statistic ‘dodgy’.11 Jon Greenberg, a staff writer for fact-checking website Politifact, claims, citing World Bank data,12 that ‘the poor are equally divided by gender’, with, if anything, men being slightly worse off. Caren Grown, senior director of Gender Global Practice at the World Bank, bluntly declares the claim to be ‘false’, explaining that we lack the sex-specific data (not to mention a universally understood definition of what we mean by ‘poverty’) to be able to say one way or the other.13
And this is the problem with all this debunking. The figure may be false. It may also be true. We currently have no way of knowing. The data Greenberg cites no doubt does indicate that poverty is a gender-blind condition, but the surveys he mentions, impressive though their sample size may be (‘a compilation of about 600 surveys across 73 countries’), are entirely inadequate to the task of determining the extent of feminised poverty. And having an accurate measure is important, because data determines how resources are allocated. Bad data leads to bad resource allocation. And the data we have at the moment is incredibly bad.
Gendered poverty is currently determined14 by comparing the relative poverty of households where a man controls the resources (male-headed households) with that of households where a woman controls the resources (female-headed households).15 There are two assumptions being made here. First, that household resources are shared equally between household members, with everyone in the household enjoying the same standard of living. And second, that there is no difference between the sexes when it comes to how they allocate resources within their households. Both assumptions are shaky, to say the least.
Let’s start with the assumption that all members of a household enjoy an equal standard of living. Measuring poverty by household means that we lack individual-level data, but in the late 1970s, the UK government inadvertently created a handy natural experiment that allowed researchers to test the assumption using a proxy measure.16 Until 1977, child benefit in Britain was mainly credited to the father in the form of a tax deduction on his salary. After 1977 this tax deduction was replaced by a cash payment to the mother, representing a substantial redistribution of income from men to women. If money were shared equally within households, this transfer of income ‘from wallet to purse’ should have had no impact on how the money was spent. But it did. Using the proxy measure of how much Britain was spending on