But that’s not even the most troubling bit. The most troubling bit is that we have no idea how bad the problem actually is. Most algorithms of this kind are kept secret and protected as proprietary code. This means that we don’t know how these decisions are being made and what biases they are hiding. The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us. This, therefore, is a double gender data gap: first in the knowledge of the coders designing the algorithm, and second in the knowledge of society at large about just how discriminatory these AIs are.
Employment procedures that are unwittingly biased towards men are an issue in promotion as well as hiring. A classic example comes from Google, where women weren’t nominating themselves for promotion at the same rate as men. This is unsurprising: women are conditioned to be modest, and are penalised when they step outside of this prescribed gender norm.66 But Google was surprised. And, to their credit, they set about trying to fix it. Unfortunately, the way they went about fixing it was quintessential male-default thinking.
It’s not clear whether Google didn’t have or didn’t care about the data on the cultural expectations that are imposed on women, but either way, their solution was not to fix the male-biased system: it was to fix the women. Senior women at Google started hosting workshops ‘to encourage women to nominate themselves’, Laszlo Bock, head of people operations, told the New York Times in 2012.67 In other words, they held workshops to encourage women to be more like men. But why should we accept that the way men do things, the way men see themselves, is the correct way? Recent research shows that while women tend to assess their intelligence accurately, men of average intelligence think they are more intelligent than two-thirds of people.68 This being the case, perhaps it wasn’t that women’s rates of putting themselves up for promotion were too low. Perhaps it was that men’s were too high.
Bock hailed Google’s workshops as a success (he told the New York Times that women are now promoted proportionally to men), but if that is the case, why the reluctance to provide the data to prove it? When the US Department of Labor conducted an analysis of Google’s pay practices in 2017 it found ‘systemic compensation disparities against women pretty much across the entire workforce’, with ‘six to seven standard deviations between pay for men and women in nearly every job category’.69 Google has since repeatedly refused to hand over fuller pay data to the Labor Department, fighting the demand in court for months while insisting that there is no pay imbalance.
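To make that ‘standard deviations’ figure concrete: a gap of six to seven standard deviations means the disparity is far too large to be a statistical fluke. As a minimal illustrative sketch (not the Labor Department’s actual data or methodology, and using entirely fabricated salary figures), a two-sample z-test on sex-disaggregated pay within a single job category might look like this:

```python
# Hypothetical sketch: a two-sample z-test for a pay gap within one job
# category. All salary figures are invented for illustration; this is
# not the Labor Department's actual data or methodology.
from math import sqrt
from statistics import mean, stdev

men_pay = [98_000, 102_000, 105_000, 99_000, 101_000]  # fabricated
women_pay = [88_000, 91_000, 90_000, 87_000, 92_000]   # fabricated

gap = mean(men_pay) - mean(women_pay)

# Standard error of the difference between the two group means
se = sqrt(stdev(men_pay) ** 2 / len(men_pay)
          + stdev(women_pay) ** 2 / len(women_pay))

z = gap / se
print(f"pay gap: ${gap:,.0f}; z = {z:.1f} standard deviations")
# With these toy numbers z comes out around 7: under the null
# hypothesis of equal average pay, a gap that extreme would
# essentially never arise by chance.
```

That is what makes a six-to-seven standard deviation finding ‘systemic’ rather than noise: it cannot plausibly be explained by a handful of outliers.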
For a company built almost entirely on data, Google’s reluctance to engage here may seem surprising. It shouldn’t be. Software engineer Tracy Chou has been investigating the number of female engineers in the US tech industry since 2013 and has found that ‘[e]very company has some way of hiding or muddling the data’.70 Nor do companies seem interested in measuring whether their ‘initiatives to make the work environment more female-friendly, or to encourage more women to go into or stay in computing’, are actually successful. There’s ‘no way of judging whether they’re successful or worth mimicking, because there are no success metrics attached to any of them’, explains Chou. And the result is that ‘nobody is having honest conversations about the issue’.
It’s not entirely clear why the tech industry is so afraid of sex-disaggregated employment data, but its love affair with the myth of meritocracy might have something to do with it: if all you need to get the ‘best people’ is to believe in meritocracy, what use is data to you? The irony is, if these so-called meritocratic institutions actually valued science over religion, they could make use of the evidence-based solutions that do already exist. For example, quotas, which, contrary to popular misconception, were recently found by a London School of Economics study to ‘weed out incompetent men’ rather than promote unqualified women.71
They could also collect and analyse data on their hiring procedures to see whether these are as gender neutral as they think. MIT did this, and their analysis of over thirty years of data found that women were disadvantaged by ‘usual departmental hiring processes’, and that ‘exceptional women candidates might very well not be found by conventional departmental search committee methods’.72 Unless search committees specifically asked department heads for the names of outstanding female candidates, those women might never be put forward. And many of the women who were eventually hired when special efforts were made to find female candidates would not have applied without that encouragement. In line with the LSE findings, the paper also found that standards were not lowered during periods when special effort was made to hire women: in fact, if anything, the women who were hired ‘are somewhat more successful than their male peers’.
The good news is that when organisations do look at the data and attempt to act on it, the results can be dramatic. When a European company advertised for a technical position using a stock photo of a man alongside copy that emphasised ‘aggressiveness and competitiveness’, only 5% of the applicants were women. When they changed the ad to a stock photo of a woman and focused the text on enthusiasm and innovation, the proportion of women applying shot up to 40%.73 Digital design company Made by Many found a similar shift when they changed the wording of their ad for a senior