During a malware outbreak, zero-day loopholes can boost transmission by increasing the susceptibility of target machines. In 2010, the ‘Stuxnet’ worm was discovered to have infected Iran’s Natanz nuclear facility. According to later reports, this meant it would have been able to damage the facility’s vital centrifuges. To spread successfully through the Iranian systems, the worm had exploited four zero-day loopholes, which was almost unheard of at the time. Given the sophistication of the attack, many in the media pointed to the US and Israeli military as potential creators of the worm. Even so, the initial infection may have been the result of something far simpler: it’s been suggested that the worm got into the system via a double agent with an infected USB stick.[21]
Computer networks are only as strong as their weakest links. A few years before the Stuxnet attack, hackers successfully accessed a highly fortified US government system in Afghanistan. According to journalist Fred Kaplan, Russian intelligence had supplied infected USB sticks to several shopping kiosks near the NATO headquarters in Kabul. Eventually an American soldier bought one and used it with a secure computer.[22] It’s not only humans who pose a security risk. In 2017, a US casino was surprised to discover that its data had been flowing to a hacker’s computer in Finland. But the real shock was the source of the leak. Rather than targeting the well-protected main server, the attacker had got in through the casino’s internet-connected fish tank.[23]
Historically, hackers have been most interested in accessing or disrupting computer systems. But as technology increasingly becomes internet-connected, there is growing interest in using computer systems to control other devices. This can include highly personal technology. While that casino fish tank was being targeted in Nevada, Alex Lomas and his colleagues at British security firm Pen Test Partners were wondering whether it was possible to hack into Bluetooth-enabled sex toys. It didn’t take them long to discover that some of these devices were highly vulnerable to attack. Using only a few lines of code, they could in theory hack a toy and set it vibrating at its maximum setting. And because such devices allow only one Bluetooth connection at a time, the owner would have no way of turning it off.[24]
Of course, Bluetooth devices have a limited range, so could hackers really do this in practice? According to Lomas, it’s certainly possible. He once checked for nearby Bluetooth devices while walking down a street in Berlin. Looking at the list on his phone, he was surprised to see a familiar ID: it was one of the sex toys that his team had shown could be hacked. Someone was presumably carrying it with them, unaware that a hacker could easily switch it on.
It’s not just Bluetooth toys that are susceptible. Lomas’ team found that other devices were vulnerable too, including a brand of sex toy with a WiFi-enabled camera. If people hadn’t changed the default password, it would be fairly easy to hack into the toy and access its video stream. Lomas has pointed out that the team never tried to connect to a device outside their lab. Nor did they carry out the research to shame people who use these toys. Quite the opposite: by raising the issue, they wanted to ensure that people could do what they wanted without fear of being hacked, and in doing so pressure the industry to improve its standards.
It’s not just sex toys that are at risk. Lomas found that the Bluetooth trick also worked on his father’s hearing aids. And some targets are even larger: computer scientists at Brown University discovered that it was possible to gain access to research robots, due to a loophole in a popular robotics operating system. In early 2018, the team managed to take control of a machine at the University of Washington (with the owners’ permission). They also found threats closer to home. Two of their own robots – an industrial helper and a drone – were accessible to outsiders. ‘Neither was intentionally made available on the public Internet,’ they noted, ‘and both have the potential to cause physical harm if used inappropriately.’ Although the researchers focused on university-based robots, they warned that similar problems could affect machines elsewhere. ‘As robots move out of the lab and into industrial and home settings, the number of units that could be subverted is bound to increase manifold.’[25]
The internet of things is creating new connections across different aspects of our lives. But in many cases, we may not realise exactly where these connections lead. This hidden network became apparent at lunchtime on 28 February 2017, when several people with internet-connected homes noticed that they couldn’t turn on their lights. Or turn off their ovens. Or get into their garages.
The glitch was soon traced to Amazon Web Services (AWS), the company’s cloud computing subsidiary. When a person hits the switch to turn on a smart light bulb, the command will typically travel first to a cloud-based server – such as one running on AWS – potentially located thousands of miles away. This server will then send a signal back to the bulb to turn it on. That February lunchtime, however, some of the AWS servers had briefly gone offline. With the servers down, a large number of household devices had stopped responding.[26]
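To make that dependency concrete, here is a minimal sketch in Python of the round trip described above: the switch never talks to the bulb directly, it asks a remote server to do so. The endpoint URL and message format here are hypothetical illustrations, not any vendor’s actual API.

```python
# Minimal sketch of a cloud-dependent smart switch (hypothetical endpoint, not a real API).
import json
import urllib.error
import urllib.request

CLOUD_ENDPOINT = "https://example-smart-home.cloud/api/bulbs"  # hypothetical server


def turn_on_bulb(bulb_id: str) -> bool:
    """Ask the cloud server to switch the bulb on; return True if it acknowledged."""
    payload = json.dumps({"bulb": bulb_id, "state": "on"}).encode("utf-8")
    request = urllib.request.Request(
        f"{CLOUD_ENDPOINT}/{bulb_id}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        # If the cloud server is unreachable - as some AWS servers briefly were -
        # the command never reaches the bulb and the light simply stays off.
        return False


if __name__ == "__main__":
    if not turn_on_bulb("living-room-1"):
        print("Cloud server unreachable: the bulb never gets the message.")
```

In this arrangement the bulb’s availability is tied to the server’s: when the remote service goes down, so does the light switch.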
AWS has generally been very reliable – the company promises working servers over 99.99 per cent of the time – and if anything this reliability has boosted the popularity of such cloud computing services. In fact, they’ve become so popular that almost three-quarters of Amazon’s recent profits have come from AWS alone.[27] However, widespread use of cloud computing, combined with the potential impact of a server failure, has led to suggestions that AWS might