Here, at least, there’s fresh paint on the walls, and the thin carpet isn’t worn through.
MacDonald pushes open a beige door and ushers you into a cramped office. There’s a huge, old-fashioned-looking monitor on his desk, and a glass-fronted bookcase holding a small, dog-eared collection of journals and books. Judging from the dust and the yellowing corners, they haven’t been read in a while. Trophy copies of his papers, you assume. He flops down into a cheap swivel chair, and gestures at the two fabric-padded bucket seats in front of his desk. “Make yourselves at home. I’m sorry I can’t offer you any hospitality—our coffee machine’s broken again, and the corporate hospitality budget is somewhat lacking this decade.”
“Thanks,” you manage. The sense of déjà vu resolves itself.
“I shall remember not to confess to any murders I didn’t commit.” MacDonald seems to find your caution inappropriately amusing. You’re about to repeat and rephrase when he adds, “I understand you’re in need of domain-specific knowledge.” He leans forward, smirk vanishing. “Why me?”
“Your name came out of the hat.” You decide to press on. Probably he got the message: In any case, having an inappropriate sense of humour isn’t an arrestable offense. “We’re investigating a crime involving some rather strange coincidences that appear to involve some kind of social network.” The half smile vanishes from Dr. MacDonald’s face instantly. “You’re a permanent lecturer in informatics with a research interest in automated social engineering and, ah, something called ATHENA. Our colleagues recommended you on the basis of a review of the available literature on, uh, morality prosthetics and network agents.”
Kemal, sitting beside you with crossed arms, nods very seriously. MacDonald looks nonplussed.
“Really? Coincidences?” He pauses. “Coincidences. A social network. Can you tell me what kind?”
“Fatal ones,” says Kemal.
You sit back, mimic his posture, and smile at him. It’s all basic body-language bullshit, but if it puts him more at ease . . .
“How much do you know about choice architecture?”
He’s got you. You glance sidelong at Kemal, who shrugs minutely. “Not a lot.” The phrase rings a very vague bell, but no more than that. “Suppose you tell me?”
“If only my students were so honest . . . let’s review some basic concepts. In a nutshell: When you or I are confronted with some choice—say, whether to buy a season bus pass or to pay daily—we make our decision about what to do by using a heuristic: a fast mental shortcut, not a reasoned comparison of every alternative.”
You nod, suppressing disappointment.
“It’s another approach to social engineering. Take policing, for example.” He nods at you. “There’s the law, which we’re all expected to be cognizant of and to obey, and there’s the big stick to convince us that it’s a lot cheaper to play along than to go against it—yourselves, and the courts and prison and probation services and all the rest of the panoply of justice. However, it should be obvious that the existence of law enforcement doesn’t prevent crime. In fact, no offense to your good selves, it can’t.
“For starters, in modern societies, the law is incredibly complex: There are at least eight thousand offenses on the books in England and about the same in this country, enough that you people have to use decision-support software to figure out what to charge people with, and perhaps an order of magnitude more regulations for which violations can be prosecuted—ignorance may not be a defense in law, but it’s a fact on the ground. To make matters worse, while some offenses are strict-liability—possession of child porn or firearms being an absolute offense, regardless of context—others hinge on the state of mind of the accused. Is it murder or manslaughter? Well, it depends on whether you intended to kill.”
He pauses. “Are you following this?”
“Just a sec.” You flick your fingers at the virtual controls, roll your specs back in time a minute to follow MacDonald, who is on a professorial roll. “Yes, I’m logging you loud and clear. If you’ll pardon me for asking, though, I asked about automated social engineering? Not for a lecture on the impossibility of policing.” Perhaps you let a little too much irritation into your voice, as he shuffles defensively.
“I was getting there. There’s a lot of background . . .” MacDonald shakes his head. “I’m not having a go at you, honestly, I’m just trying to explain the background to our research group’s activity.”
Kemal leans forward. “In your own time, Doctor.” He doesn’t look at you, doesn’t make eye contact, but he’s clearly decided to nominate you for the bad-cop role. Which is irritating.
“Alright. Well, moving swiftly sideways into cognitive neuroscience . . . in the past twenty years we’ve made huge strides, using imaging tools, direct brain interfaces, and software simulations. We’ve pretty much disproved the existence of free will, at least as philosophers thought they understood it. A lot of our decision-making mechanics are subconscious; we only become aware of our choices once we’ve begun to act on them. And a whole lot of other things that were once thought to correlate with free will turn out also to be mechanical. If we use transcranial magnetic stimulation to disrupt the right temporoparietal junction, we can suppress subjects’ ability to make moral judgements; we can induce mystical religious experiences; we can suppress voluntary movements, and the patients will report that they didn’t move because they didn’t want to.
“In a nutshell, then, what I’m getting at is that the project of law, ever since the Code of Hammurabi—the entire idea that we can maintain social order by obtaining voluntary adherence to a code of permissible behaviour, under threat of retribution—is built on a false model of how we actually make decisions.
“Which is where we come to the ATHENA research group—actually, it’s a joint European initiative funded by the European Research Council—currently piloting studies in social-network-augmented choice architecture for Prosthetic Morality Enforcement.”