“All right; I’ll stop trying to pin you down. What I want to do right now is go ahead and build the Qusp. And when it’s finished, if we’re certain we can trust it… I want us to raise a child with it. I want us to raise an AI.”
I met Francine at the airport, and we drove across São Paulo through curtains of wild, lashing rain. I was amazed that her plane hadn’t been diverted; a tropical storm had just hit the coast, halfway between us and Rio.
“So much for giving you a tour of the city,” I lamented. Through the windscreen, our actual surroundings were all but invisible; the bright overlay we both perceived, surreally coloured and detailed, made the experience rather like perusing a 3D map while trapped in a car wash.
Francine was pensive, or tired from the flight. I found it hard to think of San Francisco as remote when the time difference was so small, and even when I’d made the journey north to visit her, it had been nothing compared to all the ocean-spanning marathons I’d sat through in the past.
We both had an early night. The next morning, Francine accompanied me to my cluttered workroom in the basement of the university’s engineering department. I’d been chasing grants and collaborators around the world, like a child on a treasure hunt, slowly piecing together a device that few of my colleagues believed was worth creating for its own sake. Fortunately, I’d managed to find pretexts, or even genuine spin-offs, for almost every stage of the work. Quantum computing, per se, had become bogged down in recent years, stymied by both a shortage of practical algorithms and a limit to the complexity of superpositions that could be sustained. The Qusp had nudged the technological envelope in some promising directions, without making any truly exorbitant demands; the states it juggled were relatively simple, and they only needed to be kept isolated for milliseconds at a time.
I introduced Carlos, Maria and Jun, but then they made themselves scarce as I showed Francine around. We still had a demonstration of the “balanced decoupling” principle set up on a bench, for the tour by one of our corporate donors the week before. What caused an imperfectly shielded quantum computer to decohere was the fact that each possible state of the device affected its environment slightly differently. The shielding itself could always be improved, but Carlos’s group had perfected a way to buy a little more protection by sheer deviousness. In the demonstration rig, the flow of energy through the device remained absolutely constant whatever state it was in, because any drop in power consumption by the main set of quantum gates was compensated for by a rise in a set of balancing gates, and vice versa. This gave the environment one less clue by which to discern internal differences in the processor, and to tear any superposition apart into mutually disconnected branches.
Francine knew all the theory backwards, but she’d never seen this hardware in action. When I invited her to twiddle the controls, she took to the rig like a child with a game console.
“You really should have joined the team,” I said.
“Maybe I did,” she countered. “In another branch.”
She’d moved from UNSW to Berkeley two years before, not long after I’d moved from Delft to São Paulo; it was the closest suitable position she could find. At the time, I’d resented the fact that she’d refused to compromise and work remotely; with only five hours’ difference, teaching at Berkeley from São Paulo would not have been impossible. In the end, though, I’d accepted the fact that she’d wanted to keep on testing me, testing both of us. If we weren’t strong enough to stay together through the trials of a prolonged physical separation, or if I was not sufficiently committed to the project to endure whatever sacrifices it entailed, she did not want us proceeding to the next stage.
I led her to the corner bench, where a nondescript grey box half a metre across sat, apparently inert. I gestured to it, and our retinal overlays transformed its appearance, “revealing” a maze with a transparent lid embedded in the top of the device. In one chamber of the maze, a slightly cartoonish mouse sat motionless. Not quite dead, not quite sleeping.
“This is the famous Zelda?” Francine asked.
“Yes.” Zelda was a neural network, a stripped-down, stylized mouse brain. There were newer, fancier versions available, much closer to the real thing, but the ten-year-old, public-domain Zelda had been good enough for our purposes.
Three other chambers held cheese. “Right now, she has no experience of the maze,” I explained. “So let’s start her up and watch her explore.” I gestured, and Zelda began scampering around, trying out different passages, deftly reversing each time she hit a cul-de-sac. “Her brain is running on a Qusp, but the maze is implemented on an ordinary classical computer, so in terms of coherence issues, it’s really no different from a physical maze.”
“Which means that each time she takes in information, she gets entangled with the outside world,” Francine suggested.
“Absolutely. But she always holds off doing that until the Qusp has completed its current computational step, and every qubit contains a definite zero or a definite one. She’s never in two minds when she lets the world in, so the entanglement process doesn’t split her into separate branches.”
Francine continued to watch, in silence. Zelda finally found one of the chambers containing a reward; when she’d eaten it, a hand scooped her up and returned her to her starting point, then replaced the cheese.
“Here are 10,000 previous trials, superimposed.” I replayed the data. It looked as if a single mouse was running through the maze, moving just as we’d seen her move when I’d begun the latest experiment. Restored each time to exactly the same starting condition, and confronted with exactly the same environment, Zelda, like any computer program with no truly random influences, had simply repeated herself. All 10,000 trials had yielded identical results.
To a casual observer, unaware of the context, this would have been a singularly unimpressive performance. Faced with exactly one situation, Zelda the virtual mouse did exactly one thing. So what? If you’d been able to wind back a flesh-and-blood mouse’s memory with the same degree of precision, wouldn’t it have repeated itself too?
Francine said, “Can you cut off the shielding? And the balanced decoupling?”
“Yep.” I obliged her, and initiated a new trial.
Zelda took a different path this time, exploring the maze by a different route. Though the initial condition of the neural net was identical, the switching processes taking place within the Qusp were now opened up to the environment constantly, and superpositions of several different eigenstates (states in which the Qusp’s qubits possessed definite binary values, which in turn led to Zelda making definite choices) were becoming entangled with the outside world. According to the Copenhagen interpretation of quantum mechanics, this interaction was randomly “collapsing” the superpositions into single eigenstates; Zelda was still doing just one thing at a time, but her behaviour had ceased to be deterministic. According to the MWI, the interaction was transforming the environment, Francine and me included, into a superposition with components that were coupled to each eigenstate; Zelda was actually running the maze in many different ways simultaneously, and other versions of us were seeing her take all those other routes.
Which scenario was correct?
I said, “I’ll reconfigure everything now, to wrap the whole setup in a Delft cage.” A “Delft cage” was jargon for the situation I’d first read about 17 years before: instead of opening up the Qusp to the environment, I’d connect it to a second quantum computer, and let that play the role of the outside world.
We could no longer watch Zelda moving about in real time, but after the trial was completed, it was possible to test the combined system of both computers against the hypothesis that it was in a pure quantum state in which Zelda had run the maze along hundreds of different routes, all at once. I displayed a representation of the conjectured state, built up by superimposing all the paths she’d taken in 10,000 unshielded trials.
The test result flashed up: CONSISTENT.
“One measurement proves nothing,” Francine pointed out.
“No.” I repeated the trial. Again, the hypothesis was not refuted. If Zelda had actually run the maze along just one path, the probability of the computers’ joint state passing this imperfect test was about one percent. For passing it twice, the odds were about one in 10,000.
I repeated it a third time, then a fourth.
Francine said, “That’s enough.” She actually looked queasy. The image of the hundreds of blurred mouse trails on the display was not a literal photograph of anything, but if the old Delft experiment had been enough to give me a visceral sense of the reality of the multiverse, perhaps this demonstration had finally done the same for her.