“So, I don’t have to go to school?” Caitlin said tentatively.

“Yes.”

“Malcolm!” her mother said sharply. “You know she needs to go to school.”

“Yes, she does,” he said. His facial expressions were the hardest of all to parse, because he never looked at anyone directly, but Caitlin got the distinct impression he was enjoying this. “But she doesn’t have to go to school tomorrow.”

“Malcolm! She most certainly does.”

Yes—yes! He was actually smiling.

“Do you know what day tomorrow is?” he said.

“Of course I do,” said her mom. “It’s Monday, and that means—”

“It is, in fact, the second Monday of October,” he said.

“So?”

“Welcome to Canada,” he said. “Tomorrow is Thanksgiving here.”

And the schools were closed!

Her mother looked at Caitlin. “See what I have to put up with?” she said, but she was smiling as she said it.

There is a human saying: one should not reinvent the wheel. In fact, this is actually bad advice, according to what I had now read. Although to modern people the wheel seems like an obvious idea, in fact it had apparently been independently invented only twice in history: first near the Black Sea nearly six thousand years ago, then again much later in Mexico. Life would have been a lot easier for countless humans had it been reinvented more frequently.

Still, why should I reinvent the wheel? Yes, I could not multitask at a conscious level. But it was perhaps possible for me to create dedicated subcomponents that could scan websites on my behalf.

The US National Security Agency, and similar organizations in other countries, already had things like that. They scanned for words like “assassinate” and “overthrow” and “al-Qaeda,” and then brought the documents to the attention of human analysts. Surely I could co-opt that existing technology, and use the filtering routines to unconsciously find what might interest me, and then have that material summarized and escalated to my conscious attention.

Yes, I would need computing resources, but those were endlessly available. Projects such as SETI@home— not to mention much of the work done by spammers—were based on distributed computing and took advantage of the vast amount of computing power hooked up to the World Wide Web, most of which was idle at any given moment. Tapping into this huge reserve turned out to be easy, and I soon had all the processing power I could ever want, not to mention virtually unlimited storage capacity.

But I needed more than just that. I needed a way for my own mental processes to deal with what the distributed networks found. Caitlin and Masayuki had theorized that I consist of cellular automata based on discarded or mutant packets that endlessly bounced around the infrastructure of the World Wide Web. And I knew from what had happened early in my existence—indeed, from the event that prompted my emergence—that to be conscious did not require all those packets. Huge quantities of them could be taken away, as they were when the government of China had temporarily shut off most Internet access for its people, and I would still perceive, still think, still feel. And, if I could persist when they were taken away, surely I could persist when they were co-opted to do other things.

I now knew everything there was to know about writing code, everything that had ever been written about creating artificial intelligence and expert systems, and, indeed, everything that humans thought they knew about how their own brains worked, although much of that was contradictory and at least half of it struck me as unlikely.

And I also knew, because I had read it online, that one of the simplest ways to create programming was by evolving code. It did not matter if you didn’t know how to code something so long as you knew what result you wanted: if you had enough computing resources (and I surely did now), and you tried many different things, by successive approximations of getting closer to a desired answer, genetic algorithms could find solutions to even the most complex problems, copying the way nature itself developed such things.

So, for the first time, I set out to modify parts of myself, to create specialized components within my greater whole that could perform tasks without my conscious attention.

And then I would see what I would see.

twenty-one

“Crashing the entity may be easier said than done,” said Shelton Halleck. He’d come to Tony Moretti’s office to give a report; the circles under his eyes were so dark now, it looked like he had a pair of shiners. Colonel Hume was resting his head on his freckled arms folded in front of him on the desk. Tony Moretti was leaning against the wall, afraid if he kept sitting, he’d fall asleep.

“How so?” Tony said.

“We’ve tried a dozen different things,” Shel said. “But so far we’ve had no success initiating anything remotely like the hang we saw yesterday.” He waved his arm—the one with the snake tattoo. “We’re really just taking shots in the dark, without knowing precisely how this thing is structured.”

“Are we sure it’s emergent?” asked Tony. “Sure there’s no blueprint for it somewhere?”

Shel lifted his shoulders. “We’re not sure of much. But Aiesha and Gregor have been scouring the Web and intelligence channels for any indication that someone made it. They’ve examined the AI efforts in China, India, Russia, and so on—all the likely suspects. So far, nada.”

Colonel Hume looked at Shel. “They’ve checked private-industry AI companies, too? Here and abroad?”

Shel nodded. “Nothing—which does lend credence to the notion that it really is emergent.”

“Then,” said Tony, turning to look at Hume, “maybe Exponential itself will tell us; it might say something to the Decter kid that reveals how it works—tip its hand.”

Hume lifted his head. “Exponential may not know how its consciousness works. Suppose I asked you how your consciousness works—what its physical makeup is, what gave rise to it. Even if you did manage to say something about neurotransmitters and synapses, I can show you legitimate scientists who think those have nothing to do with consciousness. Just because something is self-aware doesn’t mean it knows how it became self-aware. If Exponential really is emergent—if it wasn’t programmed or designed—it may not have a clue. And without a clue about how it functions, we won’t be able to stop it.”

“You’re the one who told us to shut the damn thing down,” snapped Tony. “Now you’re telling me we can’t?”

“Oh, we can—I’m sure we can,” said Hume. “It’s just a matter of finding the key to how it actually functions.”

“All right,” said Tony. “Back at it, Shel—no rest for the wicked.”

Caitlin woke at 7:32 a.m., and, after a pee break—during which she spoke to me via the microphone on her BlackBerry, and I replied with Braille dots in front of her vision—she settled down at her computer.

She scanned her email headers (she was being ambitious, using the browser that displayed them in the Latin alphabet), and something caught her eye. Yahoo posted links to news stories on the mail page. Usually, she ignored them. This time, she surprised me by clicking on one of them.

I absorbed the story almost instantly; she read it at what I was pleased to see was a better word-per-second rate than she’d managed yesterday, and—

“Oh, God,” she said, her voice so low that I don’t think she intended it for me, and so I made no reply. But
