In the end, it had been a teenager named Norman Pellick who created the installed intelligence to run the post-war recovery. It was the digital copy of an existing human mind, intelligent enough to run the machines and human enough to maintain control and avoid a real-world Judgment Day. The I.I. was the perfect alternative to an uber-smart artificial intelligence that could — according to the skeptics — one day decide to turn on mankind.
Yeah, we made the right choice there, Beth thought sarcastically as they dug through the mountains of data related to A.I. development. The I.I.s don’t want to turn on humanity at all.
Simon ignored her.
After the Recovery program had been a success, thanks to Pellick and his installed intelligences, the world made an effort to ensure a super-intelligent artificial intelligence would never be anything more than a concept of science fiction. The world’s governments worked together to create ironclad legislation forbidding even cursory simulations of a super A.I. They hammered the dangers of such technology into the minds of every average citizen, from anti-A.I. educational programs to intense drama movies filled with evil computer programs. It was common knowledge that to create a being smarter than oneself was to spell disaster. What wasn’t common knowledge, however, was that the government had continued researching the possibilities of powerful — and hopefully stable — artificial intelligence.
Work on a super A.I. didn’t actually begin until the first anti-human terror attacks. It was shortly after the Stewart Lythe attacks nearly forty years prior (which had been wrongly attributed to Dr. Karl Terrace at the time) that the secretive projects began. The governments of Earth — led almost entirely by humans — grew paranoid about the ever-growing I.I. supremacist violence. They needed a weapon to combat it. They needed a spy.
Many different attempts were made at building a human-equivalent artificial intelligence, but none of them got it quite right. The pathways were always so limited in their coding that the simplest conversation would reveal the program to be nothing more than an imitation of life.
The goal had been to create an artificial intelligence so intuitive, so responsive, and so life-like that it could pass as an I.I. militant. The human scientists knew that the best way to destroy a terrorist threat — one that operated and recruited on an emotional level — was from within. First and foremost, they needed eyes and ears within the organization in order to learn what terrible plans the extremists were working on. They couldn’t prevent all the attacks and killings they learned about unless they wanted to blow their digital spy’s cover, but they aimed to stop most. Lastly, they could use the A.I. to destabilize whatever terror group they had infiltrated and discourage other groups from radicalizing.
Beth and Simon were delving deep into top-secret documents at that point. Stuff that — in conjunction with the files available on the Net and those Simon had taken from Tarov — revealed secrets that only a handful of people were privy to. Correspondence that was redacted, encrypted, decoded, and then traced in order to prevent any uninvited eyes from divining anything of substance from them. Putting all the pieces together, however, showed Beth and Simon the full picture.
It was a pair of scientists working on an insanely restrictive government contract who finally cleared the hurdle that had stopped all of their peers. They were the ones who created the Tarov A.I.
Dr. Darren Miller and Dr. Jacob Silvar designed Blake Tarov to be the first self-learning, self-thinking artificial intelligence on par with a human mind. Other programs had been made to act on an illusion of choice — to decide on one of two pre-determined pathways based on established parameters. They seemed to learn, but they only accumulated more options and more pathways — all of which were written by someone ahead of time. Miller and Silvar’s creation was the first A.I. capable of observing a problem and inventing its own solution, even if it had never been taught to do anything similar. If it had not been shown how to craft a fire, it would have eventually figured it out through trial and error. A touch of ingenuity could teach it to climb a wall, even if it had never encountered one before. This same self-direction could be used to sense falsehoods and to earn its target’s trust. Some would say that its ability to learn with such precision placed it above human intelligence altogether.
This, of course, made some people among the human leadership nervous. They remembered all the anti-A.I. propaganda they had been raised with and demanded a contingency plan be formed in the event that humans lost control of Tarov — a failsafe to protect mankind if the A.I. ever decided to change its allegiance.
That was where Beth and Simon hit a wall in their research. Scraps of information grew fewer and farther between, each less revealing than the last. Someone had gone to great lengths to conceal the rest of the story from even their skilled intuition. Hours bled on as they tried to find any more references to the failsafe that Tarov’s creators had been charged with creating. If it was real, Beth thought, it could be the key to bringing him down. His Achilles’ heel — his Kryptonite.
Immersed in her internal retina display, Beth was about to access another file which had been redacted into worthlessness when a bright alert started to flash. The audio feed
