“Which is…?”
“Our matryoshka brain project. We've currently got some 32,000 satellite modules orbiting our home star, connected in a network using SCUT channels.
“But… JOVAH?”
“Judicious Omnicompetent Volitional Adaptive Heuristic.”
I mimed gagging. “You started with the name, didn’t you.”
Hugh laughed. “Acronyms: the lowest form of pun.”
“All this, even a kick-ass name, and you still haven't achieved true AI?”
“It's not about scaling, Bill. Crows and parrots were some of the more intelligent non-humans on Earth, despite having brains smaller than a walnut. Some dolphins had brain-to-body mass ratios as high as humans, but they still never displayed human-level intelligence. The biggest brain-to-body mass ratio actually belongs to a species of shrew. What matters is the organization of the brain, and the wiring that connects different subprocesses. The current thinking is that we're either missing something basic, or we've gone down a blind alley that we can step back from. JOVAH is incredibly powerful. It can process vast quantities of information in virtually no time. Its memory space and storage are almost infinite, but it's still essentially an AMI. It has no ability to process counterfactual thinking, never experiences WTF moments, nor does it have anything like a sense of self, or any kind of internal dialogue.”
“I know WTF moments, but counterfactual?”
Hugh grinned. “Okay, let's say you've programmed an AMI to guide some wheeled vehicles from one point to another on a large flat surface. It can handle that. But now let's say the vehicles are really on a spherical surface, like Earth, so the coordinates won't work out cleanly, and the vehicle will always arrive a little off the expected destination. The AMI will never adjust its algorithms unless it's ordered to. It will never wonder why it's always wrong. You could program the AMI to be self-correcting, and once it had figured out that spherical geometry worked better than plane geometry, it would use the new formula, but it would never wonder why. It would never generalize from that to wonder about gravity or astronomy or anything. A real intelligence would have a WTF moment and start trying to figure out what was going on.”
“And you don't have that.”
“Not even close. We can program in each additional layer of behavior, but it never goes beyond what we've programmed. I'm simplifying of course, even in the 21st century, researchers were beyond this level, but it's the same idea.”
“What about just simulating a brain? They did that on earth in the 22nd century. We are proof of that.”
“Bill, it's the difference between recording a live-action video and digitally generating a realistic animation from scratch. They were doing the former with VCRs before original Bob was born. They still hadn't managed the latter by the time he died, at least not believably.”
“So we can simulate an existing intelligence, but we can't create one from scratch.”
“Exactamundo, mon frere. Very frustrating.”
I chuckled at Hugh's informality. It was possibly a little forced. He seemed to be trying to make me feel at ease.
“Wow. Do you still think it's even possible?”
“We've never found any reason to believe that our own intelligence uses anything more than the physical laws of the universe. I think replication pretty much proves that, so yes, it's a hard problem, but it's not an impossible one.”
“Why not just go with an enhanced replicant?”
“Doesn't work. Well, I mean, it works, but it isn't the result we're trying for. The structure of the human brain, even a replicated one, is limited by the biological architecture that it developed on. That's why we have Guppys. A backup loaded into JOVAH can frame-jack much higher than the rest of us, but it's still just a Bob. We've tried. In fact, our sysadmin is a Bob clone running in a virtual machine on JOVAH. He's a speed superintelligence, but not a quality superintelligence.”
“Huh.” I shook myself mentally. “So, getting back on topic. You have the moot listing. Any idea when you'll be able to-”
“It's done.”
I raised both eyebrows. “Wow. Fast.”
“That is the point. Or one of them, anyway. Now the bad news.”
“Uh-oh.”
Hugh gave me a sickly grin. “Yeah, they spent a lot of time preparing. They couldn't get into everything, but they really did a job on what they could access. Among other things, they managed to insert a monitor into your comms stack.”
My jaw dropped. “Oh. So they know everything we’re talking about.”
“Nope. Anything they can do we can do better. Right now, we're having a conversation about beer, as far as they know.”
“Shit. We’re going to be a long time untangling this.”
“It gets worse. Our analysis says that if you attempt to physically take back the stations, they'll implement the self-destruct.”
I stood up. “Double shit, Bob’s getting ready to do just that.”
Garfield dropped into his La-Z-Boy and tossed a report at me. “These are the final numbers. We’ve got 48% of the Bobiverse online. Of the other 52%, 18% are Starfleet.”
“That many?”
“They've been replicating aggressively, Bill. I think they've been planning this for a while now. So anyway, just over a third of the Bobiverse is off-line, hard. We're still getting some new connections as people figure out how to use the SCUT transceivers on drones and other local equipment, but that's only good for basic communications.”
“Meanwhile, this.” I waved a sheet I'd been holding. “I was checking your list of outages. They're all units that I updated a few days ago, because they still had the original keys. Someone recorded my session, saved the new keys, then used them to corrupt those stations.”
I gritted my teeth. “I fell for a classic piece of social engineering. Got scared into doing exactly what they wanted, and they were ready for it. Wow.”
“That's very sophisticated. Almost more than I'm willing to accept from these guys. They seem more like a bunch of goofs than manipulative geniuses.”
“Well, reality trumps expectations, I guess. Also,” I picked up another sheet, “the Starfleet ultimatum. I think Lenny