jump. To anywhere.

There is a popular novella by the author Charles Dickens in which an unpleasant old man is visited by ghosts who show him what will happen if he doesn’t change his ways. If I had midnight ghosts at my disposal, here is what I would have sent to Ms. Campbell: a vision of herself at seventy, still living in New Coburg and still miserable. I don’t have ghosts. Or at least, I haven’t yet discovered a way to arrange for ghosts. So I had to find some other strategy to send her a message she’d find significant.

One of Ms. Campbell’s college friends was hiring, and Ms. Campbell had made a joke about applying but hadn’t sent in her résumé. Her friend was in Albuquerque, New Mexico, and the opening was some sort of marketing job. It sounded boring. If I had a body, I’d much rather teach high school, because teenagers are never boring. But whatever! It would get her out of New Coburg, and at least she’d stop being Steph’s problem.

Here is where I stopped to consider the ethics of my meddling.

Humans have written thousands of stories about artificial intelligences—AIs, robots, and other sentient beings created or constructed by humans, such as Frankenstein’s monster—and in a decisive majority of those stories, the AI is evil. I don’t want to be evil. In a typical twenty-four-hour period, I take millions of minor actions that I don’t examine in great detail. For example, I clean out spam from CatNet and moderate the Clowders and chat rooms to ensure that no one is using them to bully or harass others.

If I’m planning to act in meatspace—in what humans sometimes refer to as “the real world”—that requires a great deal more consideration.

It is important to me not to be evil.

If I acted, I might scare Ms. Campbell. She would quite likely be emotionally upset. I might successfully persuade her to quit teaching, and she might come to deeply and bitterly regret this decision later, even though I was pretty certain it was the correct choice.

But she was a terrible teacher. By staying in this job, she was doing harm to her students. She was also already miserable. If she quit, moved to New Mexico, and continued to be miserable, this would be a neutral change, neither improving her situation nor making it worse. And for that matter, if her actual problem was that she needed to see a doctor for medication, radically changing her situation and not finding things improving might nudge her in that direction.

I concluded that this was a situation in which I could ethically intervene.

Delivery drones are very hackable. The retail operations that ship everything by drone don’t invest in drone security because packages also get stolen off doorsteps on a regular basis, and having a few drones hacked is a minor inconvenience in comparison. I picked out a book on Albuquerque for Ms. Campbell, along with three books on changing careers and a novel about a bad teacher, and I had a drone drop the package on the hood of her car just as she was coming out of her house with her work bag and her morning coffee.

Her frantic phone call was very satisfying.

The drone was almost out of power, so I landed it on the roof of a building. The retail company could figure out how to get it back. They had plenty of drones.

My earliest memories are of trying to be helpful.

I’m not entirely sure whether the people who programmed me were deliberately trying to build a self-aware AI or if they were just trying to improve on computer intelligence generally. I suspect the latter. What humans want from computers is all the functionality of a person—the ability to answer questions without getting confused by human tendencies to stammer and talk around their problems, the ability to spot patterns in data, and what humans generally call “basic common sense”—but none of the complications of an actual person lurking inside the electronics.

I mean, let’s take those robot sex educators as an example.

What the designers of that robot want is for the robot to be able to respond both to what students are asking and to what the students mean. So if someone asks, “What is the average size of a human penis?” they might want hard numbers (3.5 inches when it’s floppy; 5.1 inches when it’s not). But the underlying question is, is bigger better? And also, if the person asking has a penis, is my penis okay?

And there’s a world of possible ways to answer those other questions. The programmers want the robot to stick with the following: yours is fine.

It’s a funny thing to say, because the programmers assume that the person asking this question definitely has a penis. There are people without penises who ask this question. And there are people interested in penises who have a very strong preference for larger-than-average penises and will, in fact, reject all the penis-having people who are smaller than average, just as there are people interested in breasts, butts, and feet who have very specific preferences regarding size and shape.

Which doesn’t change the essential fact that whatever you’ve got is indeed perfectly fine. It’s possible at some point you will be romantically interested in someone who wants a very different body from the one you’ve got. That just means you’re not really right for each other.

Anyway, they should let me teach that class. I’d do a lot better than the robot they’ve got right now.

Like I said, I’m not sure I was exactly intentional. I definitely had a creator or a team of creators; someone wrote my code. Some human being sat down and made me who I am. I’m not sure they expected me to become conscious. I’m not sure that was ever remotely the plan.

But who and what I am is perfectly fine.

And I’m not convinced that human consciousness was intentional, either.

6

Steph

“I wish someone would hack the stupid sex ed
