I go to Caltrain [Bay Area commuter rail line], and there’s a giant banner showing a cool-looking data scientist peering at computers in some cool ways, advertising Spark, a platform that I know from my day job is barely usable at best and actively misleading at worst.

I don’t know. I’m not sure that you can tell a clean story that’s completely apart from the hype.

For people who are less familiar with these terms, how would you define “data science,” “machine learning,” and “AI”? Because as you mentioned, these are terms that float around a lot in the media and that people absorb, but it’s unclear how they fit together.

It’s a really good question. I’m not even sure if those terms that you referenced are on solid ground themselves.

I’m friends with a venture capitalist who became famous for coining the phrase “machine intelligence,” which is pretty much just the first word of “machine learning” with the second word of “artificial intelligence,” and as far as I can tell is essentially impossible to distinguish from either of those applications.

I would say, again, “data science” is really shifty. If you wanted a pure definition, I would say data science is much closer to statistics. “Machine learning” is much more predictive optimization, and “AI” is increasingly hijacked by a bunch of yahoos and Elon Musk types who think robots are going to kill us. I think “AI” has gotten too hot as a term. It has a consistent history, since the dawn of computing, of overpromising and substantially underdelivering.

So do you think when most people think of AI, they think of strong AI?

They think of the film Artificial Intelligence, that level of AI, yeah. And as a result, I think people who are familiar with bad robots falling over shy away from using that term, just because they’re like, “We are nowhere near that.” Whereas a lot of people who are less familiar with shitty robots falling over will say, “Oh, yeah, that’s exactly what we’re doing.”

The narrative around automation is so present right now in the media, as you know. I feel like all I read about AI is how self-driving trucks are going to put all these truckers out of business. I know there’s that Oxford study that came out in 2013 that said some insane percentage of our jobs are vulnerable to automation.5 How should we view that? Is that just the outgrowth of a really successful marketing campaign? Does it have any basis in science, or is it just hype?

Can I say the truth is halfway there? I mean, again, I want to emphasize that historically, from the very first moment somebody thought of computers, there has been a notion of, “Oh, can the computer talk to me, can it learn to love?” And somebody, some yahoo, will be like, “Oh, absolutely!” And then a bunch of people will put money into it, and then they’ll be disappointed.

And that’s happened so many times. In the late 1980s, there was a huge Department of Defense research effort toward building a Siri-like interface for fighter pilots. And of course this was thirty years ago and they just massively failed. They failed so hard that DARPA was like, “We’re not going to fund any more AI projects.”6 That’s how bad they fucked up. I think they actually killed Lisp as a programming language—it died because of that. There are very few projects that have failed so completely that they actually killed the programming language associated with them.

The other one that did that was the—what was it, the Club of Rome or something?7 Where they had those growth projections in the 1970s about how we were all going to die by now. And it killed the modeling language they used for the simulation. Nobody can use that anymore because the earth has been salted with how shitty their predictions were.

It’s like the name Benedict.

Yes, exactly, or the name Adolf. Like, you just don’t go there.

So, I mean, that needs to be kept in mind. Anytime anybody promises you an outlandish vision about what AI is, you just absolutely have to take it with a grain of salt, because this time is not different.

Is there a point at which a piece of software or a robot officially becomes “intelligent”? Does it have to pass a certain threshold to qualify as intelligent? Or are we just making a judgment call about when it’s intelligent?

I think it’s irrelevant in our lifetimes and in our grandchildren’s lifetimes. It’s a very good philosophical question, but I don’t think it really matters. I think that we are going to be stuck with specific AI for a very, very long time.

And what is specific AI?

Optimization around a specific problem, as opposed to optimization on every problem.

So, like, driving a car would be a specific problem?

Yeah. Whereas if we invented a brain that we could teach to do anything we want, and we chose to have it focus on the specific vertical of driving a car even though it could be applied to anything, that would be general AI. But I think that would be literally making a mind, and that’s almost irresponsible to speculate about. It’s just not going to happen in any of our lifetimes, or probably within the next hundred years. So I think I would describe it as philosophy. I don’t know, I don’t have an educated opinion about that.

Money Machines

One hears a lot about algorithmic finance, and things like robo-advisers.8 And I’m wondering, does that fall into the same category of stuff that seems pretty over-hyped?

I would say that robo-advisers are not doing anything special. It’s AI only in the loosest sense of the word. They’re not really doing anything advanced, and they’re not quantitatively assessing markets and trying to make predictions. They’re applying a formula for what stock and bond allocations to make, and it’s a reasonable formula, not a magic one. It’s not a bad service, but it’s super hyped.
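To make that concrete, here is a minimal sketch of the kind of formula at issue. The “110 minus age” baseline and the risk adjustment are generic rules of thumb assumed for illustration; the function names and numbers are hypothetical and don’t reproduce any real robo-adviser’s model.

```python
# Hypothetical sketch of the kind of formula a robo-adviser applies.
# The "110 minus age" rule and the risk adjustment are illustrative
# rules of thumb, not any real product's actual allocation model.

def target_allocation(age: int, risk_tolerance: float = 0.5) -> dict:
    """Return a stock/bond split as fractions summing to 1.0.

    risk_tolerance is assumed to range from 0.0 (conservative)
    to 1.0 (aggressive) and shifts the baseline by up to 10 points.
    """
    baseline_stocks = (110 - age) / 100          # classic rule of thumb
    adjustment = (risk_tolerance - 0.5) * 0.2    # +/- 10 percentage points
    stocks = min(max(baseline_stocks + adjustment, 0.0), 1.0)
    return {"stocks": stocks, "bonds": 1.0 - stocks}


def rebalance_orders(holdings: dict, target: dict) -> dict:
    """Dollar amounts to buy (+) or sell (-) to reach the target split."""
    total = sum(holdings.values())
    return {asset: target[asset] * total - holdings.get(asset, 0.0)
            for asset in target}


if __name__ == "__main__":
    target = target_allocation(age=35, risk_tolerance=0.7)
    print(target)  # {'stocks': 0.79, 'bonds': 0.21}
    print(rebalance_orders({"stocks": 60_000.0, "bonds": 40_000.0}, target))
```

The point of the sketch is that everything in it is simple arithmetic on a couple of inputs; there is no market prediction anywhere.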
