Voices in AI – Episode 23: A Conversation with Pedro Domingos

By Byron Reese

Today’s leading minds talk AI with host Byron Reese

In this episode Byron and Pedro Domingos talk about the master algorithm, machine creativity, and the creation of new jobs in the wake of the AI revolution.




Voices in AI

Visit VoicesInAI.com to access the podcast, or subscribe now:

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Pedro Domingos, a computer science professor at the University of Washington, and the author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake our World. Welcome to the show, Pedro.

Pedro Domingos: Thanks for having me.

What is artificial intelligence?

Artificial intelligence is getting computers to do things that traditionally require human intelligence, like reasoning, problem solving, common sense knowledge, learning, vision, speech and language understanding, planning, decision making, and so on.

And is it artificial in the sense that artificial turf is artificial—in that it isn’t really intelligence, it just looks like intelligence? Or is it actually truly intelligent, and the “artificial” just marks that we created it?

That’s a fun analogy. I hadn’t heard that before. No, I don’t think AI is like artificial turf. I think it’s real intelligence. It’s just intelligence of a different kind. We’re used to thinking of human intelligence, or maybe animal intelligence, as the only intelligence on the planet.

What happens now is a different kind of intelligence. It’s a little bit like, does a submarine really swim? Or is it faking that it swims? Actually, it doesn’t really swim, but it can still travel underwater using very different ideas. Or, you know, does a plane fly even though it doesn’t flap its wings? Well, it doesn’t flap its wings but it does fly. AI is a little bit like that. In some ways, actually, artificial intelligence is intelligent in ways that human intelligence isn’t.

There are many areas where AI exceeds human intelligence, so I would say that they’re different forms of intelligence, but it is very much a form of intelligence.

And how would you describe the state-of-the-art, right now?

In science and technology progress often happens in spurts. There are long periods of slow progress and then there are periods of very sudden, very rapid progress. And we are definitely in one of those periods of very rapid progress in AI, which was a long time in the making.

AI is a field that’s fifty years old, and we had what was called the “AI spring” in the ‘80s, where it looked like it was going to really take off. But then that didn’t really happen at the end of the day, and the problem was that people back then were trying to do AI using what’s called “knowledge engineering.” If I wanted an AI system to do medical diagnosis, I had to interview doctors and program the doctor’s knowledge of diagnosis in the form of rules into the computer, and that didn’t scale.

The thing that has changed recently is that we have a new way to do AI, which is machine learning. Instead of trying to program the computers to do things, the computers program themselves by learning from data. So now what I do for medical diagnosis is I give the computer a database of patient records, what their symptoms and test results were, and what the diagnosis was—and from just that, in thirty seconds, the computer can learn, typically, to do medical diagnosis better than human doctors.
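
As a rough illustration of the contrast he is describing, here is a minimal sketch of learning a diagnostic rule from past cases instead of hand-coding expert rules. The patient table, the column names, and the choice of model are invented for illustration; they are not the systems Domingos refers to.

```python
# Minimal sketch: learn a diagnostic rule from a table of past patients,
# instead of hand-coding expert rules. Data and column names are made up.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: symptoms/test results plus the known diagnosis.
records = pd.DataFrame({
    "age":         [34, 61, 45, 70, 29, 52],
    "fever":       [1, 0, 1, 1, 0, 0],
    "white_cells": [11.2, 6.8, 13.5, 12.9, 5.9, 7.1],
    "diagnosis":   ["infection", "healthy", "infection", "infection", "healthy", "healthy"],
})

X = records.drop(columns="diagnosis")
y = records["diagnosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# The "programming" step is replaced by fitting a model to past cases.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New patient: the learned model, not a hand-written rule, makes the call.
new_patient = pd.DataFrame({"age": [58], "fever": [1], "white_cells": [12.4]})
print(model.predict(new_patient))
```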

So, thanks to that, thanks to machine learning, we are now seeing a phase of very rapid progress. Also, because the learning algorithms have gotten better—and very importantly: the beauty of machine learning is that, because the intelligence comes from the data, as the data grows exponentially, the AI systems get more intelligent with essentially no extra work from us. So now AI is becoming very powerful, just on the back of the sheer weight of data that we have.

The other element, of course, is computing power. We need enough computing power to turn all that data into intelligent systems, but we do have those. So the combination of learning algorithms, a lot of data, and a lot of computing power is what is making the current progress happen.

And, how long do you think we can ride that wave? Do you think that machine learning is the path to an AGI, hypothetically? I mean, do we have ten, twenty, thirty, forty more years of running with, kind of, the machine learning ball? Or, do we need another kind of breakthrough?

I think machine learning is definitely the path to artificial general intelligence, though there are a few people in AI who would disagree with that. You know, your computer can be as intelligent as you want, but if it can’t learn, thirty minutes later it will be falling behind humans.

So, machine learning really is essential to getting to intelligence. In fact, the whole idea of the singularity—it was I. J. Good, back in the ‘50s, who had this idea of a learning machine that could make a machine that learned better than it did. As a result of which, you would have this succession of better and better, more and more intelligent machines until they left humans in the dust.

Now, how long will it take? That’s very hard to predict, precisely because progress is not linear. I think the current bloom of progress at some point will probably plateau. I don’t think we’re on the verge of having general AI. We’ve come a thousand miles, but there’s a million miles more to go. We’re going to need many more breakthroughs, and who knows where those breakthroughs will come from.

In the most optimistic view, maybe this will all happen in the next decade or two, because things will just happen one after another, and we’ll have it very soon. In the more pessimistic view, it’s just too hard and it’ll never happen. If you poll the AI experts, most will say it’s going to be several decades. But the truth is nobody really knows for sure.

What is kind of interesting is not that people don’t know, and not that their forecasts are kind of all over the map, but that if you look at the extreme estimates, five years is the most aggressive, and the furthest out is something like five hundred years. And what does that suggest to you?

You know, if I went to my cleaners and I said, “Hey, when is my shirt going to be ready?” and they said, “Sometime between five and five hundred days” I would be like, “Okay… something’s going on here.”

Why do you think the opinions are so variant on when we get an AGI?

Well, the cleaners, when they clean your shirt, it’s a very well-known, very repeatable process. They know how long it takes and it’s going to take the same thing this time, right? There are very few unknowns. The problem in AI is that we don’t even know what we don’t know.

We have no idea what we’re missing, so some people think we’re not missing that much. There are the optimists that say, “Oh, we just need more data.” Right? Back in the ‘80s they said, “Oh, we just need more knowledge,” and then, that wasn’t the case. So that’s the optimistic view. The more pessimistic view is that this is a really, really hard problem, and we’ve only scratched the surface. So the uncertainty comes from the fact that we don’t even know what we don’t know.

We certainly don’t know how the brain works, right? We have vague ideas of what different parts of it do, but in terms of how a thought is encoded, we don’t know. Do you think we need to know more about our own intelligence to make an AGI, or is it like, “No, that’s apples and oranges. It doesn’t really matter how the brain works. We’re building an AGI differently”?

Not necessarily. So, there are different schools of thought in AI, and this is part of what I talk about in my book. There is one school of thought in AI, the Connectionists, whose whole agenda is to reverse-engineer the brain. They think that’s the shortest path, you know, “Here’s the competition, go reverse-engineer it, figure out how it works, build it on the computer, and then we’ll have intelligence.” So that is definitely a plausible approach.

I think it’s actually a very difficult approach, precisely because we understand so little about how the brain works. In some ways maybe it’s trying to solve a problem by way of solving the hardest of problems.

And then there are other AI types, namely the Symbolists, whose whole idea is, “No, we don’t need to understand things at that low level. In fact, we’re just going to get lost in the weeds if we try to do that. We have to understand intelligence at a higher-level abstraction, and we’ll get there much sooner that way. So forget how the brain works, that’s really not important.”

Again, the analogy between brains and airplanes is a good one. What the Symbolists say is, “If we try to make airplanes by building machines that flap their wings, we’ll never have them. What we need to do is understand the laws of physics and aerodynamics, and then build machines based on that.”

So there are different schools of thought. And I actually think it’s good that there are different schools of thought—and we’ll see who gets there first.

So, you mentioned your book, The Master Algorithm, which is of course required reading in this field. Can you give the listener, who may not be as familiar with it, an overview of what is The Master Algorithm? What are we looking for?

Yeah, sure. So the book is essentially an introduction to machine learning for a general audience. So not just for technical people, but business people, policy makers, just citizens and people who are curious. It talks about the impact that machine learning is already having in the world.

A lot of people think that these things are science fiction, but they are already in their lives and they just don’t know it. It also looks at the future, and what we can expect coming down the line. But mainly, it is an introduction to what I was just describing—that there are five main schools of thought in machine learning.

There are the people who want to reverse-engineer the brain; the ones who want to simulate evolution; the ones who do machine learning by automating the scientific method; the ones who use Bayesian statistics; and the ones who do reasoning by analogy, like people do in everyday life. And then I look at what these different methods can and can’t do.

The name The Master Algorithm comes from this notion that a machine learning algorithm is a master algorithm, in the same sense that a master key opens all doors. A learning algorithm can do all sorts of different things while being the same algorithm.

This is really what’s extraordinary about machine learning… In traditional computer science, if I want the computer to play chess, I have to write a program explaining how to play chess. And if I want the computer to drive a car, I have to write a program explaining how to drive a car. With machine learning, the same learning algorithm can learn to play chess, or drive a car, or do a million other things—just by learning from the appropriate data.

And each of these tribes of machine learning has its own master algorithm. The more optimistic members of that tribe believe that you can do everything with that master algorithm. My contention in the book is that each of these algorithms is only solving part of the problem. What we need to do is unify them all into a grand theory of machine learning, in the same way that physics has a standard model and biology has a central dogma. And then, that will be the true master algorithm. And I suggest some paths towards that algorithm, and I think we’re actually getting pretty close to it.

One thing I found empowering in the book—and you state it over and over at the beginning—is that the master algorithm is aspirationally accessible to a wide range of people. You basically said, “You, listening to the book, this is still a field where a layman can have some kind of breakthrough.” Can you speak to that for just a minute?

Absolutely. In fact, part of what got me into machine learning is that—unlike physics or mathematics or biology, which are very mature fields where you really can only contribute once you have at least a PhD—computer science and AI and machine learning are still very young. So, you could be a kid in a garage and have a great idea that will be transformative. And I hope that that will happen.

I think, even after we find this master algorithm that’s the unification of the five current ones, as we were talking about, we will still be missing some really important, really deep ideas. And I think in some ways, someone coming from outside the field is more likely to find those, than those of us who are professional machine learning researchers, and are already thinking along these tracks of these particular schools of thought.

So, part of my goal in writing the book was to get people who are not machine learning experts thinking about machine learning, and possibly having the next great ideas that will get us closer to AGI.

And, you also point out in the book why you believe that we know that such a thing is possible, and one of your proof points is our intelligence.

Exactly.

Can you speak to that?

Yeah. So this is, of course, one of those very ambitious goals that people should be at the outset a little suspicious of, right? Is this, like the philosopher’s stone or the perpetual motion machine, is it really possible? And again, some people don’t think it’s possible.

I think there’s a number of reasons why I’m pretty sure it is possible, one of which is that we already have existing proofs. One existing proof is our brain, right? As long as you believe in reductionism, which all scientists do, then the way your brain works can be expressed as an algorithm.

And if I program that algorithm into a computer, then that algorithm can learn everything that your brain can. Therefore, in that sense at least, one version of the master algorithm already exists.

Another one is evolution. Evolution created us and all life on Earth. And it is essentially an algorithm, and we roughly understand how that algorithm works; so there is another existing instance of the master algorithm.

Then there are also—besides these more empirical reasons—theoretical reasons which tell us that a master algorithm exists. One of which is that, for each of the five tribes, for their master algorithm there’s a theorem that says: If you give enough data to this algorithm, it can learn any function.

So, at least at that level, we already know that master algorithms exist. Now the question is, how complicated will it be? How hard will it be to get us there? How broadly good would that algorithm be, in terms of learning from a reasonable amount of data in a reasonable amount of time?
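
As a toy illustration of the kind of theorem he mentions, the sketch below fits a small neural network to samples of a nonlinear function purely from data. The target function, the network size, and the amount of data are arbitrary choices made for illustration, not anything from the interview.

```python
# Toy illustration of "given enough data, this algorithm can learn any function":
# a small neural network approximates sin(x) from sampled points alone.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))   # sampled inputs
y = np.sin(X).ravel()                    # the "unknown" target function

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

test = np.array([[-2.0], [0.5], [2.5]])
# Compare the true values with what the network learned from examples.
print(np.column_stack([np.sin(test).ravel(), net.predict(test)]))
```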

You just said all scientists are reductionists. Is that necessarily the case? Like, can you not be a scientist and believe in something like strong emergence, and say, “Actually, you can’t necessarily take the human mind down to individual atoms and reconstruct it from there”? I mean, you don’t have to appeal to mysticism to believe that.

Yeah, yeah, absolutely. So, what I mean… This is a very good point. In fact, in the sense that you’re talking about, we cannot be reductionists in AI. So what I mean by “reductionist” is just the idea that we can decompose a complex system into simpler, smaller parts that interact and that make up the system.

This is how all of the sciences and engineering work. But this does not preclude the existence of emergent properties. So, the system can be more than the sum of its parts, if it’s non-linear. And the brain is very much a non-linear system, and that’s what we have to deal with to reach AI. You could even say that machine learning is the science of emergent properties.

In fact, one of the names by which it has been known in some quarters is “self-organizing systems.” And in fact, what makes AI hard, the reason we haven’t already solved it, is that the usual divide-and-conquer strategy which scientists and engineers follow—of dividing problems into smaller and smaller sub-problems, and then solving the sub-problems, and putting the solutions together—tends not to work in AI, because the sub-systems are very strongly coupled together. So, there are emergent properties, but that does not mean that you can’t reduce it to these pieces; it’s just a harder thing to do.

Marvin Minsky, I remember, talked about how we kind of got tricked a little bit by the fact that it takes very few fundamental laws of the universe to understand most of physics. The same with electricity. The same with magnetism. There are very few simple laws to explain everything that happens. And so the hope had been that intelligence would be like that. Are we giving up on that notion?

Yes, so again, there are different views within AI on this. I think at one end there are people who hope we will discover a few laws of AI, and those would solve everything. At the other end of the spectrum there are people like Marvin Minsky who just think that intelligence is a big, big pile of hacks.

He even has a book that’s like, one of these tricks per page. And who knows how many more there are. I think, and most people in AI believe, that it’s somewhere in between. If AI is just a big pile of hacks, we’re never going to get there. And it can’t really be just a pile of hacks, because if the hacks were so powerful as to create intelligence, then you can’t really call them hacks.

On the other hand, you know, you can’t reduce it to a few laws, like Newton’s laws. So this idea of the master algorithm is that, at the end of the day, we will find one algorithm that does intelligence, but that algorithm is not going to be a hundred lines of code. It’s not going to be millions of lines of code either. You know, if the algorithm is thousands or maybe tens of thousands of lines of code, that would be great. It’ll still be a complex theory—much more complex than the ones we have in physics—but it’ll be much, much simpler than what people like Marvin Minsky envisioned.

And if we find the master algorithm, is that good for humanity?

Well, I think it’s good or bad depending on what we do with it. Like all technology, machine learning just gives us more power. You can think of it as a superpower, right? Telephones let us speak at a distance, airplanes let us fly, and machine learning lets us predict things and lets technology adapt automatically to our needs. All of this is good if we use it for good. If we use it for bad, it will be bad, right? The technology itself doesn’t know how it’s going to be used.

Part of my reason for writing this book is that everybody needs to be aware of what machine learning is, and what it can do, so that they can control it. Because, otherwise, machine learning will just give more control to those few who actually know how to use it.

I think if you look at the history of technology, over time, in the end, the good tends to prevail over the bad, which is why we live in a better world today than we did two hundred or two thousand years ago. But we have to make it happen, right? It just doesn’t fall from the tree like that.

And so, in your view, the master algorithm is essentially synonymous with AGI, in the sense that it can figure anything out—it’s a general artificial intelligence.

Would it be conscious?

Yeah, so, by the way: I wouldn’t say the master algorithm is synonymous with AGI. I think it’s the enabler of AGI. Once we have the master algorithm, we’re still going to need to apply it to vision, and language, and reasoning, and all these things. And then we’ll have AGI.

So, one way to think about this is that it’s an 80/20 rule. The master algorithm is the twenty percent of the work that gets you eighty percent of the way, but you still need to do the rest, right? So maybe this is a better way to think about it.

Fair enough. So, I’ll just ask the question a little more directly. What do you think consciousness is?

That’s a very good question. The truth is, what makes consciousness simultaneously so fascinating and so hard is that, at the end of the day, if there is one thing that I know it’s that I’m conscious, right? Descartes said, “I think, therefore I am,” but maybe he should’ve said “I’m conscious, therefore I am.”

The laws of physics, who knows, they might even be wrong. But the fact that I’m conscious right now is absolutely unquestionable. So, everybody knows that about themselves. At the same time, because consciousness is a subjective experience, it doesn’t lend itself to the scientific method. What are reproducible experiments when it comes to consciousness? That’s one aspect.

The other one is that consciousness is a very complex, emergent phenomenon. So, nobody really knows what it is, or understands it, even at a fairly shallow level. Now, the reason we believe others have consciousness… You believe that I have consciousness because you’re a human being, and I’m a human being, so since you have consciousness, I probably have consciousness as well. And this is really the extent of it. For all you know, I could be a robot talking to you right now, passing the Turing test, and not be conscious at all.

Now, what happens with machines? How can we tell whether a machine is conscious or not? This has been grist for the mill of a lot of philosophers over the last few decades. I think the bottom line is that once a computer starts to act like it’s conscious, we will treat it as if it’s conscious, we will grant it consciousness.

In fact, we already do that, even with very simple chatbots and what not. So, as far as everyday life goes, it actually won’t be long. In some ways, it’ll happen that people treat computers as being conscious, sooner than they treat the computers as being truly intelligent. Because that’s all we need, right? We project these human properties onto things that act humanly, even in the slightest way.

Now, at the end of the day, if you gaze down into that hardware and those circuits, is there really consciousness there? I don’t know if we will ever be able to really answer that question. Right now, I actually don’t see a good way. I think there will come a point at which we understand consciousness well enough—because we understand the brain well enough—that we are fairly confident that we can tell whether something is conscious or not.

And then at that point I think we will apply those criteria to these machines; and these machines—at least the ones that have been designed to be conscious—will pass the tests. So, we will believe that machines have consciousness. But, you know, we can never be totally sure.

And do you believe consciousness is required for a general intellect?

I think there are many kinds of AI, and many AI applications which do not require consciousness. So, for example, if I tell a machine learning system to go solve cancer—that’s one of the things we’d like to do, cure cancer, and machine learning is a very big part of the battle to cure cancer—I don’t think it requires consciousness at all. It requires a lot of searching, and understanding molecular biology, and trying different drugs, maybe designing drugs, etc. So, ninety percent of AI will involve no consciousness at all.

There are some applications of AI, and some types of AI, that will require consciousness, or something indistinguishable from it. For example, housebots. We would like to have a robot that cooks dinner and does the dishes and makes the bed and what not.

In order to do all those things, the robot has to have all the capabilities of a human, has to integrate all of these senses: vision, and touch, and perception, and hearing and what not; and then make decisions based on them. I think this is either going to be consciousness or something indistinguishable from it.

Do you think there will be problems that arise if that happens? Let’s say you build Rosie the Robot, and you don’t know if the robot is conscious or merely acting as if it is. Do you think at that point we have to have this question of, “Are we fine enslaving what could be a conscious machine to plunge our toilet for us?”

Well, that depends on what you consider enslaving, right? So, one way to look at this—and it’s the way I look at it—is that these are still just machines, right? Just because they have consciousness doesn’t mean that they have human rights. Human rights are for humans. I don’t think there’s such a thing as robot rights.

The deeper question here is, what gives something rights? One school of thought is that it’s the ability to suffer that gives you rights, and therefore animals should have rights. But, if you think about it historically, the idea of having animal rights… even fifty years ago would’ve seemed absurd. So, by the same standard, maybe fifty years from now, people will want to have robot rights. In fact, there are some people already talking about it.

I think it’s a very strange idea. And often people talk about, “Oh, well, will the machines be our friends or will they be our slaves? Will they be our equals? Will they be inferior?” Actually, I think this whole way of framing things is mistaken. You know, the robots will be neither our equals nor our slaves. They will be our extensions, right?

Robots are technology, they augment us. I think it’s not so much that the machines will be conscious, but that through machines we will have a bigger consciousness—in the same way that, for example, the Internet already gives us a bigger consciousness than we had when there was no Internet.

So, discussing robots leads us to a topic that’s in the news literally every day, which is the prospect that automation and technological advances will eliminate jobs faster than they can create new ones. Or, that they will eliminate jobs and replace them with jobs the displaced workers can’t access. What do you think about that? What do you think the future holds?

I think we have to distinguish between the near term, by which I mean the next ten years or so, and the long term. In the near term, I think some jobs will disappear, just like jobs have disappeared to automation in the past. AI is really automation on steroids. So I think what’s going to happen in the near term is not so different from what has happened in the past.

Some jobs will be automated, so some jobs will disappear, but many new jobs will appear as well. It’s always easier to see the jobs which disappear than the ones that appear. Think for example of being an app developer. There are millions of people who make a living today as app developers.

Ten years ago that job didn’t exist. Fifty years ago you couldn’t even imagine that job. Two hundred years ago, ninety-something percent of Americans were farmers, and then farming got automated. Today only two percent of Americans work in agriculture. That doesn’t mean that the other ninety-eight percent are unemployed. They’re just doing all these jobs that people couldn’t even imagine before.

I think a lot of that is what’s going to happen here. We will see entirely new job categories appear. We will also see, on a more mundane level, more demand for lots of existing jobs. For example, I think truck drivers should be worried about the future of their jobs, because self-driving trucks are coming, so at some point that job will largely go away.

There are many millions of truck drivers in the US alone. It’s one of the most widespread occupations. But now, what will they do? People say, “Oh, you can’t turn truck drivers into programmers.” Well, you don’t have to turn them into programmers. Think about what’s going to happen…

Because trucks are now self-driving, goods will cost less. Goods will cost less, so people will have more money in their pockets, and they will spend it on other things—like, for example, having bigger, better houses. And therefore, there will be more demand for construction workers, and some of these truck drivers will become construction workers and so on.

You know, having said all that, I think that in the near term the most important thing that’s going to happen to jobs is actually neither the ones that will disappear nor the ones that will appear: most jobs will be transformed by AI. The way I do my job will change because some parts will become automated. But then I will be able to do more things, and do them better, than I could before, when I didn’t have the automation. So, really the question everybody needs to think about is: what parts of my job can I automate? Really, the best way to protect your job from automation is to automate it yourself, and then ask, “What can I do using these machine learning tools?”

Automation is like having a horse. You don’t try to outrun a horse; you ride the horse. And we have to ride automation, to do our jobs better and in more ways than we can now.

So, it doesn’t sound like you’re all that pessimistic about the future of employment?

I’m optimistic, but I also worry. I think that’s a good combination. I think if we’re pessimistic we’ll never do anything. Again, if you look at the history of technology, the optimists at the end of the day are the ones who made the world a better place, not the pessimists.

But at the same time, naïve optimism is very dangerous, right? We need to worry continuously about all the things that could go wrong, and make sure that they don’t go wrong. So I think that a combination of optimism and worry is the right one to have.

Some people say we’ll find a way to merge, mentally, with the AI. Is that even a valid question? And if so, what do you think of it?

I think that’s what’s going to happen. In fact, it’s already happening. We are going to merge with our machines step-by-step. You know, like a computer is a machine that is closer to us than a television. A smartphone is closer to us than a desktop is, and the laptop is somewhere in between.

And we’re already starting to see these things such as Google Glass and augmented reality, where in essence the computer is extending our senses, and extending our power to do things. Elon Musk has this company that is going to create an interface between neurons and computers, and in fact, in research labs this already exists.

I have colleagues that work on that. They’re called brain-computer interfaces. So, step-by-step, right? The way to think about this is, we are cyborgs, right? Human beings are actually the cyborg species. From day one, we were one with our technology.

Even our physiology would be different if we couldn’t do things like light fires and throw spears. So this has always been an ongoing process. Part of us is technology, and that will become more and more so in the future. Also with things like the Internet, we are connecting ourselves into a bigger, you know… Humanity itself is an emergent phenomenon, and having the Internet and computers allows a greater level to emerge.

And I think exactly how this happened and when, of course, is up for grabs; but that’s the way things are going.

You mentioned in passing a minute ago the singularity. Do you believe that that is what will happen, as it’s commonly thought? That there is going to be this kind of point, in the reasonably near future, from which we cannot see anything beyond it? Because we don’t have any frame of reference?

I don’t believe that the singularity will happen in those terms. So this idea of exponentially increasing progress that goes on forever… that’s not going to happen, because it’s physically impossible, right? No exponential goes on forever. It always flattens out sooner or later.

All exponentials are really what are called “S curves” in disguise. They go up faster and faster—and this is how all previous technology waves have looked—but then they flatten out, and finally they plateau.
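
One standard way to make that point precise is the logistic curve, which is nearly indistinguishable from an exponential early on and then flattens toward a ceiling. In the formulation below, L, k, and t_0 are the usual logistic parameters (carrying capacity, growth rate, and midpoint); none of this comes from the interview itself.

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}},
\qquad
f(t) \approx L\, e^{k(t - t_0)} \ \text{for } t \ll t_0 \ (\text{looks exponential}),
\qquad
\lim_{t \to \infty} f(t) = L \ (\text{the plateau}).
```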

Also, this notion that at some point things will become completely incomprehensible for us… I don’t believe that either, because there will always be parts that we understand, number one; and there are limits to what any intelligence can do, human or non-human.

By that standard, the singularity has already happened. A hundred years ago, the most advanced technology was maybe something like a car, right? And I could understand every part of how a car works, completely. Today we already have technology, like the computer systems that we have today, and nobody understands the whole system. Different people understand different parts.

With machine learning in particular, the thing that’s notable about machine learning algorithms is that they can do very complex things very well, and we have no idea how they’re doing them. And yet, we are comfortable with that, because we don’t necessarily care about the details of how it is accomplished, we just care whether the medical diagnosis was correct, or the patient’s cancer was cured, or the car is driving correctly. So I think this notion of the singularity is a little bit off.

Having said that, we are currently in the middle of one of these S curves. We are seeing very rapid progress, and by the time this has run its course, the world will be a very, very different place from what it is today.

How so?

All these things that we’ve been talking about. We will have intelligent machines surrounding us. Not just humanoid machines but intelligence on tap, right? In the same way that today you can use electricity for whatever you want just by plugging into a socket, you will be able to plug into intelligence.

And indeed, the leading tech companies are already trying to make this happen. So there will be all these things which the greater intelligence enables. Everybody will have a home robot in the same way that they have a car. We will have this whole process that the Internet is enabling, and that the intelligence on top of the Internet is enabling, and the Internet of things, and so on.

There will be something like this larger emergent being, if you will, that’s not just individual human beings or just societies. But again, it’s hard to picture exactly what that would be, but this is going to happen.

You know, it always makes the news when an artificial intelligence masters some game, right? We all know the list: you had chess, and then you had Jeopardy, of course, and then you had AlphaGo, and then recently you had poker. And I get that games are kind of a natural place, because I guess it’s a confined universe with very rigid, specific rules, and a lot of training data for teaching it how to function in that.

Are there types of problems that machine learning isn’t suited to solve? I mean, just kind of philosophically—it doesn’t matter how good your algorithms are, or how much data you have, or how fast a computer is—this is not the way to solve that particular problem.

Well, certainly some problems are much harder than others, and—as you say—games are easier in the sense that they are these very constrained, artificial universes. And that’s why AI can do so well in them. In fact, the summary of what machine learning and AI are good for today, is that they are good for these tasks which are somewhat well-defined and constrained.

What people are much better at are things that require knowledge of the world, they require common sense, they require integrating lots of different information. We’re not there yet. We don’t have the learning algorithms that can do that.

So the learning algorithms that we have today are certainly good for some things, but not others. But again, if we have the master algorithm then we will be able to do all these things, and we are making progress towards that, so, we’ll see.

Any time I see a chatbot or something that’s trying to pass the Turing test, I always type the same first question, which is: “Which is bigger, a nickel or the sun?” And not a single one of them has ever answered it correctly.

Well, exactly, because they don’t have common sense knowledge. It’s amazing what computers can do in some ways, and it’s amazing what they can’t do in others—like these really simple pieces of common sense logic. In a way, one of the big lessons that we’ve learned in AI is that automating the job of a doctor or a lawyer is actually easy.

What is very hard to do with AI is what a three-year-old can do. If we could have a robot baby that can do what a one-year-old can do, and learn the same way, we would have solved AI. It’s much, much harder to do those things; things that we take for granted, like picking up an object, for example, or like walking around without tripping. We take this for granted because evolution spent five hundred million years developing it. It’s extremely sophisticated, but for us it’s below the conscious level.

The things for us that we are conscious of, and that we have to go to college for, well, we’re not very good at them; we just learned to do them recently. Those, the computers can do much better. So, in some ways in AI, it’s the hard things that are easy and the easy things that are hard.

Does it mean anything if something finally passes the Turing test? And if so, when do you think that might happen? When will it say, “Well, the sun is clearly bigger than a nickel”?

Well, with all due respect to Alan Turing—who was a great genius and an AI pioneer—most people in AI, including me, believe that the Turing test is actually a bad idea. The reason the Turing test is a bad idea is that it confuses being intelligent with being human. This idea that you can prove that you’re intelligent by fooling a human into thinking you’re a human is very weird, if you think about it. It’s like saying an airplane doesn’t fly until it can fool birds into thinking it’s a bird. That doesn’t make any sense.

True intelligence can take many forms, not necessarily the human form. So, in some ways we don’t need to pass the Turing test to have AI. And in other ways, the Turing test is too easy to pass, and by some standards has already been passed by systems that no one would call intelligent. Talking with someone for five minutes and fooling them into thinking you’re a human is actually not that hard, because humans are remarkably adept at projecting humanity into anything that acts human.

In fact, even in the ‘60s there was this famous thing called ELIZA, that basically just picked up keywords in what you said and gave back these canned responses. And if you talked to ELIZA for five minutes, you’d actually think that it was a human.
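
A rough sketch of the mechanism being described: scan the input for a keyword, emit a canned response, and fall back to a stock phrase otherwise. The patterns below are invented for illustration and are not Weizenbaum’s original ELIZA script.

```python
# Sketch of an ELIZA-style chatbot: look for keywords in the input and return a
# canned response, with a fallback when nothing matches. Patterns are made up.
import re

RULES = [
    (re.compile(r"\bmother\b", re.I),  "Tell me more about your family."),
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]
FALLBACK = "Please go on."

def reply(text: str) -> str:
    # Return the canned response for the first matching keyword pattern.
    for pattern, response in RULES:
        match = pattern.search(text)
        if match:
            return response.format(*match.groups())
    return FALLBACK

print(reply("I am feeling anxious"))   # -> "How long have you been feeling anxious?"
print(reply("The weather is nice"))    # -> "Please go on."
```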

Although Weizenbaum’s observation was, even when people knew ELIZA was just a program, they still formed emotional attachments to it, and that’s what he found so disturbing.

Exactly, so human beings have this uncanny ability to treat things as human, because that’s the only reference point that we have, right? It’s this whole idea of reasoning by analogy. If we have something that behaves even a little bit like a human—because there’s nothing else in the universe to compare it to—we start treating it more like a human and project more human qualities into it.

And, by the way, this is something that, once companies start making bots—this is already happening with chatbots like Siri and Cortana and what not, and it’ll happen even more so with home robots—there’s going to be a race to make the robots more and more humanlike. Because if you form an emotional attachment to my product, that’s what I want, right? I’ll sell more of it, and for a higher price, and so on and so forth. So, we’re going to see uncannily human-like robots and AIs—whether this is a good or a bad thing is another matter.

What do you think creativity is? And would an AGI, by definition, be creative, right? It could write a sonnet, or…

Yeah, an AGI, by definition, would be creative. One thing that you hear a lot these days, and that unfortunately is incorrect, is that, “Oh, we can automate these menial, routine jobs, but creativity is this deeply human thing that will never be automated.” And, this is kind of like a superficially-plausible notion, but, in fact, there are already examples of, for example, computers that can compose music.

There is this guy, David Cope, a professor at UC Santa Cruz—he has a computer program that will create music in the style of the composer of your choice. And he does this test where he plays a piece by Mozart, a piece by a human composer imitating Mozart, and a piece by his computer—by his system. And he did this at a conference that I was in, and he asked people to vote for which one was the real Amadeus, and the real one won, but the second place was actually the computer. So a computer can already write Mozart better than a professional, highly-educated human composer can.

Computers have made paintings that are actually quite beautiful and striking, many of them. Computers these days write news stories. There’s this company called Narrative Fiction that will write news stories for you. And the likes of Forbes or Fortune—I forget which one it is—actually published some of the things that they write. So it’s not a novel yet, but we will get there.

And also, in other areas, like for example chess and AlphaGo are notable examples… Both Kasparov and Lee Sedol, when they were beaten by the computer, had this remarkable reaction saying, “Wow, the computer was so creative. It came up with these moves that I would never have thought of, that seemed dumb at first but turned out to be absolutely brilliant.”

And computers have done things in mathematics, theorems and proofs and etc., all of which, if done by humans, would be considered highly creative. So, automating creativity is actually not that hard.

It’s funny, when Kasparov first said it seemed creative, what he was implying was that IBM cheated, that people had intervened. And IBM hadn’t cheated. But, that’s a testament to just how—

—There were actually two phases, right? He said that at first, so he was suspicious; because, again, how could something not human actually be doing that? But then later, after the match when he had lost and so on, if you remember, there was this move that Deep Blue made that seemed like a crazy move, and Kasparov said, like, “I could smell a new kind of intelligence playing against me.”

Which is very interesting for us AI-types, because we know exactly what was going on, right? It was these, you know, search algorithms and a whole bunch of technology that we understand fairly well. It’s interesting that from the outside this just seemed like a new kind of intelligence, and maybe it is.

He also said, “At least it didn’t enjoy beating me.” Which I guess someday, though, it may, right?

Oh, yeah, yeah! And you know that could happen depending on how we build them, right? The other very interesting thing that happened in that match—and again, I think it’s symptomatic—is that Kasparov is someone who always won by basically intimidating his opponents into submission. They just got scared of him, and then he beat them.

But the thing that happened with Deep Blue, was that Deep Blue couldn’t be intimidated by him; it was just a machine, right? As a result of which, Kasparov himself—suddenly, for the first time in his life, probably—became insecure. And then, after he lost that game, in the following game, he actually made these mistakes that he would never make, because he had suddenly become insecure.

Foreboding, isn’t it? We talked about emergence a couple of times. There’s the Gaia hypothesis that maybe all of the life on our planet has an emergent property: some kind of an intelligence that we can’t perceive, any more than our cells can perceive us.

Do you have any thoughts on that? And do you have any thoughts on if, eventually, the Internet could just become emergent—an emergent consciousness?

Right. Like most scientists, I don’t believe in the Gaia hypothesis, in the sense that the Earth, as it is, does not have enough self-regulating ability to achieve the homeostasis that living beings do. In fact, sometimes you get these runaway feedback cycles where things actually go very wrong. So, most scientists don’t believe in the Gaia hypothesis for Earth today.

Now, what I think—and a lot of other people think this is the case—is that maybe the Gaia hypothesis will be true in the future. Because as the Internet expands, and the Internet of Things—with sensors all over the place, literally all over the planet—and a lot of actions continue being taken based on those sensors to, among other things, preserve us and presumably other kinds of life on Earth… I think if we fast-forward a hundred years, there’s a very good chance that Earth will look like Gaia, but it will be a Gaia that is technological, as opposed to just biological.

And in fact, I don’t think that there’s an opposition between technology and biology. I think technology will just be the extension of biology by other means. It’s biology that’s made by us. I mean, we’re creatures, and so the things that we make are also biology, in that sense.

So if you look at it that way, maybe what has happened is that since the very beginning, Earth has been evolving towards Gaia, we just haven’t gotten there yet. But technology is very much part of getting there.

What do you think of the OpenAI initiative?

The OpenAI initiative’s goal is to do AI for the common good. Because, you know, people like Elon Musk and Sam Altman were afraid that because the biggest quantity of AI research is being done inside companies—like Google and Facebook and Microsoft and Amazon and what not—it would be owned by them. And AI is very powerful, so it’s dangerous if AI is just owned by these companies.

So, their goal is to do AI research that is going to be open, hence the name, and available to everybody. I think this is a great agenda, so I very much agree with trying to do that. I think there’s nothing wrong with having a lot of AI research in companies, but I think it’s important that there also be AI research that is in the public domain. Universities are one aspect of doing that, something like OpenAI is another example, something like the Allen Institute for AI is another example of doing AI for the public good in this way. So, I think this is a good agenda.

What they’re going to do exactly, and what their chances of succeeding are, and how their style of AI will compare to the styles of AI that are being produced by these other labs, whether industry or academia, is something that remains to be seen. But I’m curious to see what they get out of it.

The worry from some people is that… They make it analogous to a nuclear weapon, in that if you say, “We don’t know how to build one, but we can get 99% of the way there, and we’re going to share that with everybody on the planet,” then you have to hope that whoever adds the last little bit that makes it an AGI isn’t a bad actor of some kind. Does that make sense to you?

Yeah, yeah… I understand the analogy, but you have to remember that AI and nuclear weapons are very different for a couple of reasons. One is that nuclear weapons are essentially destructive things, right? Yeah, you can turn them into nuclear power, but they were invented to blow things up.

Whereas AI is a tool that we use to do all sorts of things, like diagnose diseases and place ads on webpages, and things from big to small. The thing is, the knowledge to build a nuclear bomb is actually not that hard to come by. Fortunately, what is very hard to come by is the enriched uranium, or plutonium, to build the bomb.

That’s actually what keeps any terrorist group from building a bomb. It’s not the lack of knowledge, it’s the lack of the materials. Now, in AI it’s actually very different. You just need computing power, and you can just plug into the cloud and get that computing power. AI is just algorithms. It’s already accessible. Lots of people can use it for whatever they want.

In a way, the safety lies in actually having AI in the hands of everybody, so that it’s not in the hands of a few. If only one person or one company had access to the master algorithm, they would be too powerful. If everybody has access to the master algorithm then there will be competition, there will be collaboration. There will be like a whole ecosystem of things that happen, and we will be safer that way, just as we are with the economy as it is. But, having said that, we will need something like an AI police.

William Gibson in Neuromancer had this thing called the Turing police, right? The Turing police are AIs whose job is to police the other AIs, to make sure that they don’t go bad, or that they get stopped when they go bad. And this is no different from what already happens. We have highways, and bank robbers can use the highways to get away. That’s no reason to not have highways, but of course the police also need to have cars so they can catch the robbers, so I think it’s going to be a similar thing with AI.

When I do these chats with people in AI, science fiction writers always come up. They always reference them, they always have their favorites and what not. Do you have any books, movies, TV shows or anything like that that you watch them and you go, “Yes, that could happen”?

Unfortunately, a lot of the depictions of AI and robots in movies and TV shows are not very realistic, because the computers and robots are really just humans in disguise. That’s how you make an interesting story: by making the robots act like humans. They have evil plans to take over the world, or somebody falls in love with them, and things like that—and that’s how you make an interesting movie.

But real AIs, as we were talking about, are very different from that. A lot of the movies that people associate with AI—like Terminator, for example—are really not stuff that will happen, but with the proviso that science fiction is a great source of self-fulfilling prophecies, right? People read those things and then they try to make them happen. So, who knows.

Having said that, what is an example of a movie depicting AI that I think could happen, and is fairly interesting and realistic? Well, one example is the movie Her. The movie Her is basically about a virtual assistant that is very human-like, and ten years ago that would’ve been a very strange movie. These days we already have things like Siri, and Cortana, and Google Now, which are, of course, still a far cry from Her. But I think we’re going to get closer and closer to that.

And final question: What are you working on, and are you going to write another book? What keeps you busy?

Two things: I think we are pretty close to unifying those five master algorithms, and I’m still working on that. That’s what I’ve been working on for the last ten years. And I think we’re almost there. I think once we’re there, the next thing is that, as we’ve been talking about, that’s not going to be enough. So we need something else.

I think we need something beyond the existing five paradigms we have, and I’m working on a new type of learning that I hope will actually take us beyond what those five could do. Some people have jokingly called it the sixth paradigm, and maybe my next book will be called The Sixth Paradigm. That makes it sound like a Dan Brown novel, but that’s definitely something that I’m working on.

When you say you think the master algorithm is almost ready… Will there be a “ta-da” moment, like, here it is? Or, is it kind of a gradualism?

It’s a gradual thing. Look at physics: they’ve unified three of the forces (electromagnetism and the strong and weak forces), but they still haven’t unified gravity with them. There are proposals like string theory to do that.

These “a-ha” moments often only happen in retrospect. People propose a theory, and then maybe it gets tested, and then maybe it gets revised, and then finally when all the pieces are in place people go, “Oh, wow.” And I think it’s going to be like that with the master algorithm as well.

We have candidates, we have ways of putting these pieces together. It still remains to be seen whether they can do all the things that we want, and how well they will scale. Scaling is very important, because if it’s not scalable then it’s not really solving the problem, right? So, we’ll see.

All right, well thank you so much for being on the show.

Thanks for having me, this was great!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.



Book Review: Digitise or Die

By Zenobia Hedge

This is a very big book. The review copy was a PDF file that ran to 251 pages. The title is somewhat dramatic but the author does make a convincing case for that statement, citing companies such as Kodak and Nokia. However that is a relatively easy task.

The main thrust of the book is to expand “on the IoT, beyond the pure technology element, in a way that would help companies understand how to transform, leverage themselves and supersede their competition.”

That is a commendable objective, but the book includes a lot of background information before we get to Chapter 3 on “Digitisation Strategy: The IoT Methodology” on page 37. For example, it covers the mainframe computer period from 1950 to 1980; building the backbone of the Internet from 1980 to 2000; and Internet services such as Google from 2000 to 2017.

The methodology starts with the customers: their needs and pain points, and then it indicates how to digitise the current portfolio while embracing technology, differentiation strategies, business models and a transition process. However, in Chapter 3 the book seems to contradict itself when covering the first step.

The stated starting point is four customer-centric bullet points, but this is followed by a statement that the first step begins by addressing IoT technology, which has six different layers. If it’s not a contradiction, then it is confusing.

The remaining three elements are summarised in Chapter 4, which devotes a mere eight pages to the customer’s needs. However, Chapter 5 spends 60 pages on IoT technology, and the reader is told that “to understand the IoT you must understand what hides behind IoT acronyms such as CoAP, AEP, Thread and so forth.” It would have been better to cover technology after the chapters on differentiation strategies and business models, and to do so in less detail.


The main thrust of the book, outlined earlier, is somewhat ambitious and it is easy to criticise the result. That said, Digitise or Die covers a lot of IoT ground, too much at times, but that can be seen as a positive problem. It works well for business professionals who know that they need to understand the emerging environment but don’t know where to start. It’s also a book that can serve as a reference work.

The author is Nicolas Windpassinger, global vice president of the Partner Program at Schneider Electric. Sales revenue will be donated to the Alzheimer’s Association and the Fondation de France. It’s available for pre-order on Amazon and will be launched in mid-November.

The book has been reviewed by Bob Emmerson, freelance writer and telecoms industry observer.




Voices in AI – Episode 17: A Conversation with James Barrat

By Byron Reese

Today’s leading minds talk AI with host Byron Reese

In this episode, Byron and James talk about jobs, human vs. artificial intelligence, and more.




Voices in AI

Visit VoicesInAI.com to access the podcast, or subscribe now:

Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I am Byron Reese. Today I am so excited that our guest is James Barrat. He wrote a book called Our Final Invention, subtitled Artificial Intelligence and the End of the Human Era. James Barrat is also a renowned documentary filmmaker, as well as an author. Welcome to the show, James.

James Barrat: Hello.

So, let’s start off with, what is artificial intelligence?

Very good question. Basically, artificial intelligence is when machines perform tasks that are normally ascribed to human intelligence. I have a very simple definition of intelligence that I like, because the term “artificial intelligence” just throws the idea back to humans and to human intelligence, which is the intelligence we know the most about.

The definition I like is: intelligence is the ability to achieve goals in a variety of novel environments, and to learn. And that’s a simple definition, but a lot is packed into it. Your intelligence has to achieve goals, it has to do something—whether that’s play Go, or drive a car, or solve proofs, or navigate, or identify objects. And if it doesn’t have some goal that it achieves, it’s not very useful intelligence.

If it can achieve goals in a variety of environments, if it can do object recognition and do navigation and do car-driving like our intelligence can, then it’s better intelligence. So, it’s goal-achieving in a bunch of novel environments, and then it learns. And that’s probably the most important part. Intelligence learns and it builds on its learning.

And you wrote a widely well-received book, Artificial Intelligence: Our Final Invention. Can you explain to the audience just your overall thesis, and the main ideas of the book?

Sure. Our Final Invention is basically making the argument that AI is a dual-use technology. A dual-use technology is one that can be used for great good, or great harm. Right now we’re in a real honeymoon phase of AI, where we’re seeing a lot of nifty tools come out of it, and a lot more are on the horizon. AI, right now, can find cancer clusters in x-rays better than humans. It can do business analytics better than humans. AI is doing what first year legal associates do, it’s doing legal discovery.

So we are finding a lot of really useful applications. It’s going to make us all better drivers, because we won’t be driving anymore. But it’s a dual-use technology because, for one thing, it’s going to be taking a lot of jobs. You know, there are five million professional drivers in the United States, seven million back-office accountants—those jobs are going to go away. And a lot of others.

So the thesis of my book is that we need to look under the hood of AI, look at its applications, look who’s controlling it, and then in a longer term, look at whether or not we can control it at all.

Let’s start with that point and work backwards. That’s an ominous statement. Can we record it at all? What are you thinking there?

Can we control it at all.

I’m sorry, yes. Control it at all.

Well, let me start, I prefer to start the other way. Stephen Hawking said that the trouble with AI is, in the short term, who controls it, and in the long term, can we control it at all? And in the short term, we’ve already suffered some from AI. You know, the NSA recently was accessing your phone data and mine, and getting your phone book and mine. And it was, basically, seizing our phone records, and that used to be illegal.

Used to be that if I wanted to seize, to get your phone records, I needed to go to a court, and get a court order. And that was to avoid abridging the Fourth Amendment, which prevents illegal search and seizure of property. Your phone messages are your property. The NSA went around that, and grabbed our phone messages and our phone data, and they are able to sift through this ocean of data because of AI, because of advanced data mining software.

One other example—and there are many—one other example of, in the short term, who controls the AI, is, right now there are a lot of countries developing battlefield robots and drones that will be autonomous. And these are robots and drones that kill people without a human in the loop. And these are AI issues. There are fifty-six nations developing battlefield robots.

The most sought after will be autonomous battlefield robots. There was an article just a couple of days ago about how the Marines have a robot that shoots a machinegun on a battlefield. They control it with a tablet, but their goal, as stated there, is to make it autonomous, to work on its own.

In the longer-term we, I’ll put it in the way that Arthur C. Clarke put it to me, when I interviewed him. Arthur C. Clarke was a mathematician and a physicist before he was a science fiction writer. And he created the HAL 9000 from 2001: A Space Odyssey, probably the most famous homicidal AI. And he said, when I asked him about the control problem of artificial intelligence, he said something like this: He said, “We humans steer the future not because we are the fastest or the strongest creatures, but because we are the most intelligent. And when we share the planet with something that’s more intelligent than we are, it will steer the future.”

So the problem we’re facing, the problem we’re on the cusp of, I can simplify it with a concept called ‘the intelligence explosion’. The intelligence explosion was an idea created by a statistician named I. J. Good in the 1960s. He said, “Once we create machines that do everything as well or better than humans, one of the things they’ll do is create smart machines.”

And we’ve seen artificial intelligence systems slowly begin to do things better than we do, and it’s not a stretch to think about a time to come, when artificial intelligence systems do advanced AI research and development better than humans. And I. J. Good said, “Then, when that happens, we humans will no longer set the pace of intelligence advancement, it will be machines that will set the pace of advancement.”

The trouble of that is, we know nothing about how to control a machine, or a cognitive architecture, that’s a thousand or million times more intelligent than we are. We have no experience with anything like that. We can look around us for analogies in the animal world.

How do we treat things that we’re a thousand times more intelligent than? Well, we treat all animals in a very negligent way. And the smart ones are either endangered, or they’re in zoos, or we eat them. That’s a very human-centric analogy, but I think it’s probably appropriate.

Let’s push on this just a little bit. So do you…

Sure.

Do you believe… Some people say ‘AI’ is kind of this specter of a term now, that, it isn’t really anything different than any other computer programs we’ve ever run, right? It’s better and faster and all of that, but it isn’t qualitatively anything different than what we’ve had for decades.

And so why do you think that? And when you say that AIs are going to be smarter than us, a million times smarter than us, ‘smarter’ is also a really nebulous term.

I mean, they may be able to do some incredibly narrow thing better than us. I may not be able to drive a car as well as an AI, but that doesn’t mean that same AI is going to beat me at Parcheesi. So what do you think is different? Why isn’t this just incrementally… Because so far, we haven’t had any trouble.

What do you think is going to be the catalyst, or what is qualitatively different about what we are dealing with now?

Sure. Well, there’s a lot of interesting questions packed into what you just said. And one thing you said—which I think is important to draw out—is that there are many kinds of intelligence. There’s emotional intelligence, there’s rational intelligence, there’s instinctive and animal intelligence.

And so, when I say something will be much more intelligent than we are, I’m using a shorthand for: It will be better at our definition of intelligence, it will be better at solving problems in a variety of novel environments, it will be better at learning.

And to put what you asked in another way, you’re saying that there is an irreducible promise and peril to every technology, including computers. All technologies, back to fire, have some good points and some bad points. AI I find qualitatively different. And I’ll argue by analogy, for a second. AI to me is like nuclear fission. Nuclear fission is a dual-use technology capable of great good and great harm.

Nuclear fission is the power behind atom bombs and behind nuclear reactors. When we were developing it in the ‘20s and ‘30s, we thought that nuclear fission was a way to get free energy by splitting the atom. Then it was quickly weaponized. And then we used it to incinerate cities. And then we as a species held a gun at our own heads for fifty years with the arms race. We threatened to make ourselves extinct. And that almost succeeded a number of times, and that struggle isn’t over.

To me, AI is a lot more like that. You said it hasn’t been used for nefarious reasons, and I totally disagree. I gave you an example with the NSA. A couple of weeks ago, Facebook was caught up because they were targeting emotionally-challenged and despairing children for advertising.

To me, that’s extremely exploitative. It’s a rather soulless and exploitative commercial application of artificial intelligence. So I think these pitfalls are around us. They’re already taking place. So I think the qualitative difference with artificial intelligence is that intelligence is our superpower, the human superpower.

It’s the ability to be creative, the ability to invent technology. That was one thing Stephen Hawking brought up when he was asked about, “What are the pitfalls of artificial intelligence?”

He said, “Well, for one thing, they’ll be able to develop weapons we don’t even understand.” So, I think the qualitative difference is that AI is the invention that creates inventions. And we’re on the cusp, this is happening now, and we’re on the cusp of an AI revolution, it’s going to bring us great profit and also great vulnerability.

You’re no doubt familiar with Searle’s “Chinese Room” kind of question, but all of the readers, all of the listeners might not be… So let me set that up, and then get your thought on it. It goes like this:

There’s a person in a room, a giant room full of very special books. And he doesn’t—we’ll call him the librarian—and the librarian doesn’t speak a word of Chinese. He’s absolutely unfamiliar with the language.

And people slide him questions under the door which are written in Chinese, and what he does—what he’s learned to do—is to look at the first character in that message, and he finds the book, of the tens of thousands that he has, that has that on the spine. And in that book he looks up the second character. And the book then says, “Okay, go pull this book.”

And in that book he looks up the third, and the fourth, and the fifth, all the way until he gets to the end. And when he gets to the end, it says “Copy this down.” And so he copies these characters again that he doesn’t understand, doesn’t have any clue whatsoever what they are.

He copies them down very carefully, very faithfully, slides it back under the door… Somebody’s outside who picks it up, a Chinese speaker. They read it, and it’s just brilliant! It’s just absolutely brilliant! It rhymes, it’s Haiku, I mean it’s just awesome!
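
The procedure Byron is describing is pure symbol lookup. A minimal Python sketch of that kind of “librarian,” with an invented, hypothetical rule table standing in for the room full of books:

```python
# A toy "Chinese Room": the librarian maps incoming symbols to outgoing symbols
# by mechanical lookup, with no understanding of what either side means.
# The rule table below is hypothetical and stands in for the room full of books.

RULES = {
    "你好吗": "我很好，谢谢。",   # stands in for one chain of book lookups
    "今天天气": "天气晴朗。",
}

def librarian(message: str) -> str:
    """Return a reply by pure lookup; no meaning is involved anywhere."""
    return RULES.get(message, "……")  # copy out whatever the "books" dictate

if __name__ == "__main__":
    # A fluent-looking reply from a process that understands nothing.
    print(librarian("你好吗"))
```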

Now, the question, the kind of ta-da question at the end is: Does the man, does the librarian understand Chinese? Does he understand Chinese?

Now, many people in the computer world would say yes. I mean, Alan Turing would say yes, right? The Chinese room passes the Turing Test. The Chinese speakers outside, as far as they know, they are conversing with a Chinese speaker.

So do you think the man understands Chinese? And do you think… And if he doesn’t understand Chinese… Because obviously, the analogy of it is: that’s all that computer does. A computer doesn’t understand anything. It doesn’t know if it’s talking about cholera or coffee beans or anything whatsoever. It runs this program, and it has no idea what it’s doing.

And therefore it has no volition, and therefore it has no consciousness; therefore it has nothing that even remotely looks like human intelligence. So what would you just say to that?

The Chinese Room problem is fascinating, and you could write books about it, because it’s about the nature of consciousness. And what we don’t know about consciousness, you could fill many books with. And I used to think I wanted to explore consciousness, but it made exploring AI look easy.

I don’t know if it matters that the machine thinks as we do or not. I think the point is that it will be able to solve problems. We don’t know about the volition question. Let me give you another analogy. When Ferrucci, [when] he was the head of Team Watson, he was asked a very provocative question: “Was Watson thinking when it beat all those masters at Jeopardy?” And his answer was, “Does a submarine swim?”

And what he meant was—and this is the twist on the Chinese Room problem—he meant [that] when they created submarines, they learned principles of swimming from fish. But then they created something that swims farther and faster and carries a huge payload, so it’s really much more powerful than fish.

It doesn’t reproduce and it doesn’t do some of the miraculous things fish do, but as far as swimming, it does it. Does an airplane fly? Well, the aviation pioneers used principles of flight from birds, but quickly went beyond that, to create things that fly farther and faster and carry a huge payload.

I don’t think it matters. So, two answers to your question. One is, I don’t think it matters. And I don’t think it’s possible that a machine will think qualitatively as we do. So, I think it will think farther and faster and carry a huge payload. I think it’s possible for a machine to be generally intelligent in a variety of domains.

We can see intelligence growing in a bunch of domains. If you think of them as rippling pools, ripples in a pool, like different circles of expertise ultimately joining, you can see how general intelligence is sort of demonstrably on its way.

Whether or not it thinks like a human, I think it won’t. And I think that’s a danger, because I think it won’t have our mammalian sense of empathy. It’ll also be good, because it won’t have a lot of sentimentality, and a lot of cognitive biases that our brains are labored with. But you said it won’t have volition. And I don’t think we can bet on that.

In my book, Our Final Invention, I interviewed at length Steve Omohundro—he’s an AI maker and physicist—who has taken it upon himself to create more or less a science for understanding superintelligent machines. Or machines that are more intelligent than we are.

And among the things that he argues for, using rational agent and economic theory—and I won’t go into that whole thing—but it’s in Our Final Invention, it’s also on Steve Omohundro’s many websites. Machines that are self-aware and are self-programming, he thinks, will develop basic drives that are not unlike our own.

And they include things like self-protection, creativity, efficiency with resources, and other drives that will make them very challenging to control—unless we get ahead of the game and create this science for understanding them, as he’s doing.

Right now, computers are not generally intelligent, they are not conscious. All the limitations of the Chinese Room, they have. But I think it’s unrealistic to think that we are frozen in development. I think it’s very realistic to think that we’ll create machines whose cognitive abilities match and then outstrip our own.

But, just kind of going a little deeper on the question. So we have this idea of intelligence, which there is no consensus definition on it. Then within that, you have human intelligence—which, again, is something we certainly don’t understand. Human intelligence comes from our brain, which is—people say—‘the most complicated object in the galaxy’.

We don’t understand how it works. We don’t know how thoughts are encoded. We know incredibly little, in the grand scheme of things, about how the brain works. But we do know that humans have these amazing abilities, like consciousness, and the ability to generalize intelligence very effortlessly. We have something that certainly feels like free will, we certainly have something that feels like… and all of that.

Then on the other hand, you think back to a clockwork, right? You wind up a clock back in the olden days and it just ran a bunch of gears. And while it may be true that the computers of the day add more gears and have more things, all we’re doing is winding it up and letting it go.

And, isn’t it, like… not only a stretch, not only a supposition, not only just sensationalistic, to say, “Oh no, no. Someday we’ll add enough gears that, you wind that thing up, and it’s actually going to be a lot smarter than you.”

Isn’t that, I mean at least it’s fair to say there’s absolutely nothing we understand about human intelligence, and human consciousness, and human will… that even remotely implies that something that’s a hundred percent mechanical, a hundred percent deterministic, a hundred percent… Just wind it and it doesn’t do anything. But…

Well, you’re wrong about being a hundred percent deterministic, and it’s not really a hundred percent mechanical. When you talk about things like will, will is such an anthropomorphic term, I’m not sure if we can really, if we can attribute it to computers.

Well, I’m specifically saying we have something that feels and seems like will, that we don’t understand.

If you look, if you look at artificial neural nets, there’s a great deal about them we don’t understand. We know what the inputs are, and we know what the outputs are; and when we want to make better output—like a better translation—we know how to adjust the inputs. But we don’t know what’s going on in a multilayered neural net system. We don’t know what’s going on in a high resolution way. And that’s why they’re called black box systems, and evolutionary algorithms.

In evolutionary algorithms, we have a sense of how they work. We have a sense of how they combine pieces of algorithms, how we introduce mutations. But often, we don’t understand the output, and we certainly don’t understand how it got there, so that’s not completely deterministic. There’s a bunch of stuff we can’t really determine in there.
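
A minimal sketch of the kind of evolutionary algorithm being described, with a made-up fitness function and parameters: candidates are selected, recombined and mutated purely on the basis of a score, which is part of why the path from inputs to the final answer can be hard to reconstruct afterwards.

```python
import random

# Minimal evolutionary algorithm: maximize a toy fitness function.
# The fitness function and all parameters are illustrative, not from any real system.

def fitness(x):
    return -(x - 3.7) ** 2  # peak at x = 3.7

def evolve(pop_size=30, generations=50, mutation_scale=0.5):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Recombination + mutation: children blend two parents, then get a random nudge.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append((a + b) / 2 + random.gauss(0, mutation_scale))
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # typically lands near 3.7, but no single step "explains" the answer
```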

And I think we’ve got a lot of unexplained behavior in computers that’s, at this stage, we simply attribute to our lack of understanding. But I think in the longer term, we’ll see that computers are doing things on their own. I’m talking about a lot of the algorithms on Wall Street, a lot of the flash crashes we’ve seen, a lot of the cognitive architectures. There’s not one person who can describe the whole system… the ‘quants’, they call them, or the guys that are programming Wall Street’s algorithms.

They’ve already gone, in complexity, beyond any individual’s ability to really strip them down.

So, we’re surrounded by systems of immense power. Gartner and company think that in the AI space—because of the exponential nature of the investment… I think it started out, and it’s doubled every year since 2009—Gartner estimates that by 2025, that space will be worth twenty-five trillion dollars of value. So to me, that’s a couple of things.

That anticipates enormous growth, and enormous growth in power in what these systems will do. We’re in an era now that’s different from other eras. But it is like other Industrial Revolutions. We’re in an era now where everything that’s electrified—to paraphrase Kevin Kelly, the futurist—everything that’s electrified is being cognitized.

We can’t pretend that it will always be like a clock. Even now it’s not like a clock. A clock you can take apart, and you can understand every piece of it.

The cognitive architectures we’re creating now… When Ferrucci was watching Watson play, and he said, “Why did he answer like that?” There’s nobody on his team that knew the answer. When it made mistakes… It did really, really well; it beat the humans. But comparing [that] to a clock, I think that’s the wrong metaphor.

Well, let’s just poke at it just one more minute, and then we can move on to something else. Is that really fair to say, that because humans don’t understand how it works, it must be somehow working differently than other machines?

Put another way, it is fair to say, because we’ve added enough gears now, that nobody could kind of keep them all straight. I mean nobody understands why the Google algorithm—even at Google—turns up what it does when you search. But nobody’s suggesting anything nondeterministic, nothing emergent, anything like that is happening.

I mean, our computers are completely deterministic, are they not?

I don’t think that they are. I think if they were completely deterministic, then enough brains put together could figure out a multi-tiered neural net, and I don’t think there’s any evidence that we can right now.

Well, that’s exciting.

I’m not saying that it’s coming up with brilliant new ideas… But a system that’s so sophisticated that it defeats Go, and teaches grandmasters new ideas about Go—which is what the grandmaster who it defeated three out of four times said—[he] said, “I have new insights about this game,” that nobody could explain what it was doing, but it was thinking creatively in a way that we don’t understand.

Go is not like chess. On a chess board, I don’t know how many possible positions there are, but it’s calculable. On a Go board, it’s incalculable. There are more—I’ve heard it said, and I don’t really understand it very well—I heard it said there are more possible positions on a Go board than there are atoms in the universe.

So when it’s beating Go masters… Therefore, playing the game requires a great deal of intuition. It’s not just pattern-matching. Like, I’ve played a million games of Go—and that’s sort of what chess is [pattern-matching].

You know, the grandmasters are people who have seen every board you could possibly come up with. They’ve probably seen it before, and they know what to do. Go’s not like that. It requires a lot more undefinable intuition.
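
A quick back-of-the-envelope check of the positions-versus-atoms figure, using the usual order-of-magnitude estimate of about 10^80 atoms in the observable universe:

```python
from math import log10

# Rough check of the "more Go positions than atoms in the universe" claim.
# A 19x19 board has 361 points, each empty, black or white, so 3**361 is an
# upper bound on configurations (the count of strictly legal positions,
# roughly 2 x 10**170, is smaller but still astronomically large).
go_positions_log10 = 361 * log10(3)   # about 172.2, i.e. ~10**172
atoms_log10 = 80                      # common estimate for the observable universe

print(round(go_positions_log10, 1))        # 172.2
print(go_positions_log10 > atoms_log10)    # True
```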

And so we’re moving rapidly into that territory. The program that beat the Go masters is called AlphaGo. It comes out of DeepMind. DeepMind was bought four years ago by Google. Going deep into reinforcement learning and artificial neural nets, I think your argument would be apt if we were talking about some of the old languages—Fortran, Basic, Pascal—where you could look at every line of code and figure out what was going on.

That’s no longer possible, and you’ve got Go grandmasters saying “I learned new insights.” So we’re in a brave new world here.

So you had a great part of the book, where you do a really smart kind of roll-up of when we may have an AGI. Where you went into different ideas behind it. And the question I’m really curious about is this: On the one hand, you have Elon Musk saying we can have it much sooner than you think. You have Stephen Hawking, who you quoted. You have Bill Gates saying he’s worried about it.

So you have all of these people who say it’s soon, it’s real, and it’s potentially scary. We need to watch what we do. Then on the other camp, you have people who are equally immersed in the technology, equally smart, equally, equally, equally all these other things… like Andrew Ng, who up until recently headed up AI at Baidu, who says worrying about AGI is like worrying about overpopulation on Mars. You have other people saying the soonest it could possibly happen is five hundred years from now.

So I’m curious about this. Why do you think, among these big brains, super smart people, why do they have… What is it that they believe or know or think, or whatever, that gives them such radically different views about this technology? How do you get your head around why they differ?

Excellent question. I first heard that Mars analogy from, I think it was Sebastian Thrun, who said we don’t know how to get to Mars. We don’t know how to live on Mars. But we know how to get a rocket to the moon, and gradually and slowly, little by little—No, it was Peter Norvig, who wrote the sort of standard text on artificial intelligence, called AI: A Modern Approach.

He said, you know, “We can’t live on Mars yet, but we’re putting the rockets together. Some companies are putting in some money. We’re eventually going to get to Mars, and there’ll be people living on Mars, and then people will be setting another horizon.” We haven’t left our solar system yet.

It’s a very interesting question, and very timely, about when will we achieve human-level intelligence in a machine, if ever. I did a poll about it. It was kind of a biased poll; it was of people who were at a conference about AGI, about artificial general intelligence. And then I’ve seen a lot of polls, and there’s two points to this.

One is the polls go all over the place. Some people said… Ray Kurzweil says 2029. Ray Kurzweil’s been very good at anticipating the progress of technology, he says 2029. Ray Kurzweil’s working for Google right now—this is parenthetically—he said he wants to create a machine that makes three hundred trillion calculations per second, and to share that with a billion people online. So what’s that? That’s basically reverse engineering of a brain.

Making three hundred trillion calculations per second, which is sort of a rough estimate of what a brain does. And then sharing it with a billion people online, which is making superintelligence a service, which would be incredibly useful. You could do pharmacological research. You could do really advanced weather modeling, and climate modeling. You could do weapons research, you could develop incredible weapons. He says 2029.

Some people said one hundred years from now. The mean date that I got was about 2045 for human-level intelligence in a machine. And then my book, Our Final Invention, got reviewed by Gary Marcus in the New Yorker, and he said something that stuck with me. He said whether or not it’s ten years or one hundred years, the more important question is: What happens next?

Will it be integrated into our lives? Or will it suddenly appear? How are we positioned for our own safety and security when it appears, whether it’s in fifty years or one hundred? So I think about it as… Nobody thought Go was going to be beaten for another ten years.

And here’s another way… So those are the two ways to think about it: one is, there’s a lot of guesses; and two, does it really matter what happens next? But the third part of that is this, and I write about it in Our Final Invention: If we don’t achieve it in one hundred years, do you think we’re just going to stop? Or do you think we’re going to keep beating at this problem until we solve it?

And as I said before, I don’t think we’re going to create exactly human-like intelligence in a machine. I think we’re going to create something extremely smart and extremely useful, to some extent, but something we, in a very deep way, don’t understand. So I don’t think it’ll be like human intelligence… it will be like an alien intelligence.

So that’s kind of where I am on that. I think it could happen in a variety of timelines. It doesn’t really matter when, and we’re not going to stop until we get there. So ultimately, we’re going to be confronted with machines that are a thousand or a million times more intelligent than we are.

And what are we going to do?

Well, I guess the underlying assumption is… it speaks to the credibility of the forecast, right? Like, if there’s a lab, and they’re working on inventing the lightbulb, like: “We’re trying to build the incandescent light bulb.” And you go in there and you say, “When will you have the incandescent light bulb?” and they say “Three or four weeks, five weeks. Five weeks tops, we’re going to have it.”

Or if they say, “Uh, a hundred years. It may be five hundred, I don’t know.” I mean in those things you take a completely different view of, do we understand the problem? Do we know what we’re building? Do we know how to build an AGI? Do we even have a clue?

Do you believe… or here, let me ask it this way: Do you think an AGI is just an evolutionary… Like, we have AlphaGo, we have Watson, and we’re making them better every day. And eventually, that kind of becomes—gradually—this AGI. Or do you think there’s some “A-ha” thing we don’t know how to do, and at some point we’re like “Oh, here’s how you do it! And this is how you get a synapse to work.”

So, do you think we are nineteen revolutionary breakthroughs away, or “No, no, no, we’re on the path. We’re going to be there in three to five years.”?

Ben Goertzel, who is definitely in the race to make AGI—I interviewed him in my book—said we need some sort of breakthrough. And then we got to artificial neural nets and deep learning, and deep learning combined with reinforcement learning, which is an older technique, and that was kind of a breakthrough. And then people started to beat—IBM’s Deep Blue—to beat chess, it really was just looking up tables of positions.

But to beat Go, as we’ve discussed, was something different.

I think we’ve just had a big breakthrough. I don’t know how many revolutions we are away from a breakthrough that makes intelligence general. But let me give you this… the way I think about it.

There’s long been talk in the AI community about an algorithm… I don’t know exactly what they call it. But it’s basically an open-domain problem-solver that asks something simple like, what’s the next best move? What’s the next best thing to do? Best being based on some goals that you’ve got. What’s the next best thing to do?

Well, that’s sort of how DeepMind took on all the Atari games. They could drop the algorithm into a game, and it didn’t even know the rules. It just noticed when it was scoring or not scoring, and so it was figuring out what’s the next best thing to do.

Well if you can drop it into every Atari game, and then you drop it into something that’s many orders of magnitude above it, like Go, then why are we so far from dropping that into a robot and setting it out into the environment, and having it learn the environment and learn common sense about the environment—like, “Things go under, and things go over; and I can’t jump into the tree; I can climb the tree.”

It seems to me that general intelligence might be as simple as a program that says “What’s the next best thing to do?” And then it learns the environment, and then it solves problems in the environment.
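
A toy version of that “what’s the next best thing to do?” loop might look like the following tabular Q-learning sketch on a made-up six-cell corridor; the agent is never told the rules and only sees a score, in the spirit of the Atari agents mentioned above. This is an illustration, not DeepMind’s actual system.

```python
import random
from collections import defaultdict

ACTIONS = (-1, +1)          # step left or step right
N_STATES, GOAL = 6, 5       # a tiny corridor with a rewarding cell at the end

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

Q = defaultdict(float)      # Q[(state, action)] = learned value of acting here

def best_action(state):
    # "What's the next best thing to do?" -- highest-valued action, ties broken randomly.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for _ in range(300):        # episodes
    state, done = 0, False
    while not done:
        # Mostly take the best-known action, sometimes explore.
        action = best_action(state) if random.random() > 0.2 else random.choice(ACTIONS)
        nxt, reward, done = step(state, action)
        # Update the estimate of how good this action was, using only the score.
        target = reward + 0.9 * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += 0.5 * (target - Q[(state, action)])
        state = nxt

print([best_action(s) for s in range(GOAL)])  # typically [1, 1, 1, 1, 1]: walk toward the goal
```

DeepMind’s actual agents replace the lookup table with deep neural networks and learn from raw pixels, but the action-selection loop has the same basic shape.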

So some people are going about that by training algorithms, artificial neural net systems and defeating games. Some people are really trying to reverse-engineer a brain, one neuron at a time. That’s sort of, in a nutshell—to vastly overgeneralize—that’s called the bottom-up, and the top-down approach for creating AGI.

So are we a certain number of revolutions away, or are we going to be surprised? I’m surprised a little too frequently for my own comfort about how fast things are moving. Faster than when I was writing the book. I’m wondering what the next milestone is. I think the Turing Test has not been achieved, or even close. I think that’s a good milestone.

It wouldn’t surprise me if IBM, which is great at issuing itself grand challenges and then beating them… But what’s great about IBM is, they’re upfront. They take on a big challenge… You know, they were beaten—Deep Blue was beaten several times before it won. When they took on Jeopardy, they weren’t sure they were going to win, but they had the chutzpah to get out there and say, “We’re gonna try.” And then they won.

I bet IBM will say, “You know what, in 2020, we’re going to take on the Turing Test. And we’re going to have a machine that you can’t tell that it’s a machine. You can’t tell the difference between a machine and a human.”

So, I’m surprised all the time. I don’t know how far or how close we are, but I’d say I come at it from a position of caution. So I would say, the window in which we have to create safe AI is closing.

Yes, no… I’m with you; I was just taking that in. I’ll insert some ominous “Dun, dun, dun…” Take that a little further.

Everybody has a role to play in this conversation, and mine happens to be canary in a coal mine. Despite the title of my book, I really like AI. I like its potential. Medical potential. I don’t like its war potential… If we see autonomous battlefield robots on the battlefield, you know what’s going to happen. Like every other piece of used military equipment, it’s going to come home.

Well, the thing is, about the military… and the thing about technology is… If you told my dad that he would invite into his home a representative of Google, and that representative would sit in a chair in a corner of the house, and he would take down everything we said, and would sell that data to our insurance company, so our insurance rates might go up… and it would sell that data to mortgage bankers, so they might cut off our ability to get a mortgage… because dad talks about going bankrupt, or dad talks about his heart condition… and he can’t get insurance anymore… [he would never have agreed to it].

But if we hire a corporate guy, and we pay for it, and put him in our living room… Well, that’s exactly what we’re doing with Amazon Echo, with all the digital assistants. All this data is being gathered all the time, and it’s being sold… Buying and selling data is a four billion dollar-a-year industry. So we’re doing really foolish things with this technology. Things that are bad for our own interests.

So let me ask you an open-ended question… prognostication over shorter time frames is always easier. Tell me what you think is in store for the world, I don’t know, between now and 2030, the next thirteen years. Talk to me about unemployment, talk to me about economics, all of that. Tell me the next thirteen years.

Well, brace yourself for some futurism, which is a giant gamble and often wrong. To paraphrase Kevin Kelly again, everything that’s electrical will be cognitized. Our economy will be dramatically shaped by the ubiquity of artificial intelligence. With the Internet of Things, with the intelligence of everything around us—our phones, our cars…

I can already talk to my car. I’m inside my car, I can ask for directions, I can do some other basic stuff. That’s just going to get smarter, until my car drives itself. A lot of people… MIT did a study, that was quoting a Cambridge study, that said: “Forty-five percent of our jobs will be able to be replaced within twenty years.” I think they downgraded that to like ten years.

Not that they will be replaced, but they will be able to be replaced. But when AI is a twenty-five trillion dollar—when it’s worth twenty-five trillion dollars in 2025—everybody will be able to do anything, will be able to replace any employee that’s doing anything that’s remotely repetitive, and this includes doctors and lawyers… We’ll be able to replace them with the AI.

And this cuts deep into the middle class. This isn’t just people working in factories or driving cars. This is all accountants, this is a lot of the doctors, this is a lot of the lawyers. So we’re going to see giant dislocation, or giant disruption, in the economy. And giant money being made by fewer and fewer people.

And the trouble with that is, that we’ve got to figure out a way to keep a huge part of our population from starving, from not making a wage. People have proposed a basic minimum income, but to do that we would need tax revenue. And the big companies, Amazon, Google, Facebook, they pay taxes in places like Ireland, where there’s very low corporate tax. They don’t pay taxes where they get their wealth. So they don’t contribute to your roads.

Google is not contributing to your road system. Amazon is not contributing to your water supply, or to making your country safe. So there’s a giant inequity there. So we have to confront that inequity and, unfortunately, that is going to require political solutions, and our politicians are about the most technologically-backward people in our culture.

So, what I see is, a lot of unemployment. I see a lot of nifty things coming out of AI, and I am willing to be surprised by job creation in AI, and robotics, and automation. And I’d like to be surprised by that. But the general trend is… When you replace the biggest contract manufacturer in the world… Foxconn just replaced thirty-thousand people in Asia with thirty-thousand robots.

And all those people can’t be retrained, because if you’re doing something that’s that repetitive, and that mechanical… what can you be retrained to do? Well, maybe one out of every hundred could be a floor manager in a robot factory, but what about all the others? Disruption is going to come from all the people that don’t have jobs, and there’s nothing to be retrained to.

Because our robots are made in factories where robots make the robots. Our cars are made in factories where robots make the cars.

Isn’t that the same argument they used during the Industrial Revolution, when they said, “You got ninety percent of people out there who are farmers, and we’re going to lose all these farm jobs… And you don’t expect those farmers are going to, like, come work in a factory, where they have to learn completely new things.”

Well, what really happened in the different technology revolutions, back from the cotton gin onward is, a small sector… The Industrial Revolution didn’t suddenly put farms out of business. A hundred years ago, ninety percent of people worked on farms, now it’s ten percent.

But what happened with the Industrial Revolution is, sector by sector, it took away jobs, but then those people could retrain, and could go to other sectors, because there were still giant sectors that weren’t replaced by industrialization. There was still a lot of manual labor to do. And some of them could be trained upwards, into management and other things.

This is what the author Martin Ford wrote in Rise of the Robots—and there’s also a great book called The Fourth Industrial Age. As they both argue, what’s different about this revolution is that AI works in every industry. So it’s not like the old revolutions, where one sector was replaced at a time, and there was time to absorb that change, time to reabsorb those workers and retrain them in some fashion.

But everybody is going to be… My point is, all sectors of the economy are going to be hit at once. The ubiquity of AI is going to impact a lot of the economy, all at the same time, and there is going to be a giant dislocation all at the same time. And it’s very unclear, unlike in the old days, how those people can be retrained and retargeted for jobs. So, I think it’s very different from other Industrial Revolutions, or rather technology revolutions.

Other than the adoption of coal—it went from generating five percent to eighty percent of all of our power in twenty years—the electrification of industry happened incredibly fast. Mechanization, replacement of animal power with mechanical power, happened incredibly fast. And yet, unemployment remains between four and nine percent in this country.

Other than the Depression, without ever even hiccupping—like, no matter what disruption, no matter what speed you threw at it—the economy never failed to use that technology to create more jobs. And isn’t that maybe a lack of imagination that says, “Well, no, now we’re out. And no more jobs to create. Or not ones that these people who’ve been displaced can do.”

I mean, isn’t that what people would’ve said for two hundred years?

Yes, that’s a somewhat persuasive argument. I think you’ve got a point that the economy was able to absorb those jobs, and the unemployment remained steady. I do think this is different. I think it’s a kind of a puzzle, and we’ll have to see what happens. But I can’t imagine… Where do professional drivers… they’re not unskilled, but they’re right next to it. And it’s the job of choice for people who don’t have a lot of education.

What do you retrain professional drivers to do once their jobs are taken? It’s not going to be factory work, it’s not going to be simple accounting. It’s not going to be anything repetitive, because that’s going to be the job of automation and AI.

So I anticipate problems, but I’d love to be pleasantly surprised. If it worked like the old days, then all those people that were cut off the farm would go to work in the factories, and make Ford automobiles, and make enough money to buy one. I don’t see all those driverless people going off to factories to make cars, or to manufacture anything.

A case in point of what’s happening is… Rethink Robotics, which is Rodney Brooks’ company, just built something called Baxter; and now Baxter is a generation old, and I can’t think of what replaced it. But it costs about twenty-two thousand dollars to get one of these robots. These robots cost basically what a minimum wage worker makes in a year. But they work 24/7, so they really replace three shifts, so they really are replacing three people.

Where do those people go? Do they go to shops that make Baxter? Or maybe you’re right, maybe it’s a failure of imagination to not be able to anticipate the jobs that would be created by Baxter and by autonomous cars. Right now, it’s failing a lot of people’s imagination. And there are not ready answers.

I mean, if it were 1995 and the Internet was, you’re just hearing about it, just getting online, just hearing it… And somebody said, “You know what? There’s going to be a lot of companies that just come out and make hundreds of billions of dollars, one after the other, all because we’ve learned how to connect computers and use this hypertext protocol to communicate.” I mean, that would not have seemed like a reasonable surmise.

No, and that’s a great example. If you were told that trillions of dollars of value are going to come out of this invention, who would’ve thought? And maybe I personally, just can’t imagine the next wave that is going to create that much value. I can see how AI and automation will create a lot of value, I only see it going into a few pockets though. I don’t see it being distributed in any way that the Silicon Valley startups, at least initially, were.

So let’s talk about you for a moment. Your background is in documentary filmmaking. Do you see yourself returning to that world? What are you working on, another book? What kind of thing is keeping you busy by day right now?

Well, I like making documentary films. I just had one on PBS last year… If you Google “Spillover” and “PBS” you can see it is streaming online. It was about spillover diseases—Ebola, Zika and others—and it was about the Ebola crisis, and how viruses spread. And then now I’m working on a film about paleontology, about a recent discovery that’s kind of secret, that I can’t talk about… from sixty-six million years ago.

And I am starting to work on another book that I can’t talk about. So I am keeping an eye on AI, because this issue is… Despite everything I talk about, I really like the technology; I think it’s pretty amazing.

Well, let’s close with, give me a scenario that you think is plausible, that things work out. That we have something that looks like full employment, and…

Good, Byron. That’s a great way to go out. I see people getting individually educated about the promise and peril of AI, so that we as a culture are ready for the revolution that’s coming. And that forces businesses to be responsible, and politicians to be savvy, about developments in artificial intelligence. Then they invest some money to make artificial intelligence advancement transparent and safe.

And therefore, when we get to machines that are as smart as humans, that [they] are actually our allies, and never our competitors. And that somehow on top of this giant wedding cake I’m imagining, we also manage to keep full employment, or nearly-full employment. Because we’re aware, and because we’re working all the time to make sure that the future is kind to humans.

Alright, well, that is a great place to leave it. I am going to thank you very much.

Well, thank you. Great questions. I really enjoyed the back-and-forth.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.

Voices in AI

Visit VoicesInAI.com to access the podcast, or subscribe now:


Book review: Working with blockchain

By Sheetal Kumbhar

“Working with Blockchain: all the basics” is a concise, clear explanation of Bitcoin and Distributed Ledger Technology (Blockchain) that is ushering in a groundbreaking way of conducting business. There’s been a lot of hype on the Net about Bitcoin, but this publication puts the business case front and center and backs it up with real-world use cases.

The key message is obvious once you read it. “Business over the Internet is based on antiquated rules and processes that were conceived in the pre-Internet age and never designed for the scale, speed and granularity of today’s networked economy.” Think banks and letters of credit, says Bob Emmerson.

The much-needed 21st-century alternative is to employ a crypto-currency, Bitcoin, and use a distributed ledger to enable cheap, fast, worldwide payments, conducted transparently and without intermediaries. This is not only doable, it’s being done. Blockchain is improving transparency and trust, simplifying business processes, creating brand-new opportunities and bringing benefits to society and businesses.

There is a clear analogy to the IoT. Although IoT is not essential for Blockchain to function, the combination packs a powerful punch. Check it out here followed by a search on Bitcoin.

The book doesn’t go into techie details but it is comprehensive. There is a brief history of Blockchain and a chapter that covers the issues it addresses, such as immunity to hacks, the different types of Blockchain technology and business basics.

Chapter 5 covers a key topic: where and how to apply Blockchain to business processes, and it highlights aspects that add value in business ecosystems. They include: the involvement of multiple parties; areas where trust and confidentiality are important; where fast transactions are required; and where there is a community, e.g. a union, association or consortium.

Practical advice on how to get some hands-on experience comes in the next chapter. You start by installing a Bitcoin wallet and buying the small number of Bitcoins the authors propose. Then you download the Bitcoin Blockchain, which enables participation in a distributed ledger. The authors do not advise investing serious money and speculating with Bitcoins.

Now you can make and receive payments and experience how the verification of payments proceeds transparently. The remainder of the chapter provides advice on how to identify and check use cases, followed by a four-step process that will get your business up to Blockchain speed.
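
The transparency described here comes from the fact that anyone holding a copy of the ledger can independently re-verify the chain of hashes linking the blocks. A toy illustration of that idea in Python, using only the standard library (a simplified stand-in, not Bitcoin’s actual block format):

```python
import hashlib
import json

# Toy hash-chained ledger: each block commits to its transactions and to the
# previous block's hash, so anyone with a copy can re-verify the whole chain.

def block_hash(block):
    payload = json.dumps(
        {"prev": block["prev"], "transactions": block["transactions"]},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "transactions": transactions})

def verify(chain):
    """Any holder of the ledger can run this check independently."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, [{"from": "alice", "to": "bob", "amount": 0.01}])
append_block(ledger, [{"from": "bob", "to": "carol", "amount": 0.005}])
print(verify(ledger))                       # True
ledger[0]["transactions"][0]["amount"] = 1  # tamper with history...
print(verify(ledger))                       # ...and verification fails: False
```

Tampering with any earlier transaction changes that block’s hash and breaks every later link, which is what makes independent verification meaningful.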

The remaining chapters cover a variety of Blockchain solutions that include: climate change; critical infrastructure protection; financial services; government and utilities; travel and transportation; and finally the entertainment industry. The book concludes with real-world examples and references.

Conclusion: This publication is an easy, informative read that will get you up to speed with a breakthrough development that meets a real market need.

The authors: Louis de Bruin, European Blockchain leader with IBM Global Business Services, and Willem Vermeend, an Internet entrepreneur and a Professor of Economics at the Open University in the Netherlands. The book costs € 17,50 and can be ordered here.

The author of this blog is Bob Emmerson, freelance writer and telecoms industry observer.


The post Book review : Working with blockchain appeared first on IoT Now – How to run an IoT enabled business.


PatientPop brings Google posts to healthcare practices

By Sheetal Kumbhar

PatientPop, the provider of practice growth technology designed for healthcare providers, has introduced automatic Google Posts with one-click appointment booking to its customers.

“Posting on Google is a new way for doctors to share relevant, fresh content with people who are searching for a healthcare provider, using images and calls to action to engage potential patients,” said Joel Headley, director of local SEO and marketing at PatientPop.

“This enhanced format allows prospective patients to hear directly from the doctor when looking online. PatientPop is the first healthcare technology provider to scale Google Posts to thousands of its customers instantly, through our practice growth platform.”

PatientPop brings this feature to all of its customers, and will automatically populate posts on behalf of practices, using attention-grabbing graphics that prompt patients to book an appointment online.


“PatientPop is continually innovating and updating our practice growth platform to reflect the most recent technology updates,” said Headley. “We will continue to introduce new features like Google Posts to help doctors get the most out of their online listings, and to convert prospects into patients.”

In August, PatientPop was the first healthcare technology provider to offer online appointment scheduling for practices through Google. The capability enables patients to effortlessly book online appointments directly through a practice listing on Google, decreasing the steps needed to convert a prospect into a patient.


The post PatientPop brings Google posts to healthcare practices appeared first on IoT Now – How to run an IoT enabled business.
