By Dinis Guarda
Getting Ready For The Tsunami: AI Evolution, Blockchain and Technological Singularity Part 3
Image source: Dinis Guarda
When Will it Arrive?
When will the tech singularity arrive, if it ever arrives? Ray Kurzweil, famous for his Singularity optimism, insists that day will come in 2029, just one decade from now! Many other academics, on the other hand, think it will never happen. In a certain way, AI development (and human development! Humans are also growing and changing at an incredible pace) is a never-ending journey, and so is human consciousness. After all, technological evolution results from the joint effort of humans designing machines that transform each other.
Yoshua Bengio, a professor at the University of Montreal, says something similar about what he sees as a kind of hype surrounding AI: “We’re currently climbing a hill, and we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us.”
Will Machines Crush Humanity or Serve Earth?
To be worried about the unknown and prone to forecasting threat and dystopia is very understandable. The fear of the machine is deeply rooted in humans. The great scholar and mythologist Joseph Campbell wrote widely about it. He was an adviser to George Lucas on Star Wars, which he read through a mythological lens. In his view, Star Wars was working out the conflict of machines either crushing humanity or serving it.
Each stage of the industrial revolution (we are now entering the 4th one) reawakens this very understandable anguish: just think of the Spinning Jenny, which left so many in poverty! Alan Turing and John von Neumann, important pioneers who were among the first to anticipate the potential of AI, also raised questions similar to the ones we nervously tackle today when anticipating the impact of this tsunami. Those questions boil down to a crucial one: is what we are doing going to annihilate us, the well-known saga of Frankenstein?
Those who look fearfully at a tech singularity can rest a bit more assured if they look in detail at the current state of the technology. Most scientists are not too impressed with the evolution of neural nets, the most popular approach to AI right now, and scientists remain uncertain about where the deep learning paradigm will take us.
MIT’s Max Tegmark, for example, who participated in the book Possible Minds, estimates that “AI systems will probably (over 50 percent) reach overall human ability by 2040-50, and very likely (with 90 percent probability) by 2075.”
Image by Dinis Guarda
Other views, though, are more disheartening, claiming that AI will wipe out humanity in the near future. UC Berkeley’s Stuart Russell, Oxford’s Nick Bostrom, Tegmark, and Skype co-founder Jaan Tallinn, who founded the Centre for the Study of Existential Risk at the University of Cambridge, are all very worried and making a major effort to avoid that possibility. Curiously, concern about AI risk is less common among researchers connected with tech companies like Facebook, Google Brain, and DeepMind.
Does AI have a Will?
The argument at the core of AI safety is quite a complex one. It’s about will. An advanced AI might have an independent will. That will might truly cooperate with humans in a beneficial way, but it could also adopt approaches that humans do not approve of and that go against human ethics.
What sustains this argument is the idea of “competition” inherent to the survival-of-the-species standpoint. Jayshree Pandya writes:
“Since there is no direct evolutionary motivation for an AI to be friendly to humans, the challenge is in evaluating whether the artificial intelligence driven singularity will — under evolutionary pressure — promote their own survival over ours. The reality remains that artificial intelligence evolution will have no inherent tendency to produce or create outcomes valued by humans, and there is little reason to expect an outcome desired by mankind from any super intelligent machine.”
Again, this is a view that has a notion of separation embedded in it and bases itself on Darwin’s evolutionary theories. But Darwin’s theories have been widely questioned, and some argue he is often misunderstood and misread.
Economic Impact of AI
Regardless of whether, or when, we will have a technological singularity, we need to look at the real problems we are facing with the evolution of technology. Jobs! The main question for a future that seems to be at our doorstep is: what happens if jobs based on the current economic system can be automated? And if people and processes can be replaced by AI in most jobs, how will that shape what we understand as economics?
The economic impact of AI is now a tangible fact, so we really need to deal with it.
Economic progress has often been driven by successive waves of increasing automation, which improved production and eliminated jobs across nations, though those jobs were quickly replaced by new ones. We are now facing an upgrade of this process, one with deeper consequences. What is at stake here is a shift not only in terms of technology but also in terms of what type of economics we will have in the future.
For now, all the studies done are based on traditional economic viewpoints. Take the case of a 2015 paper by William D. Nordhaus of Yale University, which looked at the impacts of an impending technological singularity. Nordhaus studied the technological singularity from the point of view of the resources needed for it to happen. For information technology to evolve at the speed and by the date Kurzweil and others suggest, there would have to be significant productivity trade-offs. In order to devote more economic resources to producing quantum computers, one would need to decrease the production of non-information-technology goods.
We can rest assured: of the seven tests Nordhaus designed based on econometric methods, only two indicated that a Singularity was economically possible, and both of those predicted it was at least 100 years away.
Placing Blockchain In the Equation
What, then, is the place of blockchain in this complex equation? As I stated earlier, with the advancement of big data resulting from the increased datafication happening in the world, we will need powerful computers and a new system for organising that data, one fit to operate and deliver a truly interconnected, immediate and global world. Blockchain’s possible applications are innumerable, from governance to economics to identity and more, as I have written in various other articles. But we are living through the technology’s early days in terms of its true potential, and still dealing with theoretical scenarios.
The way to look at it, anyway, is by analysing AI and IoT together with blockchain. Francesco Corea gives us a picture of the wide possibilities of what he calls the blockchain-enabled intelligent IoT economy. If done correctly, blockchain could actually enable us to avoid some of the gloomier technological singularity scenarios. Bear in mind, though, that blockchain has risks of its own as well.
Getting Ready for The Tsunami, with conscious hope!
As I have shown in this article, the debate around the tech singularity is triggering an old fear of the machine. This fear has been explored by artists and storytellers through the ages in powerful narratives. HAL 9000 is one example: the calm computer from the iconic film “2001: A Space Odyssey”, who was discreetly gaining a will of his own and developing a killer instinct. But stories change, and we have other examples. Steven Spielberg’s “Artificial Intelligence”, for example, tells a different tale, that of a little cyborg boy becoming human and feeling love and empathy… for his human mother.
There is no clarity at all about when the massive intelligence explosion will occur in computers. But if we stop thinking of “supercomputers” as something outside us, as square boxes with flashing lights out there, the questions themselves change. We also need to be careful and ready for the tsunami. To do so, it is mandatory to start designing AI that is useful to all, and to the biosphere. For that we need systems thinking and intersubjective analysis that puts economics, psychology, ethics and technological development into perspective. And we need strong legislation. The future is ours, to decide which narratives to take on and which to abandon.
Read more here: www.intelligenthq.com/feed/
Posted on: March 15, 2019