At Smart City Expo World Congress, the international event on smart cities, Guangdong Rongwen Energy Technology Group (Rongwen) and Silver Spring Networks, Inc. announced they have been selected to connect smart LED street lights in Guangzhou, China.
Through its go-to-market partnership with Rongwen, one of the largest smart LED street light providers and operators in China, Silver Spring plans to network more than 30,000 LED street lights in Guangzhou. According to Rongwen, its efficient, patented LED street lights and outdoor lighting controls, combined with Silver Spring’s StreetLight.Vision (SLV) Central Management System (CMS), will increase the city’s energy savings by more than 70%. The project is China’s first smart city project using Silver Spring’s secure, reliable IPv6 platform and Wi-SUN® standards-based mesh technology, built on the IEEE 802.15.4g specification.
Guangzhou, which is China’s third largest economic hub and a major foreign trade port, has committed to reducing its carbon emissions by up to 45% by the end of the decade.
“Rongwen is using Silver Spring’s pioneering smart city platform to provide seamless IoT connectivity to more efficiently operate existing city-wide resources to achieve immediate cost savings and speed time to value for smart city initiatives,” said Zhixiong Lee, general manager of Rongwen.
“As one of the early movers in smart street lighting system integration, Rongwen is the first company in China to adapt and deploy internationally recognised smart city technologies. We believe that the scalability and sustainability of this system will allow cities such as Guangzhou to grow their network to millions of devices in the future.”
“Guangzhou is an example of a major hub deploying an IoT network to drive sustainability, create resource efficiency and build a more livable city, which in turn draws in new investments.
We are thrilled to connect smart city devices in our first project in China, as Rongwen deploys our standards-based platform to connect street lights and establish a foundation for additional smart city services for the Guangzhou Development Zone,” said Jeff Ross, VP of Channels, Silver Spring Networks. “By working with our partners, we continue to evolve our standards-based platform’s capabilities, in an effort to address our cities’ biggest challenges, such as traffic congestion, pollution and public safety.”
Accelerating delivery of proven smart city technology with Rongwen D-ONE
To further accelerate the delivery of Silver Spring’s IPv6 platform and solution to the growing smart city industry, Rongwen and Silver Spring today also announced the availability of the Rongwen D-ONE Wireless Outdoor Lighting Controller. The D-ONE integrates Silver Spring’s network interface cards (NICs) into Rongwen’s outdoor lighting controller to help monitor and control the brightness of the lights based on pedestrian and vehicular traffic, time of day and weather.
The D-ONE utilises a standardised 7-Pin NEMA Socket and collects a variety of energy usage information including voltage, current, lamp burning hours, and temperature. The D-ONE also integrates with SLV for seamless configuration, monitoring and real-time control.
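In rough terms, a controller like this pairs per-lamp telemetry with an adaptive dimming policy. The Python sketch below illustrates that idea only; the field names, thresholds and dimming levels are illustrative assumptions, not Rongwen’s or Silver Spring’s actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class LampTelemetry:
    """Energy-usage readings of the kind the D-ONE is said to collect."""
    voltage_v: float
    current_a: float
    burn_hours: float
    temperature_c: float

    @property
    def power_w(self) -> float:
        # Apparent power drawn by the lamp (power factor ignored for simplicity)
        return self.voltage_v * self.current_a

def dim_level(traffic_density: float, is_daytime: bool) -> int:
    """Pick a brightness (0-100%) from traffic and time of day,
    mirroring the adaptive-dimming behaviour described above."""
    if is_daytime:
        return 0        # lights off in daylight
    if traffic_density > 0.5:
        return 100      # busy road: full brightness
    return 40           # quiet road: dimmed to save energy

reading = LampTelemetry(voltage_v=230.0, current_a=0.35,
                        burn_hours=1240.0, temperature_c=41.5)
print(round(reading.power_w, 1))                          # 80.5
print(dim_level(traffic_density=0.2, is_daytime=False))   # 40
```

A real deployment would report such readings over the mesh network to the SLV CMS rather than deciding locally, but the inputs and outputs are the same.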
Comment on this article below or via Twitter: @IoTNow_OR @jcIoTnow
Read more here: www.m2mnow.biz/feed/
Speedcast, a provider of highly reliable, fully managed, remote communication and IT solutions, and SES, the satellite operator, announced an agreement to provide several hundred Mbps of connectivity into Peru. This is the first agreement between the companies in Latin America to provide Medium Earth Orbit (MEO) capacity with high throughput capabilities and low latency. The agreement marks the fourth MEO national partnership between Speedcast and SES Networks.
The agreement enables Speedcast to provide network services to mobile network operators (MNOs) and enterprise customers in areas of Peru where high performance internet is in high demand and short supply, enabling a host of latency-sensitive and bandwidth-hungry applications.
By utilising SES’s O3b constellation—the only satellite system with the high throughput and low latency required for broadband, 4G/LTE and cloud services—Speedcast will deliver internet performance, customer support, and integration with its customers’ networks on par with terrestrial fibre in the region. Speedcast is able to supply this 24/7/365 support through its network of more than 250 field engineers.
“Our unique low-latency, high throughput connectivity will help Speedcast achieve its goal of providing fibre-like connectivity and a premium experience to customers,” said Omar Trujillo, vice president of Sales for Latin America at SES Networks. “We are proud of our strong relationship with Speedcast, and pleased to help support the continued growth of its capabilities and infrastructure in the region.”
“Speedcast is happy to deliver a new level of performance for enterprises in Peru,” said Pierre-Jean Beylier, CEO at Speedcast. “The added support from SES Networks’ services will allow us to provide enterprises with the critical high-demand communications capabilities necessary to operate with speed and efficiency in today’s technology-driven market. Speedcast is building the fibre, the radio links, and Wi-Fi to extend the signal to the end users. It was a pleasure to work with SES Networks on this project as it was a real team effort.”
Farmobile, an Agtech company that collects farm data via its hardware and software solution, raised an $18.1M Series B round this week. Notable investors that participated in the funding round include Dutch Agtech venture capital firm Anterra Capital, crop insurance provider AmTrust Agriculture Insurance Services and private investors in Kansas City.
The startup, co-founded by Jason Tatge, Heath Gerlock, and Randy Nuss, previously raised a $5.5M Series A in December 2015 from Anterra Capital. The latest round brings Farmobile’s total equity funding to $23.6M since it launched in 2013.
The IoT component of the Farmobile solution is a small device called the PUC that installs on farm machinery. The system uses the PUC to listen to and wirelessly send machine data to a cloud-based Farmobile account. The PUC plugs into the tractor’s ISOBUS, or directly into the terminal. The data collected from the farm is called an Electronic Field Record (EFR). EFRs can then be shared with third parties via Farmobile’s cloud-based dashboard.
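Conceptually, an EFR is a timestamped, geolocated bundle of machine readings that can be serialised and shared. The Python sketch below shows one plausible shape; the field names are hypothetical illustrations, not Farmobile’s actual schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ElectronicFieldRecord:
    """Hypothetical shape of an EFR: machine data captured by the PUC
    and synced to a cloud account. Field names are illustrative only."""
    machine_id: str
    field_name: str
    timestamp: str    # ISO-8601 capture time
    latitude: float
    longitude: float
    readings: dict    # e.g. ground speed, fuel rate, implement state

record = ElectronicFieldRecord(
    machine_id="tractor-042",
    field_name="north-80",
    timestamp="2017-06-01T14:05:00Z",
    latitude=38.91,
    longitude=-94.64,
    readings={"speed_kph": 9.5, "fuel_lph": 22.1},
)

# Serialise for upload, and for sharing with third parties via the dashboard
payload = json.dumps(asdict(record))
print(payload)
```

Sharing an EFR then reduces to granting a third party read access to records like this one in the cloud account.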
Farmers can take corrective action because the system allows third parties to remotely upload prescription and documentation files and transfer them directly to machines and operators for immediate use.
Farmobile’s latest round is one of the biggest that Agtech startups have raised this year. The Yield, an Australian Agtech startup, closed a $6.5M Series A round in April this year, followed by Agtech startup Freight Farms, which closed a $7.3M Series B the following month.
As the Agtech market heats up, IoT-based smart farming startups are battling for customers and investment dollars. Visit our smart agriculture resource page to filter and discover IoT agriculture resources.
Read more here: feeds.feedburner.com/iot
The fog computing market opportunity will exceed $18 billion (€15.48 billion) worldwide by the year 2022, according to a new report by 451 Research. Commissioned by the OpenFog Consortium, the report, Size and Impact of Fog Computing Market, projects that the largest markets for fog computing will be, in order, energy/utilities, transportation, healthcare and the industrial sectors.
New use cases created by the OpenFog Consortium were also released that showcase how fog works in industry. These use cases provide fog technologists with detailed views of how fog is deployed in autonomous driving, energy, healthcare and smart buildings.
“Through our extensive research, it’s clear that fog computing is on a growth trajectory to play a crucial role in IoT, 5G and other advanced distributed and connected systems,” said Christian Renaud, research director, Internet of Things, 451 Research, and lead author of the report. “It’s not only a technology path to ensure the optimal performance of the cloud-to-things continuum, but it’s also the fuel that will drive new business value.”
Key findings from the report were presented during an opening keynote at the inaugural Fog World Congress conference.
In addition to projecting an $18 billion (€15.48 billion) fog market and identifying the top industry-specific market opportunities, the report also identified:
Key market transitions fueling the growth include investments in energy infrastructure modernisation, demographic shifts and regulatory mandates in transportation and healthcare.
Hardware will have the largest percentage of overall fog revenue (51.6%), followed by fog applications (19.9%) and then services (15.7%). By 2022, spend will shift to apps and services, as fog functionality is incorporated into existing hardware.
Cloud spend is expected to increase 147% to $6.4 billion (€5.50 billion) by 2022.
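Taken at face value, the 147% growth figure also pins down the implied starting point: growing by 147% multiplies the baseline by 2.47, so current spend would be roughly $2.6 billion. A quick check (the baseline is inferred from the stated figures, not quoted in the report):

```python
final_spend_bn = 6.4   # projected 2022 spend, in $ billions
growth = 1.47          # a 147% increase

# final = baseline * (1 + growth), so invert to recover the baseline
baseline_bn = final_spend_bn / (1 + growth)
print(round(baseline_bn, 2))   # 2.59
```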
“This is a seminal moment that not only validates the magnitude of fog, but also provides us with a first-row seat to the opportunities ahead,” said Helder Antunes, chairman of the OpenFog Consortium and senior director, Cisco. “Within the OpenFog community, we’ve understood the significance of fog—but with its growth rate of nearly 500 percent over the next five years—consider it a secret no more.”
The fog market report includes the sizing and impact of fog in the following verticals: agriculture, datacentres, energy and utilities, health, industrial, military, retail, smart buildings, smart cities, smart homes, transportation, and wearables.
Fog computing is the system-level architecture that brings computing, storage, control, and networking functions closer to the data-producing sources along the cloud-to-thing continuum. Applicable across industry sectors, fog computing effectively addresses issues related to security, cognition, agility, latency and efficiency.
Download the full report here.
The post Fog computing global market will exceed US$18 billion by 2022 appeared first on IoT Now – How to run an IoT enabled business.
By Byron Reese
Today’s leading minds talk AI with host Byron Reese
In this episode, Byron and James talk about jobs, human vs. artificial intelligence, and more.
Byron Reese: Hello, this is Voices in AI, brought to you by Gigaom. I am Byron Reese. Today I am so excited that our guest is James Barrat. He wrote a book called Our Final Invention, subtitled Artificial Intelligence and the End of the Human Era. James Barrat is also a renowned documentary filmmaker, as well as an author. Welcome to the show, James.
James Barrat: Hello.
So, let’s start off with, what is artificial intelligence?
Very good question. Basically, artificial intelligence is when machines perform tasks that are normally ascribed to human intelligence. I have a very simple definition of intelligence that I like. Because ‘artificial intelligence’—the definition just throws the ideas back to humans, and [to] human intelligence, which is the intelligence we know the most about.
The definition I like is: intelligence is the ability to achieve goals in a variety of novel environments, and to learn. And that’s a simple definition, but a lot is packed into it. Your intelligence has to achieve goals, it has to do something—whether that’s play Go, or drive a car, or solve proofs, or navigate, or identify objects. And if it doesn’t have some goal that it achieves, it’s not very useful intelligence.
If it can achieve goals in a variety of environments, if it can do object recognition and do navigation and do car-driving like our intelligence can, then it’s better intelligence. So, it’s goal-achieving in a bunch of novel environments, and then it learns. And that’s probably the most important part. Intelligence learns and it builds on its learning.
And you wrote a well-received book, Our Final Invention. Can you explain to the audience just your overall thesis, and the main ideas of the book?
Sure. Our Final Invention is basically making the argument that AI is a dual-use technology. A dual-use technology is one that can be used for great good, or great harm. Right now we’re in a real honeymoon phase of AI, where we’re seeing a lot of nifty tools come out of it, and a lot more are on the horizon. AI, right now, can find cancer clusters in x-rays better than humans. It can do business analytics better than humans. AI is doing what first year legal associates do, it’s doing legal discovery.
So we are finding a lot of really useful applications. It’s going to make us all better drivers, because we won’t be driving anymore. But it’s a dual-use technology because, for one thing, it’s going to be taking a lot of jobs. You know, there are five million professional drivers in the United States, seven million back-office accountants—those jobs are going to go away. And a lot of others.
So the thesis of my book is that we need to look under the hood of AI, look at its applications, look who’s controlling it, and then in a longer term, look at whether or not we can control it at all.
Let’s start with that point and work backwards. That’s an ominous statement. Can we record it at all? What are you thinking there?
Can we control it at all.
I’m sorry, yes. Control it at all.
Well, let me start, I prefer to start the other way. Stephen Hawking said that the trouble with AI is, in the short term, who controls it, and in the long term, can we control it at all? And in the short term, we’ve already suffered some from AI. You know, the NSA recently was accessing your phone data and mine, and getting your phone book and mine. And it was, basically, seizing our phone records, and that used to be illegal.
Used to be that if I wanted to seize, to get your phone records, I needed to go to a court, and get a court order. And that was to avoid abridging the Fourth Amendment, which prevents illegal search and seizure of property. Your phone messages are your property. The NSA went around that, and grabbed our phone messages and our phone data, and they are able to sift through this ocean of data because of AI, because of advanced data mining software.
One other example—and there are many—one other example of, in the short term, who controls the AI, is, right now there are a lot of countries developing battlefield robots and drones that will be autonomous. And these are robots and drones that kill people without a human in the loop. And these are AI issues. There are fifty-six nations developing battlefield robots.
The most sought after will be autonomous battlefield robots. There was an article just a couple of days ago about how the Marines have a robot that shoots a machinegun on a battlefield. They control it with a tablet, but their goal, as stated there, is to make it autonomous, to work on its own.
In the longer term, I’ll put it the way that Arthur C. Clarke put it to me when I interviewed him. Arthur C. Clarke was a mathematician and a physicist before he was a science fiction writer. And he created the HAL 9000 from 2001: A Space Odyssey, probably the most famous homicidal AI. When I asked him about the control problem of artificial intelligence, he said something like this: “We humans steer the future not because we are the fastest or the strongest creatures, but because we are the most intelligent. And when we share the planet with something that’s more intelligent than we are, it will steer the future.”
So the problem we’re facing, the problem we’re on the cusp of, I can simplify it with a concept called ‘the intelligence explosion’. The intelligence explosion was an idea created by a statistician named I. J. Good in the 1960s. He said, “Once we create machines that do everything as well or better than humans, one of the things they’ll do is create smart machines.”
And we’ve seen artificial intelligence systems slowly begin to do things better than we do, and it’s not a stretch to think about a time to come, when artificial intelligence systems do advanced AI research and development better than humans. And I. J. Good said, “Then, when that happens, we humans will no longer set the pace of intelligence advancement, it will be machines that will set the pace of advancement.”
The trouble of that is, we know nothing about how to control a machine, or a cognitive architecture, that’s a thousand or million times more intelligent than we are. We have no experience with anything like that. We can look around us for analogies in the animal world.
How do we treat things that we’re a thousand times more intelligent than? Well, we treat all animals in a very negligent way. And the smart ones are either endangered, or they’re in zoos, or we eat them. That’s a very human-centric analogy, but I think it’s probably appropriate.
Let’s push on this just a little bit. So do you…
Do you believe… Some people say ‘AI’ is kind of this specter of a term now, that, it isn’t really anything different than any other computer programs we’ve ever run, right? It’s better and faster and all of that, but it isn’t qualitatively anything different than what we’ve had for decades.
And so why do you think that? And when you say that AIs are going to be smarter than us, a million times smarter than us, ‘smarter’ is also a really nebulous term.
I mean, they may be able to do some incredibly narrow thing better than us. I may not be able to drive a car as well as an AI, but that doesn’t mean that same AI is going to beat me at Parcheesi. So what do you think is different? Why isn’t this just incrementally… Because so far, we haven’t had any trouble.
What do you think is going to be the catalyst, or what is qualitatively different about what we are dealing with now?
Sure. Well, there’s a lot of interesting questions packed into what you just said. And one thing you said—which I think is important to draw out—is that there are many kinds of intelligence. There’s emotional intelligence, there’s rational intelligence, there’s instinctive and animal intelligence.
And so, when I say something will be much more intelligent than we are, I’m using a shorthand for: It will be better at our definition of intelligence, it will be better at solving problems in a variety of novel environments, it will be better at learning.
And to put what you asked in another way, you’re saying that there is an irreducible promise and peril to every technology, including computers. All technologies, back to fire, have some good points and some bad points. AI I find qualitatively different. And I’ll argue by analogy, for a second. AI to me is like nuclear fission. Nuclear fission is a dual-use technology capable of great good and great harm.
Nuclear fission is the power behind atom bombs and behind nuclear reactors. When we were developing it in the ‘20s and ‘30s, we thought that nuclear fission was a way to get free energy by splitting the atom. Then it was quickly weaponized. And then we used it to incinerate cities. And then we as a species held a gun at our own heads for fifty years with the arms race. We threatened to make ourselves extinct. And that almost succeeded a number of times, and that struggle isn’t over.
To me, AI is a lot more like that. You said it hasn’t been used for nefarious reasons, and I totally disagree. I gave you an example with the NSA. A couple of weeks ago, Facebook was caught up because they were targeting emotionally-challenged and despairing children for advertising.
To me, that’s extremely exploitative. It’s a rather soulless and exploitative commercial application of artificial intelligence. So I think these pitfalls are around us. They’re already taking place. So I think the qualitative difference with artificial intelligence is that intelligence is our superpower, the human superpower.
It’s the ability to be creative, the ability to invent technology. That was one thing Stephen Hawking brought up when he was asked about, “What are the pitfalls of artificial intelligence?”
He said, “Well, for one thing, they’ll be able to develop weapons we don’t even understand.” So, I think the qualitative difference is that AI is the invention that creates inventions. And we’re on the cusp, this is happening now, and we’re on the cusp of an AI revolution, it’s going to bring us great profit and also great vulnerability.
You’re no doubt familiar with Searle’s “Chinese Room” kind of question, but all of the readers, all of the listeners might not be… So let me set that up, and then get your thought on it. It goes like this:
There’s a person in a room, a giant room full of very special books. And he doesn’t—we’ll call him the librarian—and the librarian doesn’t speak a word of Chinese. He’s absolutely unfamiliar with the language.
And people slide him questions under the door which are written in Chinese, and what he does—what he’s learned to do—is to look at the first character in that message, and he finds the book, of the tens of thousands that he has, that has that on the spine. And in that book he looks up the second character. And the book then says, “Okay, go pull this book.”
And in that book he looks up the third, and the fourth, and the fifth, all the way until he gets to the end. And when he gets to the end, it says “Copy this down.” And so he copies these characters again that he doesn’t understand, doesn’t have any clue whatsoever what they are.
He copies them down very carefully, very faithfully, slides it back under the door… Somebody’s outside who picks it up, a Chinese speaker. They read it, and it’s just brilliant! It’s just absolutely brilliant! It rhymes, it’s Haiku, I mean it’s just awesome!
Now, the question, the kind of ta-da question at the end is: Does the man, does the librarian understand Chinese? Does he understand Chinese?
Now, many people in the computer world would say yes. I mean, Alan Turing would say yes, right? The Chinese room passes the Turing Test. The Chinese speakers outside, as far as they know, they are conversing with a Chinese speaker.
So do you think the man understands Chinese? And do you think… And if he doesn’t understand Chinese… Because obviously, the analogy of it is: that’s all that computer does. A computer doesn’t understand anything. It doesn’t know if it’s talking about cholera or coffee beans or anything whatsoever. It runs this program, and it has no idea what it’s doing.
And therefore it has no volition, and therefore it has no consciousness; therefore it has nothing that even remotely looks like human intelligence. So what would you just say to that?
The Chinese Room problem is fascinating, and you could write books about it, because it’s about the nature of consciousness. And what we don’t know about consciousness, you could fill many books with. And I used to think I wanted to explore consciousness, but it made exploring AI look easy.
I don’t know if it matters that the machine thinks as we do or not. I think the point is that it will be able to solve problems. We don’t know about the volition question. Let me give you another analogy. When Ferrucci, [when] he was the head of Team Watson, he was asked a very provocative question: “Was Watson thinking when it beat all those masters at Jeopardy?” And his answer was, “Does a submarine swim?”
And what he meant was—and this is the twist on the Chinese Room problem—he meant [that] when they created submarines, they learned principles of swimming from fish. But then they created something that swims farther and faster and carries a huge payload, so it’s really much more powerful than fish.
It doesn’t reproduce and it doesn’t do some of the miraculous things fish do, but as far as swimming, it does it. Does an airplane fly? Well, the aviation pioneers used principles of flight from birds, but quickly went beyond that, to create things that fly farther and faster and carry a huge payload.
I don’t think it matters. So, two answers to your question. One is, I don’t think it matters. And I don’t think it’s possible that a machine will think qualitatively as we do. So, I think it will think farther and faster and carry a huge payload. I think it’s possible for a machine to be generally intelligent in a variety of domains.
We can see intelligence growing in a bunch of domains. If you think of them as rippling pools, ripples in a pool, like different circles of expertise ultimately joining, you can see how general intelligence is sort of demonstrably on its way.
Whether or not it thinks like a human, I think it won’t. And I think that’s a danger, because I think it won’t have our mammalian sense of empathy. It’ll also be good, because it won’t have a lot of sentimentality, and a lot of cognitive biases that our brains are labored with. But you said it won’t have volition. And I don’t think we can bet on that.
In my book, Our Final Invention, I interviewed at length Steve Omohundro, who’s taken upon himself—he’s an AI maker and physicist—and he’d taken it upon himself to create more or less a science for understanding super intelligent machines. Or machines that are more intelligent than we are.
And among the things that he argues for, using rational agent and economic theory—and I won’t go into that whole thing, but it’s in Our Final Invention, and it’s also on Steve Omohundro’s many websites—is this: machines that are self-aware and self-programming, he thinks, will develop basic drives that are not unlike our own.
And they include things like self-protection, creativity, efficiency with resources, and other drives that will make them very challenging to control—unless we get ahead of the game and create this science for understanding them, as he’s doing.
Right now, computers are not generally intelligent, they are not conscious. All the limitations of the Chinese Room, they have. But I think it’s unrealistic to think that we are frozen in development. I think it’s very realistic to think that we’ll create machines whose cognitive abilities match and then outstrip our own.
But, just kind of going a little deeper on the question. So we have this idea of intelligence, which there is no consensus definition on it. Then within that, you have human intelligence—which, again, is something we certainly don’t understand. Human intelligence comes from our brain, which is—people say—‘the most complicated object in the galaxy’.
We don’t understand how it works. We don’t know how thoughts are encoded. We know incredibly little, in the grand scheme of things, about how the brain works. But we do know that humans have these amazing abilities, like consciousness, and the ability to generalize intelligence very effortlessly. We have something that certainly feels like free will, we certainly have something that feels like… and all of that.
Then on the other hand, you think back to a clockwork, right? You wind up a clock back in the olden days and it just ran a bunch of gears. And while it may be true that the computers of the day add more gears and have more things, all we’re doing is winding it up and letting it go.
And, isn’t it, like… not only a stretch, not only a supposition, not only just sensationalistic, to say, “Oh no, no. Someday we’ll add enough gears that, you wind that thing up, and it’s actually going to be a lot smarter than you.”
Isn’t that, I mean at least it’s fair to say there’s absolutely nothing we understand about human intelligence, and human consciousness, and human will… that even remotely implies that something that’s a hundred percent mechanical, a hundred percent deterministic, a hundred percent… Just wind it and it doesn’t do anything. But…
Well, you’re wrong about it being a hundred percent deterministic, and it’s not really a hundred percent mechanical. When you talk about things like will—will is such an anthropomorphic term, I’m not sure we can really attribute it to computers.
Well, I’m specifically saying we have something that feels and seems like will, that we don’t understand.
If you look at artificial neural nets, there’s a great deal about them we don’t understand. We know what the inputs are, and we know what the outputs are; and when we want to make better output—like a better translation—we know how to adjust the inputs. But we don’t know what’s going on inside a multilayered neural net system, not in any high-resolution way. And that’s why they’re called black box systems; the same goes for evolutionary algorithms.
In evolutionary algorithms, we have a sense of how they work. We have a sense of how they combine pieces of algorithms, how we introduce mutations. But often, we don’t understand the output, and we certainly don’t understand how it got there, so that’s not completely deterministic. There’s a bunch of stuff we can’t really determine in there.
And I think we’ve got a lot of unexplained behavior in computers that, at this stage, we simply attribute to our lack of understanding. But I think in the longer term, we’ll see that computers are doing things on their own. I’m talking about a lot of the algorithms on Wall Street, a lot of the flash crashes we’ve seen, a lot of the cognitive architectures. There’s not one person who can describe the whole system… the ‘quants’, they call them, the guys who are programming Wall Street’s algorithms.
They’ve already gone, in complexity, beyond any individual’s ability to really strip them down.
So, we’re surrounded by systems of immense power. Gartner and company think that in the AI space—because of the exponential nature of the investment… I think it started out, and it’s doubled every year since 2009—Gartner estimates that by 2025, that space will be worth twenty-five trillion dollars of value. So to me, that’s a couple of things.
That anticipates enormous growth, and enormous growth in power in what these systems will do. We’re in an era now that’s different from other eras. But it is like other Industrial Revolutions. We’re in an era now where everything that’s electrified—to paraphrase Kevin Kelly, the futurist—everything that’s electrified is being cognitized.
We can’t pretend that it will always be like a clock. Even now it’s not like a clock. A clock you can take apart, and you can understand every piece of it.
The cognitive architectures we’re creating now… When Ferrucci was watching Watson play and asked, “Why did it answer like that?”, there was nobody on his team who knew the answer. When it made mistakes… It did really, really well; it beat the humans. But comparing [that] to a clock, I think that’s the wrong metaphor.
Well, let’s just poke at it just one more minute, and then we can move on to something else. Is that really fair to say, that because humans don’t understand how it works, it must be somehow working differently than other machines?
Put another way, it is fair to say, because we’ve added enough gears now, that nobody could kind of keep them all straight. I mean nobody understands why the Google algorithm—even at Google—turns up what it does when you search. But nobody’s suggesting anything nondeterministic, nothing emergent, anything like that is happening.
I mean, our computers are completely deterministic, are they not?
I don’t think that they are. I think if they were completely deterministic, then enough brains put together could figure out a multi-tiered neural net, and I don’t think there’s any evidence that we can right now.
Well, that’s exciting.
I’m not saying that it’s coming up with brilliant new ideas… But a system that’s so sophisticated that it defeats Go, and teaches grandmasters new ideas about Go—which is what the grandmaster who it defeated three out of four times said—[he] said, “I have new insights about this game,” that nobody could explain what it was doing, but it was thinking creatively in a way that we don’t understand.
Go is not like chess. On a chess board, I don’t know how many possible positions there are, but it’s calculable. On a Go board, it’s incalculable. There are more—I’ve heard it said, and I don’t really understand it very well—I heard it said there are more possible positions on a Go board than there are atoms in the universe.
So when it’s beating Go masters… Because the game can’t be brute-forced, playing it requires a great deal of intuition. It’s not just pattern-matching—“I’ve played a million games of Go, I’ve seen this before”—which is sort of what chess is.
You know, the grandmasters are people who have seen every board you could possibly come up with. They’ve probably seen it before, and they know what to do. Go’s not like that. It requires a lot more undefinable intuition.
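The scale claim above holds up to simple arithmetic. Below is a back-of-the-envelope sketch in Python: 3^361 counts every black/white/empty colouring of the 19×19 board, which overcounts the roughly 2.1 × 10^170 legal positions but shows the order of magnitude against the usual ~10^80 estimate for atoms in the observable universe.

```python
# Upper bound on Go positions: each of the 361 points is black, white or empty.
# This overcounts the legal positions (~2.1e170) but shows the order of magnitude.
go_upper_bound = 3 ** (19 * 19)

# Standard order-of-magnitude estimates for comparison.
atoms_in_universe = 10 ** 80
chess_positions = 10 ** 47   # a common rough estimate for chess

print(len(str(go_upper_bound)) - 1)          # 172, i.e. about 1e172
print(go_upper_bound > atoms_in_universe)    # True
print(chess_positions < atoms_in_universe)   # True: chess is "calculable" by comparison
```

So even the crudest bound on Go positions dwarfs the atom count by nearly a hundred orders of magnitude, which is why the game can’t be solved by lookup.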
And so we’re moving rapidly into that territory. The program that beat the Go masters is called AlphaGo. It comes out of DeepMind. DeepMind was bought four years ago by Google. Going deep into reinforcement learning and artificial neural nets, I think your argument would be apt if we were talking about some of the old languages—Fortran, Basic, Pascal—where you could look at every line of code and figure out what was going on.
That’s no longer possible, and you’ve got Go grandmasters saying “I learned new insights.” So we’re in a brave new world here.
So you had a great part of the book, where you do a really smart kind of roll-up of when we may have an AGI. Where you went into different ideas behind it. And the question I’m really curious about is this: On the one hand, you have Elon Musk saying we can have it much sooner than you think. You have Stephen Hawking, who you quoted. You have Bill Gates saying he’s worried about it.
So you have all of these people who say it’s soon, it’s real, and it’s potentially scary. We need to watch what we do. Then in the other camp, you have people who are equally immersed in the technology, equally smart, equally credentialed in every other way… like Andrew Ng, who until recently headed up AI at Baidu, and who says worrying about AGI is like worrying about overpopulation on Mars. You have other people saying the soonest it could possibly happen is five hundred years from now.
So I’m curious about this. Why do you think, among these big brains, super smart people, why do they have… What is it that they believe or know or think, or whatever, that gives them such radically different views about this technology? How do you get your head around why they differ?
Excellent question. I first heard that Mars analogy from, I thought, Sebastian Thrun, who said we don’t know how to get to Mars. We don’t know how to live on Mars. But we know how to get a rocket to the moon, and gradually and slowly, little by little—No, it was Peter Norvig, who co-wrote the standard text on artificial intelligence, Artificial Intelligence: A Modern Approach.
He said, you know, “We can’t live on Mars yet, but we’re putting the rockets together. Some companies are putting in some money. We’re eventually going to get to Mars, and there’ll be people living on Mars, and then people will be setting another horizon.” We haven’t left our solar system yet.
It’s a very interesting question, and very timely, about when will we achieve human-level intelligence in a machine, if ever. I did a poll about it. It was kind of a biased poll; it was of people who were at a conference about AGI, about artificial general intelligence. And then I’ve seen a lot of polls, and there’s two points to this.
One is, the polls go all over the place. Ray Kurzweil says 2029, and Kurzweil has been very good at anticipating the progress of technology. Parenthetically, he’s working for Google right now, and he has said he wants to create a machine that makes three hundred trillion calculations per second and to share that with a billion people online. So what’s that? That’s basically reverse engineering a brain.
Making three hundred trillion calculations per second, which is sort of a rough estimate of what a brain does. And then sharing it with a billion people online, which is making superintelligence a service, which would be incredibly useful. You could do pharmacological research. You could do really advanced weather modeling, and climate modeling. You could do weapons research, you could develop incredible weapons. He says 2029.
Some people said one hundred years from now. The mean date that I got was about 2045 for human-level intelligence in a machine. And then my book, Our Final Invention, got reviewed by Gary Marcus in the New Yorker, and he said something that stuck with me. He said whether or not it’s ten years or one hundred years, the more important question is: What happens next?
Will it be integrated into our lives? Or will it suddenly appear? How are we positioned for our own safety and security when it appears, whether it’s in fifty years or one hundred? So I think about it as… Nobody thought Go was going to be beaten for another ten years.
And here’s another way… So those are the two ways to think about it: one is, there’s a lot of guesses; and two, does it really matter what happens next? But the third part of that is this, and I write about it in Our Final Invention: If we don’t achieve it in one hundred years, do you think we’re just going to stop? Or do you think we’re going to keep beating at this problem until we solve it?
And as I said before, I don’t think we’re going to create exactly human-like intelligence in a machine. I think we’re going to create something extremely smart and extremely useful, to some extent, but something we, in a very deep way, don’t understand. So I don’t think it’ll be like human intelligence… it will be like an alien intelligence.
So that’s kind of where I am on that. I think it could happen in a variety of timelines. It doesn’t really matter when, and we’re not going to stop until we get there. So ultimately, we’re going to be confronted with machines that are a thousand or a million times more intelligent than we are.
And what are we going to do?
Well, I guess the underlying assumption is… it speaks to the credibility of the forecast, right? Like, if there’s a lab, and they’re working on inventing the lightbulb, like: “We’re trying to build the incandescent light bulb.” And you go in there and you say, “When will you have the incandescent light bulb?” and they say “Three or four weeks, five weeks. Five weeks tops, we’re going to have it.”
Or if they say, “Uh, a hundred years. It may be five hundred, I don’t know.” I mean in those things you take a completely different view of, do we understand the problem? Do we know what we’re building? Do we know how to build an AGI? Do we even have a clue?
Do you believe… or here, let me ask it this way: Do you think an AGI is just an evolutionary… Like, we have AlphaGo, we have Watson, and we’re making them better every day. And eventually, that kind of becomes—gradually—this AGI. Or do you think there’s some “A-ha” thing we don’t know how to do, and at some point we’re like “Oh, here’s how you do it! And this is how you get a synapse to work.”
So, do you think we are nineteen revolutionary breakthroughs away, or “No, no, no, we’re on the path. We’re going to be there in three to five years.”?
Ben Goertzel, who is definitely in the race to make AGI—I interviewed him in my book—said we need some sort of breakthrough. And then we got artificial neural nets and deep learning, and deep learning combined with reinforcement learning, which is an older technique—and that was kind of a breakthrough. When IBM’s Deep Blue beat chess, it was really just looking up tables of positions.
But to beat Go, as we’ve discussed, was something different.
I think we’ve just had a big breakthrough. I don’t know how many revolutions we are away from a breakthrough that makes intelligence general. But let me give you this… the way I think about it.
There’s long been talk in the AI community about an algorithm… I don’t know exactly what they call it. But it’s basically an open-domain problem-solver that asks something simple like, what’s the next best move? What’s the next best thing to do? Best being based on some goals that you’ve got. What’s the next best thing to do?
Well, that’s sort of how DeepMind took on all the Atari games. They could drop the algorithm into a game, and it didn’t even know the rules. It just noticed when it was scoring or not scoring, and so it was figuring out what’s the next best thing to do.
Well if you can drop it into every Atari game, and then you drop it into something that’s many orders of magnitude above it, like Go, then why are we so far from dropping that into a robot and setting it out into the environment, and having it learn the environment and learn common sense about the environment—like, “Things go under, and things go over; and I can’t jump into the tree; I can climb the tree.”
It seems to me that general intelligence might be as simple as a program that says “What’s the next best thing to do?” And then it learns the environment, and then it solves problems in the environment.
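That loop (observe the environment, pick the next best action, notice the score) can be sketched in a few lines. Below is a minimal tabular Q-learning example, the reinforcement-learning technique DeepMind combined with deep neural nets for Atari; the five-state "corridor" environment is a made-up toy for illustration, not anything from DeepMind's systems.

```python
import random

def step(state, action):
    """Toy corridor: move left (0) or right (1); reward 1 at the far end."""
    next_state = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

# The agent never sees the rules; it only tries actions and notices the score.
Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):
    state = 0
    while state != 4:
        # Mostly pick the "next best thing to do"; occasionally explore.
        if random.random() < epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy is "move right" in every state.
policy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(4)]
print(policy)
```

The point of the sketch is that nothing game-specific is coded in: swap in a different `step` function and the same loop learns a different task, which is what made the Atari result striking.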
So some people are going about that by training algorithms—artificial neural net systems—and defeating games. Some people are really trying to reverse-engineer a brain, one neuron at a time. Those are, in a nutshell—to vastly overgeneralize—the bottom-up and the top-down approaches to creating AGI.
So are we a certain number of revolutions away, or are we going to be surprised? I’m surprised a little too frequently for my own comfort about how fast things are moving. Faster than when I was writing the book. I’m wondering what the next milestone is. I think the Turing Test has not been achieved, or even close. I think that’s a good milestone.
It wouldn’t surprise me if IBM, which is great at issuing itself grand challenges and then beating them… But what’s great about IBM is, they’re upfront. They take on a big challenge… You know, they were beaten—Deep Blue was beaten several times before it won. When they took on Jeopardy, they weren’t sure they were going to win, but they had the chutzpah to get out there and say, “We’re gonna try.” And then they won.
I bet IBM will say, “You know what, in 2020, we’re going to take on the Turing Test. And we’re going to have a machine that you can’t tell that it’s a machine. You can’t tell the difference between a machine and a human.”
So, I’m surprised all the time. I don’t know how far or how close we are, but I’d say I come at it from a position of caution. So I would say, the window in which we have to create safe AI is closing.
Yes, no… I’m with you; I was just taking that in. I’ll insert some ominous “Dun, dun, dun…” Take that a little further.
Everybody has a role to play in this conversation, and mine happens to be canary in a coal mine. Despite the title of my book, I really like AI. I like its potential. Medical potential. I don’t like its war potential… If we see autonomous battlefield robots on the battlefield, you know what’s going to happen. Like every other piece of used military equipment, it’s going to come home.
Well, the thing is, about the military… and the thing about technology is… If you told my dad that he would invite into his home a representative of Google; that the representative would sit in a chair in a corner of the house, take down everything we said, and sell that data to our insurance company, so our insurance rates might go up; that he would sell it to mortgage bankers, so they might cut off our ability to get a mortgage, because dad talks about going bankrupt, or about his heart condition, and now he can’t get insurance anymore… my dad would never have allowed it.
But if we hire a corporate guy, and we pay for it, and put him in our living room… Well, that’s exactly what we’re doing with Amazon Echo, with all the digital assistants. All this data is being gathered all the time, and it’s being sold… Buying and selling data is a four billion dollar-a-year industry. So we’re doing really foolish things with this technology. Things that are bad for our own interests.
So let me ask you an open-ended question… prognostication over shorter time frames is always easier. Tell me what you think is in store for the world, I don’t know, between now and 2030, the next thirteen years. Talk to me about unemployment, talk to me about economics, all of that. Tell me the next thirteen years.
Well, brace yourself for some futurism, which is a giant gamble and often wrong. To paraphrase Kevin Kelly again, everything that’s electrical will be cognitized. Our economy will be dramatically shaped by the ubiquity of artificial intelligence. With the Internet of Things, with the intelligence of everything around us—our phones, our cars…
I can already talk to my car. I’m inside my car, I can ask for directions, I can do some other basic stuff. That’s just going to get smarter, until my car drives itself. MIT did a study, quoting a Cambridge study, that said: “Forty-five percent of our jobs will be able to be replaced within twenty years.” I think they downgraded that to something like ten years.
Not that they will be replaced, but that they will be able to be replaced. And when AI is worth twenty-five trillion dollars in 2025, anybody will be able to replace any employee doing anything that’s remotely repetitive—and this includes doctors and lawyers. We’ll be able to replace them with the AI.
And this cuts deep into the middle class. This isn’t just people working in factories or driving cars. This is all accountants, this is a lot of the doctors, this is a lot of the lawyers. So we’re going to see giant dislocation, or giant disruption, in the economy. And giant money being made by fewer and fewer people.
And the trouble with that is, that we’ve got to figure out a way to keep a huge part of our population from starving, from not making a wage. People have proposed a basic minimum income, but to do that we would need tax revenue. And the big companies, Amazon, Google, Facebook, they pay taxes in places like Ireland, where there’s very low corporate tax. They don’t pay taxes where they get their wealth. So they don’t contribute to your roads.
Google is not contributing to your road system. Amazon is not contributing to your water supply, or to making your country safe. So there’s a giant inequity there. So we have to confront that inequity and, unfortunately, that is going to require political solutions, and our politicians are about the most technologically-backward people in our culture.
So, what I see is a lot of unemployment. I see a lot of nifty things coming out of AI, and I am willing to be surprised by job creation in AI, and robotics, and automation. I’d like to be surprised by that. But the general trend is… Look at the biggest contract manufacturer in the world: Foxconn just replaced thirty thousand people in Asia with thirty thousand robots.
And all those people can’t be retrained, because if you’re doing something that’s that repetitive, and that mechanical… what can you be retrained to do? Well, maybe one out of every hundred could be a floor manager in a robot factory, but what about all the others? Disruption is going to come from all the people that don’t have jobs, and there’s nothing to be retrained to.
Because our robots are made in factories where robots make the robots. Our cars are made in factories where robots make the cars.
Isn’t that the same argument they used during the Industrial Revolution, when they said, “You got ninety percent of people out there who are farmers, and we’re going to lose all these farm jobs… And you don’t expect those farmers are going to, like, come work in a factory, where they have to learn completely new things.”
Well, what really happened in the different technology revolutions, from the cotton gin onward… The Industrial Revolution didn’t suddenly put farms out of business. A hundred years ago, ninety percent of people worked on farms; now it’s ten percent.
But what happened with the Industrial Revolution is, sector by sector, it took away jobs, but then those people could retrain, and could go to other sectors, because there were still giant sectors that weren’t replaced by industrialization. There was still a lot of manual labor to do. And some of them could be trained upwards, into management and other things.
As Martin Ford wrote in Rise of the Robots—and there’s also a great book called The Fourth Industrial Revolution—what’s different about this revolution is that AI works in every industry. So it’s not like the old revolutions, where one sector was replaced at a time, and there was time to absorb that change, time to reabsorb those workers and retrain them in some fashion.
But everybody is going to be… My point is, all sectors of the economy are going to be hit at once. The ubiquity of AI is going to impact a lot of the economy, all at the same time, and there is going to be a giant dislocation all at the same time. And it’s very unclear, unlike in the old days, how those people can be retrained and retargeted for jobs. So, I think it’s very different from other Industrial Revolutions, or rather technology revolutions.
Consider the adoption of coal: it went from generating five percent to eighty percent of all of our power in twenty years. The electrification of industry happened incredibly fast. Mechanization, the replacement of animal power with mechanical power, happened incredibly fast. And yet unemployment in this country has stayed between four and nine percent.
Other than the Depression, without ever even hiccupping—no matter what disruption, no matter what speed you threw at it—the economy never failed to use new technology to create more jobs. And isn’t it maybe a lack of imagination to say, “Well, no, now we’re out. There are no more jobs to create, or not ones that these people who’ve been displaced can do”?
I mean, isn’t that what people would’ve said for two hundred years?
Yes, that’s a somewhat persuasive argument. I think you’ve got a point that the economy was able to absorb those jobs, and the unemployment remained steady. I do think this is different. I think it’s a kind of a puzzle, and we’ll have to see what happens. But I can’t imagine… Where do professional drivers… they’re not unskilled, but they’re right next to it. And it’s the job of choice for people who don’t have a lot of education.
What do you retrain professional drivers to do once their jobs are taken? It’s not going to be factory work, it’s not going to be simple accounting. It’s not going to be anything repetitive, because that’s going to be the job of automation and AI.
So I anticipate problems, but I’d love to be pleasantly surprised. If it worked like the old days, then all those people that were cut off the farm would go to work in the factories, and make Ford automobiles, and make enough money to buy one. I don’t see all those driverless people going off to factories to make cars, or to manufacture anything.
A case in point of what’s happening is… Rethink Robotics, which is Rodney Brooks’ company, just built something called Baxter; and now Baxter is a generation old, and I can’t think of what replaced it. But it costs about twenty-two thousand dollars to get one of these robots. These robots cost basically what a minimum wage worker makes in a year. But they work 24/7, so they really replace three shifts, so they really are replacing three people.
Where do those people go? Do they go to shops that make Baxter? Or maybe you’re right, maybe it’s a failure of imagination to not be able to anticipate the jobs that would be created by Baxter and by autonomous cars. Right now, it’s failing a lot of people’s imagination. And there are not ready answers.
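The cost comparison behind the Baxter point can be checked with quick arithmetic. The figures below are the ones quoted in the conversation, not verified prices, and the wage number simply takes the claim that the robot costs roughly one minimum-wage annual salary.

```python
# Rough arithmetic behind the Baxter comparison above.
# Figures are the ones quoted in the text, not verified prices.
robot_cost = 22_000      # quoted one-time cost of a Baxter-class robot (US$)
annual_wage = 22_000     # "basically what a minimum wage worker makes in a year"
shifts_covered = 3       # a 24/7 robot covers three 8-hour shifts

labor_replaced_per_year = annual_wage * shifts_covered
payback_years = robot_cost / labor_replaced_per_year

print(labor_replaced_per_year)   # 66000: three workers' wages per year
print(round(payback_years, 2))   # 0.33: the robot pays for itself in about 4 months
```

On those numbers, the robot covers its purchase price in roughly a third of a year, which is why the "one robot replaces three people" framing matters economically.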
I mean, if it were 1995 and the Internet was, you’re just hearing about it, just getting online, just hearing it… And somebody said, “You know what? There’s going to be a lot of companies that just come out and make hundreds of billions of dollars, one after the other, all because we’ve learned how to connect computers and use this hypertext protocol to communicate.” I mean, that would not have seemed like a reasonable surmise.
No, and that’s a great example. If you were told that trillions of dollars of value are going to come out of this invention, who would’ve thought? And maybe I personally, just can’t imagine the next wave that is going to create that much value. I can see how AI and automation will create a lot of value, I only see it going into a few pockets though. I don’t see it being distributed in any way that the Silicon Valley startups, at least initially, were.
So let’s talk about you for a moment. Your background is in documentary filmmaking. Do you see yourself returning to that world? What are you working on, another book? What kind of thing is keeping you busy by day right now?
Well, I like making documentary films. I just had one on PBS last year… If you Google “Spillover” and “PBS” you can find it streaming online. It was about spillover diseases—Ebola, Zika and others—about the Ebola crisis, and how viruses spread. And now I’m working on a film about paleontology, about a recent discovery that’s kind of secret, that I can’t talk about… from sixty-six million years ago.
And I am starting to work on another book that I can’t talk about. So I am keeping an eye on AI, because this issue is… Despite everything I talk about, I really like the technology; I think it’s pretty amazing.
Well, let’s close with, give me a scenario that you think is plausible, that things work out. That we have something that looks like full employment, and…
Good, Byron. That’s a great way to go out. I see people getting individually educated about the promise and peril of AI, so that we as a culture are ready for the revolution that’s coming. And that forces businesses to be responsible, and politicians to be savvy, about developments in artificial intelligence. Then they invest some money to make artificial intelligence advancement transparent and safe.
And therefore, when we get to machines that are as smart as humans, that [they] are actually our allies, and never our competitors. And that somehow on top of this giant wedding cake I’m imagining, we also manage to keep full employment, or nearly-full employment. Because we’re aware, and because we’re working all the time to make sure that the future is kind to humans.
Alright, well, that is a great place to leave it. I am going to thank you very much.
Well, thank you. Great questions. I really enjoyed the back-and-forth.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.
Read more here:: gigaom.com/feed/
The GSMA announced that mobile operators deploying new Mobile IoT networks will be able to benefit from the European connected energy market, estimated to be worth US$26 billion (€21.99 billion) by 2026. Data shared by analyst house Machina Research highlights the huge growth opportunity in the emerging connected energy market, which could connect approximately 158 million new smart meters on LPWA networks across Europe. The total number of connections in Europe could be increased further if the 60 million cellular connections are also counted alongside LPWA.
“The Internet of Things is fundamentally disrupting the smart utility market by providing ubiquitous connectivity and real-time, actionable data. Mobile IoT networks will take this further by offering energy providers a cost-effective solution to connect millions of smart meters,” said Alex Sinclair, chief technology officer, GSMA.
“There is a real sense of momentum behind the roll-out of Mobile IoT networks with multiple global launches, however, there is still a huge runway for growth. We encourage operators to act now to capitalise on this clear market opportunity and further accelerate the development of the IoT.”
The current connected energy market, which includes applications related to the generation and transportation of energy, microgeneration, smart grid and distribution monitoring and smart metering, is worth an estimated US$11.7 billion (€9.90 billion). The European connected energy market represents approximately 21% of all global revenues, with APAC claiming 54% and the Americas 21%.
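For context, the growth those two figures imply can be computed directly. The sketch below assumes both values describe the European market and that the "current" figure refers to 2017; the release does not date it explicitly, so treat the result as indicative only.

```python
# Implied compound annual growth rate between the two market sizes quoted above.
# Assumption: the US$11.7B "current" figure is for 2017 and, like the US$26B
# forecast, refers to the European connected energy market.
current_value = 11.7   # US$ billion, assumed 2017
future_value = 26.0    # US$ billion, forecast for 2026
years = 2026 - 2017

cagr = (future_value / current_value) ** (1 / years) - 1
print(f"{cagr:.1%}")   # roughly 9.3% per year
```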
The European Commission recently published a proposal indicating that approximately 200 million electricity smart meters and 45 million gas meters will be rolled out by 2020. The Commission also estimates that by 2020, approximately 72% of European customers will have a smart meter for electricity and about 40% for gas.
“In the coming years we will see an important change in the way natural gas networks operate. The need for more efficient operations, improved safety and better quality of service will be paramount and we can do this through the roll-out of smart gas metering systems. We are moving towards the digitalisation of gas networks, a transformation from “pipe-centric” systems to “data-centric” systems.
To make this happen, reliable communication means are a must and the arrival of NB-IoT and LTE-M represents an acceleration of this evolution. These new technologies offer everything necessary, such as long battery life, penetration and data security, as well as licensed spectrum,” commented Gianfranco De Feo, executive director, Shanghai Fiorentini Ltd.
Mobile IoT networks supporting growth of connected energy
Mobile IoT networks are designed to support mass-market IoT applications across a wide variety of use cases, including connected energy solutions such as water and gas metering, smart grids, electricity and energy monitoring. They support IoT applications that are low-cost, use low data rates, require long battery lives and often operate in remote, hard-to-reach locations, making them ideal for the connected energy sector.
Mobile networks are already supporting the smart electric metering market, but now other sectors such as water and gas metering are turning their attention to the benefits of adopting NB-IoT and LTE-M networks due to low power and better […]
Read more here:: www.m2mnow.biz/feed/
The Pacific Coast Highway AMS Project is a critical element of the Caltrans Dynamic Corridor Congestion Management program
(PRWeb October 25, 2017)
Read the full story at http://www.prweb.com/releases/2017/10/prweb14836485.htm
Read more here:: www.prweb.com/rss2/technology.xml
Lyceum Capital has been in the news lately. On 16 August, www.IoTGlobalNetwork.com reported that the company was an investor in customer contact specialist Sabio, which has amassed a €35 million war chest. CEO Andy Roberts told the website exclusively what the company would spend it on. One week later, Lyceum was back in the headlines, announcing another investment, this time in Wireless Innovation.
Here, Jeremy Cowan talks to Simon Hitchcock, partner at Lyceum, about what has sparked this strong new financial interest in the Internet of Things (IoT).
IoT Global Network: What was your first investment in the Internet of Things?
Simon Hitchcock: A number of our portfolio companies are in the wider IoT ecosystem including TotalMobile and Isotrak, but Wireless Innovation is our first investment purely dedicated to connectivity in the IoT market.
IoT Global Network: What persuaded you to invest now?
SH: We have been following the development of the IoT connectivity market for a number of years, looking for the right investment opportunity. Wireless Innovation represented an attractive opportunity to back an existing team to continue to grow organically at double digits and to accelerate that through acquisitions, in what remains a fragmented market.
IoT Global Network: What was it about Wireless Innovation that singled the company out for you?
SH: The CEO and founder, Phil Rouse, has achieved very impressive organic growth, which the business is well positioned to continue. Other factors included its high quality and well invested connectivity management platform, and comprehensive managed service offering, as well as the wide range of clients and end markets it serves.
IoT Global Network: How much have you invested and what’s it for? Acquisitions? If so, what kind?
SH: Lyceum Capital has committed £20 million (€21.7 million) of funding to the buy-and-build. We plan to acquire complementary businesses to expand the geographic and sector reach of the investment.
IoT Global Network: Is this a fixed-term or open-ended investment? If fixed, for how long? If open, what are your objectives?
SH: Lyceum Capital is a long-term investor, typically holding our portfolio companies for a minimum of five years. Our plan is to develop Wireless into a leading player in the market.
IoT Global Network: What do you look for in IoT investments? Are there any golden rules?
SH: We focus on businesses with high recurring revenues, mission-critical use cases, strong organic growth and strong gross margins. We like businesses that have developed a level of their own intellectual property and real value-add in their service wrap.
IoT Global Network: Do you plan further IoT/M2M (machine-to-machine) investments in the near term?
SH: We plan to make a number of acquisitions for Wireless Innovation, both in the UK and internationally in the coming years.
IoT Global Network: Do you see other investors looking at the Internet of Things? If so, are they larger or smaller investors?
SH: The growth in the IoT market makes it attractive for investors; we would expect a range of different investors to be drawn to it for this reason.
IoT Global Network: How are IoT/M2M services and technologies viewed by investors?
SH: We view […]
Read more here:: www.m2mnow.biz/feed/
With industry analysts forecasting unprecedented change for the Financial Services and Insurance (FSI) industry over the next three years, companies need a highly flexible, open and secure communications foundation with which they can confidently embrace the future. To that end, Avaya Customer Engagement Solutions hold the key for successfully navigating digital transformation initiatives, enhancing mobile services and incorporating emerging technologies.
The FSI industry already faces a plethora of challenges, including decreased loyalty especially with younger consumers, complex operations, security and regulatory requirements that slow the pace of change, and increased competition from non-traditional companies such as Apple, Google and others.
These challenges and the rapid growth in mobile transactions have FSI companies racing to implement a wide range of digital transformation strategies. As with many major technology transformations, however, there is untold risk in choosing technology or implementing a strategy that ultimately constrains future business initiatives, results in lost revenue or, worse, lost customers, and leads to abandoned investments.
Institutions such as O-Bank, a Taiwan-based, all digital bank and Mashreq Bank in the UAE exemplify the changing financial services industry today. O-Bank selected Avaya Customer Engagement technologies for the first, Digital-From-Day-One financial services company in the country, with a 24/7 video center that enables the company to be available to customers any time of day, anywhere they need to bank.
The company has plenty of room to grow its customers and to incorporate new technologies as needs arise. Mashreq Bank is creating the Branch of the Future and expanding its mobile banking capabilities with Avaya Customer Engagement solutions, integrating the latest technology trends, including robotics, analytics, cloud and e-channels, into existing Mashreq Bank’s digital services.
Digital transformation strategies planned by traditional or non-traditional financial institutions need to accommodate ongoing development of new and emerging technologies. Biometrics, artificial intelligence, mixed reality, IoT and analytics, as well as Blockchain already promise to have significant impacts on how banking is done. The technological underpinnings have never been more important to enable rapid, highly secure, cost-effective transformation and avoid delays and dead ends.
The Avaya portfolio of software solutions and services – including Avaya Breeze, Avaya Oceana and Avaya Oceanalytics – provides a platform that enables unique differentiation while future-proofing intellectual property. This flexible, secure foundation enables ongoing transformation with minimal disruption as new demands and technologies arise.
In addition, Avaya Professional Services can assess current infrastructure, develop strategic plans and customise applications to help identify and ensure successful outcomes throughout the evolution of the business.
With Avaya, FSI companies can comfortably leverage:
Biometrics: Biometrics addresses an increasing need for security while streamlining the customer experience. Working with companies such as Nuance, Verbio and others, Avaya has been integrating biometrics into its customer experience solutions to help FSI companies enhance security for mobile transactions while delivering faster, seamless authentication.
Artificial Intelligence: AI is alive and well in Avaya's intelligent messaging automation solution, which enables chat-bot and automated response capabilities for SMS and web chat conversations, and integrates with popular social media platforms such as Facebook Messenger, Twitter, Instagram, Kik and WeChat. AI can serve a number of use […]
Read more here: www.m2mnow.biz/feed/
2017 has been the year of Wonder Woman, at least in the realm of pop culture, and now there’s a fascinating behind-the-scenes tale of the people who dreamed up the Amazonian superhero who stands for love. Professor Marston and the Wonder Women is about William Moulton Marston (Luke Evans), Elizabeth Holloway Marston (Rebecca Hall), and Olive Byrne (Bella Heathcote), three psychology researchers at Tufts University who fell in love during the liberated 1920s. Eventually they had four children (each woman bore two) and lived together for their whole adult lives. Along the way, they invented Wonder Woman together, though only William Marston (under the pen name William Moulton) was given credit for it.
It’s one of the most unusual love stories ever to be told on film, and it illuminates a time in history that most have forgotten. Between roughly 1910 and the mid-1930s, there was a flowering of feminist and sexual liberation movements in Europe and the US, leading to birth-control clinics, women’s suffrage, the infamous Kinsey Reports, and even a 1919 German film called Different from the Others, about the urgent need for gay rights. Marston, who championed women’s right to vote, was deeply involved in these movements with his partners. Byrne was the daughter of feminist activist Ethel Byrne, who cofounded the organization that later became Planned Parenthood with her sister Margaret Sanger. Elizabeth Marston was one of the first women to earn a law degree in the US and had a master’s degree in psychology.
Read more here: feeds.arstechnica.com/arstechnica/index?format=xml