
Mobile tariffs are in a continual state of change, as providers reshape and repackage mobile offers, says Strategy Analytics

By Zenobia Hegde

Mobile tariffs are in a continual state of change, as providers reshape and repackage mobile offers. Given the sheer breadth and depth of information that these offers create, service providers, regulators and consumers struggle to compare, rank or benchmark different propositions and average price points.

The Teligen division of Strategy Analytics has been tracking this pricing minefield globally for over twenty years, within its OECD Mobile Voice Price Benchmarking Service. The quarterly-updated Excel-based system incorporates the internationally acknowledged OECD methodology, and is based on the top two providers across 36 countries.

In an extension to this already comprehensive coverage, it has now launched a premium version of the service, which includes additional providers in five European markets (France, Germany, Italy, Spain and the UK) as well as in the US, to cover providers with at least 80% combined market share (typically 4-6 providers per country).

Analysis of the extended coverage within these six markets shows that a user making 100 calls a month (just over 180 minutes), sending 140 text messages, and using 2GB of data can save, on average, over USD PPP 200 a year, depending on the choice of provider. The graph below shows how costs in these extended markets compare for this basket.

Source: OECD Mobile Voice Premium Service, November 2017 (custom country & provider selection)

“Italian provider Tre is an example of where major cost savings can be found. It is currently offering an incredibly competitive tariff – ALL-IN Start, which gives users 500 minutes, 500 texts and 5GB of data a month for €5, with a six-month promotional offer of 4G LTE speed at no extra cost (thereafter €1 per month). Without the 4G option, this works out at just under USD PPP 80 per year.

This is half the cost of the cheapest offer from WIND, which merged with Tre in 2016, and over 75% cheaper than its closest-priced competitor, TIM. Of course, it is important to look at the specifics of each offer and consider it in the context of the specific usage profile, but in any event, this represents a significant cost saving,” stated Josie Sephton, director of Strategy Analytics’ Price Benchmarking division, Teligen.
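
As a rough sanity check, the quoted annual figure follows from simple arithmetic. In the sketch below, the EUR-to-USD PPP conversion factor is an assumption chosen to be consistent with the article’s quoted numbers, not an official OECD rate.

```python
# Back-of-envelope check of the Tre ALL-IN Start figures quoted above.
monthly_fee_eur = 5.0    # ALL-IN Start base price per month
lte_addon_eur = 1.0      # 4G LTE add-on per month after the six-month promo
eur_to_usd_ppp = 1.33    # assumed conversion factor, not an official OECD rate

annual_without_4g = 12 * monthly_fee_eur * eur_to_usd_ppp
annual_with_4g = (12 * monthly_fee_eur + 6 * lte_addon_eur) * eur_to_usd_ppp

print(f"Without 4G: USD PPP {annual_without_4g:.0f} per year")   # ~80, as quoted
print(f"With 4G after the promo: USD PPP {annual_with_4g:.0f} per year")
```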

According to Angela Toal, senior tariff analyst with Teligen, “While the Italian example is especially dramatic, large differences in costs can be found in other countries. The new Premium service gives access to much more in-depth coverage of providers in selected markets, allowing for much greater insight into what the key competitors are really up to.”


Cybeats raises seed round of investment led by iNovia Capital to create ‘immune system’ for IoT devices

By Zenobia Hegde

Cybersecurity company Cybeats, which provides a continuous cyber defence solution for Enterprise IoT infrastructure, announced it has raised a seed round of investment from iNovia Capital, Maple Leaf Angels (MLA48 II), and cybersecurity industry insiders Dr. Richard Reiner, Brian O’Higgins, and Brian Bourne.

“iNovia is excited to invest in Cybeats and its brilliant team, alongside a great group of investors. IoT and cybersecurity is a space that we have been looking into for a while, and we believe Cybeats’ technology is solving a real problem, with the potential to become a global leader in the space,” says Magaly Charbonneau, principal at iNovia Capital.

“Maple Leaf Angels is pleased to participate in this round with iNovia, Richard and other angels, through its MLA48 Fund II. The cybersecurity problem is a growing global issue, particularly in IoT devices. We believe that Cybeats has the right team and technology to tackle this problem,” said managing director Prathna Ramesh.

Brian Bourne added, “The Cybeats team has built a unique, efficient and highly effective solution to the IoT security problem. They have developed their technology with a keen eye to the needs of enterprise and I am excited to be involved with such a talented team.”

“IoT essentially equips the internet with arms and legs, and brings about a whole new level of concern with safety and security. New technology is needed to address the problem, and Cybeats is right on course to deliver solutions,” said Brian O’Higgins.

New chairman for Cybeats

Cybeats is also announcing that Dr. Reiner, a renowned serial cybersecurity entrepreneur and executive with a long string of successful exits, is joining as chairman of the company. Dr. Reiner brings to Cybeats many years of experience at the cutting edge of the cybersecurity industry – his insights will enable Cybeats to better identify business opportunities in the IoT cybersecurity market and to deliver better products faster.

“Cybeats solves a critical problem for enterprises deploying IoT technology, and for the manufacturers of these devices”, said Dr. Reiner. “IoT devices, until now, have been vulnerable and easy targets for hackers. Cybeats protects IoT devices throughout their lifecycle, so enterprises can benefit from the value of IoT technologies without increasing their risk profile.”

The funding will be used to scale Cybeats’ sales team, to further develop new Artificial Intelligence and Machine Learning capabilities for the Cybeats platform, and to expand the company’s services into APAC and EU markets. Cybeats builds immune-system-like cybersecurity software for Enterprise IoT (EoT), Industrial IoT (IIoT), and Critical Infrastructure IoT devices, such as those used in energy grids.

Taking an “inside out” approach to cybersecurity, Cybeats’ software is embedded into the devices to provide continuous protection, allowing devices to detect the most sophisticated threats instantly and gather data to help security professionals neutralise them. Cybeats’ Cloud service then analyses data from the infected device, and provides a full threat diagnosis and treatment plan.
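
Cybeats has not published implementation details, but the “inside out” pattern described above (an embedded agent that compares device behaviour against a known-good baseline and ships telemetry to a cloud service for diagnosis) can be sketched roughly as follows. All names, thresholds and fields here are illustrative, not Cybeats’ actual design.

```python
# Illustrative sketch of an embedded "inside out" monitoring agent.
# Nothing here reflects Cybeats' actual implementation.
import json
import time

BASELINE = {"max_conns_per_min": 20, "allowed_ports": {443, 8883}}

def read_counters():
    # Stub: a real agent would sample the device's network stack here.
    # These values simulate a compromised device (telnet open, chatty).
    return {"conns_per_min": 42, "open_ports": {23}}

def is_anomalous(c):
    return (c["conns_per_min"] > BASELINE["max_conns_per_min"]
            or not c["open_ports"] <= BASELINE["allowed_ports"])

def send_telemetry(c):
    # Stub: in practice, an authenticated upload to the cloud service,
    # which would respond with a diagnosis and treatment plan.
    print(json.dumps({"device": "device-0001",
                      "conns_per_min": c["conns_per_min"],
                      "open_ports": sorted(c["open_ports"])}))

for _ in range(3):            # a real agent would loop indefinitely
    counters = read_counters()
    if is_anomalous(counters):
        send_telemetry(counters)
    time.sleep(1)
```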

With an estimated 50 billion connected devices by the year 2020, the Internet of Things (IoT) is quickly becoming an integral part of our world. But with the increased opportunity the IoT brings, so too comes a multitude of new threats to cybersecurity […]


Samsung is Betting Big on the Internet of Things. What Does That Mean For You?

By Chris Morris

Samsung is going all in on the Internet of Things, betting that connected appliances and faster Internet speeds will result in happier customers.

Executives for the electronics giant, speaking at the 2018 CES technology show in Las Vegas on Monday, reconfirmed a vow made two years ago that all of the company’s products will be IoT-compatible by 2020–adding that 90% already are as of today. And it plans to use its existing SmartThings app to ensure that those devices can all talk to each other–from the TV to the phone to the refrigerator to the washing machine.

Samsung promised that the initiative would debut in the spring.

What that means for users will depend on which Samsung appliances they own, of course. For example, people who buy a new Samsung TV will no longer have to worry about entering user names and passwords for services like Netflix, Hulu, and Spotify when they initially set up their TVs. That information will automatically be entered into the TV by checking other systems in which the customer is logged in, making for a more seamless experience.

TV sets will also have personalized recommendations for movies and shows, based on a user’s overall viewing habits on all their devices. TVs will also include Bixby, Samsung’s voice-controlled digital assistant, and will be able to double as a central hub for smart products around the home, letting users do everything from seeing who is at the front door to adjusting the thermostat.

The goal of the push, says Tom Baxter, president and CEO of Samsung’s North American division, is to create an “eco-system of devices working together to produce unique experiences.”

As part of the initiative, Samsung will expand the number of smart refrigerator models it sells by introducing 14 new models that come with its integrated screen and newly expanded FamilyHub technology that lets owners stream music, leave notes to each other, and view the contents of the fridge in real time. Through Bixby, the FamilyHub will differentiate between users based on their voices and give them custom information, such as their schedule for the day or commute times to school or work.

“IoT is still frustrating to a lot of people, but it doesn’t need to be,” said Yoon Lee, senior vice president at Samsung Electronics.

Samsung’s not stopping with its own products, either. The company is working closely with the Open Connectivity Forum, the world’s largest IoT standardization body, to have all SmartThings-compatible products work with the app. And the company said it has signed an agreement with a “leading European auto manufacturer” to extend this IoT integration to vehicles (letting people check, for instance, whether they’re out of milk as they drive by the store).

Samsung has previously tried and failed to make its ecosystem more connected. To ensure this effort is more successful, Baxter said, the company has invested $14 billion in research and development over the past year.


Can Blockchain ‘transform’ healthcare? Simple answer: no.

By Jon Collins

We’d all love to see technology improve patient care, reduce diagnosis and therapy times and otherwise help us live longer, wouldn’t we? So could distributed ledger technology Blockchain hold the key? In a recent article on PoliticsHome, Member of Parliament John Mann highlighted a couple of areas where Blockchain might offer “transformative potential” to the UK’s National Health Service (NHS):

“By enabling ambulance workers, paramedics, and A&E staff instant access to medical records updated in real-time, medical care could be carefully targeted to a person’s specific needs. The ability to upload results of scans, blood samples and test results and have them accessed by the next practitioner near-instantly, without the risk of error offers the chance to improve survival rates in emergency care and improve care standards across our health service,” he wrote.

While the Rt. Hon. Mr Mann (or his advisors) may be correct in principle, he is falling into an age-old trap by issuing this kind of statement without caveat. While Blockchain is a powerful tool (as I noted in my 2018 predictions), it isn’t true that it enables anything, any more than a chisel enables a sculptor to sculpt. Sure, it could help, but it needs to be in the right hands and used in the right way.

Of course, some might say, this point should be taken as read. If that were the case, however, we would not see money repeatedly thrown at technology as a singular solution to otherwise insurmountable problems, in healthcare and beyond, only for it to fail to deliver, to general wailing and gnashing of teeth.

To continue the sculptor analogy, if the poor fellow is asked to deliver the thing in impossible timescales, designed by committee and with conflicting expectations of what it will look like, the result will probably be a mess. As sculpture, so technology, whatever the chisel manufacturers or IT vendors might have us think.

A massively complex organisation seemingly ruled (depending on who you ask) by metrics, efficiency, litigation prevention and so on will see such criteria impact any solution — either by design, or in consequence. I’m not critiquing the NHS here, just observing that IT will always take a subordinate role to its context. It’s the same in the US or any other country.

Perhaps Blockchain could help, but so could any number of technologies — if used in the right way. Indeed, you could take the above quote and apply it to any data management capability or service from the past four decades, or to mobile, IoT and so on, and it would still make sense. As healthcare writer Dan Munro notes on HealthStandards.com:

“The technical reality is that all of the features of a blockchain – except double spending – can easily be created with other tools that are readily available – and cheap – without actually being a ‘blockchain.’”

While I’m well aware of both the many potential uses of blockchain in healthcare and the dangers of simply being one of “those armed with spears” (according to my old colleague and healthcare author Jody Ranck), the horse needs to be put before the cart: tech will not, by itself, solve any problems for anyone. This is more than a glib riposte to a quote taken out of context. For Blockchain to work in the way Mr Mann suggests, it would have to be rolled out widely, across a health service. Flagging up technologies is easy, but delivering transformation is astonishingly hard: our representatives need to understand, accept and design this in from the outset, for any “transformative potential” to be achieved.

(Jon was CTO of healthcare startup MedicalPath2Safety)


Root of Trust-based automatic registration to the AWS cloud

By Zenobia Hegde

Going to important conferences tends to concentrate the mind on what you want to talk about and what you want to demonstrate. This was definitely the case with the recent ARM TechCon in sunny Santa Clara. My team has been very busy working with our IoT products during 2017, and we were delighted when we managed to finalise a really exciting demo just in time to show it at the conference.

The cool thing about this demo was that we showed how to use our well-proven Root of Trust (RoT) injection, which we’ve repeated well over a billion times in smartphones, in a new environment: the microcontroller space. Using this RoT, we have an established trust anchor in the device which can be used to sign things, says Chris Loreskar of Trustonic.

At any later date, this can then be attested by a remote entity… and this is what underpins the demo that we showed. Our demo showed a secure thermometer enrolling with its OEM cloud for the first time and then pushing sensor data to it, with the services being hosted by Amazon Web Services (AWS).

Before I describe the demo, it is worth emphasising that the RoT is typically generated in a factory, although for this demo that step was merely simulated. The demo begins with the device being powered on for the first time; it creates a Certificate Signing Request (CSR), which it then signs with the RoT key it possesses. The CSR is needed because AWS uses MQTT for communication with edge nodes and requires that edge devices are enrolled with the AWS X.509 PKI. For this reason, the device needs a client certificate for the TLS communication.

The signed CSR is then transmitted to AWS for an enrolment request with the fictional OEM’s Virtual Private Cloud (VPC). The OEM’s VPC forwards this signed statement to a Trustonic VPC (which hosts records [public statements] of all created devices), for device attestation purposes. The Trustonic VPC validates the RoT-signed CSR and returns this verdict to the caller, which subsequently asks for the device to become enrolled. Once this has happened, the certificate is created and returned to the device, where it is stored and used for subsequent secured communication of the sensor data.
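
As a minimal sketch of the device-side steps, using Python’s cryptography package: the device generates a TLS key pair, builds a CSR, and then produces a detached signature over the CSR with its RoT key. The RoT key is generated on the spot here, just as the factory injection step was simulated in the demo, and all names are illustrative.

```python
# Sketch of the first-boot enrolment steps described above (simulated RoT).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# 1. Device generates a TLS client key pair on first power-on.
device_key = ec.generate_private_key(ec.SECP256R1())

# 2. Build the CSR needed for enrolment with the AWS X.509 PKI.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-0001")]))
    .sign(device_key, hashes.SHA256())
)
csr_pem = csr.public_bytes(serialization.Encoding.PEM)

# 3. Sign the CSR with the RoT key so the attestation service can verify
#    the device. In production this key is injected in the factory; here
#    it is generated purely for illustration.
rot_key = ec.generate_private_key(ec.SECP256R1())
attestation_sig = rot_key.sign(csr_pem, ec.ECDSA(hashes.SHA256()))

# 4. (csr_pem, attestation_sig) would now be sent to the OEM's VPC, which
#    forwards them for verification and, on success, returns the client
#    certificate used for MQTT-over-TLS to AWS IoT.
```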

Of course, this was simply a way to demonstrate the possibilities of a secured endpoint pushing sensor data to the cloud; it could equally have been a fitness tracker or similar device. The fact that the device has an injected RoT enables OEMs to be sure that devices requesting to enrol with their services are actually genuine and not emulators or counterfeits.

However, having a RoT is usually not enough, because a legitimate OEM and a counterfeit one, both using chips from the same SiP, would equally pass our device attestation checks. But fear not! We solve this using our patent-pending solution, which we call digital holograms. Read more about it here.

And for those of you who really want to know how we did this: we used […]


How to Avoid Data Overload in Web Personalization

By admin

The internet of things (IoT) and Big Data have added a whole new dimension to digital marketing and web personalization. The more people use their devices for check-ins, activity tracking, and surfing the web, the larger the set of data that is created.

The issue, then, is not the amount of data created (we have more than enough available to personalize any website), but how to avoid becoming overwhelmed by its sheer volume, and how to analyze and categorize it into usable data.

So how do we avoid this data overload and make big data useful for web personalization?

Analyze Your Current Website

Before you embark on any web personalization campaign, it is important to take a look at your current website and the data you are already gathering, even if you are doing so passively at the moment.

Your domain, where it is hosted, and how your analytics are structured can tell you a great deal about how far you have to go to achieve optimal personalization. Are you running on a WordPress platform? What plugins or tools are you currently using to gather data about your customers? Are you using Google Analytics or a more robust program?

Letting your users create an account they can sign into every time they return to the site not only makes things more convenient for them, but also makes data gathering easier and increases the amount of data you receive.

Allowing a user to register or log in with a social media account enables you to gather their public information from their profile as well. This can provide you with a great deal of data: age, location, marital status, likes and dislikes, and interactions they may have with your competition.

Of course, this adds data to the already large pile you have gathered. This means you must decide what data is relevant to your website personalization efforts, and what data you can ignore.

Analyze Social Media

Who are your followers on social media? The question goes beyond simply what their names are or where they are from. What income bracket are they in? Do they own or rent? What is the gender mix, and what age range are you reaching?

The most important of these questions is whether you are reaching the target audience you were aiming for. Do the followers you are reaching on social media match your marketing persona?

You can find all of this information through each social media network’s own analytics, including Twitter, Facebook, and others. You can also use programs like Tweepsmap for Twitter, or Hootsuite and others for Facebook and other networks, to gather and categorize this data.

Much like the data you have gathered from your website, you need to combine and categorize that data to make it usable. What do you really need to know about your customers, and what information is not nearly as relevant?

Analyze Your Goals

What are you trying to accomplish with personalization? Generally speaking, the goal is to meet the user where they are, and shorten the buyer’s journey by directing them to what they want or need without a lot of needless searching and discovery.

However, there are two things you need to target as precisely as possible. First, what is the marketing persona you have created, and do the customers you are attracting match it? If they do not, either your targeting and messaging are wrong, or perhaps you have not chosen the correct target. Either way, you need to shift these efforts before you work too hard on personalization.

The second question is what information you need to reach that persona. If you are selling shoes, directing the person to the appropriate landing page may involve knowing their gender, so you can direct them to men’s or women’s shoes initially, and let them search from there.

At the same time, in the same scenario knowing the web visitor’s location will help you direct them to the shoes popular in their region. While socks and sandals may be popular in the Northwestern United States, the same shoes will not be popular in New York City or Atlanta. Winter boots will not sell as well in Phoenix as they do in Colorado.
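
A minimal sketch of that kind of rule-based routing might look like the following; the regions, catalogue paths and visitor fields are invented for illustration.

```python
# Illustrative rule-based landing-page routing by gender and region.
REGIONAL_FEATURED = {
    "pacific_northwest": "sandals",
    "northeast": "dress-shoes",
    "mountain_west": "winter-boots",
}

def landing_page(visitor):
    gender = visitor.get("gender", "all")
    featured = REGIONAL_FEATURED.get(visitor.get("region"), "bestsellers")
    return f"/shoes/{gender}/{featured}"

print(landing_page({"gender": "womens", "region": "mountain_west"}))
# -> /shoes/womens/winter-boots
print(landing_page({}))  # unknown visitor falls back to /shoes/all/bestsellers
```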

No matter what your product or who your target audience is, certain information will help you personalize their journey to buying your product, while other information will not. Only collect and analyze the data you need, to prevent overload.

Web personalization is one of the keys to successful marketing, but the data available from the Internet of Things and other sources can be staggering. Gathering only what you need and analyzing it properly can help prevent data overload, and make your web personalization efforts more successful.


One small step for 5G, one giant step for wireless

By Zenobia Hegde

Today marks a momentous milestone for the wireless industry: the finalisation of the 3GPP Non-Standalone (NSA) 5G New Radio (NR) Standard. The formalisation of the NSA standard anchors the coming Standalone (SA) version and represents a remarkable step forward for the industry.

With NSA 5G NR, players across the ecosystem for the first time have rallied around a single internationally recognised specification for 5G radio systems. It provides the technological foundation for the industry to begin testing and commercialising the next generation of wireless services and devices, says Asha Keddy of Intel.

I’d like to congratulate the parties that contributed to the development of this standard, setting the foundation for an interoperable global marketplace ripe with economic opportunity and technological possibility. As I commented in the related news release, Intel participated in the process, working closely with mobile industry leaders to support the standard and accelerate the first NR trials.

Many of the early use cases will fall into the realms of enhanced Mobile Broadband (eMBB), the Internet of Things (IoT), as well as Vehicle-to-Everything Communications (V2X). Intel has already been innovating in many of these segments, conducting field trials with the Intel 5G Mobile Trial Platform (MTP) and the Intel GO 5G Automotive Platform. To prepare for the specification finalisation, we have to be two to three years ahead of the standards when it comes to gathering key learnings about use cases and their related performance requirements.

Intel is proud to have contributed to the development of NSA 5G NR with proprietary research, reference designs and insights from a range of trials. Intel’s many contributions spanned the specification, including coding, error correction, modulation, spatial sub-channelisation, beamforming, reference symbol designs, radio link adaptation and more. Intel also produced prototypes for use in the testing of pre-5G standards, including 5GTF.

Those system stacks were, in essence, the parents of NSA and SA 5G NR – providing evidence of what was possible and informing the standard’s specifications. During this process, we collaborated with industry innovators like Ericsson and Nokia, and leading operators like AT&T, Korea Telecom, NTT Docomo and Verizon.

We’ve rapidly evolved our MTP in lockstep with the progress of the NR specification, preparing it for the finalisation of the standard. It’s NSA 5G NR-ready, giving equipment manufacturers the platform they need to test interoperability, and operators the ability to simulate real-world use cases. Powered by high-performance Intel FPGAs and Intel® Core™ processors, the MTP will have a key role in informing the pending SA standard within Release 15 in June 2018.

We already have several NSA 5G NR trials lined up using the MTP alongside our 5G RFIC supporting sub-6 GHz and mmWave, and our 5G RFFE for operations in the 28 GHz and 39 GHz bands. The learnings from our interoperability testing and real-world trials are foundational for our first commercial NSA/SA 5G NR-capable multimode solutions, the Intel® XMM™ 8000 series modem, with customer devices expected in 2019. These solutions will support a variety of use cases, including PCs, mobile phones, fixed wireless CPE and even vehicles.

Of course, it’s important to remember that as momentous as this occasion is, it’s really just […]


Riot starts with a claim to industry’s lowest power NB-IoT and eMTC baseband chip

By Jeremy Cowan

Internet of Things (IoT) newcomer, Riot Micro is claiming that a radical design approach applying BLE/Wi-Fi architecture has delivered a new cellular IoT solution with cost/power levels that are characteristic of short-range wireless systems. Here Peter Wong, CEO, describes the development to Jeremy Cowan.

Semiconductor start-ups are rare things these days. Vancouver, Canada-based Riot Micro has made its market debut in the IoT sector with what it claims is the industry’s lowest power baseband modem chip for cellular IoT.

The company began life a decade ago working on LTE IP technology to license to the general market. Then, three years ago, in search of faster growth, Riot changed direction and brought in Peter Wong as CEO. “We retooled and refinanced,” he tells IoT Now, “grew to about 30 people, and developed a chip for cellular IoT, based on LTE NB1 and eMTC specifications.”

Peter Wong, Riot Micro’s CEO

“Why IoT?” we ask.

“Because that’s where the majority of growth was. If you look at other cellular technologies – you know, Cat 3, 4, 5, 6 – as you go higher and higher for the smartphones and tablets of the world, it gets harder and harder for a start-up to compete realistically. The key differentiators are integration and powerful processors with Snapdragons etc., and going up against the Qualcomms of the world didn’t make a whole lot of sense,” says Wong.

“When the standards started to evolve for M2M (machine-to-machine communications) it looked like there could be a significant inflexion point where the requirements changed significantly and where processor technology was not the King of the Game. It was about optimising for more performance and lower power, and of course much, much lower cost.”

“Cost being a huge factor in services with low ARPUs (average revenues per user),” IoT Now suggests.

“Exactly. That drove why we formed the team that we did. LTE is a relatively sophisticated protocol and technology relative to other wireless technologies like BLE and WiFi. But when you break it right down and look at NB1 and eMTC, we felt there was a ton of simplification you could do technically, implementation-wise and speed-wise. When you’re driving 200kbps or even 1Mbps you can take certain design approaches that are extremely power-efficient and really help drive the cost down. The memory is an example. We optimised the LTE protocol stack so that it only does NB1 and MTC, so we could minimise the amount of memory required. Our protocol stack operates entirely within the memory within our chip.”

So the Riot Micro RM1000 has been built using Bluetooth Low Energy (BLE) and Wi-Fi architecture techniques to deliver a cellular IoT solution with the low power and cost levels of short-range wireless systems. The RM1000 is now being offered to module manufacturers and OEMs designing narrowband IoT (NB-IoT) and eMTC systems that can include automotive, asset management, home automation, industrial, point-of-sale, smart energy, and vending applications.

Asked who the company sees as its key rivals, Peter Wong tells IoT Now it would be companies like Sequans and Altair […]


Face authentication and the future of security

By Zenobia Hegde

Apple’s iPhone X has given us a glimpse into the future of personal data security. By 2020 we’ll see billions of smart devices being used as mobile face authentication systems, albeit with varying degrees of security. The stuff of science fiction for years, face recognition will surpass other legacy biometric login solutions, such as fingerprint and iris scans, because of a new generation of AI-driven algorithms, says Kevin Alan Tussy, CEO of FaceTec.

The face recognition space has never received more attention than after the launch of Face ID. But with the internet now home to dozens of spoof videos fooling Face ID with twins, relatives and even olives for eyes, the expensive hardware solution has left many questioning whether this is just another missed opportunity to replace passwords.

Face Recognition is a biometric method of identifying an authorised user by comparing the user’s face to the biometric data stored in the original enrolment. Once a positive match is made and the user’s liveness is confirmed, the system grants account access.

A step up in security, Face Authentication (Identification + Liveness Detection) offers important and distinct security benefits: no PIN or password memorisation is required, there is no shared secret that can be stolen from a server, and the certainty that the correct user is logging in is very high.

Apple’s embrace of Face ID has elevated face recognition into the public consciousness, and when compared to mobile fingerprint recognition, face recognition is far superior in terms of accuracy. According to Apple, their new face scanning technology is 20 times more secure than the fingerprint recognition currently used in the iPhone 8 (Touch ID) and Samsung S8. Using your face to unlock your phone is, of course, a great step forward, but is that all a face biometric can do? Not by a long shot.

While the goal of every new biometric has been to replace passwords, none have succeeded because most rely on special hardware that lacks liveness detection. Liveness detection, the key attribute of Authentication, verifies the correct user is actually present and alive at the time of login.

True 3D face authentication requires identity verification plus depth sensing plus liveness detection. This means photos or videos cannot spoof the system, nor can animated images like those created by CrazyTalk; even 3D representations of a user, like projections on foam heads, custom masks, and wax figures, are rebuffed.
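
In outline, an authentication decision along these lines simply gates access on all three checks passing. The functions below are stubs for illustration only, not FaceTec’s actual API.

```python
# Illustrative sketch of a 3D face authentication gate (all stubs).
def match_score(capture, template):
    return 0.99   # stub: a real system compares face embeddings

def has_real_depth(capture):
    return True   # stub: rejects flat photo/video replays

def is_live(capture):
    return True   # stub: e.g. checks for natural motion and response

def authenticate(capture, template, threshold=0.98):
    return (match_score(capture, template) >= threshold
            and has_real_depth(capture)
            and is_live(capture))

print(authenticate(capture="camera-frames", template="enrolment-data"))  # True
```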

With the average price of a smartphone hovering around £150 (€170.58), expensive hardware-based solutions, no matter how good they get, won’t ever see widespread adoption. For a face authentication solution to be universally adopted, it must be a 100% software solution that runs on the billions of devices with standard cameras that are already in use, and it must be more secure than current legacy options (like fingerprint and 2D face).

A software solution like ZoOm from FaceTec can be quickly and easily integrated into nearly any app on just about any existing smart device. ZoOm can be deployed to millions of mobile users literally overnight, and provides […]


Voices in AI – Episode 23: A Conversation with Pedro Domingos

By Byron Reese

Today’s leading minds talk AI with host Byron Reese

In this episode Byron and Pedro Domingos talk about the master algorithm, machine creativity, and the creation of new jobs in the wake of the AI revolution.




Visit VoicesInAI.com to access the podcast, or subscribe now.

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Pedro Domingos, a computer science professor at the University of Washington, and the author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake our World. Welcome to the show, Pedro.

Pedro Domingos: Thanks for having me.

What is artificial intelligence?

Artificial intelligence is getting computers to do things that traditionally require human intelligence, like reasoning, problem solving, common sense knowledge, learning, vision, speech and language understanding, planning, decision making and so on.

And is it artificial in the sense that artificial turf is artificial—in that it isn’t really intelligence, it just looks like intelligence? Or is it actually truly intelligent, and the “artificial” just demarks that we created it?

That’s a fun analogy. I hadn’t heard that before. No, I don’t think AI is like artificial turf. I think it’s real intelligence. It’s just intelligence of a different kind. We’re used to thinking of human intelligence, or maybe animal intelligence, as the only intelligence on the planet.

What happens now is a different kind of intelligence. It’s a little bit like, does a submarine really swim? Or is it faking that it swims? Actually, it doesn’t really swim, but it can still travel underwater using very different ideas. Or, you know, does a plane fly even though it doesn’t flap its wings? Well, it doesn’t flap its wings but it does fly. AI is a little bit like that. In some ways, actually, artificial intelligence is intelligent in ways that human intelligence isn’t.

There are many areas where AI exceeds human intelligence, so I would say that they’re different forms of intelligence, but it is very much a form of intelligence.

And how would you describe the state-of-the-art, right now?

In science and technology progress often happens in spurts. There are long periods of slow progress and then there are periods of very sudden, very rapid progress. And we are definitely in one of those periods of very rapid progress in AI, which was a long time in the making.

AI is a field that’s fifty years old, and we had what was called the “AI spring” in the ‘80s, where it looked like it was going to really take off. But then that didn’t really happen at the end of the day, and the problem was that people back then were trying to do AI using what’s called “knowledge engineering.” If I wanted an AI system to do medical diagnosis, I had to interview doctors and program the doctor’s knowledge of diagnosis in the form of rules into the computer, and that didn’t scale.

The thing that has changed recently is that we have a new way to do AI, which is machine learning. Instead of trying to program the computers to do things, the computers program themselves by learning from data. So now what I do for medical diagnosis is I give the computer a database of patient records, what their symptoms and test results were, and what the diagnosis was—and from just that, in thirty seconds, the computer can learn, typically, to do medical diagnosis better than human doctors.

So, thanks to that, thanks to machine learning, we are now seeing a phase of very rapid progress. Also, because the learning algorithms have gotten better—and very importantly: the beauty of machine learning is that, because the intelligence comes from the data, as the data grows exponentially, the AI systems get more intelligent with essentially no extra work from us. So now AI is becoming very powerful just on the back of the weight of data that we have.

The other element, of course, is computing power. We need enough computing power to turn all that data into intelligent systems, but we do have those. So the combination of learning algorithms, a lot of data, and a lot of computing power is what is making the current progress happen.

And, how long do you think we can ride that wave? Do you think that machine learning is the path to an AGI, hypothetically? I mean, do we have ten, twenty, thirty, forty more years of running with, kind of, the machine learning ball? Or, do we need another kind of breakthrough?

I think machine learning is definitely the path to artificial general intelligence, and I think there are few people in AI who would disagree with that. You know, your computer can be as intelligent as you want. If it can’t learn, you know, thirty minutes later it will be falling behind humans.

So, machine learning really is essential to getting to intelligence. In fact, the whole idea of the singularity—it was I. J. Good, back in the ‘50s, who had this idea of a learning machine that could make a machine that learned better than it did. As a result of which, you would have this succession of better and better, more and more intelligent machines until they left humans in the dust.

Now, how long will it take? That’s very hard to predict, precisely because progress is not linear. I think the current bloom of progress at some point will probably plateau. I don’t think we’re on the verge of having general AI. We’ve come a thousand miles, but there’s a million miles more to go. We’re going to need many more breakthroughs, and who knows where those breakthroughs will come from.

In the most optimistic view, maybe this will all happen in the next decade or two, because things will just happen one after another, and we’ll have it very soon. In the more pessimistic view, it’s just too hard and it’ll never happen. If you poll the AI experts, they never just say it’s going to be several decades. But the truth is nobody really knows for sure.

What is kind of interesting is not that people don’t know, and not that their forecasts are kind of all over the map, but that if you look at the extreme estimates, five years is the most aggressive, and then the furthest out are like five hundred years. And what does that suggest to you?

You know, if I went to my cleaners and I said, “Hey, when is my shirt going to be ready?” and they said, “Sometime between five and five hundred days” I would be like, “Okay… something’s going on here.”

Why do you think the opinions are so variant on when we get an AGI?

Well, the cleaners, when they clean your shirt, it’s a very well-known, very repeatable process. They know how long it takes and it’s going to take the same thing this time, right? There are very few unknowns. The problem in AI is that we don’t even know what we don’t know.

We have no idea what we’re missing, so some people think we’re not missing that much. There are the optimists that say, “Oh, we just need more data.” Right? Back in the ‘80s they said, “Oh, we just need more knowledge,” and then, that wasn’t the case. So that’s the optimistic view. The more pessimistic view is that this is a really, really hard problem, and we’ve only scratched the surface. So the uncertainty comes from the fact that we don’t even know what we don’t know.

We certainly don’t know how the brain works, right? We have vague ideas of what different parts of it do, but in terms of how a thought is encoded, we don’t know. Do you think we need to know more about our own intelligence to make an AGI, or is it like, “No, that’s apples and oranges. It doesn’t really matter how the brain works. We’re building an AGI differently”?

Not necessarily. So, there are different schools of thought in AI, and this is part of what I talk about in my book. There is one school of thought in AI, the Connectionists, whose whole agenda is to reverse-engineer the brain. They think that’s the shortest path, you know, “Here’s the competition, go reverse-engineer it, figure out how it works, build it on the computer, and then we’ll have intelligence.” So that is definitely a plausible approach.

I think it’s actually a very difficult approach, precisely because we understand so little about how the brain works. In some ways maybe it’s trying to solve a problem by way of solving the hardest of problems.

And then there are other AI types, namely the Symbolists, whose whole idea is, “No, we don’t need to understand things at that low level. In fact, we’re just going to get lost in the weeds if we try to do that. We have to understand intelligence at a higher-level abstraction, and we’ll get there much sooner that way. So forget how the brain works, that’s really not important.”

Again, the analogy with the brains and airplanes is a good one. What the Symbolists say is, “If we try to make airplanes by building machines that will flap their wings, we’ll never have them. What we need to do is understand the laws of physics and aerodynamics, and then build machines based on that.”

So there are different schools of thought. And I actually think it’s good that there are different schools of thought—and we’ll see who gets there first.

So, you mentioned your book, The Master Algorithm, which is of course required reading in this field. Can you give the listener, who may not be as familiar with it, an overview of what is The Master Algorithm? What are we looking for?

Yeah, sure. So the book is essentially an introduction to machine learning for a general audience. So not just for technical people, but business people, policy makers, just citizens and people who are curious. It talks about the impact that machine learning is already having in the world.

A lot of people think that these things are science fiction, but they are already in their lives and they just don’t know it. It also looks at the future, and what we can expect coming down the line. But mainly, it is an introduction to what I was just describing—that there are five main schools of thought in machine learning.

There are the people who want to reverse-engineer the brain; the ones who want to simulate evolution; the ones who do machine learning by automating the scientific method; the ones who use Bayesian statistics; and the ones who do reasoning by analogy, like people do in everyday life. And then I look at what these different methods can and can’t do.

The name The Master Algorithm comes from this notion that a machine learning algorithm is a master algorithm, in the same sense that a master key opens all doors. A learning algorithm can do all sorts of different things while being the same algorithm.

This is really what’s extraordinary about machine learning… In traditional computer science, if I want the computer to play chess, I have to write a program explaining how to play chess. And if I want the computer to drive a car, I have to write a program explaining how to drive a car. With machine learning, the same learning algorithm can learn to play chess, or drive a car, or do a million different other things—just by learning from the appropriate data.

And each of these tribes of machine learning has its own master algorithm. The more optimistic members of that tribe believe that you can do everything with that master algorithm. My contention in the book is that each of these algorithms is only solving part of the problem. What we need to do is unify them all into a grand theory of machine learning, in the same way that physics has a standard model and biology has a central dogma. And then, that will be the true master algorithm. And I suggest some paths towards that algorithm, and I think we’re actually getting pretty close to it.

One thing I found empowering in the book—and you state it over and over at the beginning—is that the master algorithm is aspirationally accessible for a wide range of people. You basically said, “You, listening to the book, this is still a field where the layman can still have some amount of breakthrough.” Can you speak to that for just a minute?

Absolutely. In fact, part of what got me into machine learning is that—unlike physics or mathematics or biology, which are very mature fields where you really can only contribute once you have at least a PhD—computer science and AI and machine learning are still very young. So, you could be a kid in a garage and have a great idea that will be transformative. And I hope that that will happen.

I think, even after we find this master algorithm that’s the unification of the five current ones, as we were talking about, we will still be missing some really important, really deep ideas. And I think in some ways, someone coming from outside the field is more likely to find those, than those of us who are professional machine learning researchers, and are already thinking along these tracks of these particular schools of thought.

So, part of my goal in writing the book was to get people who are not machine learning experts thinking about machine learning, and possibly having the next great ideas that will get us closer to AGI.

And, you also point out in the book why you believe that we know that such a thing is possible, and one of your proof points is our intelligence.

Exactly.

Can you speak to that?

Yeah. So this is, of course, one of those very ambitious goals that people should be at the outset a little suspicious of, right? Is this, like the philosopher’s stone or the perpetual motion machine, is it really possible? And again, some people don’t think it’s possible.

I think there’s a number of reasons why I’m pretty sure it is possible, one of which is that we already have existing proofs. One existing proof is our brain, right? As long as you believe in reductionism, which all scientists do, then the way your brain works can be expressed as an algorithm.

And if I program that algorithm into a computer, then that algorithm can learn everything that your brain can. Therefore, in that sense at least, one version of the master algorithm already exists.

Another one is evolution. Evolution created us and all life on Earth. And it is essentially an algorithm, and we roughly understand how that algorithm works; so there is another existing instance of the master algorithm.

Then there are also—besides these more empirical reasons—theoretical reasons which tell us that a master algorithm exists. One of which is that, for each of the five tribes, for their master algorithm there’s a theorem that says: If you give enough data to this algorithm, it can learn any function.

So, at least at that level, we already know that master algorithms exist. Now the question is, how complicated will it be? How hard will it be to get us there? How broadly good would that algorithm be, in terms of learning from a reasonable amount of data in a reasonable amount of time?

You just said all scientists are reductionist. Is that necessarily the case? Like, can you not be a scientist and believe in something like strong emergence, and say, “Actually, you can’t necessarily take the human mind down to individual atoms and kind of reconstruct it”? I mean, you don’t have to appeal to mysticism to…

Yeah, yeah, absolutely. So, what I mean… This is a very good point. In fact, in the sense that you’re talking about, we cannot be reductionists in AI. So what I mean by “reductionist” is just the idea that we can decompose a complex system into simpler, smaller parts that interact and that make up the system.

This is how all of the sciences and engineering works. But this does not preclude the existence of emergent properties. So, the system can be more than the sum of its parts, if it’s non-linear. And very much the brain is a non-linear system. And that’s what we have to do to reach AI. You could even say that machine learning is the science of emergent properties.

In fact, one of the names by which it has been known in some quarters is “self-organizing systems.” And in fact, what makes AI hard, the reason we haven’t already solved it, is that the usual divide-and-conquer strategy which scientists and engineers follow—of dividing problems into smaller and smaller sub-problems, and then solving the sub-problems, and putting the solutions together—tends not to work in AI, because the sub-systems are very strongly coupled together. So, there are emergent properties, but that does not mean that you can’t reduce it to these pieces; it’s just a harder thing to do.

Marvin Minsky, I remember, talked about how we kind of got tricked a little bit by the fact that it takes very few fundamental laws of the universe to understand most of physics. The same with electricity. The same with magnetism. There are very few simple laws to explain everything that happens. And so the hope had been that intelligence would be like that. Are we giving up on that notion?

Yes, so again, there are different views within AI on this. I think at one end there are people who hope we will discover a few laws of AI, and those would solve everything. At the other end of the spectrum there are people like Marvin Minsky who just think that intelligence is a big, big pile of hacks.

He even has a book that’s like, one of these tricks per page. And who knows how many more there are. I think, and most people in AI believe, that it’s somewhere in between. If AI is just a big pile of hacks, we’re never going to get there. And it can’t really be just a pile of hacks, because if the hacks were so powerful as to create intelligence, then you can’t really call them hacks.

On the other hand, you know, you can’t reduce it to a few laws, like Newton’s laws. So this idea of the master algorithm is that, at the end of the day, we will find one algorithm that does intelligence, but that algorithm is not going to be a hundred lines of code. It’s not going to be millions of lines of code either. You know, if the algorithm is thousands or maybe tens of thousands of lines of code, that would be great. It’ll still be a complex theory—much more complex than the ones we have in physics—but it’ll be much, much simpler than what people like Marvin Minsky envisioned.

And if we find the master algorithm, is that good for humanity?

Well, I think it’s good or bad depending on what we do with it. Like all technology, machine learning just gives us more power. You can think of it as a superpower, right? Telephones let us speak at a distance, airplanes let us fly, and machine learning lets us predict things and lets technology adapt automatically to our needs. All of this is good if we use it for good. If we use it for bad, it will be bad, right? The technology itself doesn’t know how it’s going to be used.

Part of my reason for writing this book is that everybody needs to be aware of what machine learning is, and what it can do, so that they can control it. Because, otherwise, machine learning will just give more control to those few who actually know how to use it.

I think if you look at the history of technology, over time, in the end, the good tends to prevail over the bad, which is why we live in a better world today than we did two hundred or two thousand years ago. But we have to make it happen, right? It just doesn’t fall from the tree like that.

And so, in your view, the master algorithm is essentially synonymous with AGI, in the sense that it can figure anything out—it’s a general artificial intelligence.

Would it be conscious?

Yeah, so, by the way: I wouldn’t say the master algorithm is synonymous with AGI. I think it’s the enabler of AGI. Once we have the master algorithm, we’re still going to need to apply it to vision, and language, and reasoning, and all these things. And then we’ll have AGI.

So, one way to think about this is that it’s an 80/20 rule. The master algorithm is the twenty percent of the work that gets you eighty percent of the way, but you still need to do the rest, right? So maybe this is a better way to think about it.

Fair enough. So, I’ll just ask the question a little more directly. What do you think consciousness is?

That’s a very good question. The truth is, what makes consciousness simultaneously so fascinating and so hard is that, at the end of the day, if there is one thing that I know it’s that I’m conscious, right? Descartes said, “I think, therefore I am,” but maybe he should’ve said “I’m conscious, therefore I am.”

The laws of physics, who knows, they might even be wrong. But the fact that I’m conscious right now is absolutely unquestionable. So, everybody knows that about themselves. At the same time, because consciousness is a subjective experience, it doesn’t lend itself to the scientific method. What are reproducible experiments when it comes to consciousness? That’s one aspect.

The other one is that consciousness is a very complex, emergent phenomenon. So, nobody really knows what it is, or understands it, even at a fairly shallow level. Now, the reason we believe others have consciousness… You believe that I have consciousness because you’re a human being, and I’m a human being, so since you have consciousness, I probably have consciousness as well. And this is really the extent of it. For all you know, I could be a robot talking to you right now, passing the Turing test, and not be conscious at all.

Now, what happens with machines? How can we tell whether a machine is conscious or not? This has been grist for the mill of a lot of philosophers over the last few decades. I think the bottom line is that once a computer starts to act like it’s conscious, we will treat it as if it’s conscious, we will grant it consciousness.

In fact, we already do that, even with very simple chatbots and what not. So, as far as everyday life goes, it actually won’t be long. In some ways, it’ll happen that people treat computers as being conscious, sooner than they treat the computers as being truly intelligent. Because that’s all we need, right? We project these human properties onto things that act humanly, even in the slightest way.

Now, at the end of the day, if you gaze down into that hardware and those circuits, is there really consciousness there? I don’t know if we will ever be able to really answer that question. Right now, I actually don’t see a good way. I think there will come a point at which we understand consciousness well enough—because we understand the brain well enough—that we are fairly confident that we can tell whether something is conscious or not.

And then at that point I think we will apply these criteria to these machines; and these machines—at least the ones that have been designed to be conscious—will pass the tests. So, we will believe that machines have consciousness. But, you know, we can never be totally sure.

And do you believe consciousness is required for a general intellect?

I think there are many kinds of AI, and many AI applications which do not require consciousness. So, for example, if I tell a machine learning system to go solve cancer—that’s one of the things we’d like to do, cure cancer, and machine learning is a very big part of the battle to cure cancer—I don’t think it requires consciousness at all. It requires a lot of searching, and understanding molecular biology, and trying different drugs, maybe designing drugs, etc. So, ninety percent of AI will involve no consciousness at all.

There are some applications of AI, and some types of AI, that will require consciousness, or something indistinguishable from it. For example, housebots. We would like to have a robot that cooks dinner and does the dishes and makes the bed and what not.

In order to do all those things, the robot has to have all the capabilities of a human, has to integrate all of these senses: vision, and touch, and perception, and hearing and what not; and then make decisions based on them. I think this is either going to be consciousness or something indistinguishable from it.

Do you think there will be problems that arise if that happens? Let’s say you build Rosie the Robot, and you don’t know if the robot is conscious or merely acting as if it is. Do you think at that point we have to have this question of, “Are we fine enslaving what could be a conscious machine to plunge our toilet for us?”

Well, that depends on what you consider enslaving, right? So, one way to look at this—and it’s the way I look at it—is that these are still just machines, right? Just because they have consciousness doesn’t mean that they have human rights. Human rights are for humans. I don’t think there’s such a thing as robot rights.

The deeper question here is, what gives something rights? One school of thought is that it’s the ability to suffer that gives you rights, and therefore animals should have rights. But, if you think about it historically, the idea of having animal rights… even fifty years ago would’ve seemed absurd. So, by the same standard, maybe fifty years from now, people will want to have robot rights. In fact, there are some people already talking about it.

I think it’s a very strange idea. And often people talk about, “Oh, well, will the machines be our friends or will they be our slaves? Will they be our equals? Will they be inferior?” Actually, I think this whole way of framing things is mistaken. You know, the robots will be neither our equals nor our slaves. They will be our extensions, right?

Robots are technology, they augment us. I think it’s not so much that the machines will be conscious, but that through machines we will have a bigger consciousness—in the same way that, for example, the Internet already gives us a bigger consciousness than we had when there was no Internet.

So, discussing robots leads us to a topic that's in the news literally every day: the prospect that automation and technological advances will eliminate jobs faster than they create new ones, or will replace the jobs they eliminate with kinds of work that displaced workers can't access. What do you think about that? What do you think the future holds?

I think we have to distinguish between the near term, by which I mean the next ten years or so, and the long term. In the near term, I think some jobs will disappear, just like jobs have disappeared to automation in the past. AI is really automation on steroids. So I think what’s going to happen in the near term is not so different from what has happened in the past.

Some jobs will be automated, so some jobs will disappear, but many new jobs will appear as well. It's always easier to see the jobs that disappear than the ones that appear. Think, for example, of being an app developer. There are millions of people who make a living today as app developers.

Ten years ago that job didn't exist. Fifty years ago you couldn't even imagine that job. Two hundred years ago, ninety-something percent of Americans were farmers, and then farming got automated. Today only two percent of Americans work in agriculture. That doesn't mean that the other ninety-eight percent are unemployed. They're just doing all these jobs that people couldn't even imagine before.

I think a lot of that is what's going to happen here. We will see entirely new job categories appear. We will also see, on a more mundane level, more demand for lots of existing jobs. For example, I think truck drivers should be worried about the future of their jobs, because self-driving trucks are coming, so that occupation has an endpoint.

There are many millions of truck drivers in the US alone. It’s one of the most widespread occupations. But now, what will they do? People say, “Oh, you can’t turn truck drivers into programmers.” Well, you don’t have to turn them into programmers. Think about what’s going to happen…

Because trucks are now self-driving, goods will cost less. Goods will cost less, so people will have more money in their pockets, and they will spend it on other things—like, for example, having bigger, better houses. And therefore, there will be more demand for construction workers, and some of these truck drivers will become construction workers and so on.

You know, having said all that, I think that in the near term the most important thing that's going to happen to jobs is actually neither the ones that will disappear nor the ones that will appear: most jobs will be transformed by AI. The way I do my job will change because some parts will become automated, but then I will be able to do more things, and do them better, than I could before I had the automation. So, really, the question everybody needs to think about is: what parts of my job can I automate? The best way to protect your job from automation is to automate it yourself, and then ask, "What can I do using these machine learning tools?"

Automation is like having a horse. You don’t try to outrun a horse; you ride the horse. And we have to ride automation, to do our jobs better and in more ways than we can now.

So, it doesn’t sound like you’re all that pessimistic about the future of employment?

I’m optimistic, but I also worry. I think that’s a good combination. I think if we’re pessimistic we’ll never do anything. Again, if you look at the history of technology, the optimists at the end of the day are the ones who made the world a better place, not the pessimists.

But at the same time, naïve optimism is very dangerous, right? We need to worry continuously about all the things that could go wrong, and make sure that they don’t go wrong. So I think that a combination of optimism and worry is the right one to have.

Some people say we'll find a way to merge, mentally, with the AI. Is that even a coherent idea? And if so, what do you think of it?

I think that’s what’s going to happen. In fact, it’s already happening. We are going to merge with our machines step-by-step. You know, like a computer is a machine that is closer to us than a television. A smartphone is closer to us than a desktop is, and the laptop is somewhere in between.

And we're already starting to see things such as Google Glass and augmented reality, where in essence the computer is extending our senses, and extending our power to do things. Elon Musk has a company that is going to create an interface between neurons and computers, and in fact, in research labs this already exists.

I have colleagues who work on that. They're called brain-computer interfaces. So, step-by-step, right? The way to think about this is: we are cyborgs, right? Human beings are actually the cyborg species. From day one, we have been one with our technology.

Even our physiology would be different if we couldn’t do things like light fires and throw spears. So this has always been an ongoing process. Part of us is technology, and that will become more and more so in the future. Also with things like the Internet, we are connecting ourselves into a bigger, you know… Humanity itself is an emergent phenomenon, and having the Internet and computers allows a greater level to emerge.

And I think exactly how this happens, and when, is of course up for grabs; but that's the way things are going.

You mentioned the singularity in passing a minute ago. Do you believe it will happen as it's commonly imagined: that there is going to be a point, in the reasonably near future, beyond which we cannot see anything, because we don't have any frame of reference?

I don’t believe that the singularity will happen in those terms. So this idea of exponentially increasing progress that goes on forever… that’s not going to happen, because it’s physically impossible, right? No exponential goes on forever. It always flattens out sooner or later.

All exponentials are really what are called “S curves” in disguise. They go up faster and faster—and this is how all previous technology waves have looked—but then they flatten out, and finally they plateau.
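
To make the S-curve point concrete, here is a minimal sketch (the parameters are purely illustrative, not drawn from any real technology curve): in its early phase the logistic function is nearly indistinguishable from an exponential, and then it flattens toward a ceiling.

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic (S-curve) function: grows near-exponentially at first,
    then saturates toward the ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Well before the midpoint, logistic(t) is approximately ceiling * exp(rate * (t - midpoint)),
# so the curve looks exponential; past the midpoint, growth flattens out.
for t in [-6, -4, -2, 0, 2, 4, 6]:
    print(f"t={t:+d}  logistic={logistic(t):.4f}  pure exponential={math.exp(t):.4f}")
```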

Also, this notion that at some point things will become completely incomprehensible for us… I don’t believe that either, because there will always be parts that we understand, number one; and there are limits to what any intelligence can do, human or non-human.

By that standard, the singularity has already happened. A hundred years ago, the most advanced technology was maybe something like a car, right? And I could understand every part of how a car works, completely. Today we already have technology, like our computer systems, that nobody understands as a whole. Different people understand different parts.

With machine learning in particular, the thing that’s notable about machine learning algorithms is that they can do very complex things very well, and we have no idea how they’re doing them. And yet, we are comfortable with that, because we don’t necessarily care about the details of how it is accomplished, we just care whether the medical diagnosis was correct, or the patient’s cancer was cured, or the car is driving correctly. So I think this notion of the singularity is a little bit off.

Having said that, we are currently in the middle of one of these S curves. We are seeing very rapid progress, and by the time this has run its course, the world will be a very, very different place from what it is today.

How so?

All these things that we’ve been talking about. We will have intelligent machines surrounding us. Not just humanoid machines but intelligence on tap, right? In the same way that today you can use electricity for whatever you want just by plugging into a socket, you will be able to plug into intelligence.

And indeed, the leading tech companies are already trying to make this happen. So there will be all these things which the greater intelligence enables. Everybody will have a home robot in the same way that they have a car. We will have this whole process that the Internet is enabling, and that the intelligence on top of the Internet is enabling, and the Internet of things, and so on.

There will be something like this larger emergent being, if you will, that's not just individual human beings or just societies. It's hard to picture exactly what that would be, but this is going to happen.

You know, it always makes the news when an artificial intelligence masters some game, right? We all know the list: you had chess, and then you had Jeopardy, of course, and then you had AlphaGo, and then recently you had poker. And I get that games are kind of a natural place for this, because each is a confined universe with very rigid, specific rules, and a lot of training data for teaching a system how to function in it.

Are there types of problems that machine learning isn’t suited to solve? I mean, just kind of philosophically—it doesn’t matter how good your algorithms are, or how much data you have, or how fast a computer is—this is not the way to solve that particular problem.

Well, certainly some problems are much harder than others, and—as you say—games are easier in the sense that they are these very constrained, artificial universes. And that’s why AI can do so well in them. In fact, the summary of what machine learning and AI are good for today, is that they are good for these tasks which are somewhat well-defined and constrained.

What people are much better at are things that require knowledge of the world, common sense, and integrating lots of different information. We're not there yet. We don't have the learning algorithms that can do that.

So the learning algorithms that we have today are certainly good for some things, but not others. But again, if we have the master algorithm then we will be able to do all these things, and we are making progress towards that, so, we’ll see.

Any time I see a chatbot or something that’s trying to pass the Turing test, I always type the same first question, which is: “Which is bigger, a nickel or the sun?” And not a single one of them has ever answered it correctly.

Well, exactly, because they don’t have common sense knowledge. It’s amazing what computers can do in some ways, and it’s amazing what they can’t do in others—like these really simple pieces of common sense logic. In a way, one of the big lessons that we’ve learned in AI is that automating the job of a doctor or a lawyer is actually easy.

What is very hard to do with AI is what a three-year-old can do. If we could have a robot baby that can do what a one-year-old can do, and learn the same way, we would have solved AI. It’s much, much harder to do those things; things that we take for granted, like picking up an object, for example, or like walking around without tripping. We take this for granted because evolution spent five hundred million years developing it. It’s extremely sophisticated, but for us it’s below the conscious level.

The things for us that we are conscious of, and that we have to go to college for, well, we’re not very good at them; we just learned to do them recently. Those, the computers can do much better. So, in some ways in AI, it’s the hard things that are easy and the easy things that are hard.

Does it mean anything if something finally passes the Turing test? And if so, when do you think that might happen? When will it say, "Well, the sun is clearly bigger than a nickel"?

Well, with all due respect to Alan Turing—who was a great genius and an AI pioneer—most people in AI, including me, believe that the Turing test is actually a bad idea. The reason the Turing test is a bad idea is that it confuses being intelligent with being human. This idea that you can prove that you’re intelligent by fooling a human into thinking you’re a human is very weird, if you think about it. It’s like saying an airplane doesn’t fly until it can fool birds into thinking it’s a bird. That doesn’t make any sense.

True intelligence can take many forms, not necessarily the human form. So, in some ways we don’t need to pass the Turing test to have AI. And in other ways, the Turing test is too easy to pass, and by some standards has already been passed by systems that no one would call intelligent. Talking with someone for five minutes and fooling them into thinking you’re a human is actually not that hard, because humans are remarkably adept at projecting humanity into anything that acts human.

In fact, even in the ‘60s there was this famous thing called ELIZA, that basically just picked up keywords in what you said and gave back these canned responses. And if you talked to ELIZA for five minutes, you’d actually think that it was a human.
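
For a sense of how little machinery that takes, here is a toy, ELIZA-style responder. This is not Weizenbaum's actual program: the rules and responses below are invented for illustration, and the real ELIZA added ranked keywords and pronoun-swapping transformations on top of this basic trick.

```python
import re

# A toy, ELIZA-style responder: scan the input for a keyword pattern
# and return a canned response, echoing back part of what was said.
RULES = [
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
]
DEFAULT_RESPONSE = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

print(respond("I feel anxious about machines"))  # -> Why do you feel anxious about machines?
print(respond("Nice weather today"))             # -> Please go on.
```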

Although Weizenbaum’s observation was, even when people knew ELIZA was just a program, they still formed emotional attachments to it, and that’s what he found so disturbing.

Exactly, so human beings have this uncanny ability to treat things as human, because that’s the only reference point that we have, right? It’s this whole idea of reasoning by analogy. If we have something that behaves even a little bit like a human—because there’s nothing else in the universe to compare it to—we start treating it more like a human and project more human qualities into it.

And, by the way, once companies start making bots—this is already happening with chatbots like Siri and Cortana and what not, and it'll happen even more so with home robots—there's going to be a race to make the robots more and more humanlike. Because if you form an emotional attachment to my product, that's what I want, right? I'll sell more of it, and for a higher price, and so on and so forth. So, we're going to see uncannily human-like robots and AIs—whether this is a good or a bad thing is another matter.

What do you think creativity is? And would an AGI, by definition, be creative? It could write a sonnet, or…

Yeah, an AGI, by definition, would be creative. One thing that you hear a lot these days, and that unfortunately is incorrect, is that, "Oh, we can automate these menial, routine jobs, but creativity is this deeply human thing that will never be automated." This is a superficially plausible notion, but, in fact, there are already counterexamples: computers that can compose music, for one.

There is this guy, David Cope, a professor at UC Santa Cruz, who has a computer program that will create music in the style of the composer of your choice. He does this test where he plays a piece by Mozart, a piece by a human composer imitating Mozart, and a piece by his system. He did this at a conference I was at, and asked people to vote for which one was the real Amadeus. The real one won, but second place went to the computer. So a computer can already write Mozart better than a professional, highly educated human composer can.

Computers have made paintings that are actually quite beautiful and striking, many of them. Computers these days write news stories. There's a company called Narrative Science that will write news stories for you, and the likes of Forbes or Fortune—I forget which one it is—have actually published some of the things it writes. So it's not a novel yet, but we will get there.

And also in other areas; chess and Go are notable examples. Both Kasparov and Lee Sedol, when they were beaten by the computer, had this remarkable reaction, saying, "Wow, the computer was so creative. It came up with these moves that I would never have thought of, that seemed dumb at first but turned out to be absolutely brilliant."

And computers have done things in mathematics, theorems and proofs and so on, all of which, if done by humans, would be considered highly creative. So, automating creativity is actually not that hard.

It’s funny, when Kasparov first said it seemed creative, what he was implying was that IBM cheated, that people had intervened. And IBM hadn’t cheated. But, that’s a testament to just how—

—There were actually two phases, right? He said that at first, so he was suspicious; because, again, how could something not human actually be doing that? But then later, after the match when he had lost and so on, if you remember, there was this move that Deep Blue made that seemed like a crazy move, and Kasparov said, like, “I could smell a new kind of intelligence playing against me.”

Which is very interesting for us AI-types, because we know exactly what was going on, right? It was these, you know, search algorithms and a whole bunch of technology that we understand fairly well. It’s interesting that from the outside this just seemed like a new kind of intelligence, and maybe it is.
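
For reference, the core of that well-understood technology is game-tree search. Below is a bare-bones minimax sketch; the three callbacks are hypothetical placeholders for game-specific logic, and Deep Blue layered alpha-beta pruning, hand-tuned evaluation functions, opening books, and custom hardware on top of this basic idea.

```python
def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Plain minimax: return the best score reachable from `state`,
    assuming both players play optimally down to `depth` plies.

    `evaluate`, `legal_moves`, and `apply_move` are hypothetical,
    game-specific callbacks: score a position, list the legal moves,
    and return the position after a move is played.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move) for m in moves)
    return max(scores) if maximizing else min(scores)
```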

He also said, “At least it didn’t enjoy beating me.” Which I guess someday, though, it may, right?

Oh, yeah, yeah! And you know that could happen depending on how we build them, right? The other very interesting thing that happened in that match—and again, I think it’s symptomatic—is that Kasparov is someone who always won by basically intimidating his opponents into submission. They just got scared of him, and then he beat them.

But the thing that happened with Deep Blue was that Deep Blue couldn't be intimidated by him; it was just a machine, right? As a result, Kasparov himself—suddenly, probably for the first time in his life—became insecure. And then, after he lost that game, in the following game he actually made mistakes that he would never normally make, because he had suddenly become insecure.

Foreboding, isn’t it? We talked about emergence a couple of times. There’s the Gaia hypothesis that maybe all of the life on our planet has an emergent property: some kind of an intelligence that we can’t perceive, any more than our cells can perceive us.

Do you have any thoughts on that? And do you have any thoughts on if, eventually, the Internet could just become emergent—an emergent consciousness?

Right. Like most scientists, I don't believe in the Gaia hypothesis, in the sense that the Earth, as it is, does not have enough self-regulating ability to achieve the homeostasis that living beings do. In fact, sometimes you get these runaway positive feedback cycles, where things actually go very wrong. So, most scientists don't believe in the Gaia hypothesis for Earth today.

Now, what I think—and a lot of other people think this as well—is that the Gaia hypothesis may become true in the future. As the Internet expands, and the Internet of Things puts sensors all over the place, literally all over the planet, more and more actions will be taken based on those sensors to, among other things, preserve us and presumably other kinds of life on Earth. I think if we fast-forward a hundred years, there's a very good chance that Earth will look like Gaia, but it will be a Gaia that is technological, as opposed to just biological.

And in fact, I don’t think that there’s an opposition between technology and biology. I think technology will just be the extension of biology by other means. It’s biology that’s made by us. I mean, we’re creatures, and so the things that we make are also biology, in that sense.

So if you look at it that way, maybe what has happened is that, since the very beginning, Earth has been evolving towards Gaia; we just haven't gotten there yet. But technology is very much part of getting there.

What do you think of the OpenAI initiative?

The OpenAI initiative's goal is to do AI for the common good. People like Elon Musk and Sam Altman were afraid that, because most AI research is being done inside companies—Google and Facebook and Microsoft and Amazon and what not—AI would end up being owned by them. And AI is very powerful, so it's dangerous if it is owned by just a few companies.

So, their goal is to do AI research that is going to be open, hence the name, and available to everybody. I think this is a great agenda, so I very much agree with trying to do that. There's nothing wrong with having a lot of AI research in companies, but it's important that there also be AI research in the public domain. Universities are one way of doing that; something like OpenAI is another; something like the Allen Institute for AI is yet another example of doing AI for the public good in this way. So, I think this is a good agenda.

What they’re going to do exactly, and what their chances of succeeding are, and how their style of AI will compare to the styles of AI that are being produced by these other labs, whether industry or academia, is something that remains to be seen. But I’m curious to see what they get out of it.

The worry from some people is that… They make it analogous to a nuclear weapon, in that if you say, "We don't know how to build one, but we can get 99% of the way there, and we're going to share that with everybody on the planet," then you have to hope that whoever adds the last little bit that makes it an AGI isn't a bad actor of some kind. Does that make sense to you?

Yeah, yeah… I understand the analogy, but you have to remember that AI and nuclear weapons are very different for a couple of reasons. One is that nuclear weapons are essentially destructive things, right? Yeah, you can turn them into nuclear power, but they were invented to blow things up.

Whereas AI is a tool that we use to do all sorts of things, like diagnose diseases and place ads on webpages, and things from big to small. The thing is, the knowledge to build a nuclear bomb is actually not that hard to come by. Fortunately, what is very hard to come by is the enriched uranium, or plutonium, to build the bomb.

That’s actually what keeps any terrorist group from building a bomb. It’s not the lack of knowledge, it’s the lack of the materials. Now, in AI it’s actually very different. You just need computing power, and you can just plug into the cloud and get that computing power. AI is just algorithms. It’s already accessible. Lots of people can use it for whatever they want.

In a way, the safety lies in actually having AI in the hands of everybody, so that it’s not in the hands of a few. If only one person or one company had access to the master algorithm, they would be too powerful. If everybody has access to the master algorithm then there will be competition, there will be collaboration. There will be like a whole ecosystem of things that happen, and we will be safer that way, just as we are with the economy as it is. But, having said that, we will need something like an AI police.

William Gibson in Neuromancer had this thing called the Turing police, right? The Turing police are AIs whose job is to police the other AIs, to make sure that they don’t go bad, or that they get stopped when they go bad. And this is no different from what already happens. We have highways, and bank robbers can use the highways to get away. That’s no reason to not have highways, but of course the police also need to have cars so they can catch the robbers, so I think it’s going to be a similar thing with AI.

When I do these chats with people in AI, science fiction writers always come up. They always reference them, they always have their favorites and what not. Do you have any books, movies, TV shows, or anything like that where you watch and go, "Yes, that could happen"?

Unfortunately, a lot of the depictions of AI and robots in movies and TV shows are not very realistic, because the computers and robots are really just humans in disguise. That's how you make an interesting story: by making the robots act like humans. They have evil plans to take over the world, or somebody falls in love with them, and things like that—and that's how you make an interesting movie.

But real AIs, as we were talking about, are very different from that. A lot of the movies that people associate with AI—like Terminator, for example—are really not things that will happen, with the proviso that science fiction is a great source of self-fulfilling prophecies, right? People read those things and then they try to make them happen. So, who knows.

Having said that, what is an example of a movie depicting AI that I think could happen, and is fairly interesting and realistic? Well, one example is the movie Her. The movie Her is basically about a virtual assistant that is very human-like, and ten years ago that would’ve been a very strange movie. These days we already have things like Siri, and Cortana, and Google Now, which are, of course, still a far cry from Her. But I think we’re going to get closer and closer to that.

And final question: What are you working on, and are you going to write another book? What keeps you busy?

Two things: I think we are pretty close to unifying those five master algorithms, and I’m still working on that. That’s what I’ve been working on for the last ten years. And I think we’re almost there. I think once we’re there, the next thing is that, as we’ve been talking about, that’s not going to be enough. So we need something else.

I think we need something beyond the existing five paradigms we have, and I’m working on a new type of learning that I hope will actually take us beyond what those five could do. Some people have jokingly called it the sixth paradigm, and maybe my next book will be called The Sixth Paradigm. That makes it sound like a Dan Brown novel, but that’s definitely something that I’m working on.

When you say you think the master algorithm is almost ready… Will there be a “ta-da” moment, like, here it is? Or, is it kind of a gradualism?

It's a gradual thing. Look at physics: they've unified three of the forces (electromagnetism and the strong and weak forces), but they still haven't unified gravity with them. There are proposals, like string theory, to do that.

These “a-ha” moments often only happen in retrospect. People propose a theory, and then maybe it gets tested, and then maybe it gets revised, and then finally when all the pieces are in place people go, “Oh, wow.” And I think it’s going to be like that with the master algorithm as well.

We have candidates, we have ways of putting these pieces together. It still remains to be seen whether they can do all the things that we want, and how well they will scale. Scaling is very important, because if it’s not scalable then it’s not really solving the problem, right? So, we’ll see.

All right, well thank you so much for being on the show.

Thanks for having me, this was great!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.


Voices in AI

Visit VoicesInAI.com to access the podcast, or subscribe now.

Read more here:: gigaom.com/feed/