End-to-end IoT platform Particle banks $20M Series B

Particle, an end-to-end enterprise IoT platform, has raised $20 million in Series B funding. Spark Capital led the round.

Particle is a full-stack IoT device platform.

The company entered the IoT platform market as a hardware connectivity solution, deploying its platform in a wide variety of industrial and municipal IoT applications, such as gas stations, oil rigs, storm-water drains, and manufacturing equipment.

However, it has since evolved into a full-stack suite comprising hardware, firmware, software and the Particle cloud. An appealing tagline, “prototype-to-production platform for developing an Internet of Things product”, and an equally robust product have helped fuel Particle’s growth. Some 120,000 people and companies, half of them from the S&P 500, currently use it to develop their IoT products.

“While the media’s IoT focus has been primarily on consumer electronics, wearables, and ‘smart home’, we’ve learned from our customers that the future of IoT is within the enterprise market,” said Zach Supalla, CEO at Particle.

Forbes estimated that more than 450 IoT companies have sprung up in the past two years, and that 25% of the IoT companies founded in 2016 have already closed shop. Against that backdrop, Particle’s growth and traction are noteworthy.

Particle’s full-stack platform includes a cloud offering, a device management console, cellular IoT SIMs, SDKs and an IDE (integrated development environment). The hardware it sells includes Wi-Fi-connected microcontrollers, internet buttons, and asset-tracking hardware kits.

The company has raised funds consistently throughout its five-year journey: $567,000 in crowdfunding in 2013, followed by a $4.2M seed round in July 2014 and a $10.4M Series A from Root Ventures, O’Reilly Alpha Tech Ventures (OATV), and Rincon Venture Partners in November 2016.

Read more here:: feeds.feedburner.com/iot

Checklist for Getting a Grip on DDoS Attacks and the Botnet Army

By Industry Perspectives

Heitor Faroni is Director of Solutions Marketing for Alcatel-Lucent Enterprise.

Distributed Denial of Service (DDoS) attacks jumped into mainstream consciousness last year after several high-profile cases – one of the largest and most widely reported being the Dyn takedown in fall 2016, an interesting example because it used poorly secured IoT devices to coordinate the attack. While they may feel like a new threat, DDoS attacks have in fact been around since the late ’90s.

When you consider that Gartner predicts there will be 20 billion connected devices by 2020 as part of the growing Internet of Things, the need to implement the right network procedures and tools to properly secure all of these devices is only going to grow.

The New Battleground – Rent-a-bots on the Rise

Put simply, DDoS attacks occur when an attacker attempts to make a network resource unavailable to legitimate users by flooding the targeted network with superfluous traffic until it overwhelms the servers and knocks the service offline. Thousands of these attacks happen every year, and they are increasing in both number and scale. According to some reports, 2016 saw a 138 percent year-over-year increase in the total number of attacks greater than 100 Gbps.

The Dyn attack used the Mirai botnet, which exploits poorly secured, IP-enabled “smart things” to swell its ranks of infected devices. Mirai is programmed to scan for IoT devices that are still protected only by factory-set defaults or hard-coded usernames and passwords. Once infected, a device becomes part of a botnet of tens of thousands of IoT devices, which can then bombard a selected target with malicious traffic.

This botnet and others are available for hire online from enterprising cybercriminals, and as their capabilities are expanded and refined, more and more connected devices will be at risk.

So what steps can businesses take to protect themselves now and in the future?

First: Contain the Threat

With the rise of IoT at the heart of digital business transformation and its power as an agent for leveraging some of the most important technological advances – such as big data, automation, machine learning and enterprise-wide visibility – new ways of managing networks and their web of connected devices are rushing to keep pace.

A key development is IoT containment. This is a method of creating virtual isolated environments using network virtualization techniques. The idea is to group connected devices with a specific functional purpose, and the respective authorized users into a unique IoT container. You still have all users and devices in a corporation physically connected to a single converged network infrastructure, but they are logically isolated by these containers.

Say, for example, the security team has 10 IP-surveillance cameras at a facility. By creating an IoT container for the security team’s network, IT staff can create a virtual, isolated network which cannot be accessed by unauthorized personnel – or be seen by other devices outside the virtual environment. If any part of the network outside of this environment is compromised, it will not spread to the surveillance network. This can be replicated for payroll systems, R&D or any other team within the business.
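To make the containment idea concrete, here is a minimal Python sketch of the policy logic a container enforces: endpoints may only talk to endpoints in the same container. The device names and groupings are hypothetical, and this models only the logic; real deployments implement it with network virtualization (for example VLAN or VXLAN segments) on the switches themselves.

```python
# Minimal model of IoT containment policy: traffic is permitted only between
# endpoints that belong to the same container. Names below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IoTContainer:
    name: str
    members: set = field(default_factory=set)  # devices plus authorized users

containers = [
    IoTContainer("surveillance", {"camera-01", "camera-02", "security-nvr", "security-admin"}),
    IoTContainer("payroll", {"payroll-app", "payroll-db", "hr-admin"}),
]

def same_container(a: str, b: str) -> bool:
    """Return True only if both endpoints sit inside one container."""
    return any(a in c.members and b in c.members for c in containers)

print(same_container("camera-01", "security-nvr"))  # True  - allowed
print(same_container("payroll-app", "camera-01"))   # False - a compromised payroll host cannot reach the cameras
```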

By creating a virtual IoT environment you can also ensure the right conditions for a group of devices to operate properly. Within a container, quality of service (QoS) rules can be enforced, and it is possible to reserve or limit bandwidth, prioritize mission critical traffic and block undesired applications. For instance, the surveillance cameras that run a continuous feed may require a reserved amount of bandwidth, whereas critical-care machines in hospital units must get the highest priority. This QoS enforcement can be better accomplished by using switches enabled with deep-packet inspection, which see the packets traversing the network as well as what applications are in use – so you know if someone is accessing the CRM system, security feeds or simply watching Netflix.
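One common building block of QoS policies like these is DSCP marking. As a hedged, endpoint-side illustration (not any vendor's switch configuration), the sketch below shows an application tagging its traffic with the Expedited Forwarding class so that QoS-aware switches can prioritize it; it relies on the IP_TOS socket option, which is available on Linux and most Unix-like systems.

```python
# Mark a socket's outgoing packets with DSCP EF (46) so QoS-aware switches
# can queue them ahead of best-effort traffic. Requires a platform that
# exposes the IP_TOS socket option (Linux, most Unix-like systems).
import socket

DSCP_EF = 46                      # Expedited Forwarding class
TOS_VALUE = DSCP_EF << 2          # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 on success
# ...the socket would then connect to, say, a surveillance feed endpoint.
sock.close()
```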

Second: Protection at the Switch

Businesses should ensure that switch vendors are taking the threat seriously and putting in place procedures to maximize hardware protection. A good approach can be summed up in a three-pronged strategy.

  • A second pair of eyes – make sure the switch operating system is verified by third-party security experts. Some companies may shy away from sharing source code to be verified by industry specialists, but it is important to look at manufacturers that have ongoing relationships with leading industry security experts.
  • Scrambled code means one switch can’t compromise the whole network. The use of open source code in operating systems is common in the industry, which comes with some risk because the code is “common knowledge”. By scrambling the object code within each switch’s memory, even if a hacker could locate sections of open source code in one switch, every other switch would be scrambled differently, so the same attack would not work across multiple switches.
  • How is the switch operating system delivered? The IT industry has a global supply chain, with component manufacturing, assembly, shipping and distribution having a worldwide footprint. This introduces the risk of the switch being tampered with before it reaches the end customer. The network installation team should always download the official operating system directly from the vendor’s secure servers before installation (a quick integrity-check sketch follows this list).
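As a practical complement to that last point, the downloaded image can be verified against a digest published on the vendor's secure portal before it is loaded onto the switch. A minimal sketch, with a hypothetical file name and placeholder digest:

```python
# Verify a downloaded switch OS image against a vendor-published SHA-256 digest.
# The file name and expected digest below are hypothetical placeholders.
import hashlib

EXPECTED_SHA256 = "9f2c..."  # digest as published on the vendor's secure portal

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("switch-os-image.bin") != EXPECTED_SHA256:
    raise SystemExit("Digest mismatch - do not install this image.")
print("Image verified - safe to install.")
```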

Third: Do the Simple Things to Secure Your Smart Things

As well as establishing a more secure core network, there are precautions you can take right now to enhance device protection. It is amazing how many businesses skip these simple steps.

  • Change the default password – One very simple and often overlooked procedure is changing the default password. In the Dyn case, the malware searched for IP devices still running factory-default settings in order to take control of them.
  • Update the software – As the battle between cybercriminals and security experts continues, staying current with the latest updates and security patches becomes ever more important. Pay attention to new releases and make applying them part of the routine.
  • Prevent remote management – Disable remote management protocols, such as telnet or HTTP, that provide control from another location. If remote management is required, the recommended secure protocols are SSH and HTTPS (a quick audit sketch follows this list).
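To see where you stand, a short audit of your own device inventory can flag anything still listening on unencrypted management ports. This is a minimal sketch with placeholder addresses; only scan devices you own or manage.

```python
# Flag devices on your own network that still expose unencrypted management
# ports (telnet, plain HTTP). Host addresses below are placeholders.
import socket

DEVICES = ["192.0.2.10", "192.0.2.11"]       # your own device inventory
RISKY_PORTS = {23: "telnet", 80: "http"}     # unencrypted remote management

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

for host in DEVICES:
    for port, name in RISKY_PORTS.items():
        if port_open(host, port):
            print(f"{host}: {name} (port {port}) is open - disable it and use SSH/HTTPS instead")
```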

Evolve Your Network

The Internet of Things has great transformative potential for businesses in all industries, from manufacturing and healthcare to transportation and education. But with any new wave of technical innovation comes new challenges. We are at the beginning of the IoT era, which is why it’s important to get the fundamental network requirements in place: supporting the increase in data traversing our networks, enforcing QoS rules, and minimizing the risk from cyberattacks.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Read more here:: datacenterknowledge.com/feed/

Could your IoT solution survive on Silk Road 4.0? Submit your ideas now

By Jeremy Cowan

Regular readers will remember our exclusive article on the madcap, multinational, multilingual scheme to ride deep into the heart of China along the old Silk Road. (See: Two motorcycles, 10,000 km along Silk Road 4.0, the Internet of Things … and you?) Well, now it’s getting serious, says Jeremy Cowan. And what’s more, it’s time […]

The post Could your IoT solution survive on Silk Road 4.0? Submit your ideas now appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

10 Benefits of Analyzing Data at the Edge in an IoT Environment

There is a tremendous amount of data being generated at the sources in IoT environments. eWEEK offers 10 reasons why real-time analytics at the data source is critical.

Read more here:: www.eweek.com/rss.xml

AgilePoint to Attend Microsoft Inspire 2017

Low-code leader to highlight AI and IoT readiness and ability to create apps, forms, and workflows that will run on any version of SharePoint and Salesforce

(PRWeb July 10, 2017)

Read the full story at http://www.prweb.com/releases/2017/07/prweb14491570.htm

Read more here:: www.prweb.com/rss2/technology.xml

ION Costa Rica: The future is IPv6

By Kevin Meynell

The Deploy360 team organised the second ION Conference of the year on 3 July 2017 at the Intercontinental Hotel in San José, Costa Rica. This was co-located with the TICAL Conference 2017, the annual event for Latin American National Research and Education Networks, as well as the Latin American eScience Meeting 2017. It attracted 85 participants and we again thank our sponsor Afilias for making this possible.

It was the turn of Megan Kruse to chair this event, and she opened proceedings with an overview of the Deploy360 programme, before handing over to Kevin Meynell who discussed what was happening at the IETF and how to get involved. He encouraged the Latin American networking community to check out the IETF Fellowship and IETF Policy programmes, and pointed out this had provided opportunities for participants from Costa Rica at both the last and forthcoming IETF meetings.

We were lucky enough to have Fred Baker, the Co-Chair of the IETF IPv6 Operations Working Group and former IETF Chair, talk about the results of the Internet Society report on the State of IPv6 that was published in June. He pointed out that all Regional Internet Registries were now approaching IPv4 exhaustion, with only small quantities of addresses available to new entrants, whilst there had been rapid IPv6 growth over the past year. This was especially the case in the Latin American region, where around 37% of AS numbers were now announcing IPv6 address prefixes and IPv6 traffic was over 10%, reaching nearly 20% in some countries.

It was clear that IPv4 would not be able to accommodate future growth in the Internet, and whilst surplus IPv4 addresses were being traded, the cost was expected to reach USD 20 per address over the next couple of years before dropping substantially as IPv6 deployment approaches 50%. This cannot be considered a long-term investment, so question marks were now being raised by accounts departments as to why they were paying for something that could be provided for free. In fact, MIT had just sold a surplus IPv4 /9 in order to fund their IPv6 deployment, major service providers were moving to IPv6-dominant data centres, and there was also substantial IPv6 deployment in mobile networks.

So the takeaway is that network operators need to be deploying IPv6 now, in order to ensure their equipment and applications have been tested and are able to support it, as well as giving their staff experience of using it. Is paying for something you can provision for free a good business model, and are you willing to sustain the ever greater complexity and cost of Carrier Grade NAT to meet future growth?
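For operators beginning that testing, even a small script can confirm whether a given service is reachable over IPv6 from their network. The sketch below is a minimal example; the hostname is illustrative and any dual-stacked service will do.

```python
# Check that a service is reachable over IPv6 - a small first step when
# validating applications during an IPv6 rollout. Hostname is illustrative.
import socket

def has_ipv6_path(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                       # no AAAA record published
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return True                # connected over IPv6
        except OSError:
            continue
    return False

print(has_ipv6_path("www.internetsociety.org"))
```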

This message was reinforced by Guillermo Cicileo (LACNIC) who provided an overview of IPv6 Deployment in Costa Rica and Latin America (in Spanish only). Several countries in the region were amongst the world leaders in IPv6 deployment, including Trinidad and Tobago (21%), Brazil (18%), Ecuador (18%) and Peru (17%), but most of the others lagged substantially behind. Unfortunately, Costa Rica had very low rates of IPv6 deployment, although the example of Trinidad and Tobago, which went from 0% to 21% in only 3 years, demonstrated what was possible in small countries.

Following the break, Kevin led a panel discussion on MANRS and Routing Security that included Erika Vega (RENATA) and Glenn Peace (ix.CR). The Border Gateway Protocol (BGP) underpins the Internet routing system, but it is largely based on global trust and there is little validation of the legitimacy of routing updates. So the panel discussed techniques to help improve the security and resilience of the global routing system, as well as how to promote a culture of collective responsibility.

Kevin firstly presented the MANRS initiative and Routing Resilience Manifesto that encourages network operators to subscribe to four actions including filtering, anti-spoofing, coordination and address prefix validation, and has developed resources to help them implement these. This includes the MANRS Best Current Operational Practice which is a technical document providing step-by-step instructions, along with a set of online training modules.
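Of those four actions, prefix filtering is the easiest to picture: an operator accepts a customer's announcement only if it falls within the prefixes registered to that customer. The following is a toy Python illustration of that check, not an actual router configuration; the ASN and prefixes are documentation values.

```python
# Toy illustration of the MANRS filtering action: accept a customer's BGP
# announcement only if it falls within their registered prefixes.
# ASN and prefixes are documentation/example values.
import ipaddress

REGISTERED = {
    "AS64500": [ipaddress.ip_network("2001:db8:1000::/36"),
                ipaddress.ip_network("198.51.100.0/24")],
}

def accept_announcement(asn: str, prefix: str) -> bool:
    net = ipaddress.ip_network(prefix)
    return any(net.version == reg.version and net.subnet_of(reg)
               for reg in REGISTERED.get(asn, []))

print(accept_announcement("AS64500", "2001:db8:1000:100::/56"))  # True  - within registration
print(accept_announcement("AS64500", "2001:db8:2000::/36"))      # False - reject the announcement
```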

Erika followed up with a presentation on a LACNIC-sponsored collaboration with RENATA, the Colombian NREN. RPKI is a specialised Public Key Infrastructure that allows cryptographic verification of the holders of particular AS numbers and IP addresses, and therefore provides a framework for securing the routing infrastructure. RENATA is aiming to deploy RPKI to at least 50% of its connected institutions, in order to demonstrate how extensive deployment can improve routing security, and potentially to offer a large testbed for BGPSEC when this becomes available.

Turning to a different subject, Mauricio Oviedo (NIC.CR) offered an introduction to DNSSEC and why we need it. He outlined the problems that DNSSEC aims to solve, whereby end users are assured that information returned from a DNS query is the same as that provided by the domain name holder; running through examples of how the DNS can be compromised such as cache poisoning and query interception. These assurances are established using cryptographic principles through a chain-of-trust originating from the root DNS servers, and propagated through signed Top-Level Domain (TLD) and subsequent sub-domain zones.

All major DNS resolvers support DNSSEC validation and 87% of TLDs were now signed, including .cr, for which around 31% of queries were validated. However, very few Second-Level Domains (SLDs) were signed in the country, which meant there was substantial room for improvement amongst DNS operators.
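For readers who want to check a domain themselves, the sketch below asks a validating resolver whether it could authenticate a name's DNSSEC chain by inspecting the AD (Authenticated Data) flag in the response. It assumes the dnspython package is installed; the resolver address and domain are illustrative.

```python
# Ask a validating resolver whether it authenticated a name via DNSSEC by
# checking the AD flag in its response. Requires the dnspython package.
# Resolver address and domain name are illustrative.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

def dnssec_validated(name: str, resolver: str = "8.8.8.8") -> bool:
    query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
    query.flags |= dns.flags.AD              # signal we understand authenticated data
    response = dns.query.udp(query, resolver, timeout=5)
    return bool(response.flags & dns.flags.AD)

print(dnssec_validated("nic.cr"))
```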

Rounding off the conference was a panel discussion on IPv6 success stories chaired by our colleague Christian O’Flaherty from ISOC’s Latin America & Caribbean Bureau. This involved Fred Baker, Claudio Chacon (CEDIA) and Elidier Moya (Costa Rican Ministry of Telecommunications), who discussed topics such as how the CEDIA research and education network was an early adopter of IPv6, which encouraged deployment elsewhere in Ecuador, the deployment experiences of the CERNET2 IPv6-only network in China, and the project to promote IPv6 in Costa Rica. Fred also outlined how the IETF was putting IPv6 examples into RFCs and Internet Drafts to encourage uptake, and highlighted the Chinese experience of running more than 256 users per IPv4 address, which had a measurable detrimental effect on performance.

The very positive outcome of the conference was the launch of the Costa Rican Network Operators Group (NOGCR). This aims to bring together the approximately 40 active ISPs in the country for the first time, and an IPv6 workshop involving 25 representatives of those ISPs was organised the following day at the NIC.CR premises with Fred Baker and the Deploy360 team.

Deploy360 would like to thank TICAL for hosting and supporting this ION. Thanks also to the speakers and everyone else who contributed towards making the event a successful and productive one.

Further Information

The proceedings from ION Costa Rica are available here, and the webcast will also be available on our YouTube channel shortly.

If you’re inspired by what you see and read, then please check out our Start Here page to understand how you can get started with IPv6 and DNSSEC.

Read more here:: www.internetsociety.org/deploy360/blog/feed/

TensorFlow to Hadoop By Way of Datameer

By Alex Woodie

Companies that want to use TensorFlow to execute deep learning models on big data stored in Hadoop may want to check out the new SmartAI offering unveiled by Datameer today.

Deep learning has emerged as one of the hottest techniques for turning massive sets of unstructured data into useful information, and Google’s TensorFlow is arguably the most popular programming and runtime framework for enabling it. So it made sense that Datameer, which was one of the first vendors to develop a soup-to-nuts Hadoop application for big data analytics, has now added support for TensorFlow to its Hadoop-based application.

With today’s unveiling of SmartAI, Datameer is providing a way to execute and operationalize TensorFlow models. “The objective here is to take the stuff that mad scientists are coming up with, and actually take it to the business,” Datameer’s Senior Director of Product Marketing John Morrell tells Datanami.

SmartAI, which is still in technical preview, does not help data scientists create the models; they will still do that in their favorite coding environment. Nor is it set up to train the models. If you’re interested in learning how that can be accomplished on Hadoop, Hortonworks has a good blog post on integrating TensorFlow assemblies into YARN.

Rather, Datameer’s new app is all about solving some of the thorny “last mile” problems that organizations often encounter as they’re moving a trained TensorFlow model from the lab into production.

“AI today has had some problems in terms of operationalization,” Morrell says. “When a data scientist comes up with a formula using their data science tools, they just chuck it over the wall to the IT guy, who then tries to turn it into code and custom-code the whole thing.”

Datameer seeks to help operationalize TensorFlow models with SmartAI

Instead of using scripts and custom coding, SmartAI aims to codify the TensorFlow work into its Hadoop application. Not only does Datameer provide a way to distribute TensorFlow algorithms to nodes in a Hadoop cluster by way of YARN, but it also hooks those models into its workflow to help solve some of the thorny issues around code re-use, data governance, and security.

“It allows you to take an AI model that you created in TensorFlow, plug it into Datameer, and then Datameer can operationalize those models,” Morrell continues. “It can operationalize those insights, directly on top of your data lake, and give you all the scale and security and governance and integration with your business systems that is lacking in the data science world.”
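As a generic illustration of what “operationalizing” a model involves (this is not Datameer’s API, just a hedged sketch), the snippet below loads a model a data scientist has exported with TensorFlow/Keras and scores incoming batches of prepared records; the model path and feature shapes are hypothetical.

```python
# Generic sketch of operationalizing a trained model: load an exported
# TensorFlow/Keras model and score incoming batches of prepared records.
# Model path and feature layout are hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("trained_fraud_model")  # exported by the data scientist

def score_batch(records: np.ndarray) -> np.ndarray:
    """records: shape (batch_size, n_features), already prepared upstream."""
    return model.predict(records, verbose=0)

batch = np.random.rand(32, 20).astype("float32")  # stand-in for real pipeline data
print(score_batch(batch)[:3])
```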

AI is only as good as the data that feeds it, says Datameer CTO Peter Voss. “We’re thrilled to connect the dots by allowing enterprises to bring together massive amounts of disparate data, prepare and design the data pipeline, and now ultimately feed the data into models that have the potential to radically optimize business models,” he says.

Deep learning is a form of machine learning that’s grown rapidly in popularity over the past year. The approach was initially used by Web giants like Google, Yahoo, and Microsoft to turbo-charge image recognition, voice recognition, and natural language processing (NLP) systems. This is typically done by training very large neural networks, with tens or even hundreds of layers, atop speedy GPU processors.

As deep learning racks up the wins and demonstrates better accuracy compared to other machine learning techniques, it’s starting to branch out into the broader market. Today data scientists are looking for other ways to leverage the enormous power of this form of unstructured data analysis. In particular, organizations are examining ways to use deep learning in areas like fraud detection, recommendation systems, healthcare analytics, and analysis of time-series IoT data.

Deep learning’s main advantage lies in speed and simplicity. Many data scientists are looking to use TensorFlow to replace models originally developed with Spark’s MLlib, as TensorFlow can be an order of magnitude faster than Spark, Morrell says. “You can train things about four to 100 times faster, and you can put together a model with 10 to 12 lines of code,” he says.
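For a sense of what “10 to 12 lines” looks like in practice, here is a small, hedged example (synthetic data, arbitrary network shape) that defines, compiles, and trains a Keras classifier with TensorFlow:

```python
# A small Keras classifier defined, compiled, and trained in roughly a dozen
# lines. Data is synthetic and the network shape is arbitrary.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 20).astype("float32")
y = (x.sum(axis=1) > 10).astype("int32")          # toy labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))            # [loss, accuracy]
```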

One of the great things about AI and deep learning in particular, Morrell adds, is that it takes feature engineering out of the equation, “because the deep learning model can automatically figure out what attributes are important,” he says. “This will dramatically speed up their cycles in terms of producing predictive types of models, and it will allow them to tackle many, many more problems.”

Datameer was one of the first vendors offering an end-to-end analytics application for Apache Hadoop that delivered many of the capabilities organizations need to operationalize their big data investments on the distributed platform. As technology evolved, so did Datameer, which added support for Apache Spark to boost the speed and provide access to more data science tools.

TensorFlow was the first deep learning framework added to Datameer’s application, but the company expects to add more frameworks over time, Morrell says.

Related Items:

Machine Learning, Deep Learning, and AI: What’s the Difference?

Why Deep Learning, and Why Now

Spark’s New Deep Learning Tricks

The post TensorFlow to Hadoop By Way of Datameer appeared first on Datanami.

Read more here:: www.datanami.com/feed/