Farmers to Congress: We Need Broadband, Too

By Jack Corrigan

Agriculture and energy companies in rural areas want to find efficiencies with Internet of Things devices, but a lack of high-speed access is a problem.

Read more here: www.nextgov.com/rss/all/

The post Farmers to Congress: We Need Broadband, Too appeared on IPv6.net.

Top network monitoring software and visibility tools

By Brandon Butler

Networking performance monitoring and diagnostics (NPMD) software, whether running as an independent appliance or embedded in networking equipment, can help stave off productivity issues for internal corporate users as well as those interacting with the network from the outside.

But with ever-increasing traffic on corporate networks, users attempting to optimize connections to the cloud and new Internet of Things devices bombarding the network, enterprises and network performance monitoring vendors face growing challenges.
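The kind of degradation NPMD tools watch for can be illustrated with a minimal sketch. The function below is purely illustrative (not any vendor's product): it flags links whose average round-trip time has drifted well above a baseline, which is the simplest form of the latency monitoring these tools perform.

```python
from statistics import mean

def flag_degraded_links(rtt_samples_ms, baseline_ms=20.0, factor=2.0):
    """Flag links whose average RTT exceeds `factor` times the baseline.

    rtt_samples_ms: dict mapping link name -> list of RTT samples in ms.
    Returns the names of links considered degraded.
    """
    return [link for link, samples in rtt_samples_ms.items()
            if samples and mean(samples) > baseline_ms * factor]

samples = {
    "wan-uplink": [18.0, 22.0, 19.5],      # healthy: mean under threshold
    "cloud-peering": [55.0, 61.0, 70.0],   # degraded: mean well over 40 ms
}
print(flag_degraded_links(samples))  # ['cloud-peering']
```

Production NPMD suites add far more (flow analysis, packet capture, baselining per time of day), but the core loop of "compare observed latency against an expected envelope" is the same.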

Read more here: www.networkworld.com/category/lan-wan/index.rss

The post Top network monitoring software and visibility tools appeared on IPv6.net.

Packet Says Your Data Center Shouldn’t Have an Opinion

By Christine Hall

Packet, the New York-based startup that provides dedicated high-performance bare-metal servers as a cloud service similar to AWS or Azure, has added 11 data center locations around the world to its previously existing four and launched a new edge computing service aimed at applications that require low-latency communication between end devices and compute infrastructure.

The new service is called Edge Compute, and the reason for such rapid expansion is to make the company’s services more attractive to companies with latency-sensitive workloads. But in general, Packet — which last year raised a $9.4 million Series A round led by the Japanese telco and technology investment powerhouse SoftBank — is on a mission to give developers access to unopinionated infrastructure. We’ll explain what that means later in the article.

“It’s no longer the case that having infrastructure in one or two locations is reasonable,” Zachary Smith, Packet’s co-founder and CEO, explained to Data Center Knowledge. “Having infrastructure in 20 locations might be table stakes for a latency-focused application. We consider this a chasm that’s been created between the people who can do this and those who can’t.”

See also: Vapor IO to Sell Data Center Colocation Services at Cell Towers

The locations that became available today are Los Angeles, Seattle, Dallas, Chicago, Ashburn (Virginia), Atlanta, Toronto, Frankfurt, Singapore, Hong Kong, and Sydney. This is in addition to the company’s existing locations in New York, Sunnyvale, Amsterdam, and Tokyo. Another expansion is scheduled in October, which will add Paris, London, São Paulo, and Mumbai to the mix.

“What we’re doing is taking our platform that’s already in four locations globally and expanding it to within five or ten milliseconds of all the major population centers,” he said.
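Those latency targets translate directly into geography. A rough back-of-the-envelope sketch (my own illustration, not Packet's math): light in fiber travels at roughly two-thirds the speed of light in vacuum, about 200 km per millisecond, so a one-way latency budget bounds how far a data center can sit from its users.

```python
# Rough rule of thumb: signal propagation in optical fiber is ~200 km/ms
# (about 2/3 of c). Real paths are longer than straight lines, so actual
# reach is smaller; this is only an upper bound.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def reach_km(one_way_latency_ms):
    """Approximate maximum fiber distance reachable in a one-way latency budget."""
    return one_way_latency_ms * SPEED_IN_FIBER_KM_PER_MS

print(reach_km(5))   # 1000.0 -> a 5 ms budget keeps you within ~1,000 km
print(reach_km(10))  # 2000.0 -> a 10 ms budget, within ~2,000 km
```

That is why "five or ten milliseconds of all the major population centers" requires many sites: a single region can only cover users within roughly a thousand kilometers at the tighter budget.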

Edge Computing for IoT, Self-Driving Cars, and More

As the name suggests, the new edge computing service is about better serving the outer perimeter of the internet, which is expected to see rapid growth with 5G, the upcoming high-speed mobile network standard that promises to enable everything from more sophisticated and bandwidth-hungry Internet of Things devices to self-driving cars and augmented and virtual reality systems.

“While edge compute is still in its infancy, new experiences are driving demand for distributed infrastructure, especially as software continues its relentless pursuit down the stack,” Smith said. “We believe that the developers building these new experiences are hungry for a distributed, unopinionated, and yet fully automated compute infrastructure, and that’s what we’re bringing to the market today.”

For the time being, the new Edge Compute locations feature a single server configuration, Type 1E, which features an Intel E3-1578L v5 processor, Intel IRIS GPU, 32GB of RAM, 240GB of SSD storage, and 10 Gbps network interfaces. Packet is planning to add more configurations later, including servers powered by ARMv8 processors. The Type 1E lists at $0.50 an hour, but is also available through Packet’s new spot market, similar to AWS’s Spot Instances, offering marketplace-style pricing in all of Packet’s 15 locations.
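To put the list price in context, a quick cost sketch (my own arithmetic, using the article's $0.50/hour figure and an average 730-hour month):

```python
HOURS_PER_MONTH = 730  # average hours in a month (8,760 / 12)

def monthly_cost(hourly_rate, hours=HOURS_PER_MONTH):
    """On-demand cost of running one server for a month at a given hourly rate."""
    return round(hourly_rate * hours, 2)

print(monthly_cost(0.50))  # 365.0 -> a Type 1E at list price, running 24/7
```

A spot market like the one Packet announced would let that figure float below list price when capacity is idle, in exchange for the risk of losing the server when demand rises — the same trade-off as AWS Spot Instances.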

For users requiring customized hardware, Packet has a program it calls Private Deployment which has been extended to the new data center locations. Through this program, users can deploy custom configurations while benefiting from Packet’s platform and network automation.

Equinix and Interxion

For the record, in North America and Asia Pacific, Packet generally houses its servers in data centers operated by Equinix, and in Europe it partners primarily with Interxion. Smith said the amount of floor space the company occupies varies from data center to data center.

“In some of our major facilities we’re obviously buying wholesale by the megawatt, because we’re in the cloud business and have thousands of servers. Some of the edge locations are fairly primitive, starting with small cages and allowing us to expand from there.”

Many Locations, No Opinions

Packet’s unique model helps remove infrastructure restrictions and the layers of abstraction that are inherent in most public clouds, Smith explained. “For any sort of innovative startup, it’s almost an impossibility for you to deploy infrastructure at 15 or 20 locations around the globe [on your own]. You’ve got to rely on somebody else’s infrastructure. At that point, you’re really consuming their opinion, and their viewpoint, and their abstraction away from the hardware.”

Indeed, Packet isn’t your father’s cloud company.

Unlike AWS, Azure, and GCP, which are based to a great extent on proprietary technologies and come with a degree of vendor lock-in, Packet is focused on the hardware. Software is available — plenty of operating systems, OpenStack, and more — but users can take the BYO approach and bring their own software — including operating systems. All instances are spun up on bare metal, with customers getting a fully isolated dedicated server with no shared resources. Packet supplies an extensive set of automation tools, but otherwise hands customers the cloud equivalent of an on-prem server, without vendor lock-in.

“At its simplest, we give you a physical server as if you went into a data center and racked it,” Smith said.

This means it’s not for everybody. It’s designed for experienced DevOps folks. Those who want to deploy containers with a couple of clicks without really knowing what they’re doing should probably look for another solution. According to Smith, about a third of the workloads on Packet are cloud native, specifically Kubernetes, Docker, Mesosphere, or automated DevOps services. Another large portion is enterprise IT, and includes applications running on OpenStack, VMware, CloudStack, and Hadoop.

“We offer, effectively, primitives,” he said. “The ability to automate hardware, the ability to choose your own operating system (or build your own), or the ability to choose your own networking stack. We’re doing so in a way that’s very developer-friendly, so you can use DevOps tools such as Ansible, Terraform, Libcloud, jclouds, or whatever you want — or your own scripting — to automate that hardware as if it was in your own data center.”
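What "automating hardware through an API" looks like in practice can be sketched as follows. This is an illustrative sketch only: the endpoint path and field names mirror the general style of Packet's public REST API, but they are assumptions for illustration, not its documented schema.

```python
import json

# Hypothetical base URL and payload fields, modeled loosely on a
# Packet-style provisioning API; check the real API docs before use.
API_BASE = "https://api.packet.net"

def device_request(project_id, hostname, plan="baremetal_1e", facility="ewr1",
                   operating_system="ubuntu_16_04"):
    """Build the URL and JSON body for provisioning one bare-metal server."""
    url = f"{API_BASE}/projects/{project_id}/devices"
    body = {
        "hostname": hostname,
        "plan": plan,                       # e.g. the Type 1E edge configuration
        "facility": facility,               # data center location code
        "operating_system": operating_system,
    }
    return url, json.dumps(body)

url, body = device_request("proj-123", "edge-node-01")
print(url)  # https://api.packet.net/projects/proj-123/devices
```

Tools like Terraform or Ansible wrap exactly this kind of call, which is how the same playbook can target a cloud-hosted bare-metal server as if it were a machine in your own rack.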

That’s exactly the way Smith and his team want to keep it.

“We have no visions of moving up the stack to do a proprietary or opinionated service. There are enough options in the marketplace for that.”

Read more here: datacenterknowledge.com/feed/

The post Packet Says Your Data Center Shouldn’t Have an Opinion appeared on IPv6.net.

Armis Debuts IoT Security Platform to Improve Visibility and Control

Startup led by Unit 8200 veterans aims to help discover Internet of Things devices and limit potential security risks.

Read more here: www.eweek.com/rss.xml

Level 3: Lessons From Recent DDoS Attacks

By Mitch Wagner

Internet of Things devices are becoming attack vectors, and the situation will only get worse with the proliferation of devices and malware, as well as the spread of toolkits that hackers can use to create attacks.

Read more here: www.lightreading.com/rss_simple.asp?f_n=1249&f_sty=News%20Wire&f_ln=IPv6+-+Latest+News+Wire

The post Level 3: Lessons From Recent DDoS Attacks appeared on IPv6.net.

Nvidia’s Huang Sees AI ‘Cambrian Explosion’

By George Leopold

Processing power and cloud access to the developer tools used to train machine-learning models are making artificial intelligence ubiquitous across computing platforms and data frameworks, insists Nvidia CEO Jensen Huang.

One consequence of this AI revolution will be “a Cambrian explosion of autonomous machines” ranging from billions of AI-powered Internet of Things devices to autonomous vehicles, Huang forecasts.

Along with a string of AI-related announcements coming out of the GPU powerhouse’s annual technology conference, Huang used a May 24 blog post to tout the rollout of Google’s (NASDAQ: GOOGL) latest iteration of its TensorFlow machine-learning framework, the Cloud Tensor Processing Unit, or TPU.

The combination of Nvidia’s (NASDAQ: NVDA) new Volta GPU architecture and Google’s TPU illustrates how—in a variation on a technology theme—”AI is eating software,” Huang asserted.

Arguing that GPUs are defying the predicted end of Moore’s Law, Huang further argued: “AI developers are racing to build new frameworks to tackle some of the greatest challenges of our time. They want to run their AI software on everything from powerful cloud services to devices at the edge of the cloud.”

Along with the muscular Volta architecture, Nvidia earlier this month also unveiled a GPU-accelerated cloud platform geared toward deep learning. The AI development stack runs on the company’s distribution of Docker containers and is touted as “purpose built” for developing deep learning models on GPUs.

That dovetails with Google’s “AI-first” strategy that includes the Cloud TPU initiative aimed at automating AI development. The new TPU is a four-processor board described as a machine-learning “accelerator” that can be accessed from the cloud and used to train machine-learning models.

Google said its Cloud TPU could be mixed-and-matched with the Volta GPU or Skylake CPUs from Intel (NASDAQ: INTC).

Cloud TPUs were designed to be clustered in datacenters, with 64 stacked processors dubbed “TPU pods” capable of 11.5 petaflops, according to Google CEO Sundar Pichai. The cloud-based Tensor processors are aimed at compute-intensive training of machine-learning models as well as real-time tasks like making inferences about images.
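The pod figure implies the per-board performance, a simple division worth making explicit (my own arithmetic from the numbers quoted above):

```python
# Figures quoted in the article: a "TPU pod" of 64 boards at 11.5 petaflops.
POD_PETAFLOPS = 11.5
TPUS_PER_POD = 64

# 1 petaflop = 1,000 teraflops
per_tpu_teraflops = POD_PETAFLOPS * 1000 / TPUS_PER_POD
print(round(per_tpu_teraflops, 1))  # 179.7 -> roughly 180 teraflops per board
```

That ~180-teraflop-per-board figure is the number Google quoted for an individual Cloud TPU at the time, so the pod math is internally consistent.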

Along with TensorFlow, Huang said Nvidia’s Volta GPU would be optimized for a range of machine-learning frameworks, including Caffe2 and Microsoft Cognitive Toolkit.

Nvidia is meanwhile releasing as open source technology its version of a “dedicated, inferencing TPU” called the Deep Learning Accelerator that has been designed into its Xavier chip for AI-based autonomous vehicles.

In parallel with those efforts, Google has been using its TPUs for the inference stage of a deep neural network since 2015. TPUs are credited with helping to bolster the effectiveness of various AI workloads, including language translation and image recognition programs, the company said.

Processing power, cloud access, and machine-learning training models are combining to fuel Huang’s projected “Cambrian explosion” of AI technology: “Deep learning is a strategic imperative for every major tech company,” he observed. “It increasingly permeates every aspect of work from infrastructure, to tools, to how products are made.”

Recent items:

New AI Chips to Give GPUs a Run For Deep Learning Money

‘Cloud TPU’ Bolsters Google’s ‘AI-First’ Strategy

The post Nvidia’s Huang Sees AI ‘Cambrian Explosion’ appeared first on Datanami.

Read more here: www.datanami.com/feed/