Nicola Yates starts work as UK urban innovation chief executive

By Sheetal Kumbhar

Nicola Yates OBE has begun her role as chief executive officer of the UK’s centre of excellence in urban innovation, Future Cities Catapult. A thinker and practitioner in public service delivery, Nicola has reportedly been at the forefront of urban regeneration for the past 10 years, holding CEO positions at Bristol City Council, Hull City […]

The post Nicola Yates starts work as UK urban innovation chief executive appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Android Automotive hands-on: Google is finally ready to talk about its car OS

By Ron Amadeo

MOUNTAIN VIEW, Calif.—After years of rumors and speculation, Google finally announced a plan to put a Google-blessed car version of Android in a production vehicle. Audi and Volvo have both signed up to have their “next gen” vehicles powered by Google’s OS. Previously we’ve seen “concept” Android-as-a-car-OS displays from Google in the form of a “stock Android” car OS in a Maserati and an FCA/Google concept for a “skinned Android” infotainment system. With Audi and Volvo, the “concepts” are over, and we’re finally seeing a work-in-progress product that will actually make it to market. And while previous concepts were quietly shown off with no one willing to comment, Google finally seems ready to talk about how Android in the car will work.

First off, there isn’t really a name for the project—internally it’s called “Android Automotive;” externally it doesn’t really have a name other than “Android.” This is a little weird since the smartwatch, TV, and IoT variants of Android all have special names to distinguish them from the smartphone UI. In this case, the most obvious name is already taken by Android Auto, a smartphone-based projected car interface.

Read more here: feeds.arstechnica.com/arstechnica/index?format=xml

TELMEX rolls out #IPv6, new data from APNIC

By Mat Ford

It’s always exciting to see a new operator start turning up IPv6 connectivity for their customers. Recently, eyes have been turning to TELMEX, a telecommunications company headquartered in Mexico City that provides products and services in Mexico, Argentina, Chile, Colombia, Brazil, Ecuador, Peru, Venezuela and other countries across Latin America. As you can see from the chart below, their IPv6 deployment has been growing rapidly of late.

[Chart: TELMEX IPv6 deployment over time]

You can view the full listing of IPv6 network operator measurements for this month.

We are delighted to announce that APNIC is now supporting World IPv6 Launch as a new data source. APNIC collects randomized measurements of IPv6 capability and preference through advertisements placed on websites worldwide. The advert runs specially crafted HTML5/JavaScript and measures a range of properties across wired, wireless and cellular networks. Per-economy and per-ASN daily totals are calculated using the RIR delegation stats and daily BGP dumps to map the origin AS and economy of registration of each tested client’s IP address. The measurement has run continuously since 2010 and currently collects around 10 million samples per day. Further information is available here.
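
To make that methodology concrete, below is a minimal sketch of the per-economy attribution step, assuming the standard pipe-delimited RIR extended delegation statistics format (registry|cc|type|start|value|date|status). The function names are illustrative, and a production pipeline would additionally join daily BGP dumps to recover the origin AS of each tested address.

```python
# A minimal sketch of mapping a measured client IPv4 address to its economy
# of registration using an RIR extended delegation statistics file. For
# illustration only: a real pipeline also joins daily BGP dumps for origin AS.
import bisect
import ipaddress

def load_delegations(path):
    """Parse delegated stats rows into sorted (start, count, economy) tuples."""
    rows = []
    with open(path) as f:
        for line in f:
            fields = line.strip().split("|")
            # Skip comments, the version header, and summary lines.
            if line.startswith("#") or len(fields) < 7 or fields[2] != "ipv4":
                continue
            _registry, cc, _type, start, value = fields[:5]
            rows.append((int(ipaddress.IPv4Address(start)), int(value), cc))
    rows.sort()
    return rows

def economy_of(ip, rows):
    """Return the economy code whose delegated range covers `ip`, or None."""
    addr = int(ipaddress.IPv4Address(ip))
    i = bisect.bisect_right(rows, (addr, float("inf"), "")) - 1
    if i >= 0:
        start, count, cc = rows[i]
        if start <= addr < start + count:
            return cc or None
    return None
```

With a published delegated stats file loaded via load_delegations, a call like economy_of("1.0.1.1", rows) would return the economy code under which that address block is registered.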

If you’re a network operator deploying IPv6 and would like to join TELMEX and the other networks that make up the ranks of World IPv6 Launch participants, please register your network for measurement.

Read more here: www.worldipv6launch.org/feed/

NVIDIA CEO: AI Workloads Will “Flood” Data Centers

By Yevgeniy Sverdlik

During a keynote at his company’s big annual conference in Silicon Valley last week, NVIDIA CEO Jensen Huang took several hours to announce the chipmaker’s latest products and innovations, but also to drive home the inevitability of the force that is Artificial Intelligence.

NVIDIA is the top maker of GPUs used in computing systems for Machine Learning, currently the part of the AI field where most action is happening. GPUs work in tandem with CPUs, accelerating the processing necessary to both train machines to do certain tasks and to execute them.
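
As a rough illustration of that division of labor, here is a minimal training-and-inference sketch in PyTorch (our framework choice for illustration, not something NVIDIA prescribes): the CPU stages data and drives the loop, while the heavy matrix math runs on a GPU when one is available.

```python
# A minimal sketch of the CPU/GPU split in Machine Learning: the CPU stages
# data and drives the loop; the matrix-heavy training and inference math runs
# on the GPU when one is present.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).to(device)                                  # parameters live in GPU memory
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# A fake batch standing in for real data, prepared on the CPU then moved over.
inputs = torch.randn(64, 784).to(device)
labels = torch.randint(0, 10, (64,)).to(device)

for step in range(100):                       # training: the power-hungry phase
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():                         # inference: applying what was learned
    predictions = model(inputs).argmax(dim=1)
```

The same pattern scales from this toy model up to the multi-GPU training servers discussed later in the piece; training is the computationally intensive phase, inference the latency-sensitive one.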

“Machine Learning is one of the most important computer revolutions ever,” Huang said. “The number of [research] papers in Deep Learning is just absolutely explosive.” (Deep Learning is a class of Machine Learning algorithms where innovation has skyrocketed in recent years.) “There’s no way to keep up. There is now 10 times as much investment in AI companies since 10 years ago. There’s no question we’re seeing explosive growth.”

While AI and Machine Learning together constitute one of Gartner’s top 10 strategic technology trends for 2017, most other trends on the list – such as conversational systems, virtual and augmented reality, Internet of Things, and intelligent apps – are accelerating in large part because of advances in Machine Learning.

“Over the next 10 years, virtually every app, application and service will incorporate some level of AI,” Gartner fellow and VP David Cearley said in a statement. “This will form a long-term trend that will continually evolve and expand the application of AI and machine learning for apps and services.”

No Longer Just for Hyper-Scalers

Growth in Machine Learning means a “flood” of AI workloads is headed for the world’s data center floors, Huang said. Up until now, the most impactful production applications of Deep Learning have been developed and deployed by a handful of hyper-scale cloud giants – such as Google, Microsoft, Facebook, and Baidu – but NVIDIA sees the technology starting to proliferate beyond the massive cloud data centers.

“AI is just another kind of computing, and it’s going to hit many, many markets,” Ian Buck, the NVIDIA VP in charge of the company’s Accelerated Computing unit, told Data Center Knowledge in an interview. While there’s no doubt that Machine Learning will continue growing as a portion of the total computing power inside cloud data centers, he expects to see it in data centers operated by everybody in the near future — from managed service providers to banks. “It’s going to be everywhere.”

In preparation for this flood, data center managers need to answer some basic questions: Will it make more sense for my company to host Deep Learning workloads in the cloud or on-premises? Will it be a hybrid of the two? How much of the on-prem infrastructure will be needed for training Deep Learning algorithms? How much of it will be needed for inference? If we’ll have a lot of power-hungry training servers, will we go for maximum performance or give up some performance in exchange for higher efficiency of the whole data center? Will we need inference capabilities at the edge?

Cloud or On-Premises? Probably Both

Today, many companies large and small are in early research phases, looking for ways Deep Learning can benefit their specific businesses. One data center provider that specializes in hosting infrastructure for Deep Learning told us most of their customers hadn’t yet deployed their AI applications in production.

This drives demand for rentable GPUs in the cloud, which Amazon Web Services, Microsoft Azure, and Google Cloud Platform are happy to provide. By using their services, researchers can access lots of GPUs without having to spend a fortune on on-premises hardware.

“We’re seeing a lot of demand for it [in the] cloud,” Buck said. “Cloud is one of the reasons why all the hyper-scalers and cloud providers are excited about GPUs.”

A common approach, however, is combining some on-premises systems with cloud services. Berlin-based AI startup Twenty Billion Neurons, for example, synthesizes video material to train its AI algorithm to understand the way physical objects interact with their environment. Because those videos are so data-intensive, twentybn uses an on-premises compute cluster at its lab in Toronto to handle them, while outsourcing the actual training workloads to cloud GPUs in a Cirrascale data center outside San Diego.

Read more: This Data Center is Designed for Deep Learning

Cloud GPUs are also a good way to start exploring Deep Learning for a company without committing a lot of capital upfront. “We find that cloud is a nice lubricant to getting adoption up for GPUs in general,” Buck said.

Efficiency v. Performance

If your on-premises Deep Learning infrastructure will do a lot of training – the computationally intensive applications used to teach neural networks things like speech and image recognition – prepare for power-hungry servers with lots of GPUs on every motherboard. That means higher power densities than most of the world’s data centers have been designed to support (we’re talking up to 30kW per rack).

Read more: Deep Learning Driving Up Data Center Power Density

However, it doesn’t automatically mean you’ll need the highest-density cooling infrastructure possible. Here, the tradeoff is between performance and the number of users, or workloads, the infrastructure can support simultaneously. Maximum performance means the highest-power GPUs money can buy, but it’s not necessarily the most efficient way to go.

NVIDIA’s latest Volta GPUs, expected to hit the market in the third quarter, deliver maximum performance at 300 watts, but if you slash the power in half you will still get 80 percent of the number-crunching muscle, Buck said. “If you back off power a little bit, you still maintain quite a bit of performance. It means I can up the number of servers in a rack and max out my data center. It’s just an efficiency choice.”
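
Some back-of-the-envelope rack math shows why. The sketch below uses Buck’s figures (full performance at 300 W, roughly 80 percent of it at 150 W) and the 30 kW rack budget mentioned earlier; the per-server overhead figure is our own assumption for illustration.

```python
# Back-of-the-envelope rack math using Buck's figures: a Volta-class GPU at
# 300 W gives 1.0x throughput, and capped to 150 W it keeps ~0.8x. The 30 kW
# rack budget comes from the article; per-server overhead is our assumption.
RACK_BUDGET_W = 30_000
SERVER_OVERHEAD_W = 800      # assumed: CPUs, memory, fans per 8-GPU server
GPUS_PER_SERVER = 8

def rack_throughput(gpu_watts, relative_perf):
    server_w = SERVER_OVERHEAD_W + GPUS_PER_SERVER * gpu_watts
    servers = RACK_BUDGET_W // server_w
    return servers, servers * GPUS_PER_SERVER * relative_perf

for label, watts, perf in [("max performance", 300, 1.0), ("power-capped", 150, 0.8)]:
    servers, throughput = rack_throughput(watts, perf)
    print(f"{label}: {servers} servers/rack, {throughput:.0f} GPU-equivalents")
# max performance: 9 servers/rack, 72 GPU-equivalents
# power-capped: 15 servers/rack, 96 GPU-equivalents
```

Under these assumptions, the power-capped rack fits 15 servers instead of 9 and delivers about a third more aggregate throughput from the same 30 kW, which is the efficiency choice Buck describes.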

What about the Edge?

Inferencing workloads – applications neural networks use to apply what they’ve been trained to do – require fewer GPUs and less power, but they have to perform extremely fast. (Alexa wouldn’t be much fun to use if it took even 5 seconds to respond to a voice query.)

While not particularly difficult to handle on-premises, one big question to answer about inferencing servers for the data center manager is how close they have to be to where input data originates. If your corporate data centers are in Ashburn, Virginia, but your Machine Learning application has to provide real-time suggestions to users in Dallas or Portland, chances are you’ll need some inferencing servers in or near Dallas and Portland to make it actually feel close to real-time. If your application has to do with public safety – analyzing video data at intersections to help navigate autonomous vehicles, for example – it’s very likely that you’ll need some inferencing horsepower right at those intersections.
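
A quick propagation-delay estimate illustrates the point. The sketch below considers only speed-of-light-in-fiber delay over approximate great-circle distances; real routes are longer and add queuing and compute time, so actual round trips would be slower still.

```python
# A lower-bound latency estimate for remote inference: propagation delay in
# optical fiber alone, before queuing or compute. Distances are approximate
# great-circle figures; real fiber paths are longer.
FIBER_KM_PER_MS = 200        # light travels ~200 km per ms in fiber (~2/3 c)

def min_rtt_ms(distance_km):
    return 2 * distance_km / FIBER_KM_PER_MS

for city, km in [("Dallas", 1900), ("Portland", 3700)]:
    print(f"Ashburn -> {city}: >= {min_rtt_ms(km):.0f} ms RTT before any compute")
# Ashburn -> Dallas: >= 19 ms
# Ashburn -> Portland: >= 37 ms
```

Even this lower bound puts Ashburn-to-Portland round trips near 40 ms before any inference work happens, which is why placing inference servers near users, or at the intersections themselves, can be necessary for a real-time feel.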

“Second Era of Computing”

Neither shopping suggestions on Amazon.com (one of the earliest uses of Machine Learning in production) nor Google search predictions were written out as sequences of specific if/then instructions by software engineers, Huang said, referring to the rise of Machine Learning as a “second era of computing.”

And it’s growing quickly, permeating all industry verticals, which means data center managers in every industry have some homework to do.

Read more here: datacenterknowledge.com/feed/

Two motorcycles, 10,000 km along Silk Road 4.0, the Internet of Things … and you?

By Jeremy Cowan

You probably remember the Ancient Silk Road from geography and history lessons at school. And you may be wondering what this has to do with the Internet of Things. Well, grab a coffee, says Jeremy Cowan, and settle back for a few minutes while I tell you about two delightfully wacky modern adventurers and their […]

The post Two motorcycles, 10,000 km along Silk Road 4.0, the Internet of Things … and you? appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

BKNIX Peering Forum 2017 Next Week

By Aftab Siddiqui

The BKNIX Peering Forum 2017 will be held next week, on 15-16 May, in Bangkok. Once again the agenda is packed with interesting talks from international and local speakers. 143 people have already registered for the event, and hopefully the number will rise.

From Deploy360, team member Aftab Siddiqui will present on MANRS (Mutually Agreed Norms for Routing Security) and give an update on the Internet Society’s global IXP initiative. If you are planning to attend the event, don’t forget to catch up with him.

The BKNIX Peering Forum (BPF) consists of two parts:

  • Part I: Panel discussions, meetings and seminars, where distinguished speakers from multiple regions deliver talks and exchange experiences in IXP connection and peering, including Internet services and content services, in order to broaden perspectives and build business opportunities for participants.

  • Part II: Business meetings and discussions, with open opportunities for participants to meet and discuss business matters.

About BKNIX

Bangkok Internet Exchange (BKNIX) was inaugurated in February 2015, after long collaborative efforts by various organisations such as the Internet Society, NSRC, IIJ, Google, Nokia/Alcatel-Lucent, and Netnod. The neutral Internet Exchange Point was started with the idea that it would add significant social and economic value to Southeast Asia by providing a neutral community IXP and would drive operational efficiency for Internet service providers (ISPs). Content providers are able to diversify local and regional peering and data exchange by offering services through BKNIX, allowing wider access and more services for the general public.

After almost a year, the BKNIX Peering Forum 2016 celebrated the steady growth of BKNIX. The two-day event, held on 9-10 May 2016 with free registration, was attended by more than 140 participants from the local and international technical community and received good media coverage.

As of today, 16 members are connected to the IX, exchanging more than 15 Gbps of IPv4 traffic and almost 600 Mbps of IPv6 traffic.

We are looking forward to the BKNIX Peering Forum 2017. Let us know if you’ll be there!

We don’t have any information that it will be webcast, but video recordings of all the presentations will be uploaded after the event.

Read more here: www.internetsociety.org/deploy360/blog/feed/