Nvidia’s Huang Sees AI ‘Cambrian Explosion’

By George Leopold

Processing power and cloud access to the developer tools used to train machine-learning models are making artificial intelligence ubiquitous across computing platforms and data frameworks, insists Nvidia CEO Jensen Huang.

One consequence of this AI revolution will be “a Cambrian explosion of autonomous machines” ranging from billions of AI-powered Internet of Things devices to autonomous vehicles, Huang forecasts.

Along with a string of AI-related announcements coming out of the GPU powerhouse’s annual technology conference, Huang used a May 24 blog post to tout Google’s (NASDAQ: GOOGL) rollout of the Cloud Tensor Processing Unit, or TPU, the latest hardware accelerator for its TensorFlow machine-learning framework.

The combination of Nvidia’s (NASDAQ: NVDA) new Volta GPU architecture and Google’s TPU illustrates how—in a variation on a technology theme—“AI is eating software,” Huang asserted.

Arguing that GPUs are defying the predicted end of Moore’s Law, Huang added: “AI developers are racing to build new frameworks to tackle some of the greatest challenges of our time. They want to run their AI software on everything from powerful cloud services to devices at the edge of the cloud.”

Nvidia CEO Jensen Huang

Along with the muscular Volta architecture, Nvidia earlier this month also unveiled a GPU-accelerated cloud platform geared toward deep learning. The AI development stack runs on the company’s distribution of Docker containers and is touted as “purpose built” for developing deep learning models on GPUs.

That dovetails with Google’s “AI-first” strategy that includes the Cloud TPU initiative aimed at automating AI development. The new TPU is a four-processor board described as a machine-learning “accelerator” that can be accessed from the cloud and used to train machine-learning models.

Google said its Cloud TPU could be mixed-and-matched with the Volta GPU or Skylake CPUs from Intel (NASDAQ: INTC).

Cloud TPUs were designed to be clustered in datacenters, with 64-processor clusters dubbed “TPU pods” capable of 11.5 petaflops, according to Google CEO Sundar Pichai. The cloud-based Tensor processors are aimed at compute-intensive training of machine-learning models as well as real-time tasks like making inferences about images.
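A quick back-of-the-envelope check, sketched below in Python using only the figures quoted above (64 processors per pod, 11.5 petaflops per pod), shows what those pod numbers imply per Cloud TPU board:

# Back-of-the-envelope arithmetic using only the figures quoted above:
# a 64-processor "TPU pod" rated at 11.5 petaflops.
POD_PETAFLOPS = 11.5
PROCESSORS_PER_POD = 64

per_tpu_teraflops = POD_PETAFLOPS * 1000 / PROCESSORS_PER_POD
print(f"Implied throughput per Cloud TPU board: {per_tpu_teraflops:.0f} teraflops")
# Roughly 180 teraflops per board, which matches the per-board figure
# Google cited for the Cloud TPU at launch.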

Along with TensorFlow, Huang said Nvidia’s Volta GPU would be optimized for a range of machine-learning frameworks, including Caffe2 and Microsoft Cognitive Toolkit.

Nvidia, meanwhile, is open-sourcing its version of a “dedicated, inferencing TPU,” called the Deep Learning Accelerator, which has been designed into its Xavier chip for AI-based autonomous vehicles.

In parallel with those efforts, Google has been using its TPUs for the inference stage of a deep neural network since 2015. TPUs are credited with helping to bolster the effectiveness of various AI workloads, including language translation and image recognition programs, the company said.

Processing power, cloud access and machine-learning training models are combining to fuel Huang’s projected “Cambrian explosion” of AI technology: “Deep learning is a strategic imperative for every major tech company,” he observed. “It increasingly permeates every aspect of work from infrastructure, to tools, to how products are made.”

Recent items:

New AI Chips to Give GPUs a Run For Deep Learning Money

‘Cloud TPU’ Bolsters Google’s ‘AI-First’ Strategy

The post Nvidia’s Huang Sees AI ‘Cambrian Explosion’ appeared first on Datanami.

Read more here:: www.datanami.com/feed/

The post Nvidia’s Huang Sees AI ‘Cambrian Explosion’ appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

What is this ‘GDPR’ I keep hearing about and how does it affect me?

By Sheetal Kumbhar

A lot has been written and said lately about GDPR, not least of all by VanillaPlus. (See: GDPR compliance: We need to comply but where to begin? and More than half of companies in data protection survey will be affected by GDPR, but 5% don’t know what it is.) In case your compliance people have been hiding […]

The post What is this ‘GDPR’ I keep hearing about and how does it affect me? appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Smartphone Obsession Grows with 25% of Millennials Spending More Than 5 Hours Per Day on the Phone

By IoT – Internet of Things

A new study released by B2X, the leading provider of customer care for smart mobile and Internet of Things (IoT) devices, found that global consumers’ dependence on their smartphones continues to grow, while they also rapidly adopt a new generation of IoT devices. Accordingly, consumers continue to pay more and more for their smartphones, and […]

The post Smartphone Obsession Grows with 25% of Millennials Spending More Than 5 Hours Per Day on the Phone appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

The post Smartphone Obsession Grows with 25% of Millennials Spending More Than 5 Hours Per Day on the Phone appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

NVIDIA CEO: AI Workloads Will “Flood” Data Centers

By Yevgeniy Sverdlik

During a keynote at his company’s big annual conference in Silicon Valley last week, NVIDIA CEO Jensen Huang took several hours to announce the chipmaker’s latest products and innovations, but also to drive home the inevitability of the force that is Artificial Intelligence.

NVIDIA is the top maker of GPUs used in computing systems for Machine Learning, currently the part of the AI field where most action is happening. GPUs work in tandem with CPUs, accelerating the processing necessary to both train machines to do certain tasks and to execute them.

“Machine Learning is one of the most important computer revolutions ever,” Huang said. “The number of [research] papers in Deep Learning is just absolutely explosive.” (Deep Learning is a class of Machine Learning algorithms where innovation has skyrocketed in recent years.) “There’s no way to keep up. There is now 10 times as much investment in AI companies since 10 years ago. There’s no question we’re seeing explosive growth.”

While AI and Machine Learning together make up one of Gartner’s top 10 strategic technology trends for 2017, most of the other trends on the list – such as conversational systems, virtual and augmented reality, Internet of Things, and intelligent apps – are accelerating in large part because of advances in Machine Learning.

“Over the next 10 years, virtually every app, application and service will incorporate some level of AI,” Gartner fellow and VP David Cearley said in a statement. “This will form a long-term trend that will continually evolve and expand the application of AI and machine learning for apps and services.”

No Longer Just for Hyper-Scalers

Growth in Machine Learning means a “flood” of AI workloads is headed for the world’s data center floors, Huang said. Up until now, the most impactful production applications of Deep Learning have been developed and deployed by a handful of hyper-scale cloud giants – such as Google, Microsoft, Facebook, and Baidu – but NVIDIA sees the technology starting to proliferate beyond the massive cloud data centers.

“AI is just another kind of computing, and it’s going to hit many, many markets,” Ian Buck, the NVIDIA VP in charge of the company’s Accelerated Computing unit, told Data Center Knowledge in an interview. While there’s no doubt that Machine Learning will continue growing as a portion of the total computing power inside cloud data centers, he expects to see it in data centers operated by everybody in the near future — from managed service providers to banks. “It’s going to be everywhere.”

In preparation for this flood, data center managers need to answer some key basic questions: Will it make more sense for my company to host Deep Learning workloads in the cloud or on-premises? Will it be a hybrid of the two? How much of the on-prem infrastructure will be needed for training Deep Learning algorithms? How much of it will be needed for inference? If we’ll have a lot of power-hungry training servers, will we go for maximum performance or give up some performance in exchange for higher efficiency of the whole data center? Will we need inference capabilities at the edge?

Cloud or On-Premises? Probably Both

Today, many companies large and small are in early research phases, looking for ways Deep Learning can benefit their specific businesses. One data center provider that specializes in hosting infrastructure for Deep Learning told us most of their customers hadn’t yet deployed their AI applications in production.

This drives demand for rentable GPUs in the cloud, which Amazon Web Services, Microsoft Azure, and Google Cloud Platform are happy to provide. By using their services, researchers can access lots of GPUs without having to spend a fortune on on-premises hardware.

“We’re seeing a lot of demand for it [in the] cloud,” Buck said. “Cloud is one of the reasons why all the hyper-scalers and cloud providers are excited about GPUs.”

A common approach, however, is combining some on-premises systems with cloud services. Berlin-based AI startup Twenty Billion Neurons, for example, synthesizes video material to train its AI algorithm to understand the way physical objects interact with their environment. Because those videos are so data-intensive, twentybn uses an on-premises compute cluster at its lab in Toronto to handle them, while outsourcing the actual training workloads to cloud GPUs in a Cirrascale data center outside San Diego.

Read more: This Data Center is Designed for Deep Learning

Cloud GPUs are also a good way to start exploring Deep Learning for a company without committing a lot of capital upfront. “We find that cloud is a nice lubricant to getting adoption up for GPUs in general,” Buck said.

Efficiency v. Performance

If your on-premises Deep Learning infrastructure will do a lot of training – the computationally intensive applications used to teach neural networks things like speech and image recognition – prepare for power-hungry servers with lots of GPUs on every motherboard. That means higher power densities than most of the world’s data centers have been designed to support (we’re talking up to 30kW per rack).
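To make the density point concrete, here is a minimal sketch in Python; the server configuration and host overhead below are illustrative assumptions, and only the roughly 30 kW per rack figure comes from the article:

# Illustrative rack-power arithmetic. The server configuration is an
# assumption for the sake of the example; only the ~30 kW/rack figure
# comes from the article (the 300 W GPU rating is cited later in it).
GPUS_PER_SERVER = 8          # assumed dense training server
WATTS_PER_GPU = 300          # per-GPU rating cited later in the article
HOST_OVERHEAD_WATTS = 800    # assumed CPUs, memory, fans, NICs
SERVERS_PER_RACK = 10        # assumed

server_watts = GPUS_PER_SERVER * WATTS_PER_GPU + HOST_OVERHEAD_WATTS
rack_kw = SERVERS_PER_RACK * server_watts / 1000
print(f"Per-server draw: {server_watts} W, rack draw: {rack_kw:.1f} kW")
# About 32 kW per rack, i.e. the ~30 kW density the article mentions,
# versus the much lower densities most existing facilities were designed for.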

Read more: Deep Learning Driving Up Data Center Power Density

However, that doesn’t automatically mean you’ll need the highest-density cooling infrastructure possible. Here, the tradeoff is between performance and the number of users, or workloads, the infrastructure can support simultaneously. Maximum performance means the highest-power GPUs money can buy, but it’s not necessarily the most efficient way to go.

NVIDIA’s latest Volta GPUs, expected to hit the market in the third quarter, deliver maximum performance at 300 watts, but if you slash the power in half you will still get 80 percent of the number-crunching muscle, Buck said. If “you back off power a little bit, you still maintain quite a bit of performance. It means I can up the number of servers in a rack and max out my data center. It’s just an efficiency choice.”
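A rough sketch of that tradeoff in Python, using only the 300 W and 80-percent-at-half-power figures Buck cites; the rack power budget is an assumed number for illustration, and host overhead is ignored for simplicity:

# Power-capping tradeoff using the figures quoted above: full performance
# at 300 W, or about 80% of it at roughly half the power (150 W).
FULL_WATTS, FULL_PERF = 300, 1.00
CAPPED_WATTS, CAPPED_PERF = 150, 0.80
RACK_BUDGET_WATTS = 30_000   # assumed rack power budget, for illustration only

def rack_throughput(watts_per_gpu: int, perf_per_gpu: float) -> float:
    """Relative rack throughput for a given per-GPU power setting."""
    gpus_that_fit = RACK_BUDGET_WATTS // watts_per_gpu
    return gpus_that_fit * perf_per_gpu

print("Full power:", rack_throughput(FULL_WATTS, FULL_PERF))        # 100 GPUs x 1.0 = 100.0
print("Capped power:", rack_throughput(CAPPED_WATTS, CAPPED_PERF))  # 200 GPUs x 0.8 = 160.0
# Capping power fits twice as many GPUs into the same rack budget and
# yields roughly 60% more aggregate throughput, which is Buck's
# "efficiency choice".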

What about the Edge?

Inferencing workloads – applications neural networks use to apply what they’ve been trained to do – require fewer GPUs and less power, but they have to perform extremely fast. (Alexa wouldn’t be much fun to use if it took even 5 seconds to respond to a voice query.)

While not particularly difficult to handle on-premises, one big question to answer about inferencing servers for the data center manager is how close they have to be to where input data originates. If your corporate data centers are in Ashburn, Virginia, but your Machine Learning application has to provide real-time suggestions to users in Dallas or Portland, chances are you’ll need some inferencing servers in or near Dallas and Portland to make it actually feel close to real-time. If your application has to do with public safety — analyzing video data at intersections to help navigate autonomous vehicles for example – it’s very likely that you’ll need some inferencing horsepower right at those intersections.
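To see why geography matters for inference, here is a minimal latency estimate in Python; the distance and the fiber propagation speed are approximations chosen for illustration, not figures from the article:

# Rough round-trip-time estimate for serving inference from a distant
# data center. The distance and the ~2/3-of-c fiber speed are
# approximations used only to illustrate the edge-placement point.
FIBER_KM_PER_MS = 200           # light in fiber covers roughly 200 km per millisecond
ASHBURN_TO_DALLAS_KM = 1900     # approximate great-circle distance

one_way_ms = ASHBURN_TO_DALLAS_KM / FIBER_KM_PER_MS
round_trip_ms = 2 * one_way_ms
print(f"Best-case network RTT Ashburn <-> Dallas: ~{round_trip_ms:.0f} ms")
# Around 19 ms of pure propagation delay before any routing, queuing, or
# actual model inference, which is why latency-sensitive inference often
# needs to sit near the users or sensors it serves.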

“Second Era of Computing”

Neither shopping suggestions on Amazon.com (one of the earliest production uses of Machine Learning) nor Google search predictions were written out as sequences of specific if/then instructions by software engineers, Huang said, referring to the rise of Machine Learning as a “second era of computing.”

And it’s growing quickly, permeating all industry verticals, which means data center managers in every industry have some homework to do.

Read more here:: datacenterknowledge.com/feed/

The post NVIDIA CEO: AI Workloads Will “Flood” Data Centers appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Patching is Hard

By Steven Bellovin

There are many news reports of a ransomware worm. Much of the National Health Service in the UK has been hit; so has FedEx. The patch for the flaw exploited by this malware has been out for a while, but many companies haven’t installed it. Naturally, this has prompted a lot of victim-blaming: they should have patched their systems. Yes, they should have, but many didn’t. Why not? Because patching is very hard and very risky, and the more complex your systems are, the harder and riskier it is.

Patching is hard? Yes — and every major tech player, no matter how sophisticated they are, has had catastrophic failures when they tried to change something. Google once bricked Chromebooks with an update. A Facebook configuration change took the site offline for 2.5 hours. Microsoft ruined network configuration and partially bricked some computers; even their newest patch isn’t trouble-free. An iOS update from Apple bricked some iPad Pros. Even Amazon knocked AWS off the air.

There are lots of reasons for any of these, but let’s focus on OS patches. Microsoft — and they’re probably the best in the business at this — devotes a lot of resources to testing patches. But they can’t test every possible user device configuration, nor can they test against every software package, especially if it’s locally written. An amazing amount of software inadvertently relies on OS bugs; sometimes, a vendor deliberately relies on non-standard APIs because there appears to be no other way to accomplish something. The inevitable result is that on occasion, these well-tested patches will break some computers. Enterprises know this, so they’re generally slow to patch. I learned the phrase “never install .0 of anything” in 1971, but while software today is much better, it’s not perfect and never will be. Enterprises often face a stark choice with security patches: take the risk of being knocked off the air by hackers, or take the risk of knocking yourself off the air. The result is that there is often an inverse correlation between the size of an organization and how rapidly it installs patches. This isn’t good, but with the very best technical people, both at the OS vendor and on site, it may be inevitable.

To be sure, there are good ways and bad ways to handle patches. Smart companies immediately start running patched software in their test labs, pounding on it with well-crafted regression tests and simulated user tests. They know that eventually, all operating systems become unsupported, and they plan (and budget) for replacement computers, and they make sure their own applications run on newer operating systems. If they won’t, they update or replace those applications, because running on an unsupported operating system is foolhardy.
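As a minimal sketch of the kind of go/no-go gating such a test lab feeds, here is an illustrative Python example; the ring names, failure budget, and result structure are assumptions for the sketch, not a description of any particular organization's process:

# Illustrative gating of a patch rollout on regression-test results.
# Ring names, the 1% failure budget, and the TestRun fields are assumed
# for this sketch; they are not from the post.
from dataclasses import dataclass

@dataclass
class TestRun:
    patch_id: str
    total: int
    failures: int
    critical_failures: int

def next_step(run: TestRun, current_ring: str) -> str:
    """Decide whether a patch advances from the lab toward production."""
    rings = ["lab", "pilot", "production"]
    if run.critical_failures > 0:
        return "blocked: critical regression, needs vendor or in-house remediation"
    if run.failures / run.total > 0.01:   # assumed 1% failure budget
        return f"hold in {current_ring}: investigate non-critical failures"
    i = rings.index(current_ring)
    return rings[min(i + 1, len(rings) - 1)]

print(next_step(TestRun("MS17-010", total=400, failures=2, critical_failures=0), "lab"))
# Prints "pilot": the patch widens its blast radius only after the lab run
# is clean enough.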

Companies that aren’t sophisticated enough don’t do any of that. Budget-constrained enterprises postpone OS upgrades, often indefinitely. Government agencies are often the worst at that, because they’re dependent on budgets that are subject to the whims of politicians. But you can’t do that and expect your infrastructure to survive. Windows XP support ended more than three years ago. System administrators who haven’t upgraded since then may be negligent; more likely, they couldn’t persuade management (or Congress or Parliament…) to fund the necessary upgrade.

(The really bad problem is with embedded systems — and hospitals have lots of those. That’s “just” the Internet of Things security problem writ large. But IoT devices are often unpatchable; there’s no sustainable economic model for most of them. That, however, is a subject for another day.)

Today’s attack is blocked by the MS17-010 patch, which was released March 14. (It fixes holes allegedly exploited by the US intelligence community, but that’s a completely different topic. I’m on record as saying that the government should report flaws.) Two months seems like plenty of time to test, and it probably is enough — but is it enough time for remediation if you find a problem? Imagine the possible conversation between FedEx’s CSO and its CIO:

“We’ve got to install MS17-010; these are serious holes.”

“We can’t just yet. We’ve been testing it for the last two weeks; it breaks the shipping label software in 25% of our stores.”

“How long will a fix take?”

“About three months — we have to get updated database software from a vendor, and to install it we have to update the API the billing software uses.”

“OK, but hurry — these flaws have gotten lots of attention. I don’t think we have much time.”

So — if you’re the CIO, what do you do? Break the company, or risk an attack? (Again, this is an imaginary conversation.)

That patching is so hard is very unfortunate. Solving it is a research question. Vendors are doing what they can to improve the reliability of patches, but it’s a really, really difficult problem.

Written by Steven Bellovin, Professor of Computer Science at Columbia University

Follow CircleID on Twitter

More under: Cyberattack, Cybercrime, Malware, Security

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

The post Patching is Hard appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

RIPE 74 – Highlights from Day 2, Part 2

By Kevin Meynell

The RIPE 74 meeting is happening this week in Budapest, Hungary, and we’re highlighting the presentations and activities related to the Deploy360 technologies throughout the week.

As we mentioned in the first part of this blog, Tuesday was a busy day for us and too much to cover in one post, so here’s the second part covering the points of interest.

First of all, take a look at the ‘IPv4 Transfers 5 years after runout‘ presentation from Elvis Daniel Velea (V4Escrow). This showed there are only around 37.4 million IPv4 addresses still available across all RIR regions, with AfriNIC having the most at 18 million, and ARIN the least at zero. At current projections, all IPv4 addresses will be exhausted by early 2021, and there has also been a significant rise in IPv4 transfers since 2014.

The interesting factor, though, is that these transfers are primarily into large developed economies, which suggests that smaller economies may have difficulty growing their Internet capacity in future. There are currently only a few large blocks (/16 or larger) available on the market, so most transactions are for /17s or smaller, with prices being observed around USD 12-14 per IP address. Even smaller blocks are now trading around USD 15-20 per IP address, but the bottom line is that supply remains extremely limited and prices are expected to approach USD 20 by the end of 2017, with IPv4 addresses expected to become completely unavailable by 2025.
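For a sense of the money involved, here is a quick calculation in Python using the prices quoted above; the /17 block size is just an example:

# Cost of a transferred IPv4 block at the per-address prices quoted above.
# The /17 block size is only an example; a /17 holds 2**(32-17) addresses.
def block_cost(prefix_len: int, price_per_ip: float) -> float:
    addresses = 2 ** (32 - prefix_len)
    return addresses * price_per_ip

for price in (12, 14, 20):
    print(f"/17 at USD {price}/address: USD {block_cost(17, price):,.0f}")
# A /17 is 32,768 addresses: roughly USD 393k-459k at today's USD 12-14,
# and about USD 655k if prices reach the USD 20 level projected for the
# end of 2017.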

So, in case we haven’t said it enough times: network operators need to be deploying IPv6 now or face the prospect of not being able to expand their businesses – either technically or economically – in the near future.

More practically, there was a good presentation about RPKI deployment from Yossi Gilad (Hebrew University of Jerusalem). He highlighted some of the challenges of deploying RPKI, such as loose Route Origin Authorisations (ROAs) whereby the specified maximum prefix length exceeds the prefix length. This affects more than 30% of all IP prefixes in ROAs, and allows attackers to hijack all traffic to non-advertised sub-prefixes in the ROA. Other mistakes include misconfigured ROAs that invalidate genuine prefixes, and potentially cause disconnection from legitimate routes.
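A minimal sketch of the origin validation check this is about, in Python; it is simplified to a single ROA and plain data values rather than real RPKI objects, and the prefixes and AS numbers are illustrative:

# Simplified route origin validation against a single ROA, to illustrate
# the maxLength ("loose ROA") issue described above. Real validators work
# on signed RPKI objects and full ROA sets; this is only a sketch.
from ipaddress import ip_network

def validate(announced_prefix: str, origin_as: int,
             roa_prefix: str, roa_as: int, roa_maxlen: int) -> str:
    announced = ip_network(announced_prefix)
    roa = ip_network(roa_prefix)
    if not announced.subnet_of(roa):
        return "not covered by this ROA"
    if origin_as != roa_as or announced.prefixlen > roa_maxlen:
        return "invalid"
    return "valid"

# A loose ROA (a /16 authorised up to /24) leaves unadvertised /24
# sub-prefixes open to hijack by anyone spoofing the legitimate origin AS:
print(validate("203.0.113.0/24", 64500, "203.0.0.0/16", 64500, 24))  # valid
# A tight ROA (maxLength equal to the prefix length) flags the same /24:
print(validate("203.0.113.0/24", 64500, "203.0.0.0/16", 64500, 16))  # invalid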

ROAlert is a tool that allows you to check whether a network is properly protected by ROAs, and if not, what the problems are. It also offers a proactive notification system by retrieving ROAs from the RPKI and comparing them against BGP advertisements, thus alerting network operators to wrongly configured ROAs. The results so far have been encouraging, with 168 operators having been notified of ROA errors of which 42% were fixed within a month. Network operators are encouraged to use this facility, and the hope is that it will be adopted by the RIR communities.

Keeping with the same theme, Andreas Reuter (Freie Universität Berlin) reported on the levels of RPKI adoption. Their analysis attempted to determine which ASes had adopted RPKI filtering policies, although it is not always easy to determine whether a route was being filtered based on RPKI, or whether this was due to private routing policy decisions. It was determined, though, that a handful of ASes are making routing decisions based on RPKI, and the next steps are to develop a live monitoring system to improve the quality of the data collection in order to get a more accurate view of RPKI adoption.

Finally, although it’s not a Deploy360 topic, there was a fascinating presentation on the Quantum Internet from Stephanie Wehner (Delft University of Technology). The aim of a quantum network is to communicate qubits (the quantum equivalent of the bit) almost instantaneously between two points on earth, which can address the delays associated with the speed of light. This has been demonstrated at 100 km distances, and there have been successful experiments at 300 km ranges, but the real challenge is over longer distances which is currently problematic to achieve in a reliable manner.

If you’re interested in learning more, QuTech will be holding an open day in Delft, The Netherlands on 22 June 2017. Would be a great opportunity to find out more about this technology that promises to radically change how we think about computing and networking.

For those of you who cannot attend the RIPE meeting in person, just a reminder that remote participation is available with audio and video streaming and also a jabber chat room.

The full programme can be found at https://ripe74.ripe.net/programme/meeting-plan/

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post RIPE 74 – Highlights from Day 2, Part 2 appeared on IPv6.net.

Read more here:: IPv6 News Aggregator