10 Things Every CIO Must Know about Their Data Centers

By Tim Kittila

While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. All of them have to do with data center outages, past and future. These incidents carry a significant risk of negative impact on the entire organization’s performance and profitability, both of which fall comfortably within a typical CIO’s scope of responsibilities.

CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:

  1. If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants usually helps, both in terms of tapping into their expertise and in terms of having a fresh set of eyes focus on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to potential ways to mitigate them.
  2. Has your company ever experienced a significant data center outage? How do you know it was significant? The key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once they are defined, begin to track, measure, and communicate them within your organization.
  3. Which applications are the most critical to your organization, and how are you protecting them from outages? The lazy, uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down at a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
  4. How do you measure the cost of a data center outage? Having this story clear helps the business make better decisions: develop a model for estimating outage costs and weigh them against the cost of mitigating the risk (a minimal cost-model sketch follows this list). Total outage cost can be nebulous, but spending the time to get as close to it as possible and getting executive buy-in on that story will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choice seem simpler than it is; sometimes an outage just means a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real.
  5. What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the more difficult costs to quantify, such as lost productivity, lost competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
  6. Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is to have clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
  7. Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One of the common problems we’ve observed is a corporate physical security system that needs to operate within the data center but under different usage requirements than the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
  8. Do you have a structured, ongoing process for determining which applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and the resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be chosen for each application based on the reliability, performance, and security requirements of the business.
  9. What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of input devices to track and monitor. As we have seen over the years, many organizations monitor activity on the application stack while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that operates the organization’s IT stack. Leaving them unprotected increases the risk of data center outages.
  10. What is your Business Continuity/Disaster Recovery process? And the follow-up questions: Does your entire staff know where they need to be and what they need to do if you have a critical, unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: most often personnel don’t know about these processes, and if they do, they haven’t practiced them enough to know what to do when a major event actually happens.
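
The sketch below, referenced in question 4, is one hypothetical way to put numbers behind the outage-cost story. Every figure and parameter name is a placeholder, not data from the article; the point is simply to compare expected annual loss against the annual cost of mitigation.

```python
# Hypothetical outage-cost model: every number below is a placeholder.
def outage_cost(duration_hours,
                revenue_per_hour,         # top-line revenue carried by the affected apps
                recovery_labor_cost,      # overtime, contractors, expedited parts
                soft_cost_per_hour=0.0):  # productivity loss, SLA credits, churn estimate
    """Rough cost of a single outage event."""
    return duration_hours * (revenue_per_hour + soft_cost_per_hour) + recovery_labor_cost

def annualized_risk(cost_per_event, events_per_year):
    """Expected annual loss if nothing changes."""
    return cost_per_event * events_per_year

# Compare expected loss against the yearly price of mitigation (e.g., a UPS upgrade).
event_cost = outage_cost(duration_hours=4,
                         revenue_per_hour=50_000,
                         recovery_labor_cost=20_000,
                         soft_cost_per_hour=10_000)
expected_annual_loss = annualized_risk(event_cost, events_per_year=0.5)
mitigation_cost_per_year = 60_000

print(f"Cost per event:        ${event_cost:,.0f}")
print(f"Expected annual loss:  ${expected_annual_loss:,.0f}")
print(f"Mitigation cost/year:  ${mitigation_cost_per_year:,.0f}")
print("Mitigation pays for itself" if expected_annual_loss > mitigation_cost_per_year
      else "Mitigation costs more than the expected loss")
```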

Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data centers, whether they are privately owned facilities, colocation facilities, or a combination of the two. Earlier in his career at Parallel Technologies, Kittila served as Director of Data Center Infrastructure Strategy, where he was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management, and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

Read more here: datacenterknowledge.com/feed/

GigaSpaces Closes Analytics-App Gap With Spark

By George Leopold

Data analytics and cloud vendors are rushing to support enhancements to the latest version of Apache Spark that boost streaming performance while adding new features such as data set APIs and support for continuous, real-time applications.
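
For readers unfamiliar with the Spark 2.x streaming model the article refers to, here is a minimal PySpark Structured Streaming sketch. It is a generic illustration only, not GigaSpaces’ InsightEdge or Magic xpi API, and the input path and schema are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, avg, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-stream-sketch").getOrCreate()

# Hypothetical schema for JSON sensor readings dropped into a landing directory.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("ts", TimestampType()),
])

readings = spark.readStream.schema(schema).json("/data/incoming/sensors")

# One-minute average temperature per device, updated continuously.
windowed = (readings
            .groupBy(window(col("ts"), "1 minute"), col("device_id"))
            .agg(avg("temperature").alias("avg_temp")))

query = (windowed.writeStream
         .outputMode("update")   # emit only windows that changed in each micro-batch
         .format("console")
         .start())
query.awaitTermination()
```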

In-memory computing specialist GigaSpaces this week joined IBM and others jumping on the Spark 2.1 bandwagon with the rollout of an upgraded transactional and analytical processing platform. The company said Wednesday (July 19) the latest version of its InsightEdge platform leverages Spark data science and analytics capabilities while combining the in-memory analytics juggernaut with its own open source in-memory computing data grid.

The combination provides a distributed data store based on RAM and solid-state drives, the New York-based company added.

The upgrade was prompted by the new Spark capabilities along with growing market demand for real-time, scalable analytics as fast data adoption grows.

The in-memory computing platform combines analytical and transactional workloads in an open source software stack, and streams application data such as Internet of Things (IoT) sensor data.

The analytics company said it is working with Magic Software (NASDAQ and TASE: MGIC), an application development and business integration software vendor, on an IoT project designed to speed ingestion of telemetry data using Magic’s integration and intelligence engine.

The partners said the sensor data integration effort targets IoT applications such as predictive maintenance and anomaly detection where data is ingested, prepped, correlated and merged. Data is then transferred from the GigaSpaces platform to Magic’s xpi engine that serves as the orchestrator for predictive and other analytics tasks.

Along with the IoT partnership and combined transactional and analytical processing, the Spark-powered in-memory computing platform also offers machine learning and geospatial processing capabilities along with multi-tier data storage for streaming analytics workloads, the company said.

Ali Hodroj, GigaSpaces’ vice president of products and strategies, said the platform upgrade responds to the growing enterprise requirement to integrate applications and data science infrastructure.

“Many organizations are simply not large enough to justify spending valuable time, resources and money building, managing, and maintaining an on-premises data science infrastructure,” Hodroj asserted in a blog post. “While some can migrate to the cloud to commoditize their infrastructure, those who cannot are challenged with the high costs and complexity of cluster-sprawling big data deployments.”

To reduce latency, GigaSpaces and others are embracing Spark 2.1 fast data analytics, which was released late last year. (Spark 2.2 was released earlier this month.)

Vendors such as GigaSpaces are offering tighter collaboration between DevOps and data science teams via a unified application and analytics platform. Others, including IBM, are leveraging Spark 2.1 for Hadoop and stream processing distributions.

IBM (NYSE: IBM) said this week the latest version of its SQL platform targets enterprise requirements for data lakes by integrating Spark 2.1 on the Hortonworks Data Platform, the company’s Hadoop distribution. It also connects with Hortonworks DataFlow, the stream-processing platform.

Recent items:

IBM Bolsters Spark Ties with Latest SQL Engine

In-Memory Analytics to Boost Flight Ops For Major US Airline

The post GigaSpaces Closes Analytics-App Gap With Spark appeared first on Datanami.

Read more here: www.datanami.com/feed/

Smart thermostats gain traction in Europe and North America

By Sheetal Kumbhar

Berg Insight, the M2M/IoT market research provider, released new findings about the smart thermostat market. The number of North American and European homes with a smart thermostat grew by 67% to 10.1 million in 2016. The North American market recorded a 64% growth in the installed base of smart thermostats to 7.8 million. In Europe, […]

The post Smart thermostats gain traction in Europe and North America appeared first on IoT Now – How to run an IoT enabled business.

Read more here: www.m2mnow.biz/feed/

Deploy360@IETF99, Day 3: IPv6 & TLS

By Kevin Meynell

After a packed first couple of days, Wednesday at IETF 99 in Prague is a bit quieter for us. Each day we’re bringing you blog posts pointing out what Deploy360 will be focusing on.

There are just three working groups to follow today, starting at 09.30 CEST/UTC+2 with TLS. A couple of very important drafts are up for discussion, though, with both the TLS 1.3 and DTLS 1.3 specifications in last call. There are also a couple of other interesting drafts relating to the DANE record and DNSSEC authentication chain extension for TLS, and Data Center use of Static DH in TLS 1.3.
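
As a quick aside for readers tracking TLS 1.3 rollout, the short Python sketch below checks which protocol version a given server actually negotiates. The hostname is a placeholder, and TLS 1.3 will only appear if both your local OpenSSL build and the server support it.

```python
import socket
import ssl

host = "example.org"  # placeholder; substitute the server you want to test

context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Prints e.g. 'TLSv1.3' or 'TLSv1.2' plus the negotiated cipher suite
        print(tls.version(), tls.cipher())
```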


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


Alternatively, there’s DMM, which will be discussing at least one IPv6-relevant draft on the Applicability of Segment Routing IPv6 to the user-plane of mobile networks.

During the first afternoon session at 13.30 CEST/UTC+2, there’s DHC. This will continue to discuss four DHCPv6-related drafts, as well as hear about DHCPv6 deployment experiences at Comcast.

Don’t forget that the IETF Plenary Session runs from 17.10 CEST/UTC+2 onwards. This is being held in Congress Hall I/II.

For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.

Relevant Working Groups

Read more here: www.internetsociety.org/deploy360/blog/feed/

Nation Scale Internet Filtering — Do’s and Don’ts

By Paul Vixie

If a national government wants to prevent certain kinds of Internet communication inside its borders, the costs can be extreme and success will never be more than partial. VPN and tunnel technologies will keep improving as long as there is demand, and filtering or blocking out every such technology will be a never-ending game of one-upmanship. Everyone knows and will always know that determined Internet users will find a way to get to what they want, but sometimes the symbolic message is more important than the operational results. In this article, I will describe some current and prior approaches to this problem, and also make some recommendations for doing nation-state Internet filtering in the most responsible and constructive manner.

History, Background, and SOPA

For many years, China’s so-called Great Firewall has mostly stopped most law-abiding people including both citizens and visitors from accessing most of the Internet content that the Chinese government does not approve of. As a frequent visitor to China, I find it a little odd that my Verizon Wireless data roaming is implemented as a tunnel back to the USA, and is therefore unfiltered. Whereas, when I’m on a local WiFi network, I’m behind the Great Firewall, unable to access Facebook, Twitter, and so on. The downside of China’s approach is that I’ve been slow to expand my business there — I will not break the law, and I need my employees to have access to the entire Internet.

Another example is Italy’s filtering policy regarding unlicensed (non-taxpaying) online gambling, which was blocked not by a national “Great Firewall” but rather by SOPA-style DNS filtering mandated for Italian ISPs. The visible result was an uptick in the use of Google DNS (8.8.8.8 and 8.8.4.4) by Italian gamblers, and if there was also an increase in gambling tax revenue, that was not widely reported. The downside here is the visible cracks in Italian society: many Italians apparently do not trust their own government. Furthermore, in 2013 the European Union ruled that this kind of filtering was a violation of EU policy.

In Turkey up until 2016, the government had similar protections in place, not about gambling but rather pornography and terrorism and anti-Islamic hate speech. The filtering was widely respected, showing that the Turkish people and their government were more closely aligned at that time than was evident during the Italian experiment. It was possible for Turkish internet users to opt-out of the government’s Internet filtering regime, but such opt-out requests were uncommon. This fit the Internet’s cooperation-based foundation perfectly: where interests are aligned, cooperation is possible, but where interests are not aligned, unilateral mandates are never completely effective.

In the years since the SOPA debacle in the United States, I’ve made it my priority to discuss with the entertainment and luxury goods industries the business and technical problems posed to them by the Internet. Away from the cameras, most executives freely admit that it’s not possible to prevent determined users from reaching any part of the Internet they might seek, including so-called “pirate” sites which may even be “dedicated to infringement”. I learned however that there is a class of buyers, of both music and movies and luxury goods, who are not interested in infringement per se, and who are often simply misled by “pirate” Internet sites who pretend to be legitimate. One estimate was that only 1/3rd of commercial music is bought legally, and the remaining 2/3rd is roughly divided between dedicated (1/3rd) and accidental (1/3rd) infringement. If so, then getting the accidental infringers who comprise 1/3rd of the market to buy their music legally wouldn’t change the cost of music for those buyers, but could raise the music industry’s revenues by 100%, since the paid share would go from one-third to two-thirds of the market. We should all think of that as a “win-win-win” possibility.

Speaking for myself, I’d rather live and act within the law, respecting intellectual property rights, and using my so-called “dollar votes” to encourage more commercial art to be produced. I fought SOPA not because I believed that content somehow “wanted to be free”, but because this kind of filtering will only be effective where the end-users see it as a benefit — see it, in other words, as aligned with their interests. That’s why I co-invented the DNS RPZ firewall system back in 2010, which allows security policy subscribers to automatically connect to their providers in near-realtime, and to then cooperate on wide-scale filtering of DNS content based on a shared security policy. This is the technology that SOPA would have used, except, SOPA would have been widely bypassed, and where not bypassed, would have prohibited DNSSEC deployment. American Internet users are more like Italians than Turks — they don’t want their government telling them what they can’t do.

I think, though, that every government ought to offer this kind of DNS filtering, so that any Internet user in that country who wants to see only the subset of the Internet considered safe by their national government, can get that behavior as a service. Some users, including me, would be happy to follow such policy advice even though we’d fight against any similar policy mandate. In my case, I’d be willing to pay extra to get this kind of filtering. My nation’s government invests a lot of time and money identifying illegal web sites, whether dedicated to terrorism, or infringement, or whatever. I’d like them to publish their findings in real time using an open and unencumbered protocol like DNS RPZ, so that those of us who want to avoid those varieties of bad stuff can voluntarily do so. In fact, the entertainment industry could do the same — because I don’t want to be an accidental infringer either.

Future, Foreground, and Specific Approaches

While human ingenuity can sometimes seem boundless, a nation-state exerting any kind of control over Internet reachability within its borders has only three broad choices available to it.

First, the Great Firewall approach. In this scenario, the government is on-path and can witness, modify, or insert traffic directly. This is costly in human resources, services, equipment, electric power, and prestige. Every in-country Internet Service Provider that wants an out-of-country connection must work directly with government agencies or agents to ensure that real-time visibility and control are among the government’s powers. This may require that all Internet border crossings occur in some central location, or it may require that the government’s surveillance and traffic modification capabilities be installed in multiple discrete locations. In addition to hard costs, there will be soft costs like errors and omissions which induce unexplained failures. The inevitable effects on the nation’s economy must be considered, since a “Great Firewall” approach must by definition wall the country off from mainstream human ideas, with associated chilling effects on outside investment. Finally, this approach, like all access policies, can be bypassed by a determined-enough end-user who is willing to ignore the law. The “Great Firewall” approach will maximize the bypass costs, having first maximized deployment costs.

Second, a distributed announcement approach using Internet Protocol address-level firewalls. Every user and every service on the Internet has to have one or more IP addresses from which to send, or at which to receive, packets to or from other Internet participants. While the user-side IP addresses tend to be migratory and temporary in nature due to mobile users or address-pool sharing, the server-side IP addresses tend to be well known, pre-announced, and predictable. If a national government can compel all of its Internet Service Providers to listen for “IP address firewall” configuration information from a government agency, and to program their own local firewalls in accordance with the government’s then-current access policies, then it would have the effect of making distant (out-of-country) services deliberately unreachable by in-country users. Like all policy efforts, this can be bypassed, either by in-country (user) effort, or by out-of-country (service) provider effort, or by middle-man proxy or VPN provider effort. Bypass will be easier than in the Great Firewall approach described above, but a strong advantage of this approach is that the government does not have to be on-path, and so everyone’s deployment costs are considerably lower.
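
For illustration only, here is a minimal Python sketch of the prefix-matching logic such an address-level firewall would distribute to ISPs; the prefixes are documentation ranges, not real policy data.

```python
import ipaddress

# Hypothetical government-distributed block list (documentation prefixes only).
blocked_prefixes = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("2001:db8:bad::/48"),
]

def is_blocked(address: str) -> bool:
    """Return True if the destination address falls inside any blocked prefix."""
    addr = ipaddress.ip_address(address)
    return any(addr in net for net in blocked_prefixes if addr.version == net.version)

print(is_blocked("192.0.2.45"))    # True  -> the ISP firewall drops the traffic
print(is_blocked("198.51.100.7"))  # False -> the traffic passes
```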

Third and finally, a distributed announcement approach using Domain Name System (DNS-level) firewalls. Every Internet access requires at least one DNS lookup, and these lookups can be interrupted according to policy if the end-user and Internet Service Provider (ISP) are willing to cooperate on the matter. A policy-based firewall operating at the DNS level can interrupt communications based on several possible criteria: either a “domain name” can be poisoned, or a “name server”, or an “address result”. In each case, the DNS element to be poisoned has to be discovered and advertised in advance, exactly as in the “address-level firewall” and “Great Firewall” approaches described above. However, DNS lookups are far less frequent than packet-level transmissions, and so the deployment cost of a DNS-level firewall will be far lower than for a packet-level firewall. A DNS firewall can be constructed using off-the-shelf open source software and the license-free “DNS Response Policy Zone” (DNS RPZ) technology first announced in 2010. The DNS RPZ system allows an unlimited number of DNS operators (“subscribers”) to synchronize their DNS firewall policy to one or more “providers” such as national governments or industry trade associations. DNS firewalls offer the greatest ease of bypass, so much so that it’s better to say that “end-user cooperation is assumed,” which could be a feature rather than a bug.
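
To make the RPZ mechanism concrete, the sketch below emits a tiny response policy zone from a hypothetical block list: QNAME rules that return NXDOMAIN (the “CNAME .” action), an NSDNAME rule keyed on a name server, and an answer-IP rule encoded as prefix length plus reversed octets under rpz-ip. All names and addresses are placeholders; a real provider would serve the zone with ordinary DNS software and let subscribers transfer it.

```python
# Minimal generator for an RPZ-style response policy zone (illustrative only).
APEX = "rpz.example.org"                            # hypothetical policy zone name

blocked_domains = ["bad.example", "worse.example"]  # QNAME triggers
blocked_ns = ["ns1.evil.example"]                   # NSDNAME triggers
blocked_prefix = ("192.0.2.0", 24)                  # answer-IP trigger (CIDR)

def ip_trigger(addr: str, prefixlen: int) -> str:
    # RPZ encodes an IPv4 prefix as <prefixlen>.<octets reversed> under rpz-ip
    return f"{prefixlen}." + ".".join(reversed(addr.split(".")))

records = []
for name in blocked_domains:
    records.append(f"{name}.{APEX}.  CNAME .")       # NXDOMAIN for the name itself
    records.append(f"*.{name}.{APEX}.  CNAME .")     # and for all of its subdomains
for ns in blocked_ns:
    records.append(f"{ns}.rpz-nsdname.{APEX}.  CNAME .")
records.append(f"{ip_trigger(*blocked_prefix)}.rpz-ip.{APEX}.  CNAME .")

print("\n".join(records))
```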

Conclusion

A national government that wants to make a difference in the lived Internet experience of its citizens should consider not just the hard deployment and operational costs, but also the soft costs to the overall economy and to prestige, and especially, what symbolic message is intended. If safety as defined by the government is to be seen as a goal it shares with its citizens, and is implemented using methods and policies agreed to by its citizens, then ease of bypass should not be a primary consideration. Rather, ease of participation and transparency of operation will be the most important ingredients for success.

Written by Paul Vixie, CEO, Farsight Security

Follow CircleID on Twitter

More under: Access Providers, Censorship, DNS, Intellectual Property, Internet Governance, Networks, Policy & Regulation

Read more here: feeds.circleid.com/cid_sections/blogs?format=xml
