Deploy360@IETF99, Day 5: Kdo se moc ptá, moc se dozví

By Kevin Meynell

There are a couple of sessions of interest on the last day of IETF 99 before we say na shledanou (goodbye) to the City of a Hundred Spires.

Both sessions are running in parallel on Friday morning starting at 09.30 CEST/UTC+2. ACME will continue to discuss the ACME specification, as well as the addition of CAA checking for compliance with CA/B Forum guidelines. There are also new drafts specifying how to issue certificates for telephone numbers, how to issue certificates to VoIP service providers for Secure Telephony Identity, and ACME extensions to enable the issuance of short-term and automatically renewed certificates, certificates for e-mail recipients that want to use S/MIME, and certificates for use by TLS e-mail services.
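For context, a CAA record (RFC 6844) is how a domain owner declares which certificate authorities may issue for it, and it is this record that an ACME CA would check before issuance. A minimal illustrative snippet, using placeholder names:

    ; CAA records restrict which CAs may issue certificates for a domain.
    ; A compliant CA checks these before issuing (RFC 6844).
    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"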


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


Alternatively you can check out LPWAN, which is working on enabling IPv6 connectivity at very low wireless transmission rates between battery-powered devices spread across multiple kilometres. This will be discussing five drafts related to IPv6 header fragmentation and compression, as well as ICMPv6 usage over LPWANs.

That brings this IETF to an end, so it’s goodbye from us in Prague. Many thanks for reading along this week… please do read our other IETF 99-related posts… and we’ll see you at IETF 100 on 12-17 November 2017 in Singapore!

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post Deploy360@IETF99, Day 5: Kdo se moc ptá, moc se dozví appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

10 Things Every CIO Must Know about Their Data Centers

By News Aggregator

By Tim Kittila

While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. They all have to do with data center outages, past and future. These incidents carry significant risk of negative impact on the entire organization’s performance and profitability, both of which fall comfortably within a typical CIO’s scope of responsibilities.

CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:

  1. If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants will usually help, both in terms of tapping into their expertise and in terms of having a new set of eyes focus on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to potential ways to mitigate them.
  2. Has your company ever experienced a significant data center outage? How do you know it was significant? The key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once they are defined, begin to track, measure, and communicate these definitions within your organization.
  3. Which applications are the most critical ones to your organization, and how are you protecting them from outages? The lazy uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down in a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
  4. How do you measure the cost of a data center outage? Having this story clear can help the business make better decisions. By developing a model for determining outage costs and weighing them against the cost of mitigating the risk, the business can make more informed decisions (see the sketch after this list). Total outage cost can be nebulous, but spending the time to get as close to it as possible, and getting executive buy-in on that story, will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choices seem simple, but sometimes an outage may just mean a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real.
  5. What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the more difficult-to-quantify costs, such as loss of productivity, loss of competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
  6. Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is having clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
  7. Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One of the common problems we’ve observed is a corporate physical security system that needs to operate within the data center but under different usage requirements than the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
  8. Do you have a structured, ongoing process for determining which applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and the resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be decided for each application based on the reliability, performance, and security requirements of the business.
  9. What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of input devices to track and monitor. As we have seen over the years, many organizations monitor activity on the application stack while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that supports the organization’s IT stack. Leaving them unprotected increases the risk of data center outages.
  10. What is your Business Continuity/Disaster Recovery process? And the follow-up questions: does your entire staff know where they need to be and what they need to do if you have a critical and unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: most often personnel don’t know about these processes, and if they do, they haven’t practiced them enough to be alert and cognizant of what to do when a major event actually happens.
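
To make the cost model in item 4 concrete, here is a minimal sketch in Python. All probabilities and dollar figures are hypothetical placeholders; a real model would draw on your own outage history and vendor quotes:

    # Minimal sketch: weigh expected annual outage cost against mitigation cost.
    # The probability and dollar figures below are invented for illustration.

    def expected_annual_outage_cost(prob_per_year, direct_cost, indirect_cost):
        """Expected yearly loss = outage probability * total cost per outage."""
        return prob_per_year * (direct_cost + indirect_cost)

    # Example: 20% yearly outage probability, $500k direct, $250k indirect costs.
    risk = expected_annual_outage_cost(0.20, 500_000, 250_000)
    mitigation = 100_000  # e.g., a quoted UPS upgrade

    print(f"Expected annual outage cost: ${risk:,.0f}")   # $150,000
    print(f"Mitigation cost:             ${mitigation:,.0f}")
    print("Mitigate" if mitigation < risk else "Accept the risk")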

Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

Read more here:: datacenterknowledge.com/feed/

The post 10 Things Every CIO Must Know about Their Data Centers appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Deploy360@IETF99, Day 4: IoT, IPv6, DNSSEC & TLS

By Kevin Meynell

Thursday at IETF 99 in Prague is a mixture of overflow sessions, the Internet-of-Things, and encryption. Each day we’re bringing you blog posts pointing out what Deploy360 will be focusing on.

Our day doesn’t actually start until 13.30 CEST/UTC+2, with the second part of V6OPS. This will continue discussing the ten drafts from wherever it left off on Tuesday morning (see our Day 2 post for more information).

If you have V6OPS fatigue, then alternatively check out ROLL. This focuses on routing for the Internet-of-Things and has six drafts up for discussion.


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


The second afternoon session at 15.50 CEST/UTC+2 features IPWAVE. This will be discussing two drafts on transmitting IPv6 over IEEE 802.11-OCB in Vehicle-to-Internet and Vehicle-to-Infrastructure networks, and on a problem statement for IP Wireless Access in Vehicular Environments. A further draft summarises a survey on IP-based Vehicular Networking for Intelligent Transportation Systems.

There are two working groups during the evening session starting at 18.10 CEST/UTC+2. UTA is discussing three drafts: one related to the compulsory use of TLS for SMTP, an interesting one proposing to obsolete cleartext transfer for e-mail, and one proposing an SMTP service extension.

Finally, there’s the second part of DNSOP. There appears to be just the one DNSSEC-related draft in this session, on algorithm negotiation.

For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post Deploy360@IETF99, Day 4: IoT, IPv6, DNSSEC & TLS appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

GigaSpaces Closes Analytics-App Gap With Spark

By News Aggregator

By George Leopold

Data analytics and cloud vendors are rushing to support enhancements to the latest version of Apache Spark that boost streaming performance while adding new features such as data set APIs and support for continuous, real-time applications.

In-memory computing specialist GigaSpaces this week joined IBM and others jumping on the Spark 2.1 bandwagon with the rollout of an upgraded transactional and analytical processing platform. The company said Wednesday (July 19) the latest version of its InsightEdge platform leverages Spark data science and analytics capabilities while combining the in-memory analytics juggernaut with its open source in-memory computing data grid.

The combination provides a distributed data store based on RAM and solid-state drives, the New York-based company added.

The upgrade was prompted by the new Spark capabilities along with growing market demand for real-time, scalable analytics as adoption of fast data analytics grows.

The in-memory computing platform combines analytical and transactional workloads in an open source software stack and supports streaming applications such as Internet of Things (IoT) sensor data.

The analytics company said it is working with Magic Software (NASDAQ and TASE: MGIC), an application development and business integration software vendor, on an IoT project designed to speed ingestion of telemetry data using Magic’s integration and intelligence engine.

The partners said the sensor data integration effort targets IoT applications such as predictive maintenance and anomaly detection where data is ingested, prepped, correlated and merged. Data is then transferred from the GigaSpaces platform to Magic’s xpi engine that serves as the orchestrator for predictive and other analytics tasks.

Along with the IoT partnership and combined transactional and analytical processing, the Spark-powered in-memory computing platform also offers machine learning and geospatial processing capabilities along with multi-tier data storage for streaming analytics workloads, the company said.

Ali Hodroj, GigaSpaces’ vice president of products and strategies, said the platform upgrade responds to the growing enterprise requirement to integrate applications and data science infrastructure.

“Many organizations are simply not large enough to justify spending valuable time, resources and money building, managing, and maintaining an on-premises data science infrastructure,” Hodroj asserted in a blog post. “While some can migrate to the cloud to commoditize their infrastructure, those who cannot are challenged with the high costs and complexity of cluster-sprawling big data deployments.”

To reduce latency, GigaSpaces and others are embracing Spark 2.1 fast data analytics, which was released late last year. (Spark 2.2 was released earlier this month.)

Vendors such as GigaSpaces are offering tighter collaboration between DevOps and data science teams via a unified application and analytics platform. Others, including IBM, are leveraging Spark 2.1 for Hadoop and stream processing distributions.

IBM (NYSE: IBM) said this week the latest version of its SQL platform targets enterprise requirements for data lakes by integrating Spark 2.1 on the Hortonworks Data Platform, the company’s Hadoop distribution. It also connects with Hortonworks DataFlow, the stream-processing platform.

Recent items:

IBM Bolsters Spark Ties with Latest SQL Engine

In-Memory Analytics to Boost Flight Ops For Major US Airline

The post GigaSpaces Closes Analytics-App Gap With Spark appeared first on Datanami.

Read more here:: www.datanami.com/feed/

The post GigaSpaces Closes Analytics-App Gap With Spark appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Internet Trends Report: Five ITSM Takeaways

By Industry Perspectives

Aparna TA is a Product Analyst for ManageEngine.

There’s an interesting time ahead for ITSM as it moves into the cloud and evolves to support a mobile workforce. Help desks will have to adapt as end users’ expectations of ITSM solutions start to mirror those of consumer applications.

Every year, Mary Meeker, a partner at venture capital firm Kleiner Perkins Caufield & Byers, produces a report on global internet trends. And this year’s narrative, like all the others, is eagerly sought out by technology companies and enthusiasts. This highly anticipated report sets the stage for the next big thing and sheds light on consumer technology adoption.

This year it covers a wide range of topics — from the increasing measurability of online advertising and the growing internet base in China to technological advancements in healthcare services. While the report doesn’t explicitly talk about the impact of global trends on ITSM, it leaves a lot of breadcrumbs that set expectations for the future of the ITSM industry. Here are a few things that might be relevant to IT service management.

  1. Mobile is driving the consumerization of enterprise IT – People spent more than twice as much time on mobile, desktop and other connected devices in 2016 as they did in 2008. As the wall between personal life and work wears down, customer expectations on an enterprise level are mirroring those of consumer apps.

IT service desk vendors are starting to adopt a mobile-first strategy to stay relevant to the mobile workforce. The ability to put a service desk in the palms of end users helps to drastically increase self-service adoption and improve user satisfaction.

  2. The cloud is accelerating change across enterprises – Cloud adoption has increased to new heights and is creating opportunities for new methods of software delivery such as APIs, microservices, elastic databases, etc. The shift from costlier perpetual licenses to cheaper subscription models has contributed to the rapid increase in cloud adoption, as the time and cost of setting up a cloud infrastructure are minimized.

As customers start to move toward a cloud-only IT infrastructure, SaaS has become a de facto model for many new vendors. With simple integrations, ITSM will be able to adopt newer technologies more quickly now than ever. The cloud also provides mobility and has the potential to take service desk operations beyond four walls to remote locations across the globe.

  3. Rising security concerns are dictating the need for more compliance – As enterprises adopt cloud infrastructure, they are more wary about their applications’ security and compliance. The increased adoption of public and private clouds has led to an exponential increase in the severity of malicious threats.

Cloud vendors are warming up to new data protection and security policies, especially after the EU’s announcement of the General Data Protection Regulation (GDPR). This illustrates the greater need for the ITSM community and cloud vendors to work together to keep vulnerabilities at bay.

  4. Gaming can help optimize learning and engagement – There are about 2.6 billion gamers now compared to just 100 million in 1997, and gaming is still evolving. Gaming provides an intuitive interface to learn, and many organizations now use gamification to provide an engaging learning platform.

Many help desks have already implemented gamification in their tools to increase IT technician productivity. This can also be used to align IT technicians’ day-to-day activities with business goals, thus creating a sense of accomplishment. Gaming can also be used to help end users adopt self-service portals and IT service desks faster.

  5. Social media can provide an opportunity to improve customer service – In a survey by Ovum, more than 60 percent of organizations expressed the need to provide easier access to online support channels. The growth of new tools like APIs and browser extensions has paved the way for innovative service delivery models which integrate enterprise applications (such as help desks) with consumer applications, including social media. Many companies are actively using social media as a channel to address customer concerns and resolve issues.

Many help desks have built-in integrations with social channels that automatically convert tweets or posts into tickets, thus utilizing popular social channels to widen the reach of online support. Social media channels provide a unique opportunity to go the extra mile to delight customers while gaining trust and brand equity.

These are just some of the key internet trends that coincide or overlap with the trajectory of ITSM and related technologies. While this report focuses on major internet trends, there are several technologies like AI, machine learning, analytics and IoT that are expected to be big game changers in the future of ITSM.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Read more here:: datacenterknowledge.com/feed/

The post Internet Trends Report: Five ITSM Takeaways appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Nation Scale Internet Filtering — Do’s and Don’ts

By News Aggregator

By Paul Vixie

If a national government wants to prevent certain kinds of Internet communication inside its borders, the costs can be extreme and success will never be more than partial. VPN and tunnel technologies will keep improving as long as there is demand, and filtering or blocking out every such technology will be a never-ending game of one-upmanship. Everyone knows, and will always know, that determined Internet users will find a way to get to what they want, but sometimes the symbolic message is more important than the operational results. In this article, I will describe some current and prior approaches to this problem, and also make some recommendations for doing nation-state Internet filtering in the most responsible and constructive manner.

History, Background, and SOPA

For many years, China’s so-called Great Firewall has stopped most law-abiding people, both citizens and visitors, from accessing most of the Internet content that the Chinese government does not approve of. As a frequent visitor to China, I find it a little odd that my Verizon Wireless data roaming is implemented as a tunnel back to the USA, and is therefore unfiltered, whereas when I’m on a local WiFi network, I’m behind the Great Firewall, unable to access Facebook, Twitter, and so on. The downside of China’s approach is that I’ve been slow to expand my business there — I will not break the law, and I need my employees to have access to the entire Internet.

Another example is Italy’s filtering policy regarding unlicensed (non-taxpaying) online gambling, which was blocked not by a national “Great Firewall” but rather by SOPA-style DNS filtering mandated for Italian ISPs. The visible result was an uptick in the use of Google DNS (8.8.8.8 and 8.8.4.4) by Italian gamblers, and if there was also an increase in gambling tax revenue, that was not widely reported. The downside here is the visible cracks in Italian society — many Italians apparently do not trust their own government. Furthermore, in 2013 the European Union ruled that this kind of filtering was a violation of EU policy.

In Turkey up until 2016, the government had similar protections in place, covering not gambling but rather pornography, terrorism, and anti-Islamic hate speech. The filtering was widely respected, showing that the Turkish people and their government were more closely aligned at that time than was evident during the Italian experiment. It was possible for Turkish internet users to opt out of the government’s Internet filtering regime, but such opt-out requests were uncommon. This fit the Internet’s cooperation-based foundation perfectly: where interests are aligned, cooperation is possible, but where interests are not aligned, unilateral mandates are never completely effective.

In the years since the SOPA debacle in the United States, I’ve made it my priority to discuss with the entertainment and luxury goods industries the business and technical problems posed to them by the Internet. Away from the cameras, most executives freely admit that it’s not possible to prevent determined users from reaching any part of the Internet they might seek, including so-called “pirate” sites which may even be “dedicated to infringement”. I learned, however, that there is a class of buyers of music, movies, and luxury goods who are not interested in infringement per se, and who are often simply misled by “pirate” Internet sites that pretend to be legitimate. One estimate was that only 1/3rd of commercial music is bought legally, and the remaining 2/3rds is roughly divided between dedicated (1/3rd) and accidental (1/3rd) infringement. If so, then getting the accidental infringers who comprise 1/3rd of the market to buy their music legally wouldn’t change the cost of music for those buyers, but could raise the music industry’s revenues by 100%. We should all think of that as a “win-win-win” possibility.

Speaking for myself, I’d rather live and act within the law, respecting intellectual property rights, and using my so-called “dollar votes” to encourage more commercial art to be produced. I fought SOPA not because I believed that content somehow “wanted to be free”, but because this kind of filtering will only be effective where the end-users see it as a benefit — see it, in other words, as aligned with their interests. That’s why I co-invented the DNS RPZ firewall system back in 2010, which allows security policy subscribers to automatically connect to their providers in near-realtime, and to then cooperate on wide-scale filtering of DNS content based on a shared security policy. This is the technology that SOPA would have used, except that SOPA would have been widely bypassed and, where not bypassed, would have prohibited DNSSEC deployment. American Internet users are more like Italians than Turks — they don’t want their government telling them what they can’t do.

I think, though, that every government ought to offer this kind of DNS filtering, so that any Internet user in that country who wants to see only the subset of the Internet considered safe by their national government can get that behavior as a service. Some users, including me, would be happy to follow such policy advice even though we’d fight against any similar policy mandate. In my case, I’d be willing to pay extra to get this kind of filtering. My nation’s government invests a lot of time and money identifying illegal web sites, whether dedicated to terrorism, or infringement, or whatever. I’d like them to publish their findings in real time using an open and unencumbered protocol like DNS RPZ, so that those of us who want to avoid those varieties of bad stuff can voluntarily do so. In fact, the entertainment industry could do the same — because I don’t want to be an accidental infringer either.

Future, Foreground, and Specific Approaches

While human ingenuity can sometimes seem boundless, a nation-state exerting any kind of control over Internet reachability within its borders has only three broad choices available to it.

First, the Great Firewall approach. In this scenario, the government is on-path and can witness, modify, or insert traffic directly. This is costly in human resources, services, equipment, electric power, and prestige. Every in-country Internet Service Provider that wants an out-of-country connection must work directly with government agencies or agents to ensure that real-time visibility and control are among the government’s powers. This may require that all Internet border crossings occur in some central location, or it may require that the government’s surveillance and traffic modification capabilities be installed in multiple discrete locations. In addition to hard costs, there will be soft costs like errors and omissions which induce unexplained failures. The inevitable effects on the nation’s economy must be considered, since a “Great Firewall” approach must by definition wall the country off from mainstream human ideas, with associated chilling effects on outside investment. Finally, this approach, like all access policies, can be bypassed by a determined-enough end-user who is willing to ignore the law. The “Great Firewall” approach will maximize the bypass costs, having first maximized deployment costs.

Second, a distributed announcement approach using Internet Protocol address-level firewalls. Every user and every service on the Internet has to have one or more IP addresses from which to send, or at which to receive, packets to or from other Internet participants. While the user-side IP addresses tend to be migratory and temporary in nature due to mobile users or address-pool sharing, the server-side IP addresses tend to be well known, pre-announced, and predictable. If a national government can compel all of its Internet Service Providers to listen for “IP address firewall” configuration information from a government agency, and to program their own local firewalls in accordance with the government’s then-current access policies, then it would have the effect of making distant (out-of-country) services deliberately unreachable by in-country users. Like all policy efforts, this can be bypassed, either by in-country (user) effort, or by out-of-country (service) provider effort, or by middle-man proxy or VPN provider effort. Bypass will be easier than in the Great Firewall approach described above, but a strong advantage of this approach is that the government does not have to be on-path, so everyone’s deployment costs are considerably lower.
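
As a rough illustration of this second approach, here is a minimal sketch in Python. The feed URL and its one-prefix-per-line format are invented for the example; the sketch polls a published blocklist and emits standard Linux ipset/iptables commands that an ISP could apply at its border routers:

    # Minimal sketch: turn a (hypothetical) government blocklist feed into
    # ipset/iptables commands. Transport authentication is omitted here but
    # would be essential in any real deployment.
    import urllib.request

    FEED_URL = "https://example.gov/blocklist.txt"  # hypothetical endpoint

    def fetch_blocklist(url):
        """Download the feed; assume one CIDR prefix per line, '#' comments."""
        with urllib.request.urlopen(url) as resp:
            lines = resp.read().decode("utf-8").splitlines()
        return [s for s in (ln.strip() for ln in lines)
                if s and not s.startswith("#")]

    def emit_ipset_commands(prefixes):
        """Print the commands that enforce the policy on a Linux router."""
        print("ipset create govblock hash:net -exist")
        for prefix in prefixes:
            print(f"ipset add govblock {prefix} -exist")
        print("iptables -I FORWARD -m set --match-set govblock dst -j DROP")

    if __name__ == "__main__":
        emit_ipset_commands(fetch_blocklist(FEED_URL))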

Third and finally, a distributed announcement approach using Domain Name System (DNS-level) firewalls. Every Internet access requires at least one DNS lookup, and these lookups can be interrupted according to policy if the end-user and Internet Service Provider (ISP) are willing to cooperate on the matter. A policy-based firewall operating at the DNS level can interrupt communications based on several possible criteria: either a “domain name” can be poisoned, or a “name server”, or an “address result”. In each case, the DNS element to be poisoned has to be discovered and advertised in advance, exactly as in the “address-level firewall” and “Great Firewall” approaches described above. However, DNS lookups are far less frequent than packet-level transmissions, and so the deployment cost of a DNS-level firewall will be far lower than for a packet-level firewall. A DNS firewall can be constructed using off-the-shelf open source software using the license-free “DNS Response Policy Zone” (DNS RPZ) technology first announced in 2010. The DNS RPZ system allows an unlimited number of DNS operators (“subscribers”) to synchronize their DNS firewall policy to one or more “providers” such as national governments or industry trade associations. DNS firewalls offer the greatest ease of bypass, so much so that it’s better to say that “end-user cooperation is assumed,” which could be a feature rather than a bug.
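
To give a flavor of the RPZ mechanism, here is a minimal policy zone of the kind a provider might publish; the zone name and trigger domains are placeholders. Each record is a policy trigger, and a CNAME pointing at the root (“.”) means “answer NXDOMAIN for this name”:

    ; Hypothetical policy zone "rpz.provider.example", published by a provider
    ; and pulled by subscribers with ordinary zone transfers (AXFR/IXFR).
    $TTL 300
    @                  SOA ns.provider.example. admin.provider.example. 1 3600 600 86400 300
                       NS  ns.provider.example.
    badsite.example    CNAME .                          ; block this domain
    *.badsite.example  CNAME .                          ; and all its subdomains
    phish.example      CNAME warning.provider.example.  ; redirect to a warning page

A subscribing resolver running BIND would then enable the policy in named.conf with a response-policy statement, for example: response-policy { zone "rpz.provider.example"; };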

Conclusion

A national government that wants to make a difference in the lived Internet experience of its citizens should consider not just the hard deployment and operational costs, but also the soft costs to the overall economy and to prestige, and especially what symbolic message is intended. If safety as defined by the government is to be seen as a goal it shares with its citizens, one that will be implemented using methods and policies agreed to by its citizens, then ease of bypass should not be a primary consideration. Rather, ease of participation and transparency of operation will be the most important ingredients for success.

Written by Paul Vixie, CEO, Farsight Security

Follow CircleID on Twitter

More under: Access Providers, Censorship, DNS, Intellectual Property, Internet Governance, Networks, Policy & Regulation

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

The post Nation Scale Internet Filtering — Do’s and Don’ts appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Life Lessons: Achilles Rupf, CEO of Naka Mobile

By Sheetal Kumbhar

Achilles Rupf, CEO of Naka Mobile, says that if you have self-belief and are willing to learn from your mistakes, nothing will stop you. 1. What job did you want when you were growing up? I always wanted to be a pilot! Being in charge of my company isn’t too bad either though. 2. If you had one […]

The post Life Lessons: Achilles Rupf, CEO of Naka Mobile appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/