
Humming an Open Internet Demise in London?

By Anthony Rutkowski

In mid-March, the group dubbed by Wired Magazine 20 years ago as Crypto-Rebels and Anarchists — the IETF — is meeting in London. With what is likely to be some loud humming, the activists will seek to rain mayhem upon the world of network and societal security using extreme end-to-end encryption, and collaterally diminish some remaining vestiges of an “open internet.” Ironically, the IETF uses what has become known as the “NRA defence”: extreme encryption doesn’t cause harm, criminals and terrorists do. The details and perhaps saving alternatives are described in this article.

Formally known as the Internet Engineering Task Force (IETF), the group began its life as a clever DARPA skunkworks project to get funded academics engaged in collective brainstorming of radical new ideas for DOD. It never created an actual organization — which helped it avoid responsibility for its actions. During the 1990s, the IETF was embraced as a strategic home by a number of companies growing the new, lucrative market for disruptive DARPA internet products and services, coupled with continued copious funding from the Clinton Administration, which also treated it as a means for promoting an array of perceived U.S. political-economic interests.

Over subsequent years, as other industry technical bodies grew and prospered, the IETF managed to find a niche value proposition in maintaining and promoting its legacy protocols. During the past few years, however, the IETF’s anarchist roots and non-organization existence have emerged as a significant security liability. The zenith was reached with the “Pervasive Encryption” initiative, which brought Edward Snowden virtually into IETF meetings and used humming to decide on radical actions that met the fancy of his acolytes.

The Pervasive Encryption initiative

The IETF began doing Snowden’s bidding with the “Pervasive Encryption” initiative, a common crusade against what Snowden deemed “Pervasive Monitoring.” The IETF activists even rushed to bless his mantra in the form of their own Best Current Practice, a mitigation commandment designated RFC 7258.

The initiative will come to fruition at a humming session in London at the IETF 101 gathering in a few weeks. The particular object of humming is an IETF specification designated TLS 1.3 (TLS stands for Transport Layer Security), designed to provide extremely strong, autonomous encryption for traffic between any end-points (known as “end-to-end” or “e2e”). The specification has been the subject of no fewer than 24 draft versions and more than 25,000 messages to reach a final stage of alleged un-breakability. In the IETF vernacular, the primary design goal of TLS 1.3 is to “develop a mode that encrypts as much of the handshake as is possible to reduce the amount of observable data to both passive and active attackers.” It does so by leveraging an array of cryptologic techniques to achieve “perfect forward secrecy.”
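
To make that design goal concrete, here is a minimal sketch of what a TLS 1.3-only connection looks like from an application’s side, using Python’s standard ssl module (Python 3.7+ built against OpenSSL 1.1.1 or later is assumed); the host name is purely illustrative.

    import socket
    import ssl

    # Build a client context that refuses anything older than TLS 1.3, so the
    # handshake itself (including the server certificate) travels encrypted.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3
    context.maximum_version = ssl.TLSVersion.TLSv1_3

    host = "example.com"  # placeholder endpoint, purely for illustration

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # TLS 1.3 uses ephemeral (EC)DHE key exchange only, which is what
            # gives every session forward secrecy.
            print(tls_sock.version())  # expected: 'TLSv1.3'
            print(tls_sock.cipher())   # negotiated AEAD cipher suite

Once such a session is established, everything after the initial ClientHello, including the server certificate, travels encrypted, which is precisely what limits what an observer in the middle of the network can see.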

There are perceived short-term benefits for some parties, described below, from essentially invisible traffic between any two end-point devices anywhere in the world. However, the impacts are overwhelmingly and profoundly adverse. Innumerable parties over the past two years have raised alarms across multiple organizations and venues: workshops and lists within the IETF itself, vendor and operational concerns, major enterprise users such as Financial Data Center Operators, anti-malware software vendors, the IEEE, the 3GPP mobile services community, the ITU-T security study group and TSB secretariat, a plethora of company R&D activities in the form of remedial product patents, trade press articles, and literally hundreds of research studies published in professional journals. The bottom-line view among the IETF activists, however, is “not our problem.”

The use of TLS by the IETF is somewhat ironic. Transport Layer Security (TLS) actually had its origins in early OSI industry efforts in the 1980s to provide responsible security for the OSI internet. Indeed, an initial, broadly acceptable industry specification was formally published in the early 1990s as a joint ITU-T/ISO (International Telecommunication Union Telecommunication Standardization Sector and International Organization for Standardization) standard that remains in effect today.

IETF crypto-activists a few years later took over the ITU-T/ISO internet TLS to roll out their own versions to compensate for DARPA internet cyber security deficiencies. However, it was the Snowden affair that primarily drove zealots to embark on TLS 1.3 as the crown jewel of the Pervasive Encryption initiative. A secondary but significant factor, the interest of Over-the-Top (OTT) providers in free, unfettered bandwidth to customers leveraging the Net Neutrality political mandate, added substantial fuel to the TLS 1.3 fire. Indeed, OTT providers have pursued a TLS variant known as QUIC, which allows for multiple simultaneous encrypted streams to end-user customers. QUIC creates major operational and compliance challenges similar to TLS 1.3 and is already being blocked. So as those in London hum for TLS 1.3 anarchy, what is gained and what is lost?

What is gained with TLS 1.3?

There are several “winners.” TLS 1.3 makes eavesdropping significantly more difficult. There are fewer “handshakes,” so it should be faster than previous TLS versions. The platform enhances a sense of confidentiality for some individual users, especially the paranoid and those seeking increased protection for activities they want to keep unknown. Those who profess extreme privacy zeal will likely be pleased.

For those engaged in any kind of unlawful activity, TLS 1.3 is a kind of nirvana. That includes those who seek to distribute and manage malware on remote machines, whether for programmed attacks or for clandestine campaigns such as those manifested by Russian agents in the U.S. elections. Symantec has already presented statistics on how a considerable amount of malware is distributed via end-to-end encrypted tunnels.

The platform also potentially enhances business opportunities and revenue for Over the Top (OTT) providers, and for vendors that leverage it for PR purposes. The latter includes some browser vendors and a few cloud data centre operators who cater to hosting customers for whom opaque end-to-end encryption for unaccountable activities is a value proposition.

TLS 1.3 also provides a perceived sense of satisfaction for those eternal “crypto anarchists” who have been labouring for so many years to best the government agency cryptologists and law enforcement authorities.

In a somewhat amusing, unintended way, the biggest winners may be the vendors of devices and software that detect and block TLS 1.3 traffic. They will benefit from the enormously increased market for their products.

What is lost with TLS 1.3?

TLS 1.3 (like QUIC) is already known to be highly disruptive to network operators’ ability to manage or audit their networks. This occurs through a number of factors, but one of the most prevalent is that it breaks the functionality of the enormous number of network “middleboxes” that are essential for network operation. The problem is exacerbated in commercial mobile networks, where the operator is also attempting to manage radio access network (RAN) bandwidth.

Because encrypted e2e transport paths in potentially very large numbers are being created and managed autonomously by some unknown third parties, a network provider faces devastating consequences with respect to providing sufficient bandwidth and meeting network performance expectations. It is in effect an unauthorized taking of the provider’s transport network resources.

As noted above, TLS 1.3 significantly facilitates widespread malware distribution, including agents that can be remotely managed for all kinds of tailored attacks. In the vernacular of cybersecurity, it exponentially increases the threat surface of the network infrastructure. The proliferation of Internet of Things (IoT) devices exacerbates the remotely controlled agent attack potential. Although the counter-argument is to somehow magically improve security at all the network end-points, the ability to actually accomplish this fanciful objective is illusory. It seems likely that most end users will view the loss of security and control of their terminal devices as much more important than any perceived loss of privacy from potential transport layer monitoring in transit networks.

A particularly pernicious result for enterprise network and data centre operators, including government agencies, is the potential for massive exfiltration of sensitive data. An intruder reaching into a data centre or company network through a TLS 1.3 encrypted tunnel could leverage that access to command substantial resources and gather and export intelligence or account information of interest. This potential result is one of the principal reasons for the continuing awareness campaign of the Enterprise Data Center Operators organization, coupled with its proffering of alternative options.

Most providers of network services are required to meet compliance obligations imposed by government regulation, industry Service Level Agreements, or insurance providers. The insurance impact may arise from an assessment that allowing TLS 1.3 traffic exposes providers to substantial tort litigation as an accessory to criminal or civil harm. The following compliance “by design” obligations are all likely to be significantly impeded or completely prevented by TLS 1.3 implementations:

  • Availability (including public services, specific resilience and survivability requirements, outage reporting)
  • Emergency and public safety communication (including authority to many, one to authority, access/prioritization during emergency, device discovery/disablement)
  • Lawful interception (including signaling, metadata analysis, content)
  • Retained data (including criminal investigative, civil investigative/eDiscovery, sector compliance, contractual requirements and business auditing)
  • Identity management (including access identity, communicating party identity, communicating party blocking)
  • Cyber Security (including defensive measures, structured threat information exchange)
  • Personally Identifiable Information protection
  • Content control (including intellectual property right protection, societal or organization norms)
  • Support for persons with disabilities

Lastly, the implementation of TLS 1.3 is likely to be found unlawful in most countries, a position backed by longstanding treaty provisions that recognize the sovereign right of each nation to control its telecommunications and provide for national security. Furthermore, nearly every nation in the world requires that, with proper authorization, encrypted traffic either be made available in decrypted form or the encryption keys be provided to law enforcement authorities — which TLS 1.3 prevents. Few if any rational nations or enterprises are going to allow end-to-end encrypted traffic to transit their networks, or to reach end-point hosts at data centres or users, without some visibility to assess the risk.

Myth of “the Open Internet”

The reality is that there have always been many internets, running on many technologies and protocols and loosely gatewayed under diverse operational, commercial, and political control. In fact, the largest and most successful of them is the global commercial mobile network infrastructure, which manages its own tightly controlled technical specifications and practices. With the rapid emergence of NFV-SDNs and 5G, internets on demand are beginning to appear.

The myth of a singular “Open Internet” has always been a chimera among Cyber Utopians and clueless politicians riding the Washington Internet lobbyhorse. The myth was begun by the Clinton Administration twenty years ago as an ill-considered global strategy to advance its perceived beneficial objectives and Washington politics. It came to backfire on the U.S. and the world in multiple dangerous ways. In reality, the humming approval of TLS 1.3 in London will likely diminish the “openness” within and among internets, but it will also properly cordon off the dangerous ones.

Thus, the perhaps unintended result of the IETF crypto zealots moving forward with TLS 1.3 will be for most operators to watch for TLS 1.3 traffic signatures at the network boundaries or end-points and either kill the traffic or force its degradation.
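
As a rough illustration of what such boundary watching could involve, the sketch below parses the first TLS record of a connection and checks whether the ClientHello advertises TLS 1.3 in its supported_versions extension. It is one assumed, simplified approach for illustration, not the mechanism any particular vendor uses; real deployments must also contend with record fragmentation and with QUIC over UDP.

    def offers_tls13(record: bytes) -> bool:
        """Best-effort check: does this raw TLS record carry a ClientHello
        that lists TLS 1.3 (0x0304) in its supported_versions extension?"""
        try:
            if record[0] != 0x16 or record[5] != 0x01:
                return False                        # not a handshake ClientHello
            i = 9                                   # start of ClientHello body
            i += 2 + 32                             # legacy_version + random
            i += 1 + record[i]                      # session_id
            i += 2 + int.from_bytes(record[i:i + 2], "big")   # cipher_suites
            i += 1 + record[i]                      # compression_methods
            ext_end = i + 2 + int.from_bytes(record[i:i + 2], "big")
            i += 2
            while i + 4 <= ext_end:
                ext_type = int.from_bytes(record[i:i + 2], "big")
                ext_len = int.from_bytes(record[i + 2:i + 4], "big")
                if ext_type == 43:                  # supported_versions
                    data = record[i + 4:i + 4 + ext_len]
                    versions = data[1:1 + data[0]]  # 1-byte list length first
                    return any(versions[j:j + 2] == b"\x03\x04"
                               for j in range(0, len(versions), 2))
                i += 4 + ext_len
        except IndexError:
            pass
        return False

A boundary device that detects TLS 1.3 offered in this way can only drop, throttle, or pass the flow; it can no longer inspect the remainder of the handshake.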

Innovation and a major industry standards organization to the rescue

Fortunately, there are responsible alternatives to TLS 1.3 and QUIC. For the past two years, some of the best research centres around the world have been developing the means for “fine-grained” visibility into encrypted traffic that balances both security interests and privacy concerns. Several dozen platforms have been described in major papers, spawned innovative university programs, led to a major standards Technical Report, and even generated a seminal PhD thesis. A few have been patented. A number of companies have pursued proprietary solutions.

The question remained, however, which major global industry standards body would step up to the challenge of taking the best-of-breed approaches and rapidly producing new technical specifications for use. The answer came last year when the ETSI Cyber Security Technical Committee agreed to move forward with several Fine Grained Transport Layer Middlebox Security Protocols. ETSI, as both a worldwide and European body, has previously led major successful global standards efforts such as the GSM mobile standards now spun out as 3GPP, and the NFV Industry Specification Group, so it had the available resources and industry credentials.

Considerable outreach is being undertaken to many other interested technical organizations, and a related Hot Middlebox Workshop and Hackathon are scheduled for June. The result: the IETF can hum as it wishes, while the rest of the world moves on with responsible alternatives that harmonize the essential requirements of network operators, data centres, end users, and government authorities.

Written by Anthony Rutkowski, Principal, Netmagic Associates LLC


More under: Cyberattack, Cybercrime, Cybersecurity, Internet Governance, Policy & Regulation

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

Cisco Launches New ‘5G Now’ Portfolio for Service Providers Taking Action Today

By IoT – Internet of Things

Continuing on its path to disrupt the industry by redefining the network, Cisco announced today its ‘5G Now’ portfolio for service providers ready to go full throttle on their 5G roadmap. 5G services promise to offer emerging new services at significantly faster speeds, expanded capacity and stronger coverage to accommodate the more than 27 billion […]

The post Cisco Launches New ‘5G Now’ Portfolio for Service Providers Taking Action Today appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

Fagerberg adds Consulting Editor role at IoT Now Transport to his leadership of analysts Berg Insight

By Jeremy Cowan

IoT Now is pleased to announce that, in addition to his position as founder and chief executive of Internet of Things (IoT) analysts, Berg Insight, Johan Fagerberg has taken up a newly-created, part-time post as Consulting Editor of IoTNowTransport.com.

Launched in 2017, IoT Now Transport is the first sub-brand from IoT Now to focus on a single industry vertical. It follows the success of numerous Insight Reports that IoT Now has published on transport, telematics, fleet management and automotive connections since 2010. The reports have been compiled by independent analyst houses, specially commissioned by the leading online and print publication in the Internet of Things, IoT Now.

Johan Fagerberg has added Consulting Editorship of the website IoT Now Transport to his leadership of analysts Berg Insight.

The Sweden-based analyst firm, Berg Insight, specialises in several areas of business and technology within IoT, but has developed a particularly strong reputation for its focus and expertise in connected transport.

“This is a major development for the fast-growing IoT Now brand, and its first sub-brand focusing on a single industry. Since the launch of IoT Now (initially M2M Now) in 2010, transport has been an enormously exciting sector for us and it became clear that its followers needed a dedicated information resource,” says editorial director & Publisher, Jeremy Cowan. “The launch of IoTNowTransport.com became a matter of time. We have since invested heavily in talented and highly informed IoT writers such as Antony Savvas, Bob Emmerson and Nick Booth.

“The specialist editorial input from Johan Fagerberg now means that IoT Now Transport can offer the site’s visitors an even greater depth of expertise that is frankly unrivalled in the industry,” says Cowan.


The post Fagerberg adds Consulting Editor role at IoT Now Transport to his leadership of analysts Berg Insight appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

About half of U.S. consumers with insurance are interested in receiving additional services from their insurance provider

By Parks Associates

New research by Parks Associates shows that 40-50% of consumers with either homeowners or renters insurance are interested in receiving additional services, such as restoration and maintenance services, from their insurance provider. Insurance Opportunities in the Connected Home explores the convergence of smart home technologies and insurance services as a way to generate new revenue, reduce claims, and […]

The post About half of U.S. consumers with insurance are interested in receiving additional services from their insurance provider appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

Nokia and China Mobile to jointly explore use of 5G to drive new business opportunities for vertical industries

By IoT – Internet of Things

Nokia and China Mobile have signed an agreement under which the companies will jointly investigate how China Mobile can extend its service offerings for vertical markets using the massive connectivity, ultra reliability and ultra-low latency capabilities of 5G. The research will focus on how industries could benefit from the growth of smart cities, smart transportation […]

The post Nokia and China Mobile to jointly explore use of 5G to drive new business opportunities for vertical industries appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

Vodafone and Samsung strategic partnership to launch Smart Home services

By IoT – Internet of Things

Vodafone Group will become Samsung’s exclusive strategic telecoms partner in selected European markets to develop and launch a range of consumer Internet of Things (IoT) ‘Smart Home’ product and services. The “V-Home by Vodafone” suite brings together Samsung’s “SmartThings” open platform and the “V by Vodafone” consumer IoT system (launched in November 2017) to offer […]

The post Vodafone and Samsung strategic partnership to launch Smart Home services appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

Have We Reached Peak Use of DNSSEC?

By Geoff Huston

The story about securing the DNS has a rich and, in Internet terms, protracted history. The original problem statement was simple: how can you tell if the answer you get from your query to the DNS system is ‘genuine’ or not? The DNS alone can’t help here. You ask a question and get an answer. You are trusting that the DNS has not lied to you, but that trust is not always justified.

Whether the DNS responses you may get are genuine or not is not just an esoteric question. For example, many regimes have implemented a mechanism for enforcing various national regulations relating to online access to content by using DNS interception to prevent the resolution of certain domain names. In some cases, the interception changes a normal response into silence, while in other cases a false DNS answer is loaded into the response. As well as such regulatory-inspired intervention in the DNS, there is also the ever-present risk of malicious attack. If an attacker can pervert the DNS in order to persuade a user that a named resource lies behind the attacker’s IP address, then there is always the potential to use DNS interception in ways that are intended to mislead and potentially defraud the user.

DNSSEC is a partial response to this risk. It allows the end client’s DNS resolver to check the validity and completeness of the responses that are provided by the DNS system. If the published DNS data was digitally signed in the first place, then the client DNS can detect this, and the user can be informed when DNS responses have been altered. It is also possible to validate assertions that the name does not exist in the zone, so that non-existence of a name can be validated through DNSSEC. If the response cannot be validated, then the user has good grounds to suspect that some third party is tampering inside the DNS.
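
In practice, a stub client can at least observe whether its recursive resolver claims to have validated a response by looking at the AD (Authenticated Data) flag in the reply. The sketch below does that with the third-party dnspython package; the queried name is simply an example of a signed zone.

    import dns.flags
    import dns.resolver

    # Ask the locally configured recursive resolver for a record in a signed
    # zone and check whether it sets the AD (Authenticated Data) bit, i.e.
    # whether it performed DNSSEC validation on our behalf.
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)   # signal that we want DNSSEC data

    answer = resolver.resolve("example.com", "A")   # any signed zone will do
    validated = bool(answer.response.flags & dns.flags.AD)
    print("upstream resolver validated the response:", validated)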

From this perspective, DNSSEC has been regarded as a Good Thing. When we look at the rather depressing saga of misuse and abuse of the Internet and want to do something tangible to improve the overall picture, then ‘improving’ the DNS is one possible action. Of course, it’s not a panacea, and DNSSEC will not stop DNS interception, nor will it stop various forms of intervention and manipulation of DNS responses. What it can do is allow the client who posed the query some ability to validate the received response. DNSSEC can inform a client whether the signed response that they get to a DNS query is an authentic response.

The Costs and Benefit Perceptions of DNSSEC Signing

DNSSEC is not without its own costs, and the addition of more moving parts to any system invariably increases its fragility. It takes time and effort to manage cryptographic keys. It takes time and effort to sign DNS zones and ensure that the correct signed information is loaded into the DNS at all times. Adding digital signatures to DNS responses tends to bloat DNS messages, and these larger DNS messages add to the stress of using a lightweight datagram protocol to carry DNS queries and responses. The response validation function also takes time and effort. DNSSEC is not a ‘free’ addition to the DNS.

In addition, DNSSEC adds further vulnerabilities to the DNS. For example, if the signatures (RRSIG records) covering a zone’s DNSKEY records are allowed to expire, then the material loaded into the zone that was signed with this key also expires, as seen by validating resolvers. More generally, if the overlay of keys and meshed digital signatures fails in any way, then validating resolvers will be unable to validate DNS responses for the zone. DNSSEC is not implemented as a warning to the user: DNSSEC will cause information to be withheld if the validating DNS resolver fails to validate the response.
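
A signature quietly approaching its expiration date is therefore something zone operators have to watch. Below is a hedged sketch, again using dnspython, that reports how much lifetime remains on the RRSIGs returned alongside a zone’s DNSKEY set; the queried name is illustrative.

    import time

    import dns.flags
    import dns.rdatatype
    import dns.resolver

    # Report how long the signatures covering a zone's DNSKEY set remain
    # valid; if these lapse, validating resolvers start failing the zone.
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)

    response = resolver.resolve("example.com", "DNSKEY").response
    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.RRSIG:
            for rrsig in rrset:
                days_left = (rrsig.expiration - time.time()) / 86400.0
                covered = dns.rdatatype.to_text(rrsig.type_covered)
                print(f"RRSIG over {covered} expires in {days_left:.1f} days")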

Attacks intended to pervert DNS responses fall into two major categories. The first is denial, where a DNS response is blocked and withheld from the query agent. DNSSEC can’t solve that problem. It may well be that the DNS “name does not exist” (NXDOMAIN) response cannot be validated, but that still does not help in revealing what resource record information is being occluded by this form of interception. The second form of attack is alteration, where parts of a DNS response are altered in an effort to mislead the client. DNSSEC can certainly help in this case, assuming that the zone being attacked is signed, and the client performs DNSSEC validation.

Is the risk of pain from this second class of attack an acceptable offset against the added effort and cost of both maintaining signed zones and operating DNSSEC-validating resolvers? The answer has never been an overwhelming and enthusiastic “yes.” The response to DNSSEC has been far more tempered. Domain name zone administrators appear to perceive DNSSEC-signing of their zone as representing a higher level of administrative overhead, higher delays in DNS resolution, and the admission of further points of vulnerability.

The overwhelming majority of domain name zone administrators appear simply to be unaware of DNSSEC; or, even if they want to sign their zone, they cannot publish a signed zone because of limitations in the service provided by their registrar; or, if they are aware and could sign their zone, they do not appear to judge that the perceived benefit of DNSSEC-signing adequately offsets the cost of maintaining the signed zone.

There are a number of efforts to try to address these combined issues of capability and perception. Some of these efforts attempt to offload the burden of zone signing and key management to a set of fully automated tools, while others use a more direct financial incentive, offering reduced name registration fees for DNSSEC-signed zones.

The metrics of signed DNSSEC zones are not easy to come by for the entire Internet, but subsections of the namespace are more visible. In New Zealand, for example, just 0.17% of the names in the .nz domain are DNSSEC-signed (ldp.nz). It appears that this particular number is not anomalously high or low, but, as noted, solid whole-of-Internet data is not available for this particular metric.

It appears that on the publication side, the metrics of DNSSEC adoption still show a considerable level of caution bordering on skepticism.

DNSSEC Validation

What about resolution behaviour? Are there measurements to show the extent to which users pass their queries towards DNS resolvers that perform DNSSEC validation?

Happily, we are able to perform this measurement with some degree of confidence in the results. Using measurement scripts embedded in online ads, and an ad campaign that presents the scripted ad across a significant set of endpoints that receive advertisements, we can trigger the DNS to resolve names that are served exclusively by this measurement system’s servers. By careful examination of the queries that are seen by the servers, it is possible to determine whether the end user system is passing its DNS queries into DNSSEC-validating resolvers.

We’ve been doing this measurement continuously for more than four years now, and Figure 1 shows the proportion of users that pass their DNS queries through DNSSEC-validating resolvers.

* * *

Perhaps it’s worth a brief digression at this point to look at exactly what “measuring DNSSEC validation” really entails.

Nothing about the DNS is as simple as it might look in the first instance, and this measurement is no exception. Many client-side stub resolvers are configured to use two or more recursive resolvers. The local DNS stub resolver will pass the query to one of these resolvers, and if there is no response within a defined timeout interval, or if the local stub resolver receives a SERVFAIL or a REFUSED code, then the stub resolver may re-query using another configured resolver. If the definition of “passing a query through DNSSEC-validating resolvers” is that the DNS system as a whole both validates signed DNS information and withholds signed DNS information if the validation function fails, then we need to be a little more careful in performing the measurement.

The measurement test involves resolving two DNS names: one is validly signed, and the other has an incorrect signature. Using this pair of tests, users can be grouped into three categories:

  a. None of the resolvers used by the stub resolver performs DNSSEC validation, and this is evident when the client is able to demonstrate resolution of both DNS names and did not query for any DNSSEC signature information.
  b. Some of the resolvers perform DNSSEC validation, but not all, and this is evident when the client is seen to query for DNSSEC signature information yet demonstrates resolution of both DNS names.
  c. All of the resolvers used by the client perform DNSSEC validation, and this is evident when the client is seen to query for DNSSEC signature information and demonstrates that only the validly-signed DNS name resolved.

The measurement we are using in Figure 1 is category ‘c’, where we are counting end systems that have resolved the validly-signed DNS name and have been unable to resolve the invalidly-signed DNS name.
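
The same two-name logic can be approximated from a single vantage point. The sketch below, using dnspython against the system’s configured resolvers, treats successful resolution of a correctly signed name together with a SERVFAIL-style failure on a deliberately mis-signed test name as evidence that the whole resolver chain validates. The test names are commonly cited public examples, assumed here for illustration; substitute your own if preferred.

    import dns.exception
    import dns.resolver

    def resolves(name: str) -> bool:
        """True if the configured resolver chain returns an address for name."""
        try:
            dns.resolver.resolve(name, "A")
            return True
        except (dns.resolver.NoNameservers,   # e.g. SERVFAIL from validation
                dns.resolver.NXDOMAIN,
                dns.resolver.NoAnswer,
                dns.exception.Timeout):
            return False

    good = resolves("example.com")        # a correctly signed zone
    bad = resolves("dnssec-failed.org")   # a deliberately mis-signed test zone

    if good and not bad:
        print("every resolver in the chain validates (category c)")
    elif good and bad:
        print("no effective validation in the chain (category a or b)")
    else:
        print("inconclusive; check connectivity")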

* * *

Figure 1 shows a story that is consistent with an interpretation of “peak DNSSEC” from the perspective of DNSSEC validation. When we started this measurement in late 2013, we observed that around 9% of users passed their queries to DNSSEC-validating resolvers. This number rose across 2014 and 2015, and by the end of 2015, some 16% of users were sitting behind DNSSEC-validating DNS resolvers. However, that’s where it stuck. Across all of 2016 this number remained steady at around 16%, then in 2017, it dropped. The first half of the year saw the number at just below 15%, and a marked fall in July 2017 took it down to 13%. At the time of the planned roll of the KSK, the number dropped further to 12%, where it has remained until now.

If this number continues to drop, then we stand the risk of losing impetus with DNSSEC deployment. If fewer users validate DNS responses, then the rationale for signing a zone weakens. And the fewer the signed zones, the weaker the motivation for resolvers to perform DNSSEC validation. Up until 2016, DNSSEC was in a virtuous circle: the more validating resolvers, the greater the motivation to sign zones; and the more signed zones, the greater the motivation for resolvers to perform validation. But this same feedback cycle also works in the opposite direction, and the numbers over the past 14 months bear this out, at least on the validation side.

From the validation perspective, the use of DNSSEC appeared to have peaked in early 2016 and has been declining since then.

Why?

Given that our current perceptions of the benefits of DNSSEC appear to be overshadowed by our perceptions of the risks of turning it on, the somewhat erratic measures of DNSSEC adoption are perhaps unsurprising.

I also suspect that the planned KSK roll and the last-minute suspension of this operation in October 2017 did the overall case for DNSSEC adoption no favours. The perception that DNSSEC was a thoroughly tested and well understood technical mechanism was given a heavy blow by this suspension of the planned key roll. It exposed some uncertainties relating to our understanding of the DNSSEC environment in particular and of the DNS system as a whole, and while the suspension was entirely reasonable as an operationally conservative measure, the implicit signal about our current lack of thorough understanding of the way the DNS, and DNSSEC, works sent negative signals to both potential and current users of DNSSEC.

However, I can’t help but think that this is an unfortunate development, as the benefits of DNSSEC are, in my view, under-appreciated. DNSSEC provides a mechanism that enables high trust in the integrity of DNS responses, and DANE (placing domain name keys in the DNS and signing these entries with DNSSEC) is a good example of how DNSSEC can be used to improve the integrity of a currently fractured structure of trust in the namespace. The use of authenticated denial in DNSSEC provides a hook to improve the resilience of the DNS by pushing the resolution of non-existent names back to recursive resolvers through the use of NSEC caching. DNSSEC is not just about being able to believe what the DNS tells you; it is also about making the namespace more trustworthy and improving the behaviour of the DNS and its resilience to certain forms of hostile attack.
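
For a sense of how lightweight the DANE side of this is, a TLSA lookup is just another DNS query, whose trustworthiness rests entirely on DNSSEC validation of the signed answer. A small sketch with dnspython follows; the service name is hypothetical and should be replaced with a real DANE-enabled host.

    import dns.flags
    import dns.resolver

    # Fetch the TLSA record that DANE associates with a TLS service; the
    # record lives at _port._protocol.hostname. The name below is hypothetical.
    resolver = dns.resolver.Resolver()
    resolver.use_edns(0, dns.flags.DO, 4096)

    for tlsa in resolver.resolve("_25._tcp.mail.example.net", "TLSA"):
        print(tlsa.usage, tlsa.selector, tlsa.mtype, tlsa.cert.hex())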

It would be a sad day if we were to give up on these objectives for lack of momentum behind DNSSEC adoption. It would be unfortunate if we were to persist with the obviously corrupted version of name certification we have today because some browser software writers are obsessed with shaving off the few milliseconds that need to be spent validating a name against the name’s public key when using the DNS. It would be unfortunate if the DNS continues to be the weapon of choice in truly massive denial-of-service attacks because we are unable to deploy widespread NSEC caching to absorb these attacks close to the source. But a declining level of DNSSEC adoption means that these objectives appear to fade away.

I’m hoping that we have not passed the point of peak use of DNSSEC, and that the last two years have been a temporary aberration in a larger picture of progressive uptake. That would be the optimistic position.

Otherwise, we are being pushed into a somewhat more challenging environment that has strong parallels with the Tragedy of the Commons. If the natural incentives to individual actors do not gently nudge us to prefer outcomes that provide better overall security, more resilient infrastructure and a safer environment for all users, then we are all in trouble.

If the network’s own ecosystem does not naturally lead to commonly beneficial outcomes, then we leave the door open to various forms of regulatory imposition. And the centuries of experience we have with such regulatory structures should not inspire us with any degree of confidence. Such regulatory strictures often tend to be inefficiently applied, selectively favour some actors at the expense of others, generally impose additional costs on consumers and, as often as not, fail to achieve their intended outcomes. If we can’t build a safe and more resilient Internet on our own, then I’m not sure that we are going to like what will happen instead!

Written by Geoff Huston, Author & Chief Scientist at APNIC


More under: Cybersecurity, DNS, DNS Security

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

LoRa Alliance™ to Demonstrate Market Readiness of LoRaWAN™ Low-Power Wide Area Networking for IoT at Mobile World Congress

By IoT – Internet of Things

The LoRa Alliance™, the global association of companies backing the open LoRaWAN™ standard for Internet of Things (IoT) low-power wide-area networks (LPWANs), announced its participation at Mobile World Congress 2018. This year’s booth will feature three demonstration areas featuring end-to-end LoRaWAN IoT solutions for the energy & utilities, smart agriculture, and buildings & industrial applications. […]

The post LoRa Alliance™ to Demonstrate Market Readiness of LoRaWAN™ Low-Power Wide Area Networking for IoT at Mobile World Congress appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed