Internet Protocol

Caribbean Businesses Can Make Good Use of Free DNS Security

By Gerard Best

IBM Security, Packet Clearing House (PCH) and Global Cyber Alliance (GCA) unveiled a free Domain Name System (DNS) service designed to protect all Internet users from a wide range of common cyber threats. Launched on November 16 with simultaneous press events in London, Maputo and New York, the public DNS resolver has strong privacy and security features built-in, and can be enabled with a few changes to network settings, as outlined on the organization’s website.

Using the IP address 9.9.9.9, the aptly named Quad9 service leverages IBM X-Force threat intelligence and further correlates with more than a dozen additional threat intelligence feeds from leading cybersecurity firms, in order to help keep individual users’ data and devices safe. It automatically protects users from accessing any website or internet address identified as dangerous.
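For readers who want to verify that the resolver is reachable before changing their network settings, here is a rough sketch (Python, standard library only, written for this article rather than taken from Quad9) that hand-builds a minimal DNS A-record query and sends it to 9.9.9.9 over UDP port 53. The hostname and timeout are arbitrary example values.

```python
import random
import socket
import struct

def quad9_lookup(hostname: str) -> bool:
    """Send a minimal DNS A-record query to Quad9 (9.9.9.9) over UDP port 53.

    Returns True if the resolver answered with at least one record; a blocked
    or non-existent name typically comes back with no answers.
    """
    txid = random.randint(0, 0xFFFF)
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3.0)
        sock.sendto(header + question, ("9.9.9.9", 53))
        reply, _ = sock.recvfrom(512)

    reply_id, flags, _qdcount, ancount = struct.unpack(">HHHH", reply[:8])
    rcode = flags & 0x000F            # 0 = NOERROR, 3 = NXDOMAIN
    return reply_id == txid and rcode == 0 and ancount > 0

if __name__ == "__main__":
    print("example.com via Quad9:", quad9_lookup("example.com"))
```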

“Leveraging threat intelligence is a critical way to stay ahead of cybercriminals,” Jim Brennan, Vice President Strategy and Offering Management, IBM Security, said in a release. “Consumers and small businesses traditionally didn’t have free, direct access to the raw data used by security firms to protect big businesses. With Quad9, we’re putting that data to work for the industry in an open way and further enriching those insights via the community of users. Through IBM’s donating use of the 9.9.9.9 address to Quad9, we’re applying these collaborative defense techniques while giving users greater privacy controls.”

The open, free service became the latest to provide security to end users on a global scale by leveraging the DNS system to deliver a smart threat intelligence feed.

“Quad9 is a free layer of protection that can put the DNS to work for all Internet users,” said John Todd, executive director of Quad9. “It allows optional encryption of the query between the user and the server, and it minimises the amount of data that can leak to unknown destinations. And it uses DNSSEC to cryptographically validate the content of the DNS answers that it’s passing back to users for domain names that implement this security feature.”
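The query encryption Todd refers to is DNS over TLS (RFC 7858), which resolvers such as Quad9 expose on TCP port 853. As a hypothetical illustration (not Quad9's own tooling), the sketch below wraps the same kind of query in a TLS session; the authentication name dns.quad9.net is an assumption drawn from Quad9's published setup guidance and should be checked against their documentation.

```python
import random
import socket
import ssl
import struct

def _read_exact(sock, n: int) -> bytes:
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("connection closed before full reply arrived")
        buf += chunk
    return buf

def dot_query(hostname: str, server: str = "9.9.9.9", port: int = 853) -> int:
    """Send a DNS A-record query over TLS and return the answer count."""
    txid = random.randint(0, 0xFFFF)
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    query = header + qname + struct.pack(">HH", 1, 1)

    ctx = ssl.create_default_context()   # verifies the resolver's certificate
    with socket.create_connection((server, port), timeout=5) as raw:
        # server_hostname "dns.quad9.net" is assumed here for certificate checks.
        with ctx.wrap_socket(raw, server_hostname="dns.quad9.net") as tls:
            # DNS over TCP/TLS prefixes each message with a 2-byte length.
            tls.sendall(struct.pack(">H", len(query)) + query)
            length = struct.unpack(">H", _read_exact(tls, 2))[0]
            reply = _read_exact(tls, length)

    _reply_id, _flags, _qd, ancount = struct.unpack(">HHHH", reply[:8])
    return ancount

if __name__ == "__main__":
    print("answers received over DNS-over-TLS:", dot_query("example.com"))
```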

Quad9 also lets users choose between a secure and an unsecured service, the latter intended for more advanced users who have specific reasons to reach malware or phishing sites, or who want to test against an unfiltered DNS recursive resolver. The service can also be extended to IoT devices, which face threats such as botnet command-and-control traffic.

Not only does Quad9 help Internet users avoid millions of malicious websites, but it also promises to help keep their browsing habits private. Deep-pocketed online advertisers are constantly investing in ways to take personal data from unsuspecting Internet users, in order to edge out competitors and expand markets. Frequently, low-security DNS servers are used to build extensive personal profiles of Internet users, including their browsing habits, location and identity. Many DNS providers, including many larger ISPs, are already in the lucrative business of storing personal data for resale to market research firms or digital advertising groups.

A further blow was struck in April when the US Congress repealed the Federal Communications Commission’s broadband privacy rules, which would have required Internet service providers to get consumer consent before selling or sharing personal information with advertisers and other companies. But the fight is far from over. With the launch of Quad9, a group of Internet non-profits has made available a free service specifically designed to put Internet users back in control of their personal data.

The service is deliberately engineered not to store or analyze personally identifiable information (PII). Todd said that decision was, in part, a deliberate stance against the ingrained practice among Internet service providers (ISPs) of collecting private information and reselling it to commercial data brokers such as online marketers.

“Our foremost goal is to protect Internet users from malicious actors, whether the threat be from malware or fraud or the nonconsensual monetization of their privacy. Quad9 doesn’t collect or store any PII, including Internet Protocol addresses. We don’t have accounts or profiles or ask who our users are. Since we don’t collect personal information, it can’t be sold or stolen,” he said.

The new service comes at a time when stakeholders, including governments, are demanding better protection of consumer data and Internet user privacy. In May 2018, the European Union’s General Data Protection Regulation (GDPR), a set of sweeping rules meant to protect the personal data and privacy of its citizens, comes into force.

Like their counterparts in Europe and the USA, Caribbean stakeholders also stand to gain from these security and privacy benefits. By some estimates, global cybercrime will cost approximately $6 trillion per year on average through 2021. For businesses in the developing economies of the Caribbean, cybercrime is a major concern. Around the region, legislators, law enforcement officials and security experts are locked in a struggle to keep pace with the escalating sophistication of transnational cybercriminal operations. The typically high cost of blocking attacks through DNS could explain why that technique has not been widely used by Caribbean businesses and Internet users.

“Sophisticated corporations can subscribe to dozens of threat feeds and block them through DNS, or pay a commercial provider for the service. However, small to medium-sized businesses and consumers have been left behind — they lack the resources or are not aware of what can be done with DNS. Quad9 solves these problems. It is memorable, easy to use, relies on excellent and broad threat information, protects privacy and security, and is free,” Phil Reitinger, president and CEO of GCA, said in a release.

The new Quad9 service shares the global infrastructure of PCH, a US-based non-profit which has, over the last two decades, established the world’s largest authoritative DNS service network, extending from heavily networked parts of North America, Europe and Asia to the less well-connected areas of sub-Saharan Africa and the Caribbean. PCH hosts multiple root name server letters and more than 300 TLDs on thousands of servers in 150 locations across the globe.

Quad9 has 100 points of presence in 59 countries, including 12 in the Caribbean, and plans to double that location count by 2019. Leveraging the expertise and global assets of PCH, the new DNS service promises to offer security and privacy to users in the Caribbean without compromising speed. Bill Woodcock, executive director of Packet Clearing House, said Quad9 users in those regions could actually experience noticeable improvements in performance and resiliency.

“Many DNS service providers are not sufficiently provisioned to be able to support high-volume input/output and caching, and adequately balance load among their servers. But Quad9 uses large caches, and load-balances user traffic to ensure shared caching, letting us answer a large fraction of queries from cache. Because Quad9 shares the PCH DNS infrastructure platform, all root and most TLD queries can be answered locally within the same stack of servers, without passing the query onward and making it vulnerable to interception and collection by others. When Quad9 does have to pass a query onward to a server outside of our control, unlike other recursive resolvers, we use a variety of techniques to ensure that the very minimum necessary information leaves our network and users’ privacy is maximised,” he said.

“This is a service that is squarely aimed at improving the Internet security and privacy situation for the global Internet user base, not just the developed world,” he added. “The fact that we can do it faster is just icing on the cake.”
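Woodcock’s point about shared caching is easiest to see with a toy model. The sketch below is not Quad9’s code; it is a minimal TTL cache with made-up names and numbers, showing why a resolver that pools many users behind large caches can answer most queries without leaving its own servers.

```python
import time
from typing import Callable, Dict, Tuple

class TTLCache:
    """Toy shared resolver cache: repeat queries are answered from memory
    until the cached record's TTL expires, and only then go upstream."""

    def __init__(self, upstream: Callable[[str], Tuple[str, int]]):
        self._upstream = upstream                     # returns (answer, ttl seconds)
        self._store: Dict[str, Tuple[str, float]] = {}
        self.hits = 0
        self.misses = 0

    def resolve(self, name: str) -> str:
        now = time.monotonic()
        cached = self._store.get(name)
        if cached and cached[1] > now:                # still within TTL: cache hit
            self.hits += 1
            return cached[0]
        self.misses += 1                              # unseen or expired: ask upstream
        answer, ttl = self._upstream(name)
        self._store[name] = (answer, now + ttl)
        return answer

# 1,000 queries spread over three popular names: only the first lookup of each
# name leaves the "resolver"; everything else is served from the shared cache.
cache = TTLCache(upstream=lambda name: ("192.0.2.1", 300))   # fake upstream answer
for i in range(1000):
    cache.resolve(["example.com", "example.net", "example.org"][i % 3])
print(cache.hits, cache.misses)                       # 997 hits, 3 misses
```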

Written by Gerard Best, Development Journalist

More under: Cybersecurity, DNS, DNS Security, Privacy

Avenue4 Helps IPv4 Sellers and Buyers Gain Market Access, Overcome Complexities

With more than 30 years of combined experience providing legal counsel to the technology industry, Avenue4’s principals have established unique expertise in the IPv4 industry, driving transparency and professionalism in the IPv4 market.

Co-founded by Marc Lindsey and Janine Goodman, Avenue4 possesses a deep understanding of the current market conditions surrounding IPv4 address trading and transfers. Through a broad network of contacts within Fortune 500 organizations, Avenue4 has gathered a significant inventory of IPv4 numbers. Leveraging this inventory and its reputation within the IT and telecom industries, Avenue4 is creating value for sellers and helping buyers make IPv6 adoption decisions that maximize return on their existing IPv4 infrastructure investments.

Understanding the IPv4 Market

Internet Protocol addresses, or IP addresses, are essential to the operation of the Internet. Every device needs an IP address in order to connect to the Internet, and then to communicate with other devices, computers, and services. IPv4 is version 4 of the Internet Protocol in use today. The finite supply of IPv4 addresses, which had generally been available free of charge through Regional Internet Registries (RIRs) such as the American Registry for Internet Numbers (ARIN), is now exhausted, and additional IPv4 addresses are only available in the North American, European and Asia-Pacific regions through trading (or transfers) on the secondary market.

The next-generation Internet Protocol, IPv6, provides a near limitless free supply of IP addresses from the RIRs. However, IPv6 is not backward compatible with IPv4, which currently carries the vast majority of Internet traffic. Migration to IPv6 can be costly — requiring significant upgrades to an organization’s IP network infrastructure (e.g., installing and configuring IPv6-capable routers, switches, firewalls and other security devices; enhancing IP-enabled software; and then running both IPv4 and IPv6 networks concurrently). As a result, the global migration to IPv6 has progressed slowly — with many organizations planning their IPv6 deployments as long-term projects. Demand for IPv4 numbers will, therefore, likely continue to be strong for several more years.
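To put the scarcity in perspective, a quick check with Python’s standard ipaddress module shows the size of the two address spaces and why the protocols cannot simply substitute for one another; the specific addresses below are reserved documentation examples.

```python
import ipaddress

# Total size of each address space.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32, about 4.3 billion
ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128, about 3.4e38

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")

# The two protocols are distinct address families, which is why IPv6 is not
# backward compatible with IPv4 and why networks run dual-stack during migration.
print(ipaddress.ip_address("192.0.2.1").version)      # 4
print(ipaddress.ip_address("2001:db8::1").version)    # 6
```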

Supplying Choice

Avenue4 specializes in connecting buyers and sellers of IPv4 addresses and provides access to a supply of IPv4 address space. The availability of this supply provides organizations with a viable choice to support their existing networks while the extended migration to IPv6 is underway. Although the supply of IPv4 address space has contracted relative to demand over the last 12 months, the IPv4 trading market provides network operators breathing room to develop and execute IPv6 deployment plans that are appropriate for their businesses.

Expertise Needed

Organizations in need of IPv4 addresses can purchase them from entities with unused addresses, and the transfer of control resulting from such sales can be effectively recorded in the Regional Internet Registry (RIR) system pursuant to the registries’ market-based transfer policies. However, structuring and closing transactions can be complex, and the essential information needed to make smart buy/sell decisions is not readily available. Succeeding in the market requires advisors with up-to-date knowledge of the nuances of the commercial, contractual, and Internet governance policies that shape IPv4 market transactions. With its deep experience, Avenue4 LLC cultivates transactions most likely to reach closure, structures creative and value-enhancing arrangements, and then takes those transactions to closure through the negotiation and registration transfer processes. By successfully navigating these challenges to broker, structure and negotiate some of the largest and most complex IPv4 transactions to date, Avenue4 has emerged as one of the industry’s most trusted IPv4 market advisors.

Avenue4 has focused on providing the counsel and guidance necessary to complete high-value transactions that meet sellers’ market objectives and provide buyers with flexibility and choice. When Avenue4 is engaged in deals, we believe that sellers and buyers should feel confident that the transactions we originate will be structured with market-leading terms, executed ethically, and closed in a way that protects the negotiated outcome.

Avenue4’s leadership team has advised some of the largest and most sophisticated holders of IPv4 number blocks. The principals of Avenue4, however, believe that technology-enabled services are the key to making the market more accessible to all participants. With the launch of its new online trading platform, ACCELR/8, Avenue4 is now bringing the same level of expertise and process maturity to the small and mid-size block market.

The Internet is Dead – Long Live the Internet

By Juha Holkkola

Back in the early 2000s, several notable Internet researchers were predicting the death of the Internet. Based on the narrative, the Internet infrastructure had not been designed for the scale that was being projected at the time, supposedly leading to fatal security and scalability issues. Yet somehow the Internet industry has always found a way to dodge the bullet at the very last minute.

While the experts projecting gloom and doom have been silent for the better part of the last 15 years, it seems that the discussion on the future of the Internet is now resurfacing. Some industry pundits such as Karl Auerbach have pointed out that essential parts of Internet infrastructure such as the Domain Name System (DNS) are fading from users’ view. Others such as Jay Turner are predicting the downright death of the Internet itself.

Looking at the developments over the last five years, there are indeed some powerful megatrends that seem to back up the arguments made by the two gentlemen:

  • As mobile has penetrated the world, it has created a shift from browser-based services to mobile applications. Although not many people realize this, the users of mobile apps do not really have to interface with the Internet infrastructure at all. Instead, they simply push the buttons in the app and the software is intelligent enough to take care of the rest. Because of these developments, key services in the Internet infrastructure are gradually disappearing from the plain sight of regular users.
  • As Internet of Things (IoT) and cloud computing gain momentum, the enterprise side of the market is increasingly concerned about the level of information security. Because the majority of these threats originate from the public Internet, building walls between private networks and the public Internet has become an enormous business. With emerging technologies such as Software-Defined Networking (SDN), we are now heading towards a world littered with private networks that expand from traditional enterprise setups into public clouds, isolated machine networks and beyond.

Once these technology trends have played their course, it is quite likely that the public Internet infrastructure and the services it provides will no longer be directly used by most people. In this sense, I believe both Karl Auerbach and Jay Turner are quite correct in their assessments.

Yet at the same time, both the mobile applications and the secure private networks that move the data around will continue to be highly dependent on the underlying public Internet infrastructure. Without a bedrock on which the private networks and the public cloud services are built, it would be impossible to transmit the data. Due to this, I believe that the Internet will transform away from the open public network it was originally supposed to be.

As an outcome of this process, I further believe that the Internet infrastructure will become a utility very similar to the electricity grids of today. While almost everyone benefits from them on a daily basis, only electrical engineers are interested in their inner workings or have direct access to them. So essentially, the Internet will become a ubiquitous transport layer for the data that flows within the information societies of tomorrow.

From the network management perspective, the emergence of secure overlay networks running on top of the Internet will introduce a completely new set of challenges. While network automation can carry out much of the configuration and management work, it will cause networks to disappear from plain sight in a similar way to mobile apps and public network services. This calls for new operational tools and processes to navigate this new world.

Once all has been said and done, the chances are that the Internet infrastructure we use today will still be there in 2030. However, instead of being viewed as an open network that connects the world, it will have evolved into a transport layer that is primarily used for transmitting encrypted data.

The Internet is Dead — Long Live the Internet.

Written by Juha Holkkola, Co-Founder and Chief Technologist at FusionLayer Inc.

More under: Access Providers, Broadband, Cloud Computing, Cybersecurity, Data Center, DDoS, DNS, Domain Names, Internet of Things, Internet Protocol, IP Addressing, IPv6, Mobile Internet, Networks, Telecom, Web

Five Questions: Making Sure Cable is at Consumer Tech’s Table

By Cablefax Staff

Once a group concentrated more on TV makers and other CE manufacturers, the Consumer Technology Association (formerly the Consumer Electronics Association) is increasingly overlapping with the cable industry. As the Internet of Things proliferates and streaming video’s play widens, there’s even more room for collaboration. Among the areas where cable and CTA are working together is WAVE, an interoperability effort for commercial Internet video, or OTT, that includes Comcast, Cox and others. We chatted with CTA Research and Standards svp Brian Markwalter recently about the intersection of the two industries.

Are there any cable companies that stand out to you from a technology perspective?

Sure, I’d say one in particular is a member so I guess we have more interaction—Comcast. Most of the big cable companies participate in our standards process so we have exposure from them on the technical side. We also have worked with CableLabs off and on over the years. I think there’s a strong relationship between the consumer technology industry and cable in part because people love their technology products and so many things are now connected and entertainment and TV have always been a big part of the consumer experience. So as more and more devices are connected it’s pretty natural that cable companies are part of that experience.

Are the standards and regulations really what stand out to you from the technology side of cable?

Yes, there does tend to be quite a bit of coordinating. We’ve coordinated on things like the IPv6 transition, we coordinate on how devices attach to cable systems, and I think we’ll continue to work on those kinds of issues. I’m sure there will be ongoing conversations about security and improving the overall cybersecurity footprint for devices. We all want consumers to have secure networks. It will take kind of a layered approach where the devices have security and the networks have security too.

Where do you see cable and technology intersecting in the future?

I think there’s kind of a cycle where sometimes consumers are adopting technology first and then the providers. I’d put tablets in that realm, where consumers started buying tablets pretty quickly and there was a fast ramp up. People, cable companies and others had to learn how to get content to those devices. I think we’ll continue to see that kind of cycle of cable making advances, improving speed and services, and then consumers finding things in the market that they like. So right now, there’s fast growth in connected products: the consumer IoT marketplace, health and fitness devices, smart home type products. And we also see the more advanced cable operators learning how to integrate those things into their offerings. For example, doing smart home or security services for their customers.

Is there anything about cable that you think might be irrelevant in the future?

No, I don’t think so. I think everybody is adapting. We’re all working together. Cable is participating in a very big CTA project [WAVE] around streaming media, streaming video to the home. People are trying to simplify that process, and create more common ways to do that built on HTML5. I think the services and models will adapt—as we all are from more of a broadcast structure—to more interactive and personalized content and services. But I think cable is positioned pretty well to do that.

Do cable companies have much of a presence at CES?

Yes, for sure. There’s three ways the industries have a presence. There’s on the show floor and exhibits, there’s meetings and meeting rooms, which is common, and then just being there to soak up the technology and being with people. So, we know for sure that there’s a large number of cable technologists that come to the show. We know that cable CTOs often do tours of the show and we’ve helped coordinate that.

Leading Lights 2017 Finalists: Most Innovative IoT/M2M Strategy (Service Provider)

By Iain Morris Our short list for the most innovative IoT/M2M strategy service provider includes a mainstream network operator, the world’s biggest maker of Internet Protocol network equipment, a smart grid specialist and a ‘digital agriculture’ hub.

The IETF’s Job Is Complete – Should It Now Scale Up, Down or Out?

By Martin Geddes

The IETF has the final day of its 98th meeting in Chicago today (Friday 31 Mar), far away from here in Vilnius. The Internet is maturing and becoming indispensable to modern life, and is transitioning to industrial types of use. Are the IETF’s methods fit-for-purpose for the future, and if not, what to do about it?

My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.

Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.

The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of those general-purpose needs include email message exchange or performance data capture and collation. There is a fully functioning governance structure provided by the Internet Society to review its remit and activities.

This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.

The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.

The IETF’s ethos is one which also embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.

All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.

Inevitably many of the active participants and stakeholders do come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETF’s essential story of success. It is a socio-technical ecosystem whose existence is amply justified by sustained and widespread adoption of its specification and standards products.

Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.

As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.

To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.

Gratitude is the only respectable response.

OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).

The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to the necessity of these parties to have a safe space to find compromise.

The core benefit of packet networking is that it enables the statistical sharing of costly physical transmission resources. This “statistical multiplexing” lets a wide range of application types share the same resources concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.
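A back-of-the-envelope simulation makes the multiplexing point concrete. The numbers below are invented for illustration: one hundred bursty sources, each peaking at 10 Mb/s but active only a tenth of the time, can be carried on a link far smaller than the 1,000 Mb/s that circuit-style provisioning for the sum of peaks would demand.

```python
import random

random.seed(1)

SOURCES = 100        # bursty senders sharing one link
PEAK = 10.0          # Mb/s when a source is active
DUTY_CYCLE = 0.1     # each source transmits 10% of the time
SLOTS = 10_000       # time slots simulated

sum_of_peaks = SOURCES * PEAK    # provisioning for every source peaking at once

# Aggregate demand per slot: each source is independently active or silent.
demand = [
    sum(PEAK for _ in range(SOURCES) if random.random() < DUTY_CYCLE)
    for _ in range(SLOTS)
]
p99 = sorted(demand)[int(0.99 * SLOTS)]   # capacity that satisfies 99% of slots

print(f"sum of peaks:                  {sum_of_peaks:.0f} Mb/s")   # 1000 Mb/s
print(f"99th-percentile shared demand: {p99:.0f} Mb/s")            # roughly 170 Mb/s
```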

So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free and mandated its use over rival technologies and approaches.

With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.

In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.

The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”, i.e. constructing “success modes” is the IETF’s raison d’être.
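As a minimal, made-up illustration of that robustness principle, the sketch below emits a header field in one canonical form while accepting sloppy variants on input; the field format is invented for the example and not taken from any RFC.

```python
def emit_header(name: str, value: str) -> str:
    """Conservative in what you send: always one canonical wire form."""
    return f"{name.strip().title()}: {value.strip()}\r\n"

def parse_header(line: str) -> tuple:
    """Liberal in what you accept: tolerate odd case, stray spaces, missing CRLF."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

print(repr(emit_header("content-type ", " text/html")))   # 'Content-Type: text/html\r\n'
print(parse_header("CONTENT-TYPE:text/html   \n"))         # ('content-type', 'text/html')
```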

Since RFCs exist to scratch the itches of real practitioners, they mostly have found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.

Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…

The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas with a model of the world with understood error and infidelity to reality. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.

This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems, which if left unattended to, will severely damage the IETF’s technical credibility and social legitimacy.

The first issue is that the IETF takes on problems for which it lacks an ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)

Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This, in turn, guarantees future disappointment and further disaster.

Another is that you find some efforts are misdirected as they perpetuate poor initial architecture choices in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.

Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses.”

There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in ever-growing “technical debt” and unmanaged complexity, and new RFCs have to work out how they relate to a tangled morass of past RFCs.

The IETF (and industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for problems of protocol interaction explode over time. The operational economics and technical scalability of the Internet are now being called into doubt.

Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.

Architects and engineers are professions that engage in safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.

Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.

The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.

As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.

There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.

There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.

The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as it industrialises.

If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for the lack of benefit of public investments. The telcos have lobbyists and never were loved anyway. In contrast, the IETF is not in a position to defend itself.

The backlash might even see a radical shift in power away from its open and democratic processes. Instead, we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.

The root issue is the dissonance between a title that includes the word “engineering”, and an organisation that fails to enact this claim. The result is a serious competency issue, that results in an accountability deficit, that then risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.

My own father was just a fitter on Boeing 747s, but needed constant exams and licensing just like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?

As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:

  • What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
  • What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
  • How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
  • What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?

The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.

The right way is to make a difficult choice: to scale up, scale down, or scale out?

One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process to identify the capability gaps, and to gather the necessary resources to fill them. This could be directly by developing the missing science and mathematics, or through building alliances with other organisations who might be better equipped.

Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.

In this model, RFCs of an “information infrastructure” nature would be reviewed more like how a novel suspension bridge or space rocket has a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.

An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.

The final option that I see is to “scale out”, and begin a new core of exploration but focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and the model for exploring “success modes” is proven. In this case, a renaming to the Internet Experiment Task Force for the spin-out might be seen as an acceptable and attractive one.

One thing is certain, and that is that the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet or perish along with the Prototype Internet it has nurtured so well.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

More under: Internet Protocol

Trusted Objects and partners deliver a secure TLS protocol to both IP and non-IP objects

By Sheetal Kumbhar

Trusted Objects, a security expert for the Internet of Things (IoT), in partnership with Avnet Silica and UbiquiOS Technology, has announced the availability of its secure TLS solution, bringing end-to-end sensor-to-server security to billions of sensors regardless of whether or not they support the Internet Protocol (IP). Now IoT developers are, say the partners, able to benefit […]

The Three Pillars of ICANN’s Technical Engagement Strategy

ICANN’s technical engagement team was established two years ago. Since then, we have made a great deal of progress in better engaging with our peers throughout the Internet Assigned Numbers Authority (IANA) stewardship transition proposal process and currently during the implementation phase. Over the past few months, the Office of the CTO has been reinforced with a dedicated research team composed of experienced Internet technologists. These experts are working hard to raise the level of ICANN’s engagement in measuring the usage and evolution of Internet identifier technologies, and they are collecting and sharing data that can further support the community in its policy development processes. They are also focusing on helping to build bridges with other relevant technical partners.

Our overall strategy for technical engagement is based on three pillars:

  • Continue building trust with our technical partners and peers within the ecosystem.
  • Expand our participation in relevant forums and events where we can further raise awareness about ICANN’s mission, while encouraging more diversity in participation in our community policy development processes.
  • Continue contributing ICANN’s positions on technical topics that are discussed outside our regular forums but affect our mission, keeping the focus on our shared responsibilities and effective coordination.

We can highlight in this blog some ongoing activities toward each goal:

Expanding Participation in Technical Forums

To continue building a sustainable relationship with our peers, we have increased, in number and in quality, our participation and contribution to various technical forums led by our partner organizations, including:

  • Internet Engineering Task Force (IETF)
  • Regional Internet Registries (RIRs): African Network Information Center (AFRINIC), Asia-Pacific Network Information Centre (APNIC), American Registry for Internet Numbers (ARIN), Latin American and Caribbean Network Information Centre (LACNIC) and Réseaux IP Européens Network Coordination Centre (RIPE NCC)
  • Regional country code top-level domain organizations: African TLD Organization (AFTLD), Council of European National TLD Registries (CENTR), Asia Pacific TLD Organization (APTLD), Latin American and Caribbean TLD Organization (LACTLD)
  • And many others …

Encouraging Diversity of Participants

As a community, we face the challenge of strengthening the bottom-up, multistakeholder policy development process, while at the same time ensuring that participation becomes more diverse. Looking beyond regional and gender diversity, we must also achieve technical diversity. For example, when we work on domain name policies that affect online services, how do we ensure that we have Internet service operators, application developers and software designers around the table to give their operational perspectives? And as mobile technology becomes an increasingly prevalent way of consuming Internet services, and mobile operators are important players in that sector, how do we ensure that they engage with and contribute to our policy development processes?

We have also seen a growing interest from the Internet services abuse mitigation community in understanding and engaging more actively in our community-led policy development processes. As a result, the output of these processes is taking their needs into consideration. Our Security, Stability and Resiliency (SSR) and Global Stakeholder Engagement (GSE) teams have worked together to provide capability-building programs dedicated to this community. We are exploring ways to cover more ground (particularly in emerging regions). Our recent participation in the Governmental Advisory Committee (GAC) Public Safety Working Group’s workshop in Nairobi has confirmed this need. A follow-up mechanism is under discussion to make sure our engagement efforts meet these needs.

Engaging in Technical Topics that Affect Our Ecosystem

Finally, within our technical scope, we have launched an Internet Protocol version 6 (IPv6) initiative to refine ICANN’s position on IPv6. The initiative defines actions that will ensure that, as an organization, we do our part to provide online services that our community can transparently access over both IPv6 and Internet Protocol version 4 (IPv4). Read more about our IPv6 initiative.
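One way a service operator might spot-check the kind of dual-stack availability the initiative calls for is sketched below; it is illustrative only, uses Python’s standard socket module, and the hostname and port are example choices. Results also depend on the client network itself having IPv6 connectivity.

```python
import socket

def reachable_over(host: str, family: int, port: int = 443) -> bool:
    """Return True if `host` resolves and accepts a TCP connection over the
    given address family (socket.AF_INET for IPv4, socket.AF_INET6 for IPv6)."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                      # no A/AAAA record for this family
    for fam, socktype, proto, _canon, sockaddr in infos:
        try:
            with socket.socket(fam, socktype, proto) as s:
                s.settimeout(5)
                s.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

for label, fam in (("IPv4", socket.AF_INET), ("IPv6", socket.AF_INET6)):
    print(label, reachable_over("www.icann.org", fam))
```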

A Conversation with Community Leader Lise Fuhr

Lise Fuhr is a leader in the Internet community in Denmark. Here, she reflects on what ICANN58 means for Denmark – and what are the key issues she will focus on at the meeting.

Tell us a little about yourself and your involvement in ICANN.

I’m currently Director General at the European Telecommunications Network Operators’ Association (ETNO), the association that includes Europe’s leading providers of telecommunication and digital services. In ICANN, ETNO is active in the Internet Service Providers and Connectivity Providers Constituency (ISPCP) and the Business Constituency (BC).

I’ve had several roles in the ICANN community, as a member of the Second Accountability and Transparency Review Team (ATRT2) and as co-chair of the Cross-Community Working Group that developed the proposal for the Internet Assigned Numbers Authority (IANA) stewardship transition. At present, I am a Board member of the ICANN affiliate Public Technical Identifiers (PTI), which is responsible for the operation of the IANA functions.

In the past, I was COO of Danish registry DIFO and DK Hostmaster, the entities responsible for the country code top-level domain (ccTLD) .dk. I have also worked for the Danish Ministry of Science, Technology and Innovation and for Telia Networks.

ICANN is all about the multistakeholder model. We actively seek participation from diverse cross-sections of society. From your perspective, what does the multistakeholder model of governance mean for Denmark?

Having ICANN58 in Copenhagen will help build an even stronger awareness of the role of Internet governance and of the multistakeholder model in Denmark. Today’s Internet ecosystem is broad – most societal and industrial sectors rely on the Internet. Almost every sector needs to take part in how the Internet is governed.

What relationship do you see between ICANN and its stakeholders and how would you like to see it evolve?

ETNO has always advocated for an active role in Internet governance. For this reason, we support the multistakeholder model, embodied by ICANN and its activities. We want to support ICANN as it takes its first steps after the transition. The multistakeholder model is an opportunity to bring positive values to the global Internet community. Freedom to invest and freedom to innovate both remain crucial to a thriving and diverse Internet environment.

What issues will you be following at ICANN58?

The discussion around the new generic top-level domains (gTLDs) will be very important. The program should be balanced and consider both the opportunities and the risks to be addressed. In addition, the work on enhancing ICANN’s accountability will also be essential to rounding out the good work done so far with the transition. Another important issue is the debate on the migration from Internet Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6). Last but not least, trust is a top priority, so it’s important to participate in the discussions around security.

Reaction: Do We Really Need a New Internet?

By Russ White

The other day several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed various technical matters. All around us, there were new buildings under construction, each with a tall tower crane anchored to the building in several places. We wondered how such a crane was built, and considered how precise that building process seemed compared to the complete mess that building a network seems to be.

And then, this week, I ran across a couple of articles (Feb 14 & Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post:

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn’t designed with healthcare, transport or energy grids in mind, to the extent it was ‘designed’ at all. Every “circle of death” watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future.

So the Internet is broken. Completely. We need a new one.

Really?

First, I’d like to point out that much of what people complain about in terms of the Internet, such as the lack of security, or the lack of privacy, are actually a matter of tradeoffs. You could choose a different set of tradeoffs, of course, but then you would get a different “Internet” — one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer, but the entire concept of a “new Internet” that supports everything we want it to support in a way that has none of the flaws of the current one, and no new flaws we have not thought about before — this is simply impossible.

So let’s leave that idea aside, and think about some of the other complaints.

The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say “let the network fix it.” At the same time, network engineers say either: “sure, I can give you perfect security, let me just install this firewall,” or “I don’t have anything to do with security, fix that in the application.” On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say “just let me use my thumbprint,” without ever wondering where they are going to go to get a new one when their thumbprint has been compromised. Is this “fixable”? Sure, for some strong measure of security — but a “new Internet” isn’t going to fare any better than the current one unless people start talking to one another.

The Internet cannot scale. Well, that all depends on what you mean by “scale.” It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design in scaling than you might think. You often do not know what problems you are going to encounter until you actually encounter them. To think that we can just “apply some math,” and make the problem go away shows a complete lack of historical understanding. What you need to do is build in the flexibility that allows you to overcome scaling issues as they arise, rather than assuming you can “fix” the problem at the first point and not worry about it ever again. The “foundation” analogy does not really work here; when you are building a structure, you have a good idea of what it will be used for, and how large it will be. You do not build a building today and then say, “hey, let’s add a library on the 40th floor with a million books, and then three large swimming pools and a new eatery on those four new floors we decided to stick on the top.” The foundation limits scaling as well as ensures it; sometimes the foundation needs to be flexible, rather than fixed.

There have been too many protocol mistakes. Witness IPv6. Well, yes, there have been many protocol mistakes. For instance, IPv6. But the problem with IPv6 is not that we didn’t need it, not that there was not a problem, nor even that all bad decisions were made. Rather, the problem with IPv6 is that the technical community became fixated on Network Address Translators, effectively designing an entire protocol around eliminating a single problem. Narrow fixations always result in bad engineering solutions — it’s just a fact of life. What IPv6 did get right was eliminating fragmentation by routers, providing a larger address space, and a few other improvements.

That IPv6 exists at all, and is even being deployed at all, shows just the entire problem with “the Internet is broken” line of thinking. It shows that the foundations of the Internet are flexible enough to take on a new protocol, and to fix problems up in the higher layers. The original design worked, in fact — parts and pieces can be replaced if we get something wrong. This is more valuable than all the iron clad promises of a perfect future Internet you can ever make.

We are missing a layer. This is grounded in the RINA model, which I like, and I actually use in teaching networking a lot more than any other model. In fact, I consider the OSI model a historical curiosity, a byway that was probably useful for a time, but is no longer useful. But the RINA model implies a fixed number of layers, in even numbers. The argument, boiled down to its essential point, is that since we have seven, we must be wrong.

The problem with the argument is twofold. First, sometimes six layers is right, and at other times eight might be. Second, we do have another layer in the Internet model; it’s just generally buried in the applications themselves. The network does not end with TCP, or even HTTP; it ends with the application. Applications often have their own flow control and error management embedded, if they need them. Some don’t, so exposing all those layers, and forcing every application to use them all, would actually be a waste of resources.

The Internet assumes a flawed model of end to end connectivity. Specifically, that the network will never drop packets. Well, TCP does assume this, but TCP isn’t the only transport protocol on the network. There is also something called “UDP,” and there are others out there as well (at least the last time I looked). It’s not that the network doesn’t provide more lossy services, it’s that most application developers have availed themselves of the one available service, no matter whether or not it is needed for their specific application.
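The difference in service model is visible even in a few lines of socket code. The sketch below is illustrative only: the TCP connection gets retransmission and ordering from the kernel, while the UDP datagram is fire-and-forget unless the application adds its own recovery (the UDP target is a reserved documentation address, so no answer is expected).

```python
import socket

# TCP: the kernel retransmits, orders and acknowledges on the application's
# behalf, presenting the illusion of a lossless byte stream.
with socket.create_connection(("example.com", 80), timeout=5) as tcp:
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(256).split(b"\r\n")[0])        # e.g. b'HTTP/1.0 200 OK'

# UDP: each datagram is sent once; if it is lost, nothing retries unless the
# application does. That is fine for DNS lookups, telemetry or media, where a
# late retransmission is worth less than the next fresh packet.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.settimeout(2)
udp.sendto(b"ping", ("192.0.2.1", 9))             # documentation address: no reply expected
try:
    udp.recvfrom(512)
except socket.timeout:
    print("datagram unanswered, and that is allowed")
finally:
    udp.close()
```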

The bottom line.

When I left San Francisco to fly home, 2nd Street was closed. Why? Because a piece of concrete had come loose on one of the buildings nearby, and seemed to be just about ready to fall to the street. On the way to the airport, the driver told me stories of several other buildings in the area that were problematic, some of which might need to be taken down and rebuilt. The image of the industrial building process, almost perfect every time, is an illusion. You can’t just “build a solid foundation” and then “build as high as you like.”

Sure, the Internet is broken. But anything we invent will, ultimately, be broken in some way or another. Sure the IETF is broken, and so is open source, and so is… whatever we might invent next. We don’t need a new Internet, we need a little less ego, a lot less mud slinging, and a lot more communication. We don’t need the perfect fix, we need people who will seriously think about where the layers and lines are today, why they are there, and why and how we should change them. We don’t need grand designs, we need serious people who are seriously interested in working on fixing what we have, and people who are interested in being engineers, rather than console jockeys or system administrators.

Written by Russ White, Network Architect at LinkedIn

More under: Internet Protocol, Security, Web
