Using DNSSEC to improve S/MIME security

By Kevin Meynell

RFC 8162, “Using Secure DNS to Associate Certificates with Domain Names for S/MIME”, was published a couple of months ago. It seems to have gone largely unnoticed, but it defines an experimental protocol for verifying the digital certificates associated with S/MIME messages, in a similar manner to what DANE does for TLS.

S/MIME-encoded messages often contain a digital certificate that authenticates the sender of the message and can be used for encrypting replies. However, in order for the receiver of the message to verify that the certificate belongs to the sender, their mail user agent also needs to be able to validate the trust anchor from which the certificate is derived. Trust anchors are often distributed with operating systems or installed by users, but this relies on the integrity of those processes and of the third parties issuing the trust anchors.

RFC 8162 therefore defines a new DNS Resource Record (RR) type called SMIMEA that a domain owner can use to associate a certificate or public key with an e-mail address, thereby forming an SMIMEA certificate association. The association may be an end-entity, intermediate, or trust anchor certificate, and it allows an application or service to look up and verify a certificate or public key in the DNS.
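
As a purely illustrative aside (not part of the original post or the RFC text), the short Python sketch below shows how a client might construct the DNS owner name for an SMIMEA lookup as described in RFC 8162: the local-part of the address is hashed with SHA2-256, the digest is truncated to 28 octets, and the result becomes a label under _smimecert in the sender’s domain. The address used is just a placeholder.

    # Hedged sketch of the RFC 8162 owner-name construction for an SMIMEA lookup.
    # Assumes the local-part is UTF-8 and that the SHA2-256 digest is truncated
    # to 28 octets (56 hex characters), as the RFC specifies.
    import hashlib

    def smimea_owner_name(email: str) -> str:
        local_part, _, domain = email.partition("@")
        digest = hashlib.sha256(local_part.encode("utf-8")).hexdigest()[:56]
        return f"{digest}._smimecert.{domain}."

    # Placeholder address; the returned name is where an SMIMEA record would live,
    # with RDATA mirroring TLSA (usage, selector, matching type, association data).
    print(smimea_owner_name("hugh@example.com"))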

Of course, a DNS zone containing SMIMEA records also needs to be DNSSEC-signed, and the DNS response should be correctly validated. All the more reason to be deploying DNSSEC, so please check out our Start Here page to find out how to get started!

Read more here: www.internetsociety.org/deploy360/blog/feed/

DPRIVE experimental service debuts @ IETF 99

By Kevin Meynell

The IETF is not only a place to discuss the development of Internet protocols, but also offers a place for developers and operators to ‘eat their own dog food’ on the meeting network. And given that the IETF DPRIVE Working Group has published some RFC specifications over the past year, the most recent IETF 99 in Prague provided a timely opportunity to run an experimental DNS-over-TLS service.

DNS queries and responses are currently transmitted over the Internet entirely in the clear, and whilst DNSSEC can authenticate a response from a DNS server, it does not actually encrypt the transmitted information. The aim of DPRIVE is therefore to add mechanisms that provide confidentiality for DNS transactions and address concerns about pervasive monitoring, using TLS or DTLS to encrypt queries and responses between DNS clients and servers.
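
As a hedged illustration (not from the original post), the sketch below uses the dnspython library’s DNS-over-TLS support to send a query over TCP port 853, the port registered for DNS-over-TLS. The resolver address is only an example; any server from the DNS Privacy Project’s list could be substituted.

    # Minimal DNS-over-TLS query sketch, assuming dnspython 2.x is installed.
    # 9.9.9.9 (Quad9) is used here purely as an illustrative DoT-capable resolver.
    import dns.message
    import dns.query

    query = dns.message.make_query("www.ietf.org", "AAAA")
    response = dns.query.tls(query, "9.9.9.9", port=853)
    for rrset in response.answer:
        print(rrset)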

Some information about how the experimental DNS-over-TLS service was set up on the IETF network can be found on the IETF99 Experiments page, but the DNS Privacy Project offers a list of experimental servers supporting both IPv4 and IPv6 if you want to try this out yourself. You can also check their current uptime status.

Read more here: www.internetsociety.org/deploy360/blog/feed/

ICANN’s KSK Rollover: What You Need to Know

By Erin Scherer

ICANN is planning to roll, or change, the “top” pair of cryptographic keys used in the Domain Name System Security Extensions (DNSSEC) protocol, known as the Root Zone KSK. This will be the first time the KSK has been changed since it was initially generated in 2010. Changing these keys is an important step to take to ensure security, similar to how changing passwords is considered to be an important safety measure.

According to the ICANN website, “Maintaining an up-to-date KSK is essential to ensuring DNSSEC-signed domain names continue to validate following the rollover. Failure to have the current root zone KSK will mean that DNSSEC-enabled validators will be unable to verify that DNS responses have not been tampered with and thus will return an error response to all DNSSEC-signed queries.”

What does this rollover mean?

Rolling the KSK means generating a new cryptographic key pair and distributing the new public component to everyone who operates validating resolvers.

Once the new keys have been generated, network operators performing DNSSEC validation will need to update their systems with the new key, so that when a user attempts to visit a website, the signed DNS responses can be validated against the new KSK.
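
One hedged way to sanity-check this (an illustration, not something from the article) is to look at the key tags in the root zone’s DNSKEY RRset: the new KSK published in 2017 has key tag 20326, while the outgoing 2010 KSK has key tag 19036. A small Python sketch using the dnspython library might look like this:

    # Sketch: list the key tags of the root zone's key-signing keys, so an operator
    # can see whether the new KSK (tag 20326) is visible alongside the old one
    # (tag 19036). Assumes dnspython 2.x and a working local resolver.
    import dns.resolver
    import dns.dnssec

    answer = dns.resolver.resolve(".", "DNSKEY")
    for key in answer:
        if key.flags == 257:  # the SEP bit marks key-signing keys
            print("KSK key tag:", dns.dnssec.key_id(key))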

Who will be affected?

According to ICANN, about one-in-four global Internet users, or 750 million people, could be affected by the KSK rollover. That figure is based on the estimated number of Internet users who use DNSSEC validating resolvers.

ICANN is encouraging you to test and check your systems prior to the KSK rollover to confirm what action is needed. They have provided a free testbed to help you determine whether your systems can handle automated updates properly.

Network operators who manually update the trust anchor configuration of their DNSSEC-validating resolvers should ensure that the new root zone KSK is configured before 11 October 2017.

Anyone who writes, integrates, distributes or operates software supporting DNSSEC validation that correctly follows the RFC 5011 automatic trust anchor protocol does not need to take any action.

Do you need to change anything with ARIN?

No. There is no action that you need to take with us. We are simply passing this message along to ensure our community is aware of this impactful change. We are not involved in the rollover itself, nor will anything here at ARIN change as a result of the rollover.

When is the rollover taking place?

The change will occur in a phased approach. The important dates to be aware of include:

  • 11 July 2017: New KSK published in DNS
  • 19 September 2017: Size increase for DNSKEY response from root name servers
  • 11 October 2017: New KSK begins to sign the root zone key set (This is the actual rollover event)
  • 11 January 2018: Revocation of old KSK
  • 22 March 2018: Last day the old KSK appears in the root zone
  • August 2018: Old key is deleted from equipment in both ICANN Key Management Facilities
Want to learn more? Check out the KSK rollover resources and documents on the ICANN website.

Have a Question?
Send an email to globalsupport@icann.org with “KSK Rollover” in the subject line to submit your questions.

The post ICANN’s KSK Rollover: What You Need to Know appeared first on Team ARIN.

Read more here: teamarin.net/feed/

vMX Lightweight 4over6 Virtual Network Function – Juniper and Snabb in a Docker Container

As service providers accelerate the migration of their core networks to IPv6, they need to ensure uninterrupted access and service continuity for all existing IPv4 users. This white paper describes the IPv6 transition challenges service providers are facing, and how Lightweight 4over6, as specified in IETF RFC 7596, can help.

Read more here: www.lightreading.com/rss_simple.asp?f_n=1249&f_sty=News%20Wire&f_ln=IPv6+-+Latest+News+Wire

NTIA Issues RFC, Asks for Input on Dealing With Botnets and DDoS Attacks

By CircleID Reporter

NTIA issued a Request for Comments today asking for broad input from “all interested stakeholders, including private industry, academia, civil society, and other security experts,” on actions against botnets and distributed attacks. “The goal of this RFC is to solicit informed suggestions and feedback on current, emerging, and potential approaches for dealing with botnets and other automated, distributed threats and their impact.” Although the department has expressed interest in all aspects of this issue, it has indicated particular interest in two broad approaches where substantial progress can be made. They are:

Attack Mitigation: “Minimizing the impact of botnet behavior by rapidly identifying and disrupting malicious behaviors, including the potential of filtering or coordinated network management, empowering market actors to better protect potential targets, and reducing known and emerging risks.”

Endpoint Prevention: “Securing endpoints, especially IoT devices, and reducing vulnerabilities, including fostering prompt adoption of secure development practices, developing practical plans to rapidly deal with newly discovered vulnerabilities, and supporting adoption of new technology to better control and safeguard devices at the local network level.”

Read more here: feeds.circleid.com/cid_sections/news?format=xml

Uber Goes IPv6 to Support its Growing Infrastructure

By Megan Kruse

Uber recently announced it’s deploying IPv6 to support its growing infrastructure, as explained in the engineering team’s announcement, where they detail three major areas of infrastructure they need to update: network architecture, software support, and vendor support.

From the post:

“Three key factors made it clear to us that deploying IPv6 across our networks was going to be critical for maintaining our architecture’s stability at scale:

  • Generous IP allocation: The size of our network has grown rapidly over the past few years, supporting thousands of server racks in our data centers. Each rack is allocated a /24 IPv4 subnet out of our Request for Comment (RFC) 1918 IP space, which includes 256 IPv4 addresses per rack. In most of our rack deployments, we host no more than 48 servers.
  • Resource limitation: At this stage in our growth, we have used more than 50 percent of our 10.0.0.0/8 IPv4 subnet for internal usage. If we do not transition to IPv6, it is possible that our RFC1918 (the Internet Engineering Task Force (IETF) memorandum on methods of assigning of private IP addresses) space could be exhausted in the foreseeable future.
  • Overlapping IP addresses: Traditionally, Uber’s networks defined their own IP addresses for their resources. When Uber began merging with other companies, some IPv4 addresses overlapped between two internal networks of different organizations.”
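
To put rough numbers on the “resource limitation” point above, here is a small Python sketch (not from Uber’s post) using the standard ipaddress module: a 10.0.0.0/8 block yields only 65,536 possible /24 rack subnets, and at 48 servers per rack most of each /24 goes unused.

    # Back-of-the-envelope arithmetic behind the private-address exhaustion concern:
    # how many /24 rack subnets fit in 10.0.0.0/8, and how much of each /24 a
    # 48-server rack actually uses.
    import ipaddress

    block = ipaddress.ip_network("10.0.0.0/8")
    racks = sum(1 for _ in block.subnets(new_prefix=24))
    per_rack = ipaddress.ip_network("10.0.0.0/24").num_addresses

    print(racks)          # 65536 possible /24 rack allocations
    print(per_rack)       # 256 addresses per rack
    print(48 / per_rack)  # ~0.19 of each rack subnet actually used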

The post explains in some detail how they’re working to update their network architecture including hardware, automation, and network design; updating vast amounts of code through collaborative teamwork; and working with vendors to ensure IPv6 support across the board.

Kudos to Uber for managing this transition and sharing their experiences with others!

If you are ready to get started with IPv6, visit our START HERE page for more information. Looking for something that isn’t there? Contact us! We’re here to help!

Also, in case you missed it yesterday, there’s a new State of IPv6 Deployment report out with tons of statistics, insights, and recommendations. It might be the perfect tool to help you make the case for IPv6 if you’ve been struggling to get the go-ahead.

Read more here: www.internetsociety.org/deploy360/blog/feed/

The IETF’s Job Is Complete – Should It Now Scale Up, Down or Out?

By Martin Geddes

The IETF holds the final day of its 98th meeting in Chicago today (Friday 31 March), far away from here in Vilnius. The Internet is maturing, becoming indispensable to modern life, and transitioning to industrial types of use. Are the IETF’s methods fit for purpose for the future, and if not, what should be done about it?

My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.

Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.

The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of those general-purpose needs include email message exchange or performance data capture and collation. There is a fully functioning governance structure provided by the Internet Society to review its remit and activities.

This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.

The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.

The IETF’s ethos is one which also embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.

All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.

Inevitably, many of the active participants and stakeholders do come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETF’s essential story of success. It is a socio-technical ecosystem whose existence is amply justified by sustained and widespread adoption of its specification and standards products.

Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.

As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.

To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.

Gratitude is the only respectable response.

OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).

The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to the necessity of these parties to have a safe space to find compromise.

The core benefit of packet networking is to enable the statistical sharing of costly physical transmission resources. This “statistical multiplexing” allows you to perform this for a wide range of application types concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.

So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free and mandated its use over rival technologies and approaches.

With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.

In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.

The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”, i.e. constructing “success modes” are the IETF’s raison d’être.

Since RFCs exist to scratch the itches of real practitioners, they mostly have found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.

Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…

The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas with a model of the world with understood error and infidelity to reality. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.

This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems which, if left unattended, will severely damage the IETF’s technical credibility and social legitimacy.

The first issue is that the IETF takes on problems for which it lacks an ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)

Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This, in turn, guarantees future disappointment and further disaster.

Another is that you find some efforts are misdirected as they perpetuate poor initial architecture choices in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.

Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses.”

There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in ever-growing “technical debt” and unmanaged complexity, and new RFCs have to work out how to relate to a tangled morass of past RFCs.

The IETF (and industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for problems of protocol interaction explode over time. The operational economics and technical scalability of the Internet are now being called into doubt.

Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.

Architects and engineers are professions that engage in safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.

Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.

The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.

As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.

There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.

There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.

The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as it industrialises.

If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for the lack of benefit of public investments. The telcos have lobbyists and never were loved anyway. In contrast, the IETF is not in a position to defend itself.

The backlash might even see a radical shift in power away from its open and democratic processes. Instead, we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.

The root issue is the dissonance between a title that includes the word “engineering”, and an organisation that fails to enact this claim. The result is a serious competency issue, that results in an accountability deficit, that then risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.

My own father was just a fitter on Boeing 747s, but needed constant exams and licensing just like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?

As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:

  • What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
  • What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
  • How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
  • What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?

The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.

The right way is to make a difficult choice: to scale up, scale down, or scale out?

One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process to identify the capability gaps, and to gather the necessary resources to fill them. This could be directly by developing the missing science and mathematics, or through building alliances with other organisations who might be better equipped.

Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.

In this model, RFCs of an “information infrastructure” nature would be reviewed more like how a novel suspension bridge or space rocket has a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.

An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.

The final option that I see is to “scale out”, and begin a new core of exploration but focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and the model for exploring “success modes” is proven. In this case, a renaming to the Internet Experiment Task Force for the spin-out might be seen as an acceptable and attractive one.

One thing is certain, and that is that the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet or perish along with the Prototype Internet it has nurtured so well.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

Read more here: feeds.circleid.com/cid_sections/blogs?format=xml
