Internet Protocol

Avenue4 Helps IPv4 Sellers and Buyers Gain Market Access, Overcome Complexities

With more than 30 years’ combined experience providing legal counsel to the technology industry, Avenue4’s principals have established unique expertise in the IPv4 industry, driving transparency and professionalism in the IPv4 market.

Co-founded by Marc Lindsey and Janine Goodman, Avenue4 possesses a deep understanding of the current market conditions surrounding IPv4 address trading and transfers. Through a broad network of contacts within Fortune 500 organizations, Avenue4 has gathered a significant inventory of IPv4 numbers. Leveraging this inventory and its reputation within the IT and telecom industries, Avenue4 is creating value for sellers and helping buyers make IPv6 adoption decisions that maximize return on their existing IPv4 infrastructure investments.

Understanding the IPv4 Market

Internet Protocol addresses, or IP addresses, are essential to the operation of the Internet. Every device needs an IP address in order to connect to the Internet and communicate with other devices, computers, and services. IPv4 is Version 4 of the Internet Protocol in use today. The finite supply of IPv4 addresses, which had generally been available (for free) through Regional Internet Registries (RIRs) such as the American Registry for Internet Numbers (ARIN), is now exhausted, and additional IPv4 addresses are available in the North American, European and Asia-Pacific regions only through trading (or transfers) on the secondary market.
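
To make the scale of the problem concrete, here is a quick back-of-the-envelope calculation, added purely for illustration; the numbers follow directly from the 32-bit and 128-bit address lengths.

```python
# Rough arithmetic behind IPv4 exhaustion and IPv6 abundance (illustrative only).
ipv4_total = 2 ** 32   # about 4.3 billion addresses in the entire IPv4 space
ipv6_total = 2 ** 128  # about 3.4e38 addresses in IPv6

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.2e}")

# A single standard IPv6 /64 subnet holds 2**64 addresses, i.e. roughly
# 4.3 billion times the size of the whole IPv4 Internet.
print(f"One IPv6 /64 versus all of IPv4: {2 ** 64 // ipv4_total:,}x")
```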

The next-generation Internet Protocol, IPv6, provides a near-limitless free supply of IP addresses from the RIRs. However, IPv6 is not backward compatible with IPv4, which still carries the vast majority of Internet traffic. Migration to IPv6 can be costly, requiring significant upgrades to an organization’s IP network infrastructure (e.g., installing and configuring IPv6-capable routers, switches, firewalls and other security devices; enhancing IP-enabled software; and then running IPv4 and IPv6 networks concurrently). As a result, the global migration to IPv6 has progressed slowly, with many organizations planning their IPv6 deployments as long-term projects. Demand for IPv4 numbers will therefore likely remain strong for several more years.
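
In practice, running IPv4 and IPv6 concurrently means dual-stack clients that prefer IPv6 but fall back to IPv4. The sketch below is a minimal illustration of that pattern; the host name and port are placeholders, and real applications typically use a more refined "Happy Eyeballs" approach.

```python
# Minimal dual-stack connect: prefer IPv6, fall back to IPv4 (illustrative sketch).
import socket

def connect_dual_stack(host, port, timeout=3.0):
    # getaddrinfo returns both AAAA (IPv6) and A (IPv4) results when available.
    candidates = socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)
    # Try IPv6 addresses first, keeping IPv4 as the fallback during the transition.
    candidates.sort(key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canonname, sockaddr in candidates:
        try:
            sock = socket.socket(family, socktype, proto)
        except OSError as err:
            last_error = err
            continue
        try:
            sock.settimeout(timeout)
            sock.connect(sockaddr)
            return sock  # first address family that works wins
        except OSError as err:
            sock.close()
            last_error = err
    raise last_error if last_error else OSError("no usable address found")

# Example (hypothetical host): connect_dual_stack("www.example.com", 80)
```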

Supplying Choice

Avenue4 specializes in connecting buyers and sellers of IPv4 addresses and provides access to a supply of IPv4 address space. The availability of this supply provides organizations with a viable choice to support their existing networks while the extended migration to IPv6 is underway. Although the supply of IPv4 address space has contracted relative to demand over the last 12 months, the IPv4 trading market provides network operators breathing room to develop and execute IPv6 deployment plans that are appropriate for their businesses.

Expertise Needed

Organizations in need of IPv4 addresses can purchase them from entities with unused addresses, and the transfer of control resulting from such sales can be recorded in the RIR system pursuant to the registries’ market-based transfer policies. However, structuring and closing transactions can be complex, and the information needed to make smart buy/sell decisions is not readily available. Succeeding in the market requires advisors with up-to-date knowledge of the nuances of the commercial, contractual, and Internet governance policies that shape IPv4 market transactions. With its deep experience, Avenue4 LLC cultivates transactions most likely to reach closure, structures creative and value-enhancing arrangements, and then guides those transactions through the negotiation and registration transfer processes. By successfully navigating these challenges to broker, structure and negotiate some of the largest and most complex IPv4 transactions to date, Avenue4 has emerged as one of the industry’s most trusted IPv4 market advisors.

Avenue4 has focused on providing the counsel and guidance necessary to complete high-value transactions that meet sellers’ market objectives and provide buyers with flexibility and choice. When Avenue4 is engaged in deals, we believe sellers and buyers should feel confident that the transactions we originate will be structured with market-leading terms, executed ethically, and closed in a way that protects the negotiated outcome.

Avenue4’s leadership team has advised some of the largest and most sophisticated holders of IPv4 number blocks. The principals of Avenue4, however, believe that technology-enabled services are the key to making the market more accessible to all participants. With the launch of its new online trading platform, ACCELR/8, Avenue4 is now bringing the same level of expertise and process maturity to the small and mid-size block market.

Read more here:: www.circleid.com/rss/topics/ipv6

The Internet is Dead – Long Live the Internet

By Juha Holkkola

Back in the early 2000s, several notable Internet researchers were predicting the death of the Internet. Based on the narrative, the Internet infrastructure had not been designed for the scale that was being projected at the time, supposedly leading to fatal security and scalability issues. Yet somehow the Internet industry has always found a way to dodge the bullet at the very last minute.

While the experts projecting gloom and doom have been silent for the better part of the last 15 years, it seems that the discussion on the future of the Internet is now resurfacing. Some industry pundits such as Karl Auerbach have pointed out that essential parts of the Internet infrastructure, such as the Domain Name System (DNS), are fading from users’ view. Others such as Jay Turner are predicting the downright death of the Internet itself.

Looking at the developments over the last five years, there are indeed some powerful megatrends that seem to back up the arguments made by the two gentlemen:

  • As mobile has penetrated the world, it has created a shift from browser-based services to mobile applications. Although not many people realize this, users of mobile apps do not really have to interface with the Internet infrastructure at all. Instead, they simply push buttons in the app and the software is intelligent enough to take care of the rest. Because of these developments, key services in the Internet infrastructure are gradually disappearing from the plain sight of regular users (a sketch of one such hidden service, the DNS lookup behind an app’s request, follows this list).
  • As the Internet of Things (IoT) and cloud computing gain momentum, the enterprise side of the market is increasingly concerned about the level of information security. Because the majority of these threats originate from the public Internet, building walls between private networks and the public Internet has become an enormous business. With emerging technologies such as Software-Defined Networking (SDN), we are now heading towards a world littered with private networks that extend from traditional enterprise setups into public clouds, isolated machine networks and beyond.
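
To illustrate the first point, here is a minimal sketch (my own, with a placeholder hostname) of the kind of DNS lookup that still happens underneath every app request, even though the user never sees it:

```python
# The hidden lookup behind an app's "push the button" moment (illustrative sketch).
import socket

def resolve(hostname):
    # The operating system issues DNS queries on the app's behalf; the user never
    # sees the Domain Name System at work, but it is still doing the work.
    results = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in results})

# Example (placeholder name): print(resolve("api.example.com"))
```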

Once these technology trends have run their course, it is quite likely that the public Internet infrastructure and the services it provides will no longer be directly used by most people. In this sense, I believe both Karl Auerbach and Jay Turner are quite correct in their assessments.

Yet at the same time, both the mobile applications and the secure private networks that move the data around will continue to be highly dependent on the underlying public Internet infrastructure. Without a bedrock on which the private networks and the public cloud services are built, it would be impossible to transmit the data. Due to this, I believe that the Internet will transform away from the open public network it was originally supposed to be.

As an outcome of this process, I further believe that the Internet infrastructure will become a utility very similar to today’s electricity grids. While almost everyone benefits from them on a daily basis, only electrical engineers are interested in their inner workings or have direct access to them. So essentially, the Internet will become a ubiquitous transport layer for the data that flows within the information societies of tomorrow.

From the network management perspective, the emergence of secure overlay networks running on top of the Internet will introduce a completely new set of challenges. While network automation can carry out much of the configuration and management work, it will cause networks to disappear from plain sight in a similar way to mobile apps and public network services. This calls for new operational tools and processes to navigate this new world.

Once all has been said and done, the chances are that the Internet infrastructure we use today will still be there in 2030. However, instead of being viewed as an open network that connects the world, it will have evolved into a transport layer that is primarily used for transmitting encrypted data.

The Internet is Dead — Long Live the Internet.

Written by Juha Holkkola, Co-Founder and Chief Technologist at FusionLayer Inc.

More under: Access Providers, Broadband, Cloud Computing, Cybersecurity, Data Center, DDoS, DNS, Domain Names, Internet of Things, Internet Protocol, IP Addressing, IPv6, Mobile Internet, Networks, Telecom, Web

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

Five Questions: Making Sure Cable is at Consumer Tech’s Table

By Cablefax Staff

Once a group concentrated more on TV makers and other CE manufacturers, the Consumer Technology Association (formerly the Consumer Electronics Association) is increasingly overlapping with the cable industry. As the Internet of Things proliferates and streaming video’s play widens, there’s even more room for collaboration. Among the areas where cable and CTA are working together is WAVE, an interoperability effort for commercial Internet video, or OTT, that includes Comcast, Cox and others. We chatted with CTA Research and Standards SVP Brian Markwalter recently about the intersection of the two industries.

Are there any cable companies that stand out to you from a technology perspective?

Sure, I’d say one in particular is a member so I guess we have more interaction—Comcast. Most of the big cable companies participate in our standards process so we have exposure from them on the technical side. We also have worked with CableLabs off and on over the years. I think there’s a strong relationship between the consumer technology industry and cable in part because people love their technology products and so many things are now connected and entertainment and TV have always been a big part of the consumer experience. So as more and more devices are connected it’s pretty natural that cable companies are part of that experience.

Are the standards and regulations really what stand out to you from the technology side of cable?

Yes, there does tend to be quite a bit of coordinating. We’ve coordinated on things like the Internet Protocol IPv6 transition, we coordinate on how devices attach to cable systems, and I think we’ll continue to work on those kinds of issues. I’m sure there will be ongoing conversations about security and improving the overall cybersecurity footprint for devices. We all want consumers to have secure networks. It will take kind of a layered approach where the devices have security and the networks have security too.

Where do you see cable and technology intersecting in the future?

I think there’s kind of a cycle where sometimes consumers are adopting technology first and then the providers follow. I’d put tablets in that realm, where consumers started buying tablets pretty quickly and there was a fast ramp-up. People, cable companies and others had to learn how to get content to those devices. I think we’ll continue to see that kind of cycle of cable making advances, improving speed and services, and then consumers finding things in the market that they like. So right now, there’s fast growth in connected products: the consumer IoT marketplace, health and fitness devices, smart home type products. And we also see the more advanced cable operators learning how to integrate those things into their offerings, for example, doing smart home or security services for their customers.

Is there anything about cable that you think might be irrelevant in the future?

No, I don’t think so. I think everybody is adapting. We’re all working together. Cable is participating in a very big CTA project [WAVE] around streaming media, streaming video to the home. People are trying to simplify that process, and create more common ways to do that built on HTML5. I think the services and models will adapt—as we all are from more of a broadcast structure—to more interactive and personalized content and services. But I think cable is positioned pretty well to do that.

Do cable companies have much of a presence at CES?

Yes, for sure. There’s three ways the industries have a presence. There’s on the show floor and exhibits, there’s meetings and meeting rooms, which is common, and then just being there to soak up the technology and being with people. So, we know for sure that there’s a large number of cable technologists that come to the show. We know that cable CTOs often do tours of the show and we’ve helped coordinate that.

The post Five Questions: Making Sure Cable is at Consumer Tech’s Table appeared first on Cablefax.

Read more here:: feeds.feedburner.com/cable360/ct/operations?format=xml

Leading Lights 2017 Finalists: Most Innovative IoT/M2M Strategy (Service Provider)

By Iain Morris

Our short list for the most innovative IoT/M2M strategy by a service provider includes a mainstream network operator, the world’s biggest maker of Internet Protocol network equipment, a smart grid specialist and a ‘digital agriculture’ hub.

Read more here:: www.lightreading.com/rss_simple.asp?f_n=1249&f_sty=News%20Wire&f_ln=IPv6+-+Latest+News+Wire

The IETF’s Job Is Complete – Should It Now Scale Up, Down or Out?

By Martin Geddes

The IETF has the final day of its 98th meeting in Chicago today (Friday 31 Mar), far away from here in Vilnius. The Internet is maturing and becoming indispensable to modern life, and is transitioning to industrial types of use. Are the IETF’s methods fit-for-purpose for the future, and if not, what to do about it?

My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.

Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.

The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of those general-purpose needs include email message exchange or performance data capture and collation. There is a fully functioning governance structure provided by the Internet Society to review its remit and activities.

This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.

The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.

The IETF’s ethos is one which also embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.

All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.

Inevitably, many of the active participants and stakeholders come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETF’s essential story of success. It is a socio-technical ecosystem whose existence is amply justified by sustained and widespread adoption of its specification and standards products.

Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.

As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.

To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.

Gratitude is the only respectable response.

OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).

The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to the necessity of these parties to have a safe space to find compromise.

The core benefit of packet networking is to enable the statistical sharing of costly physical transmission resources. This “statistical multiplexing” lets a wide range of application types share the same links concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.
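
As a rough illustration of why that sharing pays off, the toy simulation below uses my own assumed numbers (50 bursty sources, each active 10% of the time at 10 Mb/s, sharing a 100 Mb/s link) to show that a link sized far below the 500 Mb/s worst case is only rarely overloaded:

```python
# Toy illustration of statistical multiplexing (all parameters are assumptions).
import random

random.seed(1)
SOURCES, P_ACTIVE, PER_SOURCE_RATE, LINK_RATE = 50, 0.10, 10, 100  # Mb/s
TRIALS = 100_000

overloaded = 0
for _ in range(TRIALS):
    # Each source is independently "on" with probability P_ACTIVE at this instant.
    demand = sum(PER_SOURCE_RATE for _ in range(SOURCES) if random.random() < P_ACTIVE)
    if demand > LINK_RATE:
        overloaded += 1

print(f"average offered load: {SOURCES * P_ACTIVE * PER_SOURCE_RATE:.0f} Mb/s of a 500 Mb/s worst case")
print(f"fraction of instants the shared link is overloaded: {overloaded / TRIALS:.4f}")
```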

So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free and mandated its use over rival technologies and approaches.

With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.

In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.

The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”, i.e., constructing “success modes” is the IETF’s raison d’être.
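
A tiny, contrived example of that robustness principle in a line-based protocol (my own illustration, not tied to any particular RFC): accept either CRLF or bare LF on input, but always emit strict CRLF on output.

```python
# "Liberal in what you accept, conservative in what you send" (illustrative sketch).
def parse_lines(raw):
    # Liberal: tolerate both "\r\n" and bare "\n" line terminators on input.
    normalized = raw.replace(b"\r\n", b"\n")
    return [line.decode("ascii", errors="replace")
            for line in normalized.split(b"\n") if line]

def serialize_lines(lines):
    # Conservative: always emit strict ASCII with CRLF terminators.
    return b"".join(line.encode("ascii") + b"\r\n" for line in lines)

print(parse_lines(b"HELO example\r\nMAIL FROM:<a@b.invalid>\n"))
print(serialize_lines(["HELO example", "MAIL FROM:<a@b.invalid>"]))
```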

Since RFCs exist to scratch the itches of real practitioners, they mostly have found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.

Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…

The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas with a model of the world with understood error and infidelity to reality. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.

This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems which, if left unattended, will severely damage the IETF’s technical credibility and social legitimacy.

The first issue is that the IETF takes on problems for which it lacks an ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)

Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This, in turn, guarantees future disappointment and further disaster.
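
For readers unfamiliar with the term, “active queue management” schemes decide to drop (or mark) packets before the buffer is full, so that senders back off early. The sketch below is a deliberately simplified RED-style decision; the thresholds and probability are my own illustrative values, not taken from any specification, and it glosses over the averaging and interactions that cause the problems described above.

```python
# Simplified RED-style active queue management decision (illustrative values only).
import random

MIN_TH = 20    # below this average queue length (packets): never drop
MAX_TH = 80    # at or above this: always drop
MAX_P = 0.10   # maximum early-drop probability at MAX_TH

def should_drop(avg_queue_len):
    if avg_queue_len < MIN_TH:
        return False
    if avg_queue_len >= MAX_TH:
        return True
    # In between, drop with a probability rising linearly with the average queue.
    drop_probability = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_probability

# Example: drops begin well before the buffer is full, nudging senders to slow down.
print(sum(should_drop(q) for q in range(0, 120)))
```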

Another is that you find some efforts are misdirected as they perpetuate poor initial architecture choices in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.

Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses.”

There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in ever-growing “technical debt” and unmanaged complexity, and new RFCs have to work out how they relate to a tangled morass of past RFCs.

The IETF (and the industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for problems of protocol interaction explodes over time. The operational economics and technical scalability of the Internet are now being called into doubt.

Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.

Architects and engineers are professions that engage in safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.

Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.

The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.

As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.

There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.

There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.

The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as it industrialises.

If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for the lack of benefit of public investments. The telcos have lobbyists and never were loved anyway. In contrast, the IETF is not in a position to defend itself.

The backlash might even see a radical shift in power away from its open and democratic processes. Instead, we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.

The root issue is the dissonance between a title that includes the word “engineering”, and an organisation that fails to enact this claim. The result is a serious competency issue, that results in an accountability deficit, that then risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.

My own father was just a fitter on Boeing 747s, but needed constant exams and licensing just like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?

As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:

  • What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
  • What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
  • How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
  • What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?

The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.

The right way is to make a difficult choice: to scale up, scale down, or scale out?

One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process to identify the capability gaps, and to gather the necessary resources to fill them. This could be directly by developing the missing science and mathematics, or through building alliances with other organisations who might be better equipped.

Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.

In this model, RFCs of an “information infrastructure” nature would be reviewed more like how a novel suspension bridge or space rocket has a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.

An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.

The final option that I see is to “scale out”, and begin a new core of exploration but focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and the model for exploring “success modes” is proven. In this case, a renaming to the Internet Experiment Task Force for the spin-out might be seen as an acceptable and attractive one.

One thing is certain, and that is that the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet or perish along with the Prototype Internet it has nurtured so well.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

More under: Internet Protocol

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

The IETF’s Job Is Complete – Should It Now Scale Up, Down or Out?

By News Aggregator

By Martin Geddes

The IETF has the final day of its 98th meeting in Chicago today (Friday 31 Mar), far away from here in Vilnius. The Internet is maturing and becoming indispensable to modern life, and is transitioning to industrial types of use. Are the IETF’s methods fit-for-purpose for the future, and if not, what to do about it?

My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.

Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.

The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of those general-purpose needs include email message exchange or performance data capture and collation. There is a fully functioning governance structure provided by the Internet Society to review its remit and activities.

This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.

The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.

The IETF’s ethos is one which also embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.

All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.

Inevitably many of the active participants and stakeholders do come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETFs essential story of success. It is a socio-technical ecosystem whose existence is amply justified by sustained and widespread adoption of its specification and standards products.

Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.

As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.

To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.

Gratitude is the only respectable response.

OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).

The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to the necessity of these parties to have a safe space to find compromise.

The core benefit of packet networking is to enable the statistical sharing of costly physical transmission resources. This “statistical multiplexing” allows you to perform this for a wide range of application types concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.

So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free and mandated its use over rival technologies and approaches.

With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.

In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.

The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”, i.e. constructing “success modes” are the IETF’s raison d’être.

Since RFCs exist to scratch the itches of real practitioners, they mostly have found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.

Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…

The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas with a model of the world with understood error and infidelity to reality. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.

This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems, which if left unattended to, will severely damage the IETF’s technical credibility and social legitimacy.

The first issue is that the IETF takes on problems for which it lacks an ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)

Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This, in turn, guarantees future disappointment and further disaster.

Another is that some efforts are misdirected because they perpetuate poor architecture choices made in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.

Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses.”

There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in ever-growing “technical debt” and unmanaged complexity, and each new RFC has to work out how to relate to a tangled morass of past RFCs.

The IETF (and industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for problems of protocol interaction explodes over time. The operational economics and technical scalability of the Internet are now being called into doubt.
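
Even the crudest arithmetic hints at the scale of the problem (illustrative counts only, my own addition): if n separately specified features can interact pairwise, there are n(n−1)/2 potential interactions to reason about, before considering three-way effects or deployment-specific combinations.

```python
# Crude lower bound on interaction complexity: n independently specified
# protocol features give n*(n-1)/2 potential pairwise interactions,
# ignoring higher-order and deployment-specific combinations entirely.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} features -> {n * (n - 1) // 2:>12,} pairwise interactions")
```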

Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.

Architecture and engineering are professions that engage in safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.

Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.

The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.

As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.

There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.
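
A textbook queueing result (my addition, not the article’s own analysis) shows why unknown load limits matter: even in the simplest single-server model, delay does not degrade gracefully but blows up as offered load approaches capacity.

```python
# Standard M/M/1 illustration: mean time in the system is
# W = 1 / (mu - lambda), which grows without bound as offered load
# approaches link capacity. Behaviour at 60% utilisation in the lab
# says little about behaviour at 95% in the field.
SERVICE_RATE = 1_000.0                     # packets/second the link can serve (illustrative)

for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilisation * SERVICE_RATE
    mean_delay_ms = 1_000.0 / (SERVICE_RATE - arrival_rate)   # W = 1/(mu - lambda), in ms
    print(f"load {utilisation:4.0%}: mean delay ~{mean_delay_ms:7.1f} ms")
```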

There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.

The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as the network industrialises.

If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for the lack of benefit from public investments. The telcos have lobbyists and were never loved anyway. In contrast, the IETF is not in a position to defend itself.

The backlash might even see a radical shift in power away from its open and democratic processes. Instead, we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.

The root issue is the dissonance between a title that includes the word “engineering” and an organisation that fails to enact that claim. The result is a serious competency issue, which creates an accountability deficit, which in turn risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.

My own father was just a fitter on Boeing 747s, but needed constant exams and licensing just like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?

As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:

  • What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
  • What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
  • How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
  • What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?

The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.

The right way is to make a difficult choice: to scale up, scale down, or scale out?

One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process to identify the capability gaps, and to gather the necessary resources to fill them. This could be done directly, by developing the missing science and mathematics, or by building alliances with other organisations that might be better equipped.

Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.

In this model, RFCs of an “information infrastructure” nature would be reviewed more like how a novel suspension bridge or space rocket has a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.

An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.

The final option that I see is to “scale out”, and begin a new core of exploration focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and the model for exploring “success modes” is proven. In this case, renaming the spin-out to the Internet Experiment Task Force might be seen as both acceptable and attractive.

One thing is certain: the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet, or perish along with the Prototype Internet it has nurtured so well.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd
