Internet Protocol

The IETF’s Job Is Complete – Should It Now Scale Up, Down or Out?

By Martin Geddes

The IETF holds the final day of its 98th meeting in Chicago today (Friday 31 March), far away from here in Vilnius. The Internet is maturing, becoming indispensable to modern life, and transitioning to industrial types of use. Are the IETF’s methods fit for purpose for that future, and if not, what should be done about it?

My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.

Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.

The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of those general-purpose needs include email message exchange and performance data capture and collation. There is a fully functioning governance structure, provided by the Internet Society, to review its remit and activities.

This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.

The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.

The IETF’s ethos is one which also embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.

All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.

Inevitably, many of the active participants and stakeholders do come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETF’s essential story of success. It is a socio-technical ecosystem whose existence is amply justified by the sustained and widespread adoption of its specifications and standards.

Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.

As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.

To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.

Gratitude is the only respectable response.

OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and the resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles, since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).

The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to the necessity of these parties to have a safe space to find compromise.

The core benefit of packet networking is to enable the statistical sharing of costly physical transmission resources. This “statistical multiplexing” lets a wide range of application types share the same resource concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.
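
To make the economics concrete, here is a minimal sketch (my own illustration, not from the article) of why statistical sharing pays: it estimates how much shared capacity a set of bursty on/off sources needs to keep the chance of overload small, compared with provisioning a dedicated circuit per source at peak rate. The source count, activity factor and trial count are arbitrary assumptions chosen for the example.

```python
# Toy Monte Carlo estimate of statistical multiplexing gain.
# Assumptions (illustrative only): 100 on/off sources, each active 10% of the
# time at a peak rate of 1 unit/s, sharing one link.
import random

N = 100          # number of bursty sources
PEAK_RATE = 1.0  # rate of each source while active
P_ACTIVE = 0.1   # fraction of time each source is active
TRIALS = 100_000

def overload_probability(capacity: float) -> float:
    """Fraction of sampled instants where aggregate demand exceeds capacity."""
    overloads = 0
    for _ in range(TRIALS):
        active = sum(1 for _ in range(N) if random.random() < P_ACTIVE)
        if active * PEAK_RATE > capacity:
            overloads += 1
    return overloads / TRIALS

for shared_capacity in (15, 20, 25):
    p = overload_probability(shared_capacity)
    print(f"shared capacity {shared_capacity:>3}: overload probability = {p:.4f}")

# Dedicated circuits would need N * PEAK_RATE = 100 units of capacity,
# yet a shared link of roughly 20-25 units is almost never overloaded.
print(f"dedicated provisioning would need {N * PEAK_RATE} units")
```

The gap between the 100 units needed for dedicated circuits and the roughly 20 units a shared link needs is the multiplexing gain described above; appropriate scheduling is what keeps the occasional overload from turning into visible performance loss.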

So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free and mandated its use over rival technologies and approaches.

With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.

In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.

The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”; constructing “success modes” is the IETF’s raison d’être.
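
As a concrete, if toy, illustration of that robustness principle, here is a short Python sketch using a made-up header format; the function names and the canonical form are mine, not taken from any RFC.

```python
# "Be conservative in what you send, liberal in what you accept" in miniature.
# The header format and canonical form here are invented for illustration.

def parse_header(line: str) -> tuple[str, str]:
    """Liberal acceptance: tolerate odd case, stray whitespace and line endings."""
    name, _, value = line.strip().partition(":")
    return name.strip().lower(), value.strip()

def emit_header(name: str, value: str) -> str:
    """Conservative emission: always produce exactly one canonical form."""
    return f"{name.strip().title()}: {value.strip()}\r\n"

# Three differently sloppy inputs all parse successfully...
for raw in ("Content-Length: 42\r\n", "content-length :42", "  CONTENT-LENGTH:  42  \n"):
    name, value = parse_header(raw)
    # ...and are always re-emitted in the same canonical form.
    assert emit_header(name, value) == "Content-Length: 42\r\n"
```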

Since RFCs exist to scratch the itches of real practitioners, they have mostly found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.

Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…

The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas against a model of the world whose error and infidelity to reality are understood. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.

This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems which, if left unattended, will severely damage the IETF’s technical credibility and social legitimacy.

The first issue is that the IETF takes on problems for which it lacks an ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)

Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This, in turn, guarantees future disappointment and further disaster.
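
For readers unfamiliar with the symptom, a toy time-stepped queue model (my own sketch, not any actual IETF AQM proposal such as CoDel or PIE, and silent on the deeper failure modes the author worries about) shows why oversized buffers hurt: under persistent overload a deep tail-drop FIFO stays nearly full, so every packet waits behind a long standing queue, whereas dropping a little early keeps the standing queue, and hence the delay, small.

```python
# Toy bufferbloat illustration: deep tail-drop FIFO vs. a crude early-drop policy.
# All numbers are arbitrary assumptions chosen for the example.
import random

SERVICE = 10      # packets drained per tick (link capacity)
ARRIVAL = 12      # packets offered per tick (persistently above capacity)
TICKS = 5_000
RAMP = 100        # queue range over which early-drop probability ramps from 0 to 1

def average_queue(buffer_size, early_drop_at=None):
    """Average queue length in packets; a proxy for per-packet queueing delay."""
    queue, total = 0, 0
    for _ in range(TICKS):
        for _ in range(ARRIVAL):
            if queue >= buffer_size:
                continue                                  # tail drop: buffer is full
            if early_drop_at is not None:
                # Drop probability rises as the queue grows past the threshold.
                p_drop = min(1.0, max(0, queue - early_drop_at) / RAMP)
                if random.random() < p_drop:
                    continue
            queue += 1
        queue = max(0, queue - SERVICE)                   # drain up to SERVICE packets
        total += queue
    return total / TICKS

print("deep tail-drop FIFO :", round(average_queue(buffer_size=1000)))
print("early-drop policy   :", round(average_queue(buffer_size=1000, early_drop_at=50)))
```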

Another is that some efforts are misdirected, as they perpetuate poor architecture choices made in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.

Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses.”

There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in an ever-growing “technical debt” and unmanaged complexity, and new RFCs have to work out how they relate to a tangled morass of past RFCs.

The IETF (and the industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for problems of protocol interaction explodes over time. The operational economics and technical scalability of the Internet are now being called into doubt.

Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.

Architecture and engineering are professions that engage with safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.

Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.

The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.

As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.

There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.

There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.

The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as it industrialises.

If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for public investments that fail to deliver the promised benefits. The telcos have lobbyists and were never loved anyway. In contrast, the IETF is not in a position to defend itself.

The backlash might even see a radical shift in power away from its open and democratic processes. Instead, we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.

The root issue is the dissonance between a title that includes the word “engineering” and an organisation that fails to enact that claim. The result is a serious competency issue, which creates an accountability deficit, which in turn risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.

My own father was just a fitter on Boeing 747s, but needed constant exams and licensing just like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?

As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:

  • What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
  • What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
  • How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
  • What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?

The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.

The right way is to make a difficult choice: to scale up, scale down, or scale out?

One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process to identify the capability gaps, and to gather the necessary resources to fill them, whether directly, by developing the missing science and mathematics, or by building alliances with other organisations that might be better equipped.

Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.

In this model, RFCs of an “information infrastructure” nature would be reviewed much as a novel suspension bridge or space rocket is subjected to a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.

An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.

The final option that I see is to “scale out”, and begin a new round of exploration, but focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and the model for exploring “success modes” is proven. In this case, renaming the spin-out the Internet Experiment Task Force might be seen as both acceptable and attractive.

One thing is certain, and that is that the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet or perish along with the Prototype Internet it has nurtured so well.

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

Trusted Objects and partners deliver a secure TLS protocol to both IP and non-IP objects

By Sheetal Kumbhar

Trusted Objects, a security expert for the Internet of Things (IoT), in partnership with Avnet Silica and UbiquiOS technology, has announced the availability of its secure TLS solution, bringing end-to-end sensor-to-server security to billions of sensors regardless of whether or not they support IP (Internet Protocol). Now IoT developers are, say the partners, able to benefit […]

Read more here:: www.m2mnow.biz/feed/

The Three Pillars of ICANN’s Technical Engagement Strategy

ICANN’s technical engagement team was established two years ago. Since then, we have made a great deal of progress in engaging better with our peers, first throughout the Internet Assigned Numbers Authority (IANA) stewardship transition proposal process and currently during the implementation phase. Over the past few months, the Office of the CTO has been reinforced with a dedicated research team composed of experienced Internet technologists. These experts are working hard to raise the level of ICANN’s engagement in measuring how Internet identifier technologies are used and how they evolve, and they are collecting and sharing data that can further support the community in its policy development processes. They are also focusing on helping to build bridges with other relevant technical partners.

Our overall strategy for technical engagement is based on three pillars:

  • Continue building trust with our technical partners and peers within the ecosystem.
  • Expand our participation in relevant forums and events where we can further raise awareness about ICANN’s mission, while encouraging more diversity in participation in our community policy development processes.
  • Continue contributing ICANN’s positions on technical topics discussed outside our regular forums, but ones affecting our mission, keeping the focus on our shared responsibilities and effective coordination.

In this blog, we highlight some ongoing activities toward each goal:

Expanding Participation in Technical Forums

To continue building a sustainable relationship with our peers, we have increased, in number and in quality, our participation and contribution to various technical forums led by our partner organizations, including:

  • Internet Engineering Task Force (IETF)
  • Regional Internet Registries (RIRs): African Network Information Center (AFRINIC), Asia-Pacific Network Information Centre (APNIC), American Registry for Internet Numbers (ARIN), Latin American and Caribbean Network Information Centre (LACNIC) and Réseaux IP Européens Network Coordination Centre (RIPE NCC)
  • Regional country code top-level domain organizations: African TLD Organization (AFTLD), Council of European National TLD Registries (CENTR), Asia Pacific TLD Organization (APTLD), Latin American and Caribbean TLD Organization (LACTLD)
  • And many others …

Encouraging Diversity of Participants

As a community, we face the challenge of strengthening the bottom-up, multistakeholder policy development process, while at the same time ensuring that participation becomes more diverse. Looking beyond regional and gender diversity, we must also achieve technical diversity. For example, when we work on domain name policies that affect online services, how do we ensure that we have Internet service operators, application developers and software designers around the table to give their operational perspectives? And as mobile technology becomes an increasingly prevalent way of consuming Internet services, and mobile operators are important players in that sector, how do we ensure that they engage with and contribute to our policy development processes?

We have also seen a growing interest from the Internet services abuse mitigation community in understanding and engaging more actively in our community-led policy development processes. As a result, the output of these processes is taking their needs into consideration. Our Security, Stability and Resiliency (SSR) and Global Stakeholder Engagement (GSE) teams have worked together to provide capability-building programs dedicated to this community. We are exploring ways to cover more ground (particularly in emerging regions). Our recent participation in the Governmental Advisory Committee (GAC) Public Safety Working Group’s workshop in Nairobi has confirmed this need. A follow-up mechanism is under discussion to make sure our engagement efforts meet these needs.

Engaging in Technical Topics that Affect Our Ecosystem

Finally, within our technical scope, we have launched an Internet Protocol version 6 (IPv6) initiative to refine ICANN’s position on IPv6. The initiative defines actions that will ensure that, as an organization, we do our part to provide online services that our community can transparently access over both IPv6 and Internet Protocol version 4 (IPv4). Read more about our IPv6 initiative.

Read more here:: www.icann.org/news/blog.rss

A Conversation with Community Leader Lise Fuhr

Lise Fuhr is a leader in the Internet community in Denmark. Here, she reflects on what ICANN58 means for Denmark – and the key issues she will focus on at the meeting.

Tell us a little about yourself and your involvement in ICANN.

I’m currently Director General at the European Telecommunications Network Operators’ Association (ETNO), the association that includes Europe’s leading providers of telecommunication and digital services. In ICANN, ETNO is active in the Internet Service Providers and Connectivity Providers (ISPCP) constituency and the Business Constituency (BC).

I’ve had several roles in the ICANN community, as a member of the Second Accountability and Transparency Review Team (ATRT2) and as co-chair of the Cross-Community Working Group that developed the proposal for the Internet Assigned Numbers Authority (IANA) stewardship transition. At present, I am a Board member of the ICANN affiliate Public Technical Identifiers (PTI), which is responsible for the operation of the IANA functions.

In the past, I was COO of Danish registry DIFO and DK Hostmaster, the entities responsible for the country code top-level domain (ccTLD) .dk. I have also worked for the Danish Ministry of Science, Technology and Innovation and for Telia Networks.

ICANN is all about the multistakeholder model. We actively seek participation from diverse cross-sections of society. From your perspective, what does the multistakeholder model of governance mean for Denmark?

Having ICANN58 in Copenhagen will help build an even stronger awareness of the role of Internet governance and of the multistakeholder model in Denmark. Today’s Internet ecosystem is broad – most societal and industrial sectors rely on the Internet. Almost every sector needs to take part in how the Internet is governed.

What relationship do you see between ICANN and its stakeholders and how would you like to see it evolve?

ETNO has always advocated for an active role in Internet governance. For this reason, we support the multistakeholder model, embodied by ICANN and its activities. We want to support ICANN as it takes its first steps after the transition. The multistakeholder model is an opportunity to bring positive values to the global Internet community. Freedom to invest and freedom to innovate both remain crucial to a thriving and diverse Internet environment.

What issues will you be following at ICANN58?

The discussion around the new generic top-level domains (gTLDs) will be very important. The program should be balanced and consider both the opportunities and the risks to be addressed. In addition, the work on enhancing ICANN’s accountability will also be essential to rounding out the good work done so far with the transition. Another important issue is the debate on the migration from Internet Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6). Last but not least, trust is a top priority, so it’s important to participate in the discussions around security.

Read more here:: www.icann.org/news/blog.rss

Reaction: Do We Really Need a New Internet?

By Russ White

The other day, several of us were gathered in a conference room on the 17th floor of the LinkedIn building in San Francisco, looking out of the windows as we discussed various technical matters. All around us there were new buildings under construction, each with a tall tower crane anchored to the building in several places. We wondered how those cranes were built, and compared how precise the building process seemed to be with the complete mess that building a network seems to be.

And then, this week, I ran across a couple of articles (Feb 14 & Feb 15) arguing that we need a new Internet. For instance, from the Feb 14 post:

What we really have today is a Prototype Internet. It has shown us what is possible when we have a cheap and ubiquitous digital infrastructure. Everyone who uses it has had joyous moments when they have spoken to family far away, found a hot new lover, discovered their perfect house, or booked a wonderful holiday somewhere exotic. For this, we should be grateful and have no regrets. Yet we have not only learned about the possibilities, but also about the problems. The Prototype Internet is not fit for purpose for the safety-critical and socially sensitive types of uses we foresee in the future. It simply wasn’t designed with healthcare, transport or energy grids in mind, to the extent it was ‘designed’ at all. Every “circle of death” watching a video, or DDoS attack that takes a major website offline, is a reminder of this. What we have is an endless series of patches with ever growing unmanaged complexity, and this is not a stable foundation for the future.

So the Internet is broken. Completely. We need a new one.

Really?

First, I’d like to point out that much of what people complain about in terms of the Internet, such as the lack of security, or the lack of privacy, are actually a matter of tradeoffs. You could choose a different set of tradeoffs, of course, but then you would get a different “Internet” — one that may not, in fact, support what we support today. Whether the things it would support would be better or worse, I cannot answer, but the entire concept of a “new Internet” that supports everything we want it to support in a way that has none of the flaws of the current one, and no new flaws we have not thought about before — this is simply impossible.

So let’s leave that idea aside, and think about some of the other complaints.

The Internet is not secure. Well, of course not. But that does not mean it needs to be this way. The reality is that security is a hot potato that application developers, network operators, and end users like to throw at one another, rather than something anyone tries to fix. Rather than considering each piece of the security puzzle, and thinking about how and where it might be best solved, application developers just build applications without security at all, and say “let the network fix it.” At the same time, network engineers say either: “sure, I can give you perfect security, let me just install this firewall,” or “I don’t have anything to do with security, fix that in the application.” On the other end, users choose really horrible passwords, and blame the network for losing their credit card number, or say “just let me use my thumbprint,” without ever wondering where they are going to go to get a new one when their thumbprint has been compromised. Is this “fixable”? Sure, for some strong measure of security — but a “new Internet” isn’t going to fare any better than the current one unless people start talking to one another.

The Internet cannot scale. Well, that all depends on what you mean by “scale.” It seems pretty large to me, and it seems to be getting larger. The problem is that it is often harder to design in scaling than you might think. You often do not know what problems you are going to encounter until you actually encounter them. To think that we can just “apply some math,” and make the problem go away shows a complete lack of historical understanding. What you need to do is build in the flexibility that allows you to overcome scaling issues as they arise, rather than assuming you can “fix” the problem at the first point and not worry about it ever again. The “foundation” analogy does not really work here; when you are building a structure, you have a good idea of what it will be used for, and how large it will be. You do not build a building today and then say, “hey, let’s add a library on the 40th floor with a million books, and then three large swimming pools and a new eatery on those four new floors we decided to stick on the top.” The foundation limits scaling as well as ensures it; sometimes the foundation needs to be flexible, rather than fixed.

There have been too many protocol mistakes. Witness IPv6. Well, yes, there have been many protocol mistakes. For instance, IPv6. But the problem with IPv6 is not that we didn’t need it, not that there was not a problem, nor even that all bad decisions were made. Rather, the problem with IPv6 is the technical community became fixated on Network Address Translators, effectively designing an entire protocol around eliminating a single problem. Narrow fixations always result in bad engineering solutions — it’s just a fact of life. What IPv6 did get right was eliminating fragmentation, a larger address space, and a few other areas.

That IPv6 exists at all, and is even being deployed at all, exposes the entire problem with the “the Internet is broken” line of thinking. It shows that the foundations of the Internet are flexible enough to take on a new protocol, and to fix problems up in the higher layers. The original design worked, in fact — parts and pieces can be replaced if we get something wrong. This is more valuable than all the ironclad promises of a perfect future Internet you can ever make.

We are missing a layer. This is grounded in the RINA model, which I like, and I actually use in teaching networking a lot more than any other model. In fact, I consider the OSI model a historical curiosity, a byway that was probably useful for a time, but is no longer useful. But the RINA model implies a fixed number of layers, in even numbers. The argument, boiled down to its essential point, is that since we have seven, we must be wrong.

The problem with the argument is twofold. First, sometimes six layers is right, and at other times eight might be. Second, we do have another layer in the Internet model; it’s just generally buried in the applications themselves. The network does not end with TCP, or even HTTP; it ends with the application. Applications often have their own flow control and error management embedded, if they need them. Some don’t, so exposing all those layers, and forcing every application to use them all, would actually be a waste of resources.

The Internet assumes a flawed model of end to end connectivity. Specifically, that the network will never drop packets. Well, TCP does assume this, but TCP isn’t the only transport protocol on the network. There is also something called “UDP,” and there are others out there as well (at least the last time I looked). It’s not that the network doesn’t provide more lossy services, it’s that most application developers have availed themselves of the one available service, no matter whether or not it is needed for their specific application.
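
As a minimal sketch of that point (generic sockets only; the host, port and payload below are placeholders, not any particular service), an application that can tolerate loss can use UDP directly and supply exactly the error handling it needs, here a simple timeout-and-retry:

```python
# Lossy transport plus application-supplied error handling, in miniature.
# The host, port and payload used below are hypothetical placeholders.
import socket

def query_with_retry(host, port, payload, retries=3, timeout=1.0):
    """Send a datagram and wait for a reply, retrying a few times on silence."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for _ in range(retries):
            sock.sendto(payload, (host, port))     # fire a datagram; it may be lost
            try:
                data, _addr = sock.recvfrom(2048)  # any reply acts as the "ack"
                return data
            except socket.timeout:
                continue                           # lost or late: try again
        return None                                # the application decides what loss means
    finally:
        sock.close()

# Hypothetical usage against a local echo service:
# reply = query_with_retry("127.0.0.1", 9999, b"ping")
```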

The bottom line.

When I left San Francisco to fly home, 2nd Street was closed. Why? Because a piece of concrete had come loose on one of the buildings nearby, and seemed to be just about ready to fall to the street. On the way to the airport, the driver told me stories of several other buildings in the area that were problematic, some of which might need to be taken down and rebuilt. The image of the industrial building process, almost perfect every time, is an illusion. You can’t just “build a solid foundation” and then “build as high as you like.”

Sure, the Internet is broken. But anything we invent will, ultimately, be broken in some way or another. Sure the IETF is broken, and so is open source, and so is… whatever we might invent next. We don’t need a new Internet, we need a little less ego, a lot less mud slinging, and a lot more communication. We don’t need the perfect fix, we need people who will seriously think about where the layers and lines are today, why they are there, and why and how we should change them. We don’t need grand designs, we need serious people who are seriously interested in working on fixing what we have, and people who are interested in being engineers, rather than console jockeys or system administrators.

Written by Russ White, Network Architect at LinkedIn

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml