
Huawei and Hundsun Technologies release all-flash acceleration solution for centralised transaction systems

By Sheetal Kumbhar

At HUAWEI CONNECT 2017, Huawei and Hundsun Technologies jointly released an all-flash acceleration solution for centralised transaction systems powered by the new-gen OceanStor Dorado V3 AFAs and Hundsun UF2.0. The solution delivers 24/7 high reliability and low latency for the securities industry. Hundsun’s UF2.0 centralised transaction system and service architecture yield compelling advantages in stability […]

The post Huawei and Hundsun Technologies release all-flash acceleration solution for centralised transaction systems appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

How to Check If You’re Exposed to Those Scary BlueBorne Bluetooth Flaws

By David Meyer

The security research firm Armis Labs has identified massive vulnerabilities in Bluetooth wireless technology that can allow attackers to take over people’s devices, whether they be smartphones, PCs, or even Internet of Things devices such as smart TVs and watches.

The “BlueBorne” flaws would allow a virus to leap from device to device, regardless of the operating system being used.

They can even allow attackers to access so-called “air-gapped” computer networks that aren’t connected to the Internet, Armis warned Tuesday. Bluetooth-equipped devices do not need to be in discoverable mode, or paired with the attacker’s device, in order to be vulnerable.

“These silent attacks are invisible to traditional security controls and procedures. Companies don’t monitor these types of device-to-device connections in their environment, so they can’t see these attacks or stop them,” Armis CEO Yevgeny Dibrov said in a statement. “The research illustrates the types of threats facing us in this new connected age.”

So, are your Bluetooth-equipped devices vulnerable? Armis told many of the affected tech companies about the flaws well before informing the public–an approach known in the industry as responsible disclosure–so they’ve had a chance to push out patches.

Not everyone has, though.

According to Armis, Google put out an Android security update last month and Microsoft planned a Windows update for Tuesday. The team working on security for the open-source Linux operating system was also targeting an update for Tuesday.

Apple fans will be delighted to hear that the current versions of its software are not vulnerable. That means anything more recent than iOS 9.3.5 or, for Apple TV users, version 7.2.2 of the software for that device. iOS 10 is definitely OK, Armis said.

Samsung fans will be less pleased to read this from Armis: “Contact on three separate occasions in April, May, and June. No response was received back from any outreach.”

Those using non-Google-branded Android devices will just have to hope that the manufacturers issue security updates to keep them safe. Google automatically updates its own devices, such as the Pixel, but when it comes to the wider Android ecosystem, all it can do is make updates available to manufacturers and hope they relay them to their customers’ phones and tablets.

Armis has released an Android app to help people check if they are vulnerable.

In short, install the latest updates for everything, and unless you’re sure that your devices have been updated with a fix, it might be a good idea to turn off Bluetooth for now.

Read more here:: fortune.com/tech/feed/

Machine Learning’s Adoption Gap: Assessing the Consequences

By Alex Woodie

Despite its position as a key element driving big data analytics and artificial intelligence, machine learning is scarcely being adopted by companies at large. That’s the conclusion of a new report on digital transformation conducted by SAP. But despite the slow start, there is a real potential to catch up.

The German software giant found that only 7% of the 3,100 companies that participated in a survey are investing in machine learning technology. By comparison, 50% of the companies that the SAP Center for Business Insight defined as “leaders” have made machine learning investments.

The survey, which was conducted with Oxford Economics, found similar differences in the adoption of “big data/analytics” (94% for leaders, 60% for the population as a whole) and IoT (76% adoption among leaders, compared to 52% adoption by the rest of the pack).

Together, these technologies – and you can throw artificial intelligence (AI) and cloud computing into the mix for good measure — are some of the primary breakthroughs that experts expect to disrupt markets, displace entrenched commercial interests, and spur the creation of new products and services over the coming decades.

However, the gap between the leaders and the rest of the pack is a stark reminder of how new these emerging technologies really are, how much investment is required to get projects going, and the difficulty that many experience in getting tangible results out of them.

Tesla may have an “insurmountable lead” in road condition data through widespread data gathering and machine learning, SAP’s Elliot says (Taina Sohlman/Shutterstock)

Timo Elliot, an innovation evangelist for SAP, says there are potentially enormous consequences to the machine learning gap. One of them is the inability to capitalize on the network effect that occurs when the use of machine learning helps to improve products and services automatically over time as more people use them.

“For example, Tesla cars equipped with Autopilot gather information on every journey and share it with the other cars – constantly updating and improving the driving experience,” Elliot tells Datanami. “That data doesn’t just improve Tesla’s current products; it will also provide the foundation for new business opportunities in the future.”

Eventually, every product and service will have a learning component, Elliot says. The benefits of the “virtuous circle of growth” (where feedback leads to a better customer experience, which leads to more product sales, which in turn generates more and better data that further improves the product) are just too great to pass up.

But in the meantime, there is confusion about how best to apply machine learning in the enterprise. “A lot of the public awareness [around machine learning] has been focused on consumer uses such as image tagging of Google photos, or complex ‘cognitive’ uses,” Elliot says. “But it’s the stuff that might actually be considered boring that is actually the most exciting.”

For example, chemicals giant BASF used machine learning to improve its invoice payment processes. “While it isn’t that glamorous,” he says, “the result is big and it directly benefits the bottom line.”

AI’s 1% Problem

The difficulty of applying machine learning in real-world companies (i.e. not the Googles or Teslas of the world) is something that Ali Ghodsi, the CEO and co-founder of Databricks, has also identified.

Most companies report investing in big data analytics, but few are doing machine learning (Source: SAP Center for Business Insight)

“They’re all excited about AI and big data,” Ghodsi told Datanami earlier this year. “But what we realized was there’s basically a big gap between what you, the media, is writing about and what’s happening in reality.”

Outside of the largest digital firms and vertical innovators like Tesla or Uber, very few companies are having sustained success with artificial intelligence and machine learning, he says.

“There are only 1% that are succeeding in AI,” says Ghodsi, who’s also a UC Berkeley professor and RISELab advisor. “In fact, it’s even less than 1%. The rest of the 99% are sort of left behind and they’re sort of struggling to get all this big data technology.”

The question, then, is how the industry can help a bigger swath of companies benefit from AI and machine learning. One of the most obvious solutions is that the software (including the data science and the data management tools) needs to become simpler and easier to use. The stunting complexity around big data management platforms like Hadoop has been well documented in these pages and elsewhere.

In addition to simpler software, Ghodsi advocates that the cloud can provide more simplicity at the infrastructure level. The lack of highly trained data scientists and data engineers is also one of the most commonly cited reasons for the inability of organizations to take advantage of the power of machine learning.

Long-Term Prospects

Despite the reports of a usage gap, the long-term prospects for AI and machine learning remain bright. A recent PricewaterhouseCoopers study pegged AI’s potential impact on the global economy at around $15.7 trillion by 2030. The biggest chunk of the economic gains (about $9.1 trillion) will come from a stimulation of consumer demand through better and more personalized products, while increased employee productivity will drive $6.6 trillion.

AI’s long-term impact on the global economy is projected to be massive (Source: PwC)

That’s a lot of money potentially up for grabs for those organizations that can harness the power of AI and machine learning. PwC warns: “Businesses that fail to adapt and adopt could quickly find themselves undercut on turnaround times as well as costs. They stand to lose a significant amount of their market share as a result.”

The upside is that there’s plenty of room for growth in the use of machine learning. That’s particularly true among small and midsize companies, but it also holds for larger firms that have not successfully leveraged digital transformation.

What’s uncertain at this point is whether the most successful users of machine learning and AI will leverage the network effect to expand their lead, or whether the lead is already, or will become, insurmountable.

The SAP report seems to indicate that the gap could get bigger before it gets smaller. “The top 100 companies,” the SAP report finds, “are 2.5 to 4 times more likely to report value from next-generation technologies, affecting every part of the enterprise, from customers and partners to brand value, employee engagement, and revenue growth.”

Companies that want to catch up must “pull an Elon Musk” and change the terms of the conversation, SAP says. “Stop doing piecemeal IT projects. Stop treating IT as the enabler of business rather than a strategic partner. Stop handing off responsibility for digital transformation to a siloed group and then complaining when it doesn’t deliver any significant changes.”

SAP’s Elliot is bullish that AI can be an equalizer — particularly thanks to the opportunity to automate the work that’s today done by the class of employees known as knowledge workers.

“It’s hard to think of a business process that doesn’t include some form of complex but repetitive decision-making,” he says. “The good news is that in these cases, it’s relatively easy to catch up with the leaders: the power of machine learning is being seamlessly embedded into the applications that companies already use.”

Related Items:

How AI Fares in Gartner’s Latest Hype Cycle

Exposing AI’s 1% Problem

Taking the Data Scientist Out of Data Science

The post Machine Learning’s Adoption Gap: Assessing the Consequences appeared first on Datanami.

Read more here:: www.datanami.com/feed/

Is Your Whois Data Stuck in the Past?

By Suzanne Rogers

ARIN’s Whois database is widely used by many members of the Internet community and provides information regarding the registration of IPv4 addresses, IPv6 addresses, and Autonomous System Numbers (ASNs), collectively referred to as Internet number resources.

This blog will provide insight into how inaccurate Whois data can impact you and your organization’s ability to interact with ARIN in a timely manner.

Why is it important to keep your Whois information up-to-date?

As your organization grows and time passes, information may change, including your Whois information. Whois information can become out of date for many reasons; a few examples are:

  • Organization moves to a new address,
  • POCs listed under the organization are no longer with that organization,
  • Company is acquired by another company,
  • Two companies merge,
  • Parent Organization consolidates some of its subsidiaries into one organization,
  • Organization changes its name,
  • Company converts from one state of incorporation to another,
  • Organization converts from an LLC entity to an Inc., or
  • Company dissolves.

What are the benefits to maintaining up-to-date Whois information?

There are many ways in which current and accurate Whois information may be beneficial to your organization.

  • Smart Business. It is smart business to maintain accurate information for your company in public forums, particularly one that is widely used and relied upon by network operators and other members of the Internet community.
  • Reduce the Possibilities for Delays. ARIN only registers or transfers Internet number resources to organizations within the ARIN region that are validly registered and in good standing. If your organization has changed its name or merged, etc., your registration information has to be updated before ARIN can complete your resource requests.
  • Avoid Resource Revocation. Outdated Whois information could lead to invoices and other important communications failing to reach you. Without receipt of invoices, accounts may become delinquent, which can lead to Internet number resource revocation.
  • Avoid Resource Loss. Companies can change hands numerous times over extended periods. To update your organization’s Whois record, ARIN requires receipt of documentation to establish the chain of custody of the assets that utilized the Internet number resources for each corporate transaction from the date of IP registration. It can be difficult to obtain older documentation to establish this chain of custody to complete a transfer process in order to gain or regain access to Internet number resources, so it is beneficial to keep your records up-to-date.
  • Avoid Misuse of Internet number resources. ARIN has noted that out-of-date Whois records have become a prime target of IP address hijackers and are used for unsavory activities such as spamming and spoofing.

If you would like to discuss how to update your ARIN Whois data, please contact a member of our Registration Services team at 703.227.0660 Monday through Friday 7:00 AM to 7:00 PM EST or submit an Ask ARIN ticket from within your ARIN Online account.

The post Is Your Whois Data Stuck in the Past? appeared first on Team ARIN.

Read more here:: teamarin.net/feed/

An Opinion in Defence of NATs

Network Address Translation has often been described as an unfortunate aberration in the evolution of the Internet, and one that will be expunged with the completion of the transition to IPv6. I think that this view, which appears to form part of today’s conventional wisdom about the Internet, unnecessarily vilifies NATs. In my opinion, NATs are far from being an aberration; instead, I see them as an informative step in the evolution of the Internet, particularly as they relate to possibilities in the evolution of name-based networking. Here’s why.

Background

It was in 1989, some months after the US National Science Foundation-funded IP backbone network had been commissioned, and at a time when there was a visible momentum behind the adoption of IP as a communications protocol of choice, that the first inklings of the inherent finite nature of the IPv4 address space became apparent in the Internet Engineering Task Force (IETF) [1].

Progressive iterations over the IP address consumption numbers reached the same general conclusion: that the momentum of deployment of IP meant that the critical parts of the 32-bit address space would be fully committed within 6 or so years. It was predicted that by 1996 we would have fully committed the pool of Class B networks, which encompassed one-quarter of the available IPv4 address space. At the same time, we were concerned about the pace of growth of the routing system, so stopgap measures that involved assigning multiple Class C networks to sites could have staved off exhaustion for a while, but perhaps at the expense of the viability of the routing system [2].

Other forms of temporary measures were considered by the IETF, and the stopgap measure that was adopted in early 1994 was the dropping of the implicit network/host partitioning of the address in classful addressing in favour of the use of an explicit network mask, or “classless” addressing. This directly addressed the pressing problem of the exhaustion of the Class B address pool, as the observation at the time was that while a Class C network was too small for many sites given the recent introduction of the personal computer, Class B networks were too large, and many sites were unable to realise reasonable levels of address use with Class B addresses. This move to classless addressing (and classless routing, of course) gained some years of breathing space before the major impacts of address exhaustion, which was considered enough time to complete the specification and deployment of a successor IP protocol [3].

In the search for a successor IP protocol, several ideas were promulgated. The decisions around the design of IPv6 reflected a desire to make minimal changes to the IPv4 specification, while changing the size of the address fields, changing some of the encoding of control functions through the use of the extension header concept, and changing the fragmentation behaviour to stop routers from performing fragmentation on the fly [4].

The common belief at the time was that the adoption of classless addressing in IPv4 bought sufficient time to allow the deployment of IPv6 to proceed. It was anticipated that IPv6 would be deployed across the entire Internet well before the remaining pools of IPv4 addresses were fully committed. This, together with a deliberate approach for hosts to prefer IPv6 for communication when both IPv4 and IPv6 were available for use, would imply that the use of IPv4 would naturally dwindle away as more IPv6 was deployed, and that no ‘flag day’ or other means of coordinated action would be needed to complete this Internet-wide protocol transition [5].

In the flurry of documents that explored concepts of a successor protocol was one paper that described a novel concept of source address sharing [6]. If a processing unit was placed on the wire, it was possible to intercept all outbound TCP and UDP packets and replace the source IP address with a different address and change the packet header checksum and then forward the packet on towards its intended destination. As long as this unit used one of its own addresses as the new address, then any response from the destination would be passed back to this unit. The unit could then use the other fields of the incoming IP packet header, namely the source address and the source and destination port addresses, to match this packet with the previous outgoing packet and perform the reverse address substitution, this time replacing the destination address with the original source address of the corresponding outgoing packet. This allowed a “public” address to be used by multiple internal end systems, provided that they were not all communicating simultaneously. More generally a pool of public addresses could be shared across a larger pool of internal systems.

It may not have been the original intent of the inventors of this address sharing concept, but the approach was enthusiastically taken up by the emerging ISP industry in the 1990s. They were seeing the emergence of the home network and were unprepared to respond to it. The previous deployment model, used by dial-up modems, was that each active customer was assigned a single IP address as part of the session start process. A NAT in the gateway to the home network could extend this “single IP address per customer” model to include households with home networks and multiple attached devices. To do so efficiently, a further refinement was added, namely that the source port was part of the translation. That way a single external address could theoretically be shared by up to 65,535 simultaneous TCP sessions, provided that the NAT could rewrite the source port along with the source address [7].
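To make the translation mechanism described above concrete, here is a minimal sketch of such an address-and-port binding table in Python. All of the names are illustrative assumptions rather than any particular implementation; a real NAT would also rewrite the IP and TCP/UDP checksums, expire idle bindings and cope with port-pool exhaustion.

```python
# Minimal sketch of a NAPT (address-and-port translation) binding table.
# Illustrative only: no checksum rewriting, no timeouts, no port reuse checks.
import itertools

class Napt:
    def __init__(self, external_addr):
        self.external_addr = external_addr
        self.ports = itertools.cycle(range(1024, 65536))  # external port pool
        self.out_map = {}  # (internal addr, internal port) -> external port
        self.in_map = {}   # external port -> (internal addr, internal port)

    def translate_outbound(self, src_addr, src_port, dst_addr, dst_port):
        """Rewrite the source of an outbound packet and remember the binding."""
        key = (src_addr, src_port)
        if key not in self.out_map:
            ext_port = next(self.ports)
            self.out_map[key] = ext_port
            self.in_map[ext_port] = key
        return (self.external_addr, self.out_map[key], dst_addr, dst_port)

    def translate_inbound(self, src_addr, src_port, dst_addr, dst_port):
        """Reverse the translation for a returning packet, if a binding exists."""
        binding = self.in_map.get(dst_port)
        if binding is None:
            return None  # no binding: the unsolicited packet is dropped
        int_addr, int_port = binding
        return (src_addr, src_port, int_addr, int_port)

# Two internal hosts sharing one external address:
nat = Napt("203.0.113.1")
print(nat.translate_outbound("192.168.0.10", 40001, "198.51.100.7", 443))
print(nat.translate_outbound("192.168.0.11", 40001, "198.51.100.7", 443))
print(nat.translate_inbound("198.51.100.7", 443, "203.0.113.1", 1024))
```

The sketch also shows why only outbound-initiated sessions work: an inbound packet with no existing binding has nowhere to go.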

For the ensuing decade, NATs were deployed at the edge of the network, and have been used by the ISPs as a means of externalising the need to conserve IP addresses. The address sharing technology was essentially deployed by and operated by the end customer, and within the ISP network, each connected customer still required just a single IP address.

But perhaps that role is underselling the value of NATs in the evolution of the Internet. NATs provided a “firewall” between the end customer and the carrier. The telephony model shared the same end-to-end service philosophy, but it achieved this by exercising overarching control over all components of the service. For many decades telephony was a controlled monopoly that was intolerant of any form of competitive interest in the customer. The Internet did not go down this path, and one of the reasons why this didn’t happen is that NATs allowed the end customer to populate their home network with whatever equipment they chose, and via a NAT, present to the ISP carrier as a single “termination” with a single IP address. This effective segmentation of the network created a parallel segmentation in the market, which allowed the consumer services segment to flourish without carrier-imposed constraint. And at the time that was critically important. The Internet wasn’t the next generation of the telephone service. It was an entirely different utility service operating in an entirely different manner.

More recently, NATs have appeared within the access networks themselves, performing the address sharing function across a larger set of customers. This was first associated with mobile access networks but has been used in almost all recent deployments of access networks, as a response to the visible scarcity in the supply of available IPv4 addresses.

NATs have not been universally applauded. Indeed, in many circles within the IETF NATs were deplored.

It was observed that NATs introduced active middleware into an end-to-end architecture, and divided the pool of attached devices into clients and servers. Clients (behind NATs) had no constant IP address and could not be the target of connection requests. Clients could only communicate with servers, not with each other. It appeared to some to be a step in a regressive direction that imposed a reliance on network middleware with its attendant fragility and imposed an asymmetry on communication [8].

For many years, the IETF did not produce standard specifications for the behaviour of NATs, particularly in the case of handling of UDP sessions. As UDP has no specific session controls, such as session opening and closing signals, how was a NAT meant to maintain its translation state? In the absence of a specific standard specification, different implementations of this function made different assumptions and implemented different behaviour, introducing another detrimental aspect of NATs: namely, variability.

How could an application operate through a NAT if the application used UDP? The result was the use of various NAT discovery protocols that attempted to provide the application with some understanding of the particular form of NAT behaviour that it was encountering [9].
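In practice the usual answer to the UDP question is an idle timer: a binding is kept alive while packets keep flowing and silently discarded after some period of inactivity. The sketch below illustrates the idea; the 30-second timeout and all names are assumptions chosen purely for illustration, and real implementations differ widely on exactly this point, which is the variability problem just described. (STUN is the best-known example of the discovery protocols mentioned above.)

```python
# Sketch of idle-timeout expiry for UDP bindings. The 30-second value is
# an assumption for illustration; deployed NATs use a wide range of timers.
import time

UDP_IDLE_TIMEOUT = 30.0  # seconds

class UdpBindingTable:
    def __init__(self):
        self.bindings = {}  # (internal addr, internal port) -> (ext port, last seen)

    def touch(self, key, ext_port):
        """Record or refresh a binding whenever a packet uses it."""
        self.bindings[key] = (ext_port, time.monotonic())

    def expire_idle(self):
        """Drop bindings that have seen no traffic within the timeout."""
        now = time.monotonic()
        for key in list(self.bindings):
            _ext_port, last_seen = self.bindings[key]
            if now - last_seen > UDP_IDLE_TIMEOUT:
                del self.bindings[key]
```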

NATs in Today’s Internet

Let’s now look at the situation today: the Internet of early 2017. The major hiatus in the supply of additional IPv4 addresses commenced in 2011 when the central IANA pool of unallocated IPv4 addresses was exhausted. Progressively the RIRs ran down their general allocation address pools: APNIC in April 2011, the RIPE NCC in September 2012, LACNIC in 2014 and ARIN in 2015. The intention from the early 1990s was that the impending threat of imminent exhaustion of further addresses would be the overwhelming impetus to deploy the successor protocol. By that thinking, the Internet would have switched to exclusively using IPv6 before 2011. Yet that has not happened.

Today a minimum of 90% of the Internet’s connected device population still exclusively uses IPv4, while the remainder uses both IPv4 and IPv6 [10]. This is an all-IPv4 network with a minority proportion also using IPv6. Estimates of the device population of today’s Internet vary, but they tend to fall within a band of 15 billion to 25 billion connected devices [11]. Yet only some 2.8 billion IPv4 addresses are visible in the Internet’s routing system. This implies that, on average, each announced public IPv4 address serves between 3 and 8 hidden internal devices.

Part of the reason why estimates of the total population of connected devices are so uncertain is that NATs occlude these internal devices so effectively that any conventional internet census cannot expose these hidden internal device pools with any degree of accuracy.

And part of the reason why the level of IPv6 deployment is still so low is that users, and the applications that they value, appear to operate perfectly well in a NATed environment. The costs of NAT deployment are offset by preserving the value of existing investment, both as a tangible investment in equipment and as an investment in knowledge and operational practices in IPv4.

NATs can be incrementally deployed, and they do not rely on some ill-defined measure of coordination with others to operate effectively. They are perhaps one of the best examples of a piecemeal, incremental deployment technology where the incremental costs of deployment directly benefit the entity who deployed the technology. This is in direct contrast to IPv6 deployment, where the ultimate objective of the deployment, namely the comprehensive replacement of IPv4 on the Internet, can only be achieved once a significant majority of the Internet’s population are operating in a mode that supports both protocols. Until then the deployments of IPv6 are essentially forced to operate in a dual-stack mode and also support IPv4 connectivity. In other words, the incremental costs of deployment of IPv6 only generate incremental benefit once others also take the same decision to deploy this technology. Viewed from the perspective of an actor in this space, the pressures and costs to stretch the IPv4 address space to encompass an ever-growing Internet are a constant factor. The decision to complement that with a deployment of IPv6 is an additional cost that in the short term does not offset any of the IPv4 costs.

So, for many actors the question is not “Should I deploy IPv6 now?” but “How far can I go with NATs?” By squeezing some 25 billion devices into 2 billion active IPv4 addresses, we have used a compression ratio of around 14:1, or the equivalent of adding four additional bits of address space. These bits have been effectively ‘borrowed’ from the TCP and UDP port address space. In other words, today’s Internet uses a 36-bit address space in aggregate to allow these 25 billion devices to communicate.

Each additional bit doubles this pool, so the theoretical maximum space of a comprehensively NATted IPv4 environment is 48 bits, fully accounting for the 32-bit address space and the 16-bit port address space. This is certainly far less than IPv6’s 128 bits of address space, but the current division of IPv6 into a 64-bit network prefix and a 64-bit interface identifier drops the available IPv6 address space to 64 bits. The prevalent use of a /48 as a site prefix introduces further address-use inefficiencies that effectively drop the IPv6 address space to span the equivalent of some 56 bits.

NATs can be pushed harder. The “binding space” for a NAT is a 5-tuple consisting of the source and destination IP addresses, the source and destination port addresses and a protocol identifier. This 96-bit NAT address space is a highly theoretical ceiling, but the pragmatic question is how much of this space can be exploited in a cost-effective manner, such that the marginal cost of exploitation is lower than the cost of an IPv6 deployment.
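The bit-budget arithmetic in the last few paragraphs is easy to reproduce. The few lines below simply restate the article’s own figures (the device and address counts are the estimates quoted above, not independent measurements), using base-2 logarithms to convert ratios into “extra bits”.

```python
# Reproduce the rough address-space arithmetic quoted above.
import math

devices = 25e9    # upper estimate of connected devices quoted in the text
addresses = 2e9   # "2 billion active IPv4 addresses" from the text

ratio = devices / addresses            # ~12.5:1 ("around 14:1" in the text)
extra_bits = math.log2(ratio)          # ~3.6, i.e. roughly four extra bits
print(f"{ratio:.1f}:1, ~{extra_bits:.1f} extra bits, "
      f"~{32 + extra_bits:.0f}-bit aggregate space")  # ~36 bits

print(32 + 16)             # 48: the ceiling for address-plus-port sharing
print(128 - 64)            # 64: IPv6 after the 64-bit interface identifier
                           # (the /48 site-prefix convention trims this to ~56)
print(32 + 32 + 16 + 16)   # 96: the two addresses and two ports of the NAT 5-tuple
```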

NATs as Architecture

NATs appear to have pushed applications to a further level of refinement and abstraction, goals that were at one point considered to be desirable objectives rather than onerous limitations. The maintenance of both a unique fixed endpoint address space and a uniquely assigned name space for the Internet could be regarded as an expensive luxury when it appears that only one of these spaces is strictly necessary to ensure the integrity of communication.

The IPv4 architecture made several simplifying assumptions. One of these was that an IPv4 address was overloaded with both the unique identity of an endpoint and its network location. In an age where computers were bolted to the floor of a machine room this seemed like a very minor assumption, but in today’s world, it appears that the overwhelming number of connected devices are portable devices that constantly change their location, both in a physical sense and in terms of network-based location. This places stress on the IP architecture, and the result is that IP is variously tunneled or switched in the final-hop access infrastructure to preserve the overloaded semantics of IP addresses.

NATs deliberately disrupt this relationship, and the presented client side address and port have a particular interpretation and context only for the duration of a session.

In the same way that clients now share IP addresses, services now also share addresses. Applications cannot assume that the association of a name to an IP address is a unique 1:1 relationship. Many service-identifying names may be associated with the same IP address, and in the case of multi-homed services, it can be the case that the name is associated with several IP addresses.
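The one-name-to-many-addresses side of this relationship is easy to observe from any host. The short sketch below simply asks the local resolver for every address associated with a name; the host name shown is a placeholder, and the number and type of answers will vary by network and resolver.

```python
# List every address the resolver returns for a single service name.
# One name commonly maps to several IPv4 and IPv6 addresses, and many
# names can in turn share a single address behind a load balancer or CDN.
import socket

def addresses_for(name, port=443):
    seen = set()
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(name, port):
        seen.add((socket.AddressFamily(family).name, sockaddr[0]))
    return sorted(seen)

if __name__ == "__main__":
    for family, addr in addresses_for("www.example.com"):
        print(family, addr)
```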

With this change comes the observation that IP addresses are no longer the essential “glue” of the Internet. They have changed to a role of ephemeral session tokens that have no lasting semantics. NATs are pushing us to a different network architecture that is far more flexible – a network that uses names as the essential glue that binds it together.

We are now in the phase of the Internet’s evolution where the address space is no longer unique, and we rely on the name space to offer coherence to the network.

From that perspective, what does IPv6 really offer?

More address bits? Well, perhaps not all that much. The space created by NATs operates from within a 96-bit vector of address and port components, and the usable space may well approach the equivalent of a 50-bit conventional address architecture. On the other hand, the IPv6 address architecture has stripped off some 64 bits for an interface identifier and conventionally uses a further 16 bits as a site identifier. The resulting space is of the order of 52 bits. It’s not clear that the two pools of address tokens are all that much different in size.

More flexibility? IPv6 is a return to the overloaded semantics of IP addresses as being unique endpoint tokens that provide a connected device with a static location and a static identity. This appears to be somewhat ironic given the observation that increasingly the Internet is largely composed of battery powered mobile devices of various forms.

Cheaper? Possibly, in the long term, but not in the short term. Until we get to the “tipping point” that would allow a network to operate solely using IPv6 without any visible impact on the network’s user population, every network must still provide a service using IPv4.

Permanent address to endpoint association? Well not really. Not since we realised that having a fixed interface identifier represented an unacceptable privacy leak. These days IPv6 clients use so-called “privacy addresses” as their interface identifier, and change this local identifier value on a regular basis.

Perhaps we should appreciate the role of NATs in supporting the name-based connectivity environment that is today’s Internet. It was not a deliberately designed outcome, but a product of incremental evolution that has responded to the various pressures of scarcity and desires for greater flexibility and capability. Rather than eschewing NATs in the architecture as an aberrant deviation in response to a short-term situation, we may want to contemplate an Internet architecture that embraces a higher level of flexibility of addressing. If the name space is truly the binding glue of the Internet, then perhaps we might embrace a view that addresses are simply needed to distinguish one packet flow from another in the network, and nothing more.

Appreciating NATs

When NATs were first introduced to the Internet, they were widely condemned as an aberration in the Internet’s architecture. And in some ways, NATs have directly confronted the model of a stateless packet switching network core and capable attached edge devices.

But that model has been a myth for decades. The Internet as it is deployed is replete with various forms of network “middleware,” and the concept of a simple stateless packet switching network infrastructure has been relegated to the status of a historical, but now somewhat abstract, concept.

In many ways, this condemnation of NATs was unwarranted, as we can reasonably expect that network middleware is here to stay, irrespective of whether the IP packets are formatted as IPv4 or IPv6 and irrespective of whether the outer IP address fields in the packets are translated or not.

Rather than being condemned, perhaps we should appreciate the role that NATs play in the evolution of the architecture of the Internet.

We have been contemplating what it means to have a name-based data network, where instead of using a fixed relationship between names and IP addresses, we eschew this mapping and perform network transactions by specifying the name of the desired service or resource [12]. NATs are an interesting step in this direction, where IP addresses have lost their fixed association with particular endpoints, and are used more as ephemeral session tokens than endpoint locators. This certainly appears to be an interesting step in the direction of named data networking.

The conventional wisdom is that the endpoint of this current transitioning Internet is an IPv6 network that has no further use for NATs. This may not be the case. We may find that NATs continue to offer an essential level of indirection and dynamic binding capability in networking that we would rather not casually discard. It may be that NATs are a useful component of network middleware and that they continue to have a role on the Internet well after this transition to IPv6 has been completed, whenever that may be!

References

[1] F. Solensky, “Continued Internet Growth”, Proceedings of the 18th Internet Engineering Task Force Meeting, August 1990.

[2] H. W. Braun, P. Ford and Y. Rekhter, “CIDR and the Evolution of the Internet”, SDSC Report GA-A21364, Proceedings of INET’93, Republished in ConneXions, September 1993.

[3] V. Fuller, T. Li, J. Yu and K. Varadhan, “Classless Inter-Domain Routing (CIDR): An Address Assignment and Aggregation Strategy”, Internet Request for Comment (RFC) 1519, September 1993.

[4] S. Bradner and A. Mankin, “The Recommendation for the IP Next Generation Protocol”, Internet Request for Comment (RFC) 1752, January 1995.

[5] D. Wing and A. Yourtchenko, “Happy Eyeballs: Success with Dual-Stack Hosts,” Internet Request for Comment (RFC) 6555, April 2012.

[6] P. Tsuchiya and T. Eng, “Extending the IP Internet Through Address Reuse”, ACM SIGCOMM Computer Communications Review, 23(1): 16-33, January 1993.

[7] P. Srisuresh and D. Gan, “Load Sharing using IP Network Address Translation (LSNAT)”, Internet Request for Comment (RFC) 2391, August 1998.

[8] T. Hain, “Architectural Implications of NAT”, Internet Request for Comment (RFC) 2993, November 2000.

[9] G. Huston, “Anatomy: A Look Inside Network Address Translators,” The Internet Protocol Journal, vol. 7. No. 3, pp. 2-32, September 2004.

[10] IPv6 Deployment Measurement, https://stats.labs.apnic.net/ipv6/XA.

[11] Internet of Things Connected devices, 2015 – 2025

[12] L. Zhang, et. al, “Named Data Networking,” ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp 66-73, July 2014.

Written by Geoff Huston, Author & Chief Scientist at APNIC

Read more here:: www.circleid.com/rss/topics/ipv6

An Opinion in Defence of NATs

By News Aggregator

Network Address Translation has often been described as an unfortunate aberration in the evolution of the Internet, and one that will be expunged with the completion of the transition of IPv6. I think that this view, which appears to form part of today’s conventional wisdom about the Internet unnecessarily vilifies NATs. In my opinion, NATs are far from being an aberration, and instead, I see them as an informative step in the evolution of the Internet, particularly as they relate to possibilities in the evolution of name-based networking. Here’s why.

Background

It was in 1989, some months after the US National Science Foundation-funded IP backbone network had been commissioned, and at a time when there was a visible momentum behind the adoption of IP as a communications protocol of choice, that the first inklings of the inherent finite nature of the IPv4 address became apparent in the Internet Engineering Task Force (IETF) [1].

Progressive iterations over the IP address consumption numbers reached the same general conclusion: that the momentum of deployment of IP meant that the critical parts of the 32-bit address space would be fully committed within 6 or so years. It was predicted that by 1996 we would have fully committed the pool of Class B networks, which encompassed one-quarter of the available IPv4 address space. At the same time, we were concerned at the pace of growth of the routing system, so stop gap measures that involved assigning multiple Class C networks to sites could’ve staved off exhaustion for a while, but perhaps at the expense of the viability of the routing system [2].

Other forms of temporary measures were considered by the IETF, and the stop gap measure that was adopted in early 1994 was the dropping of the implicit network/host partitioning of the address in classful addressing in favour of the use of an explicit network mask, or “classless” addressing. This directly addressed the pressing nature problem of the exhaustion of the Class B address pool, as the observation at the time was that while a Class C network was too small for many sites given the recent introduction of the personal computer, Class B networks were too large, and many sites were unable to realise reasonable levels of address use with Class B addresses. This move to classless addressing (and classless routing of course) gained some years of breathing space before the major impacts of address exhaustion, which was considered enough time to complete the specification and deployment of a successor IP protocol [3].

In the search for a successor IP protocol, several ideas were promulgated. The decisions around the design of IPv6 related to a desire to make minimal changes to the IPv4 specification, while changing the size of the address fields, and changing some of encoding of control functions through the use of the extension header concept, and the changing of the fragmentation behaviour to stop routers from performing fragmentation on the fly [4].

The common belief at the time was that the adoption of classless addressing in IPv4 bought sufficient time to allow the deployment of IPv6 to proceed. It was anticipated that IPv6 would be deployed across the entire Internet well before the remaining pools of IPv4 addresses were fully committed. This, together with a deliberate approach for hosts to prefer to use IPv6 for communication when both IPv4 and IPv6 was available for use would imply that the use of IPv4 would naturally dwindle away as more IPv6 was deployed, and that no ‘flag day’ or other means of coordinated action would be needed to complete this Internet wide protocol transition [5].

In the flurry of documents that explored concepts of a successor protocol was one paper that described a novel concept of source address sharing [6]. If a processing unit was placed on the wire, it was possible to intercept all outbound TCP and UDP packets and replace the source IP address with a different address and change the packet header checksum and then forward the packet on towards its intended destination. As long as this unit used one of its own addresses as the new address, then any response from the destination would be passed back to this unit. The unit could then use the other fields of the incoming IP packet header, namely the source address and the source and destination port addresses, to match this packet with the previous outgoing packet and perform the reverse address substitution, this time replacing the destination address with the original source address of the corresponding outgoing packet. This allowed a “public” address to be used by multiple internal end systems, provided that they were not all communicating simultaneously. More generally a pool of public addresses could be shared across a larger pool of internal systems.

It may not have been the original intent of the inventors of this address-sharing concept, but the approach was enthusiastically taken up by the emerging ISP industry in the 1990s. ISPs were seeing the emergence of the home network and were unprepared to respond to it. The previous deployment model, used by dial-up modems, assigned each active customer a single IP address as part of the session start process. A NAT in the gateway to the home network could extend this “single IP address per customer” model to households with home networks and multiple attached devices. To do so efficiently, a further refinement was added: the source port became part of the translation. That way a single external address could, in theory, be shared by up to 65,535 simultaneous TCP sessions, provided that the NAT could rewrite the source port along with the source address [7].
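Extending the earlier sketch to port translation, the following fragment (an assumption-laden simplification rather than any particular implementation) allocates a distinct external port per flow, which is what lets two internal hosts use the same local port behind one public address.

```python
# Port-translating NAT (NAPT) sketch: each outbound flow gets its own external
# port, so one public address can carry many concurrent sessions.
from itertools import count

PUBLIC_ADDR = "192.0.2.1"
next_port = count(1024)        # naive allocator; real NATs reuse and time out ports
nat_table = {}                 # (int_addr, int_port, dst, dport, proto) -> ext_port
reverse   = {}                 # (ext_port, dst, dport, proto) -> (int_addr, int_port)

def translate_out(int_addr, int_port, dst, dport, proto="tcp"):
    key = (int_addr, int_port, dst, dport, proto)
    if key not in nat_table:
        ext_port = next(next_port)
        nat_table[key] = ext_port
        reverse[(ext_port, dst, dport, proto)] = (int_addr, int_port)
    return PUBLIC_ADDR, nat_table[key]   # rewritten source address and port

# Two internal hosts using the same local port no longer collide:
print(translate_out("10.0.0.2", 40000, "203.0.113.7", 443))   # ('192.0.2.1', 1024)
print(translate_out("10.0.0.3", 40000, "203.0.113.7", 443))   # ('192.0.2.1', 1025)
```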

For the ensuing decade NATs were deployed at the edge of the network, used by ISPs as a means of externalising the task of address conservation. The address-sharing technology was essentially deployed and operated by the end customer, and within the ISP network each connected customer still required just a single IP address.

But perhaps that role undersells the value of NATs in the evolution of the Internet. NATs provided a “firewall” between the end customer and the carrier. The telephony model shared the same end-to-end service philosophy, but it achieved this by exercising overarching control over all components of the service. For many decades telephony was a controlled monopoly that was intolerant of any form of competitive interest in the customer. The Internet did not go down this path, and one of the reasons is that NATs allowed end customers to populate their home networks with whatever equipment they chose and, via a NAT, present to the ISP carrier as a single “termination” with a single IP address. This effective segmentation of the network created a parallel segmentation in the market, which allowed the consumer services segment to flourish without carrier-imposed constraint. At the time that was critically important. The Internet wasn’t the next generation of the telephone service; it was an entirely different utility service operating in an entirely different manner.

More recently, NATs have appeared within the access networks themselves, performing the address sharing function across a larger set of customers. This was first associated with mobile access networks but has been used in almost all recent deployments of access networks, as a response to the visible scarcity in the supply of available IPv4 addresses.

NATs have not been universally applauded. Indeed, in many circles within the IETF NATs were deplored.

It was observed that NATs introduced active middleware into an end-to-end architecture and divided the pool of attached devices into clients and servers. Clients (behind NATs) had no constant IP address and could not be the target of connection requests. Clients could only communicate with servers, not with each other. To some this appeared to be a regressive step that created a reliance on network middleware, with its attendant fragility, and imposed an asymmetry on communication [8].

For many years the IETF did not produce standard specifications for the behaviour of NATs, particularly for the handling of UDP sessions. As UDP has no explicit session controls, such as session opening and closing signals, how was a NAT meant to maintain its translation state? In the absence of a standard specification, different implementations made different assumptions and implemented different behaviours, introducing another detrimental aspect of NATs: variability.
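A minimal sketch of the usual workaround follows: the NAT keeps a last-seen timestamp per UDP binding and discards bindings that have been idle for longer than some locally chosen timeout. The 30-second figure is an arbitrary illustration, which is precisely the point about variability.

```python
# UDP has no FIN/RST to signal the end of a flow, so a NAT has to guess when a
# binding is finished; the common approach is an idle timer per binding.
import time

UDP_IDLE_TIMEOUT = 30.0   # arbitrary; real implementations choose different values
udp_bindings = {}         # binding key -> time of last packet in either direction

def touch(key):
    udp_bindings[key] = time.monotonic()

def expire_idle_bindings():
    now = time.monotonic()
    for key, last_seen in list(udp_bindings.items()):
        if now - last_seen > UDP_IDLE_TIMEOUT:
            del udp_bindings[key]   # the external port can now be reused
```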

How could an application operate through a NAT if the application used UDP? The result was the use of various NAT discovery protocols that attempted to provide the application with some understanding of the particular form of NAT behaviour that it was encountering [9].
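The discovery idea can be sketched as follows. This is not the STUN protocol of [9], just a hand-rolled illustration that assumes two hypothetical external “reflector” servers which echo back the source address and port they observe; comparing their answers tells the application whether the NAT presents a stable external mapping.

```python
# Hypothetical reflector-based discovery sketch: the server names and the echo
# protocol are placeholders, not a real service.
import socket

REFLECTORS = [("reflector-a.example.net", 7), ("reflector-b.example.net", 7)]

def observed_mappings(local_port=54321, timeout=2.0):
    seen = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", local_port))
        s.settimeout(timeout)
        for server in REFLECTORS:
            s.sendto(b"what do you see?", server)
            reply, _ = s.recvfrom(512)
            seen.append(reply.decode())   # e.g. "198.51.100.9:61002"
    return seen

# If both reflectors report the same external address:port, the NAT keeps a
# stable mapping per internal socket; if they differ, the mapping is
# per-destination and inbound rendezvous is much harder.
```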

NATs in Today’s Internet

Let’s now look at the situation today — the Internet of early 2017. The major hiatus in the supply of additional IPv4 addresses commenced in 2011, when the central IANA pool of unallocated IPv4 addresses was exhausted. Progressively the RIRs ran down their general allocation address pools: APNIC in April 2011, the RIPE NCC in September 2012, LACNIC in 2014 and ARIN in 2015. The intention from the early 1990s was that the threat of imminent address exhaustion would be the overwhelming impetus to deploy the successor protocol. By that reasoning the Internet would have switched to exclusive use of IPv6 before 2011. Yet that has not happened.

Today at least 90% of the Internet’s connected device population still uses IPv4 exclusively, while the remainder use both IPv4 and IPv6 [10]. This is an all-IPv4 network in which a minority also use IPv6. Estimates of the device population of today’s Internet vary, but they tend to fall within a band of 15 billion to 25 billion connected devices [11]. Yet only some 2.8 billion IPv4 addresses are visible in the Internet’s routing system. This implies that, on average, each announced public IPv4 address serves somewhere between four and eight hidden internal devices.
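The arithmetic behind that estimate is straightforward; a short calculation using the figures quoted above (2.8 billion visible addresses and the 15 to 25 billion device band):

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
visible_ipv4 = 2.8e9   # addresses seen in the routing system
for devices in (15e9, 25e9):
    hidden_per_address = (devices - visible_ipv4) / visible_ipv4
    print(f"{devices/1e9:.0f}B devices -> ~{hidden_per_address:.1f} hidden devices per public address")
# 15B devices -> ~4.4 hidden devices per public address
# 25B devices -> ~7.9 hidden devices per public address
```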

Part of the reason why estimates of the total population of connected devices are so uncertain is that NATs occlude these internal devices so effectively that no conventional Internet census can expose these hidden device pools with any degree of accuracy.

And part of the reason why the level of IPv6 deployment is still so low is that users, and the applications that they value, appear to operate perfectly well in a NATed environment. The costs of NAT deployment are offset by preserving the value of existing investment, both as a tangible investment in equipment and as an investment in knowledge and operational practices in IPv4.

NATs can be incrementally deployed, and they do not rely on some ill-defined measure of coordination with others to operate effectively. They are perhaps one of the best examples of a piecemeal, incremental deployment technology in which the benefits of deployment accrue directly to the entity that bears the incremental cost. This is in direct contrast to IPv6 deployment, where the ultimate objective, namely the comprehensive replacement of IPv4 on the Internet, can only be achieved once a significant majority of the Internet’s population is operating in a mode that supports both protocols. Until then, deployments of IPv6 are essentially forced to operate in dual-stack mode and also support IPv4 connectivity. In other words, the incremental costs of deploying IPv6 only generate incremental benefit once others take the same decision to deploy the technology. Viewed from the perspective of an actor in this space, the pressures and costs of stretching the IPv4 address space to encompass an ever-growing Internet are a constant factor. The decision to complement that with a deployment of IPv6 is an additional cost that, in the short term, does not offset any of the IPv4 costs.

So for many actors the question is not “Should I deploy IPv6 now?” but “How far can I go with NATs?” By squeezing some 25 billion devices into 2 billion active IPv4 addresses we have used a compression ratio of around 12:1, or the equivalent of adding roughly four additional bits of address space. These bits have effectively been ‘borrowed’ from the TCP and UDP port address space. In other words, today’s Internet uses a 36-bit address space in aggregate to allow these 25 billion devices to communicate.

Each additional bit doubles this pool, so the theoretical maximum of a comprehensively NATted IPv4 environment is 48 bits, fully accounting for the 32-bit address space and the 16-bit port space. This is certainly far less than IPv6’s 128 bits of address space, but the current division of IPv6 into a 64-bit network prefix and a 64-bit interface identifier reduces the available IPv6 address space to 64 bits, and the prevalent use of a /48 as a site prefix introduces further inefficiencies that effectively reduce the usable IPv6 address space to the equivalent of some 56 bits.

NATs can be pushed harder. The “binding space” for a NAT is a 5-tuple consisting of the source and destination IP addresses, the source and destination port numbers and a protocol identifier. This 96-bit NAT address space is a highly theoretical ceiling; the pragmatic question is how much of this space can be exploited in a cost-effective manner, such that the marginal cost of exploitation is lower than the cost of an IPv6 deployment.
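The bit arithmetic of the last three paragraphs can be summarised as follows, under the assumptions stated in the text (the ~56-bit IPv6 figure is taken from the text rather than derived here):

```python
# Bit arithmetic for the figures quoted above: 25 billion devices, 2 billion
# actively used IPv4 addresses, ports treated as borrowed address bits.
import math

devices, active_ipv4 = 25e9, 2e9
compression = devices / active_ipv4      # ~12.5 : 1
extra_bits = math.log2(compression)      # ~3.6, i.e. roughly four borrowed bits
print(f"~{compression:.0f}:1 compression, ~{32 + extra_bits:.0f}-bit aggregate space")

napt_ceiling = 32 + 16                   # every address x every port: 48 bits
nat_binding_space = 32 + 32 + 16 + 16    # address and port fields of the 5-tuple: 96 bits
print(napt_ceiling, nat_binding_space)
# The article places the comparable IPv6 figure at around 56 usable bits once
# the /64 interface identifier and /48-per-site conventions are accounted for.
```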

NATs as Architecture

NATs appear to have pushed applications to a further level of refinement and abstraction, in ways that were at one point considered to be desirable objectives rather than onerous limitations. Maintaining both a unique, fixed endpoint address space and a uniquely assigned name space for the Internet could be regarded as an expensive luxury when it appears that only one of these spaces is strictly necessary to ensure the integrity of communication.

The IPv4 architecture made several simplifying assumptions — one of these was that an IPv4 address was overloaded with both the unique identity of an endpoint and its network location. In an age when computers were bolted to the floor of a machine room this seemed like a very minor assumption, but in today’s world the overwhelming majority of connected devices are portable devices that constantly change their location, both in a physical sense and in terms of network attachment. This places stress on the IP architecture, and the result is that IP is variously tunnelled or switched in the final-hop access infrastructure to preserve the overloaded semantics of IP addresses.

NATs deliberately disrupt this relationship, and the presented client-side address and port have a particular interpretation and context only for the duration of a session.

In the same way that clients now share IP addresses, services now also share addresses. Applications cannot assume that the association of a name to an IP address is a unique 1:1 relationship. Many service-identifying names may be associated with the same IP address, and in the case of multi-homed services a single name may be associated with several IP addresses.
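A simple resolution query makes the first half of the point; the hostname below is a placeholder, and many other names may resolve to the same addresses.

```python
# One service name commonly resolves to several addresses, and many names may
# sit behind one address on shared hosting or content distribution platforms.
import socket

name, aliases, addresses = socket.gethostbyname_ex("www.example.com")
print(addresses)   # often more than one IPv4 address for a single name
```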

With this change comes the observation that IP addresses are no longer the essential “glue” of the Internet. They have assumed the role of ephemeral session tokens with no lasting semantics. NATs are pushing us to a different network architecture that is far more flexible – a network that uses names as the essential glue that binds it together.

We are now in a phase of the Internet’s evolution where the address space is no longer unique, and we rely on the name space to offer coherence to the network.

From that perspective, what does IPv6 really offer?

More address bits? Well, perhaps not all that much. The space created by NATs operates from within a 96-bit vector of address and port components, and the usable space may well approach the equivalent of a 50-bit conventional address architecture. On the other hand, the IPv6 address architecture has stripped off some 64 bits for an interface identifier and conventionally uses a further 16 bits as a site identifier. The resulting space is of the order of 52 bits. It’s not clear that the two pools of address tokens are all that much different in size.

More flexibility? IPv6 is a return to the overloaded semantics of IP addresses as unique endpoint tokens that provide a connected device with a static location and a static identity. This appears somewhat ironic given that the Internet is increasingly composed of battery-powered mobile devices of various forms.

Cheaper? Possibly, in the long term, but not in the short term. Until we reach the “tipping point” that allows a network to operate solely on IPv6 without any visible impact on its user population, every network must still provide a service using IPv4.

Permanent address-to-endpoint association? Well, not really. Not since we realised that a fixed interface identifier represents an unacceptable privacy leak. These days IPv6 clients use so-called “privacy addresses” as their interface identifier, and change this local identifier value on a regular basis.
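The following sketch captures the idea behind such temporary addresses, though not the standardised algorithm: a random 64-bit interface identifier is appended to the advertised /64 prefix and regenerated periodically. The prefix shown is the IPv6 documentation prefix.

```python
# Illustrative generation of a randomised interface identifier, in the spirit
# of IPv6 "privacy addresses" (not the exact standardised procedure).
import secrets
import ipaddress

def temporary_address(prefix="2001:db8:1:2::/64"):
    net = ipaddress.ip_network(prefix)
    iid = secrets.randbits(64)   # random interface identifier within the /64
    return ipaddress.ip_address(int(net.network_address) + iid)

print(temporary_address())   # a different address every time it is called
```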

Perhaps we should appreciate the role of NATs in supporting the name-based connectivity environment that is today’s Internet. It was not a deliberately designed outcome, but a product of incremental evolution that has responded to the various pressures of scarcity and desires for greater flexibility and capability. Rather than eschewing NATs in the architecture as an aberrant deviation in response to a short-term situation, we may want to contemplate an Internet architecture that embraces a higher level of flexibility of addressing. If the name space is truly the binding glue of the Internet, then perhaps we might embrace a view that addresses are simply needed to distinguish one packet flow from another in the network, and nothing more.

Appreciating NATs

When NATs were first introduced to the Internet, they were widely condemned as an aberration in the Internet’s architecture. And in some ways, NATs have directly confronted the model of a stateless packet switching network core and capable attached edge devices.

But that model has been a myth for decades. The Internet as it is deployed is replete with various forms of network “middleware”, and the concept of a simple stateless packet-switching network infrastructure has been relegated to the status of a historical, and now somewhat abstract, concept.

In many ways, this condemnation of NATs was unwarranted, as we can reasonably expect that network middleware is here to stay, irrespective of whether the IP packets are formatted as IPv4 or IPv6 and irrespective of whether the outer IP address fields in the packets are translated or not.

Rather than being condemned, perhaps we should appreciate the role that NATs play in the evolution of the architecture of the Internet.

We have been contemplating what it means to have a name-based data network, where instead of using a fixed relationship between names and IP addresses, we eschew this mapping and perform network transactions by specifying the name of the desired service or resource [12]. NATs are an interesting step in this direction, where IP addresses have lost their fixed association with particular endpoints, and are used more as ephemeral session tokens than endpoint locators. This certainly appears to be an interesting step in the direction of named data networking.

The conventional wisdom is that the endpoint of this current transitioning Internet is an IPv6 network that has no further use for NATs. This may not be the case. We may find that NATs continue to offer an essential level of indirection and dynamic binding capability in networking that we would rather not casually discard. It may be that NATs are a useful component of network middleware and that they continue to have a role on the Internet well after this transition to IPv6 has been completed, whenever that may be!

References

[1] F. Solensky, “Continued Internet Growth”, Proceedings of the 18th Internet Engineering Task Force Meeting, August 1990.

[2] H. W. Braun, P. Ford and Y. Rekhter, “CIDR and the Evolution of the Internet”, SDSC Report GA-A21364, Proceedings of INET’93, Republished in ConneXions, September 1993.

[3] V. Fuller, T. Li, J. Yu and K. Varadhan, “Classless Inter-Domain Routing (CIDR): An Address Assignment and Aggregation Strategy”, Internet Request for Comment (RFC) 1519, September 1993.

[4] S. Bradner and A. Mankin, “The Recommendation for the IP Next Generation Protocol”, Internet Request for Comment (RFC) 1752, January 1995.

[5] D. Wing and A. Yourtchenko, “Happy Eyeballs: Success with Dual-Stack Hosts”, Internet Request for Comment (RFC) 6555, April 2012.

[6] P. Tsuchiya and T. Eng, “Extending the IP Internet Through Address Reuse”, ACM SIGCOMM Computer Communications Review, 23(1): 16-33, January 1993.

[7] P. Srisuresh and D. Gan, “Load Sharing using IP Network Address Translation (LSNAT)”, Internet Request for Comment (RFC) 2391, August 1998.

[8] T. Hain, “Architectural Implications of NAT”, Internet Request for Comment (RFC) 2993, November 2000.

[9] G. Huston, “Anatomy: A Look Inside Network Address Translators”, The Internet Protocol Journal, vol. 7, no. 3, pp. 2-32, September 2004.

[10] IPv6 Deployment Measurement, https://stats.labs.apnic.net/ipv6/XA.

[11] Internet of Things Connected devices, 2015 – 2025

[12] L. Zhang, et. al, “Named Data Networking,” ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp 66-73, July 2014.

Written by Geoff Huston, Author & Chief Scientist at APNIC

Read more here:: www.circleid.com/rss/topics/ipv6

The post An Opinion in Defence of NATs appeared on IPv6.net.

Read more here:: IPv6 News Aggregator