Why 5G Is in Trouble (and How to Fix It)

By Martin Geddes

I have a somewhat unconventional view of 5G. I just happen to believe it is the right one. 5G is trapped inside a category error about the nature of packet networking, and that is why it is in trouble.

As context, we are seeing the present broadband Internet access model maturing and beginning to reach its peak. 5G eagerly anticipates the next wave of applications.

The 5G Difference: “Purpose-for-Fitness” to “Fitness-for-Purpose”

As such, 5G is attempting to both extend and transcend the present “undifferentiated data sludge” model of mobile broadband.

Firstly, it pumps the “undrinkable” mucky bandwidth harder and faster, to give a modified version of what we have today with 4G. We will gloss over the minor miracle that needs to happen with backhaul, and the fact that 4G’s mobility protocols already struggle when you get on a train (5G makes this worse).

Secondly, its other goal is to deliver differentiated “drinkable” access for different enterprise cloud and industrial applications. This is essentially a generic version of the very specific VoLTE solution developed for voice telephony in 4G, extended to any cloud application. It can be expressed as being for low-latency applications, or packaged in a variety of other guises.

The Slow Evolution Towards General-Purpose Assured App Access

The conventional wisdom is that packet networks enable networked computing (“join devices”), and networks do “work”. As such, the job of the network is to forward as many packets as fast as possible, and what matters most is “speed”. 5G fits this.

The unconventional wisdom is that packet networks enable interprocess communications (“join computations”), and networks don’t do “work”. As such, the job of the network is to trade resources around to deliver the “just right” quantity of quality to optimise the trade-offs of QoE risk.

The former model is “pipe”, the latter is “futures and options trading”. The former works with TCP/IP, the latter needs new packet architectures (RINA). The former can extend radio network protocols from 2G, 3G and 4G; the latter needs new ones. The former has a low-frequency resource trading model, the latter a high-frequency trading one.

A Paradigm Change in Engineering is Needed for 5G to Succeed

5G is making the network far more dynamic, without having the mathematics, models, methods or mechanisms to do the “high-frequency trading”. The whole industry is missing a core performance engineering skill: they can do (component) radio engineering, but not complete systems engineering. When you join all the bits, you don’t know what you get until you turn it on!

The result will not be pretty.

In particular, 5G is primarily delivering into the tail of the last S curve of generic unassured broadband Internet access; on its present path it is not fit-for-purpose for assured cloud application access (including VR/AR and IoT), which is the new S curve of growth.

Telephony is virtual reality. VoLTE wasn’t solving the problem of how to extend the life of the past; it was solving a corner case of how we will communicate in the future. Understand this, and the future and fate of 5G make more sense.

The key question is whether 5G is aimed at extending the VoLTE part of 4G (fit-for-purpose voice) or improving the rest (purpose-for-fitness Internet access). It is trying to serve two strategic masters, the past and the future, at once.

Is 5G trying to “buy back up the curve”, implying doom for its makers and buyers?
Watch the video presentation: The Death of Cellular by Francis McInerney

So, what to do about it? I see three key industry actions.

Firstly, we need to narrow the intentional semantics. 5G is trying to do too many things.

The focus of generic broadband access should not be peak speed, or even “antipeak” latency under ideal conditions. It should be to establish a consistent quality floor under real-world conditions, with graceful degradation under overload. That floor should be adjustable, so that you can segment the market by quality.
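
To make the “quality floor” idea concrete, here is a toy sketch in Python (an illustration of the concept only, not Geddes’s method or a real scheduler): flows are segmented into tiers, each tier carries a floor, and under overload the lower tiers degrade first so the higher tiers keep their floor.

    # Toy "quality floor" allocator: grant every flow its floor first
    # (highest class first), then share whatever capacity remains.
    def allocate(capacity_mbps, flows):
        """flows: list of (flow_id, tier, demand_mbps, floor_mbps);
        lower tier number = higher class. Returns flow_id -> Mbit/s."""
        alloc = {}
        remaining = capacity_mbps
        by_class = sorted(flows, key=lambda f: f[1])
        for fid, tier, demand, floor in by_class:      # pass 1: floors
            grant = min(floor, demand, remaining)
            alloc[fid] = grant
            remaining -= grant
        for fid, tier, demand, floor in by_class:      # pass 2: leftovers
            extra = min(demand - alloc[fid], remaining)
            alloc[fid] += extra
            remaining -= extra
        return alloc

    # 150 Mbit/s of demand on an 80 Mbit/s cell: the assured tier keeps
    # its floor, and best-effort absorbs the degradation.
    flows = [("assured-1", 0, 40, 20), ("assured-2", 0, 30, 20),
             ("best-effort", 1, 80, 0)]
    print(allocate(80, flows))
    # {'assured-1': 40, 'assured-2': 30, 'best-effort': 10}

Raising or lowering the floors is exactly the “adjustable” knob that lets an operator segment the market by quality.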

This is a precursor to 6G, where the two sides, unassured and assured, can be unified through a shared framework for managing the quality floor.

Whilst we need a “generic VoLTE”, only about five people on the planet know how to build it (and we’re all busy on other things). So for the assured access part, 5G should not attempt to make the leap from the singular VoLTE to a generic offer in one go.

There needs to be a series of smaller and less ambitious steps that allow the coexistence of a modest number of managed services with different latency and throughput needs. However, the real issue is to assure complete supply chains, not just one part (the access) or sub-part (the radio link).

Which brings us to the second issue, the denotational semantics. As an industry, we’ve yet to agree on the standard units for broadband supply and demand (if you can believe it). So the next thing 5G has to fix is the lack of a shared requirements specification language for performance.

The good news is that this is a solved problem.

Key Action Needed: Upgrade Engineering to Align Supply to Demand

Finally, the operational semantics. If 5G is going to be of any use to anyone but equipment salespeople, it has to demonstrate the difference it makes. That implies it needs improved mechanisms that allow for high-fidelity measurement of the QoE being delivered, high-frequency control to deliver it, and new architectures that appropriately join these together.

This QoE control is a paradigm change. Today the radio people construct a bandwidth supply, and the packet people chop up whatever is there, using whatever transport protocols they inherited from the IETF.

The future is a demand-led model that is the antithesis of the IETF’s “rough consensus and running code” approach. That means a deep rethink because at present the radio folk are running the show, as they have always done. It’s a supply-led industry.

The problem has to be reframed as a distributed computing one that makes the radio subservient to the computational outcome. That’s going to ruffle a lot of feathers and upset a lot of power structures. The limiting factor in my experience is always human, never technical.

The alternative is that 5G gets stuck between two mutually incompatible goals, and serves neither well. Then the whole ecosystem eventually gets bypassed in the 2020s, say by an IoT specialist player being bought by an Amazon, rather like how the iPhone overtook the handset space a decade ago.

Couldn’t ever happen? Ask him…

Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd


Switch Vendor Looks to Unclog Data

By George Leopold

Big data and cloud migrations are reshaping the datacenter as more servers are linked and bandwidth capacity joins scale as an essential requirement. Those needs are creating opportunities for network gear vendors as datacenters make the jump to bigger IPv6 pipes to accommodate data-driven applications and broader use of analytics.

Among them is Chinese switch maker H3C Technologies, which is claiming a record for switch performance in recent benchmark testing that sought to replicate real-world datacenter operations. Spirent Communications recently conducted what it claimed was the highest-density 100-gigabit datacenter-switching test, in conjunction with H3C and “moderated” by the independent test lab Network Test.

Spirent and H3C claimed the results of the 100G Ethernet networking trial represent nothing less than a “high-water mark” for datacenter networking. Among the findings was stable (“lossless”) performance for both IPv4 and IPv6 network traffic at most frame rates using a routing protocol and stressful “fully meshed traffic.”

“No task is more important for a datacenter core switch than moving traffic at maximum speed with zero frame loss,” also known as dropped packets, the tester noted.

Read the full story at sister web site EnterpriseTech.com.


Checklist for Getting a Grip on DDoS Attacks and the Botnet Army

By Industry Perspectives

Heitor Faroni is Director of Solutions Marketing for Alcatel-Lucent Enterprise.

Distributed Denial of Service (DDoS) attacks jumped into mainstream consciousness last year after several high-profile cases, one of the largest and most widely reported being the Dyn takedown in fall 2016, an interesting example because it used poorly secured IoT devices to coordinate the attack. DDoS attacks are not a new threat; they have been around since the late ’90s.

When you consider Gartner’s prediction that there will be 20 billion connected devices by 2020 as part of the growing Internet of Things, the need to implement the right network procedures and tools to properly secure all these devices is only going to grow.

The New Battleground – Rent-a-bots on the Rise

Put simply, DDoS attacks occur when an attacker attempts to make a network resource unavailable to legitimate users by flooding the targeted network with superfluous traffic until it simply overwhelms the servers and knocks the service offline. Thousands and thousands of these attacks happen every year, and are increasing both in number and in scale. According to some reports, 2016 saw a 138 percent year-over-year increase in the total number of attacks greater than 100Gbps.

The Dyn attack used the Mirai botnet which exploits poorly secured, IP-enabled “smart things” to swell its ranks of infected devices. It is programmed to scan for IoT devices that are still only protected by factory-set defaults or hard-coded usernames and passwords. Once infected, the device becomes a member of a botnet of tens of thousands of IoT devices, which can then bombard a selected target with malicious traffic.

This botnet and others are available for hire online from enterprising cybercriminals; and as their functionalities and capabilities are expanded and refined, more and more connected devices will be at risk.

So what steps can businesses take to protect themselves now and in the future?

First: Contain the Threat

With the rise of IoT at the heart of digital business transformation and its power as an agent for leveraging some of the most important technological advances – such as big data, automation, machine learning and enterprise-wide visibility – new ways of managing networks and their web of connected devices are rushing to keep pace.

A key development is IoT containment. This is a method of creating virtual isolated environments using network virtualization techniques. The idea is to group connected devices with a specific functional purpose, along with their respective authorized users, into a unique IoT container. You still have all users and devices in a corporation physically connected to a single converged network infrastructure, but they are logically isolated by these containers.

Say, for example, the security team has 10 IP-surveillance cameras at a facility. By creating an IoT container for the security team’s network, IT staff can create a virtual, isolated network which cannot be accessed by unauthorized personnel – or be seen by other devices outside the virtual environment. If any part of the network outside of this environment is compromised, it will not spread to the surveillance network. This can be replicated for payroll systems, R&D or any other team within the business.

By creating a virtual IoT environment you can also ensure the right conditions for a group of devices to operate properly. Within a container, quality of service (QoS) rules can be enforced, and it is possible to reserve or limit bandwidth, prioritize mission critical traffic and block undesired applications. For instance, the surveillance cameras that run a continuous feed may require a reserved amount of bandwidth, whereas critical-care machines in hospital units must get the highest priority. This QoS enforcement can be better accomplished by using switches enabled with deep-packet inspection, which see the packets traversing the network as well as what applications are in use – so you know if someone is accessing the CRM system, security feeds or simply watching Netflix.
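
As a concrete illustration of the rate-limiting half of this QoS enforcement, here is a minimal token-bucket sketch in Python (real switches do this in hardware; the 2 Mbit/s camera reservation below is an illustrative assumption, not a vendor parameter):

    import time

    # One token bucket per IoT container: tokens are bytes, refilled at
    # the container's configured rate; packets that find enough tokens
    # are forwarded, the rest are dropped or queued.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0        # refill rate in bytes/second
            self.capacity = burst_bytes       # maximum accumulated tokens
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes   # forward the packet
                return True
            return False                      # over the limit: drop/queue

    # e.g. reserve roughly 2 Mbit/s for the surveillance-camera container:
    cameras = TokenBucket(rate_bps=2_000_000, burst_bytes=32_000)
    print(cameras.allow(1500))  # True while the container stays in profile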

Second: Protection at the Switch

Businesses should ensure that switch vendors are taking the threat seriously and putting in place procedures to maximize hardware protection. A good approach can be summed up in a three-pronged strategy.

  • A second pair of eyes – make sure the switch operating system is verified by third-party security experts. Some companies may shy away from sharing source code to be verified by industry specialists, but it is important to look at manufacturers that have ongoing relationships with leading industry security experts.
  • Scrambled code means one switch can’t compromise the whole network. The use of open source code as part of operating systems is common in the industry, which does come with some risk as the code is “common knowledge”. By scrambling the object code within each switch’s memory uniquely, even if a hacker could locate sections of open source code in one switch, the same attack would not work on multiple switches.
  • How is the switch operating system delivered? The IT industry has a global supply chain, with component manufacturing, assembly, shipping and distribution having a worldwide footprint. This introduces the risk of the switch being tampered with before it gets to the end-customer. The network installation team should always download the official operating systems to the switch directly from the vendor’s secure servers before installation.

Third: Do the Simple Things to Secure Your Smart Things

As well as establishing a more secure core network, there are precautions you can take right now to enhance device protection. It is amazing how many businesses skip these simple steps.

  • Change the default password: One very simple and often overlooked procedure is changing the default password. In the Dyn case, the virus searched for IP devices still running factory-default settings in order to take control of them.
  • Update the software: As the battle between cybercriminals and security experts continues, staying current with the latest updates and security patches becomes ever more important. Pay attention to the latest releases and make staying on top of them part of the routine.
  • Prevent remote management: Disable remote management protocols, such as telnet or http, that provide control from another location. The recommended secure protocols for remote management are SSH or https. A short audit sketch after this list shows how such basics can be checked across a fleet of devices.
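
A minimal sketch of such an audit in Python, assuming you hold a list of device addresses you are authorized to probe (the hosts below are placeholders from the 192.0.2.0/24 documentation range):

    import socket

    # Flag devices that still expose insecure management ports; the
    # checklist above recommends SSH (22) and https (443) instead.
    INSECURE_PORTS = {23: "telnet", 80: "http"}

    def open_ports(host, ports, timeout=1.0):
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means it connected
                    found.append(port)
        return found

    for host in ["192.0.2.10", "192.0.2.11"]:        # placeholder devices
        for port in open_ports(host, INSECURE_PORTS):
            print(f"{host}: {INSECURE_PORTS[port]} (port {port}) is reachable")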

Evolve Your Network

The Internet of Things has great transformative potential for businesses in all industries, from manufacturing and healthcare to transportation and education. But with any new wave of technical innovation come new challenges. We are at the beginning of the IoT era, which is why it’s important to get the fundamental network requirements in place to support not only the increase in data traversing our networks, but also the enforcement of QoS rules and the minimization of risk from cyberattacks.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.


IPv6 at LSU

By Hector Rios

Louisiana State University (LSU) has been supporting IPv6 since 2008. Our first exposure was at an Internet2 Member meeting held in New Orleans. LSU was the host and assisted in providing various network requirements. IPv6 was one of them, but even though it ended up not being used, it did provide the catalyst that enabled the eventual deployment on the Baton Rouge campus.

Implementing IPv6 on the LSU campus first required some initial research and testing. As network engineers, we first had to get up to speed on the technology: how it operated, how it differed from IPv4, what security concerns there were, and how it could be easily deployed. In addition, we had to become familiar with the capabilities of our existing network infrastructure, determine whether IPv6 was supported and to what degree, and build an inventory. This exercise provided valuable information that allowed us to determine areas where we could confidently deploy IPv6, and it also provided us with a list of equipment that either completely or partially lacked IPv6 support. Some of the features we explored included OSPFv3, MBGP, HSRP, First Hop Security, Multicast, Wireless, Network Access Control (NAC), DHCP, DNS, NMS, ACLs, and firewall rules. This list was then incorporated into our network life cycle plans and strategic plans to ensure that those pieces of equipment or software would eventually get upgraded.

After our initial research and testing, it was decided that the best deployment model would be dual-stack with stateless address autoconfiguration (SLAAC). To start at a small scale, IPv6 was first deployed within LSU’s Information Technology Services (ITS) building. Users were notified in advance of this new capability, and documentation was made available so they could become familiar with the transition. This approach gave both network engineers and users the opportunity to “play” with IPv6, but more importantly, it allowed us to learn even more about what we needed to do to support it at a larger scale. Some of the lessons learned during this phase involved caveats regarding different operating systems. For example, Windows XP required explicit IPv6 installation and activation; Windows Vista and 7 required temporary addresses (privacy extensions) to be disabled. In addition, we were also able to further refine our firewall policies and security measures at the access, distribution, and core layers.

In a period of about two years, we had learned so much about IPv6 that we were ready to deploy it campus wide. In September of 2010, the dual-stack network was expanded to the entire campus, including our wireless network and VPN. By then, all relevant information for users as well as operators had been properly documented and made available. To allow for testing, we stood up an IPv6 site (ipv6.lsu.edu) so users could confirm that IPv6 was working correctly. In addition, we worked with some departments to encourage them to enable their main websites with IPv6 support. This allowed them to get involved in the process, become more aware of IPv6 and the capabilities of not only their servers, but anything that might be IPv6 capable, and finally build the same curiosity we had when we began our journey.

Eventually we participated in both World IPv6 Day and subsequently in World IPv6 Launch Day. This allowed us to provide the LSU community with a context by which they could see the reach and importance of IPv6 at a worldwide level as well as to promote it in whatever capacity we could.

IPv6 address space is big. Really big

In the initial phases of our deployment, we were still not fully convinced about our addressing scheme. The main issue was that we were so used to working with IPv4 and being conservative that we hadn’t grasped the immensity of the IPv6 space. We knew we needed a more flexible scheme that would allow for easier deployment, management and identification. Once we understood our requirements, we came up with a better solution. Our IPv6 space was broken down into groups of IP spaces dedicated to specific areas of the campus, such as our Residential Network, research networks, remote sites, and future campus sites. Further subdivisions were implemented that took growth into account and enabled uniformity. Finally, we embedded information into IPv6 prefixes so we could easily identify networks by router and VLAN ID. The result was a scheme that led to a more informative address space that was easier to deploy and troubleshoot.
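
A hypothetical sketch of such a scheme in Python (the 2001:db8::/32 documentation prefix and the exact field layout are assumptions for illustration, not LSU’s actual plan): embed a 16-bit router ID and a 16-bit VLAN ID into the third and fourth hextets of each /64, so the prefix is self-describing.

    import ipaddress

    SITE = ipaddress.IPv6Network("2001:db8::/32")  # documentation prefix

    def subnet_for(router_id, vlan_id):
        # /32 site + 16-bit router ID + 16-bit VLAN ID = one /64 per VLAN
        prefix = (int(SITE.network_address)
                  | (router_id << 80)    # bits 32-47: router ID
                  | (vlan_id << 64))     # bits 48-63: VLAN ID
        return ipaddress.IPv6Network((prefix, 64))

    print(subnet_for(router_id=3, vlan_id=1234))
    # 2001:db8:3:4d2::/64 -> router 3, VLAN 1234, readable at a glance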

Workarounds. There will be some.

Realistically, there are many challenges with deploying IPv6. For example, early on at our university, we had a mechanism to control access to the network. This mechanism relied on both DHCP and DNS, and required users to register their MAC address via a registration portal. When we started implementing IPv6, we quickly realized we were going to automatically provide network access to our users even if they were not registered because this registration relied solely on IPv4 and could not be easily ported to IPv6. The workaround for this issue was to only provide user devices with an IPv4 DNS server address so we could serve both A and AAAA records and still control network access.
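
The observation behind this workaround is that the transport used to reach a resolver is independent of the record types it can serve: a DNS server that devices reach over IPv4 will happily return AAAA records. A quick Python check against the ipv6.lsu.edu test site mentioned earlier illustrates both families resolving:

    import socket

    # getaddrinfo returns one tuple per (family, socket type) combination;
    # AF_INET entries come from A records, AF_INET6 entries from AAAA.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "ipv6.lsu.edu", 80, proto=socket.IPPROTO_TCP):
        label = "IPv6 (AAAA)" if family == socket.AF_INET6 else "IPv4 (A)"
        print(label, sockaddr[0])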

IPv6 global addresses are prefixes that are globally routable. Within the LSU network, we had subnets that were both publicly and privately addressed. Some privately addressed subnets had access to the Internet via NAT; others didn’t. Since our initial approach was to assign a global prefix to every subnet (we did not want to use Unique Local Addresses), this opened up the ability for some devices to access the Internet when they previously did not have this capability. This issue was addressed by ensuring that the proper firewall rules were in place to prevent Internet access for those devices that didn’t need it. In some instances, IPv6 was not enabled at all.

Even though IPv6 support has matured over the years, interesting bugs and caveats are still common. During one of our router implementations we ran into an interesting IPv6 issue. The problem was an apparent limitation on how to secure SNMP on IPv6. Specifically, it seemed as though the router was unable to support IPv6 ACLs to control SNMP access. After working with our vendor, the workaround was to create an ACL for IPv4 and IPv6 with the same name. This solved the issue.

Who needs to be involved?

The initiative and vision to support IPv6 at an early stage started with the networking team, because we had the knowledge of the technology and understood its importance and impact. The challenge was translating that knowledge into something that key university stakeholders understood and were able to support. When you bring something like IPv6 to IT executives, it can be a challenge to demonstrate the tangible benefits, especially when there are other priorities with a wider and more palpable economic impact. When you bring up concerns about products and vendors lacking IPv6 support to people who don’t understand its importance, they’ll have a bigger tendency to push it to the side and not consider it a priority. IPv6 is an “under-the-hood” technology. Some stakeholders don’t care too much about the technicalities of the engine specs; they just want those engines to be able to move things from one place to another, and to be fast and reliable. This is especially true when there is already an older engine (IPv4) that has been running for quite a while and seems to be doing fine. However, it is incumbent upon network engineers to fully understand the issues and concerns with IPv4 (depletion, NAT) and to get buy-in at the highest level of the IT organization. This way, a strategic plan and roadmap can be developed that ensures everybody is on board and follows it.

It is important to understand that IPv6 is not a switch that can be simply turned on when needed. It requires careful planning and it affects everything from the very applications that users interact with to the devices that process and move packets. Ignoring it will not only work against you but could potentially put you at a disadvantage. Do you know if your critical applications currently support or have IPv6 in their roadmap? Talk to your vendors. Even today, we still meet vendors that have no understanding of IPv6 or have no plans to support it.

It’s not as hard as you think

Deploying IPv6 is not as hard as people think. Yes, there are many new things to learn and challenges to overcome. Engineers need proper training and practice, but sources for education abound, both free and paid. Unlike when LSU first got involved with IPv6, today a lot of technology, applications and services already support IPv6. But you must do your homework and ensure that you have a clear understanding of your requirements, and even more importantly, you must test those requirements to make sure they are actually met. In a way, here at LSU it felt like we were beta testers for a while, but we persevered by working with vendors and pressuring them to address our issues. If we were to implement IPv6 today, I know that it would be a lot easier because the technology has had time to mature. Now many solutions already come with built-in IPv6 support, and in almost all instances at no additional cost. So there are fewer and fewer excuses not to support IPv6.

If you’re thinking about IPv6, I suggest you get your hands dirty. Start with what you have, see how it works, determine what needs to be upgraded, and what needs to be replaced. Determine who your stakeholders are and engage with them to provide education and get them on board to test with you. As you learn more, you’ll become more comfortable and will quickly realize that it’s not as hard as you think. At LSU we’ve been doing it for so long that it is now second nature. We look forward to the day when IPv4 will sunset. Adopt IPv6 today and you’ll make it happen that much faster.


IDG Contributor Network: Barracuda protects industrial IoT with network-based firewall

By Deepak Puri

Many industrial IoT systems have open doors that create unintended vulnerabilities.

What information could be exposed by open communications protocols? How do hackers identify vulnerable systems? What security resources are available? How do IoT firewalls protect against such threats?

TCP Port 502 vulnerabilities

Many industrial systems use TCP port 502, which allows two hosts to establish a connection and exchange streams of data. TCP guarantees delivery, and that data will arrive on port 502 in the same order in which it was sent. This creates the risk that remote attackers can install arbitrary firmware updates by sending a MODBUS function code 125 to TCP port 502. Scans from services such as Shodan identify systems with an open TCP port 502 that could be vulnerable.
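
A minimal sketch of how such exposed endpoints are found: simply attempt a TCP connection to port 502. Services like Shodan do this Internet-wide; only probe hosts you are authorized to test (the address below is a placeholder from the documentation range).

    import socket

    # Modbus/TCP listens on port 502; a completed TCP handshake is enough
    # for a scanner to flag the host as a potentially exposed endpoint.
    def port_502_open(host, timeout=1.0):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, 502)) == 0

    print(port_502_open("192.0.2.50"))  # True would mean port 502 is open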
