Andrew Brown, the executive director of Enterprise and IoT Research at Strategy Analytics, recently interviewed Matt Bacon, the marketing and communications director at Actility, to discuss the company’s activities in IoT: its network, partners and customers, and its efforts in industrial markets. Actility is a founding member of the LoRa Alliance and offers low power wide area (LPWA) infrastructure with its ThingPark IoT communications platform. The platform provides LoRaWAN long-range coverage for low-power sensors used in multiple vertical industry applications.
Andrew Brown: What are the key IoT applications that Actility customers are implementing in industrial environments?
Matt Bacon: To begin with, it makes sense to explain what we do at Actility and how we help our customers in IoT. Our core product is the ThingPark communications platform, which was initially focused on LoRaWAN but will shortly also support licensed 3GPP technologies: first LTE Cat M and then narrowband IoT (NB-IoT). With the platform, we manage data end-to-end, from the sensor via the gateway to customer applications in the cloud. We also handle additional functions such as protocol translation where required, and we ensure devices are correctly provisioned and delivering their data packets end-to-end. We are not an analytics or visualisation company; we offer key ingredients in a complete IoT solution created by a range of partners. Our initial customers were network operators who chose us to build nationwide LoRaWAN networks in order to resell connectivity to their customers. They used ThingPark to manage the LoRaWAN component of their network.
There are multiple applications that our customers, such as KPN and Orange, are enabling through connectivity for their industrial customers. For example, one industrial customer manages thousands of rat traps throughout the Netherlands. Connect the traps with LoRa and they only need to be checked and emptied when they have actually caught a rat, so far fewer truck rolls are required, which dramatically improves the overall total cost of ownership (TCO) of the project.
Our partnership with Inmarsat has enabled the first globally available LoRaWAN IoT platform, and we are supporting the company in building smart city applications in Kigali, Rwanda. In the same country, we are also working with Inmarsat and Carnegie Mellon University on a mountain tea plantation and processing facility. There, IoT will deliver agricultural monitoring such as soil moisture levels, as well as precise temperature and humidity monitoring in the processing facility, where conditions must be controlled to ensure the best possible tea.
We also handle more traditional plant monitoring projects, such as the work we are doing with IBM Watson and Cougar Automation, a UK systems integrator, for RS Components. RS has a large warehouse with thousands of metres of conveyor belts. It ships up to 44,000 parcels a day, which are moved by conveyor belts. As a parcel drops from one belt to another, it can marginally knock the belts out of alignment. As this is repeated with thousands […]
The post Low power means long range coverage for industrial sensors appeared first on IoT Now – How to run an IoT enabled business.
Read more here:: www.m2mnow.biz/feed/
Akita, an IoT device watchdog station, raised approximately $700,000 in crowdfunding on Kickstarter. With more than 7,000 backers, the startup promises to provide instant privacy for connected products.
The device performs three core activities: scanning connected gadgets and devices, blocking compromised devices, and notifying users of known issues. Akita comes with full support and help desk monitoring powered by Axius.
The device connects to a LAN port on the user’s home router rather than sitting inline.
Akita’s Kickstarter received significant backing, both in the number of backers and in funds raised, even though the campaign’s initial goal was only $30,000.
The rise in popularity of privacy and network security devices is understandable. A home network with several connected devices needs robust protection. Other startups, such as Dojo and F-Secure, similarly promise to secure network traffic and identify rogue devices.
Readers can visit the Postscapes Connected Device Security guide to see how other devices in the same niche work and how Akita stacks up against its competitors.
Read more here:: feeds.feedburner.com/iot
I have a somewhat unconventional view of 5G. I just happen to believe it is the right one. 5G is trapped inside a category error about the nature of packet networking, and that means it is in trouble.
As context, we are seeing the present broadband Internet access model mature and begin to reach its peak. 5G eagerly anticipates the next wave of applications.
The 5G Difference: “Purpose-for-Fitness” to “Fitness-for-Purpose”
As such, 5G is attempting to both extend and transcend the present “undifferentiated data sludge” model of mobile broadband.
Firstly, it pumps the “undrinkable” mucky bandwidth harder and faster, to give a modified version of what we have today with 4G. We will gloss over the minor miracle that needs to happen with backhaul, or the fact that 4G’s mobility protocols already struggle when you get on a train (and 5G makes it worse).
Secondly, its other goal is to deliver differentiated “drinkable” access for different enterprise cloud and industrial applications. This essentially is a generic version of the very specific VoLTE solution developed for voice telephony in 4G, extended to any cloud application. It can be expressed as being for low-latency applications, or packed in a variety of other guises.
The conventional wisdom is that packet networks enable networked computing (“join devices”), and networks do “work”. As such, the job of the network is to forward as many packets as fast as possible, and what matters most is “speed”. 5G fits this.
The unconventional wisdom is that packet networks enable interprocess communications (“join computations”), and networks don’t do “work”. As such, the job of the network is to trade resources around to deliver the “just right” quantity of quality to optimise the trade-offs of QoE risk.
The former model is “pipe”, the latter is “futures and options trading”. The former works with TCP/IP, the latter needs new packet architectures (RINA). The former can extend radio network protocols from 2G, 3G and 4G; the latter needs new ones. The former has a low-frequency resource trading model, the latter a high-frequency trading one.
5G is making the network far more dynamic, without having the mathematics, models, methods or mechanisms to do the “high-frequency trading”. The whole industry is missing a core performance engineering skill: they can do (component) radio engineering, but not complete systems engineering. When you join all the bits, you don’t know what you get until you turn it on!
The result will not be pretty.
In particular, 5G is primarily delivering into the tail of the last S curve of generic unassured broadband Internet access; it is not on its present path fit-for-purpose for assured cloud application access (inc VR/AR and IoT), which is the new S curve of growth.
Telephony is virtual reality. VoLTE wasn’t solving the problem of how to extend the life of the past; it was solving a corner case of how do we communicate in future. Understand this, and the future and fate of 5G makes more sense.
The key question is whether 5G is aimed at extending the VoLTE part of 4G (fit-for-purpose voice) or improving the rest (purpose-for-fitness Internet access). It is trying to serve two strategic masters, the past and the future, at once.
Is 5G trying to “buy back up the curve”, implying doom for its makers and buyers?
Watch the video presentation: The Death of Cellular by Francis McInerney
So, what to do about it? I see three key industry actions.
Firstly, we need to narrow the intentional semantics. 5G is trying to do too many things.
The focus of the generic broadband access should not be peak speed, or even “antipeak” latency under ideal conditions. It should be to establish a consistent quality floor under real-world conditions with graceful degradation in overload. That floor should be adjustable so that you can segment the market by quality.
This is a precursor to a 6G, where the two sides of unassured and assured can be unified through a shared framework for managing the quality floor.
Whilst we need a “generic VoLTE”, only about 5 people on the planet know how to do it (and we’re all busy on other things). So for the assured access part, it should not attempt to make the leap from singular VoLTE to a generic offer in one go.
There needs to be a series of smaller and less ambitious steps that allow the coexistence of a modest number of managed services with different latency and throughput needs. However, the real issue is to assure complete supply chains, not just one part (the access) or sub-part (the radio link).
Which brings us to the second issue, the denotational semantics. As an industry, we’ve yet to agree on the standard units for broadband supply and demand (if you can believe it). So the next thing 5G has to fix is the lack of a shared requirements specification language for performance.
The good news is that this is a solved problem.
Finally, the operational semantics. If 5G is going to be of any use to anyone but equipment salespeople, it has to demonstrate the difference it makes. That implies it needs to have improved mechanisms that allow for high-fidelity measurement of what QoE was being delivered, high-frequency control to deliver it, and new architectures that appropriately join these together.
This QoE control is a paradigm change. Today the radio people construct a bandwidth supply, and the packet people chop up whatever is there, using whatever transport protocols they inherited from the IETF.
The future is a demand-led model that is the antithesis of the IETF’s “rough consensus and running code” approach. That means a deep rethink because at present the radio folk are running the show, as they have always done. It’s a supply-led industry.
The problem has to be reframed as a distributed computing one that makes the radio subservient to the computational outcome. That’s going to ruffle a lot of feathers and upset a lot of power structures. The limiting factor in my experience is always human, never technical.
The alternative is that 5G gets stuck between two mutually incompatible goals, and serves neither well. Then the whole ecosystem eventually gets bypassed in the 2020s, say by an IoT specialist player being bought by an Amazon, much as the iPhone overtook the handset space a decade ago.
Couldn’t ever happen? Ask him…
Written by Martin Geddes, Founder, Martin Geddes Consulting Ltd
Follow CircleID on Twitter
Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml
Big data and cloud migrations are reshaping the datacenter as more servers are linked and bandwidth capacity joins scalability as an essential requirement. Those needs are creating opportunities for network gear vendors as datacenters make the jump to bigger IPv6 pipes to accommodate data-driven applications and broader use of analytics.
Among them is Chinese switch maker H3C Technologies, which is claiming a record for switch performance in recent benchmark testing that sought to replicate real-world datacenter operations. Spirent Communications recently ran what it claimed was the highest-density 100-gigabit datacenter-switching test in conjunction with H3C, “moderated” by the independent test lab Network Test.
Spirent and H3C claimed the results of the 100G Ethernet networking trial represent nothing less than a “high-water mark” for datacenter networking. Among the findings was stable (“lossless”) performance for both IPv4 and IPv6 network traffic at most frame rates using a routing protocol and stressful “fully meshed traffic.”
“No task is more important for a datacenter core switch than moving traffic at maximum speed with zero frame loss,” also known as dropped packets, the tester noted.
Read the full story at sister web site EnterpriseTech.com.
Read more here:: www.datanami.com/feed/
By Hector Rios
Louisiana State University (LSU) has been supporting IPv6 since 2008. Our first exposure was at an Internet2 Member meeting held in New Orleans. LSU was the host and assisted in providing various network requirements. IPv6 was one of them, but even though it ended up not being used, it did provide the catalyst that enabled the eventual deployment on the Baton Rouge campus.
Implementing IPv6 on the LSU campus first required some initial research and testing. As network engineers, we first had to get up to speed on the technology: how it operated, how it differed from IPv4, what security concerns it raised, and how it could easily be deployed. In addition, we had to become familiar with the capabilities of our existing network infrastructure, determine whether and to what degree IPv6 was supported, and build an inventory. This exercise provided valuable information that allowed us to determine areas where we could confidently deploy IPv6, and it also gave us a list of equipment that completely or partially lacked IPv6 support. Among the features we explored were OSPFv3, MBGP, HSRP, First Hop Security, multicast, wireless, Network Access Control (NAC), DHCP, DNS, NMS, ACLs, and firewall rules, to name a few. This list was then incorporated into our network life cycle and strategic plans to ensure that those pieces of equipment or software would eventually be upgraded.
After our initial research and testing, it was decided that the best deployment model would be dual-stack with stateless address autoconfiguration (SLAAC). To start at a small scale, IPv6 was first deployed within LSU’s Information Technology Services (ITS) building. Users were notified in advance of this new capability and documentation was made available so they could become familiar with the transition. This approach gave both network engineers and users the opportunity to “play” with IPv6, but more importantly, it allowed us to learn even more about what we needed to do to support it at a larger scale. Some of the lessons learned during this phase concerned caveats in different operating systems. For example, Windows XP required IPv6 to be explicitly installed and activated; Windows Vista and 7 required temporary addresses (privacy extensions) to be disabled. In addition, we were also able to further refine our firewall policies and security measures at the access, distribution, and core layers.
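As a sketch of what SLAAC does under the hood: with the classic modified EUI-64 scheme (used before privacy extensions became the default), a host derives its 64-bit interface identifier from its MAC address and appends it to the /64 prefix advertised by the router. A minimal illustration in Python (the MAC address is made up):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier from a MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                 # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]      # insert FF:FE in the middle
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

# A host with this MAC on a network advertising prefix 2001:db8:1::/64
# would autoconfigure the address 2001:db8:1:0:21b:44ff:fe11:3ab7.
print(eui64_interface_id("00:1b:44:11:3a:b7"))  # -> 21b:44ff:fe11:3ab7
```

Privacy extensions replace this MAC-derived identifier with a randomized one, which is why Windows Vista and 7 needed them disabled to get stable, predictable addresses.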
In a period of about two years, we had learned so much about IPv6 that we were ready to deploy it campus wide. In September of 2010, the dual-stack network was expanded to the entire campus, including our wireless network and VPN. By then, all relevant information for users as well as operators had been properly documented and made available. To allow for testing, we stood up an IPv6 site (ipv6.lsu.edu) so users could confirm that IPv6 was working correctly. In addition, we worked with some departments to encourage them to enable their main websites with IPv6 support. This allowed them to get involved in the process, become more aware of IPv6 and the capabilities of not only their servers, but anything that might be IPv6 capable, and finally build the same curiosity we had when we began our journey.
Eventually we participated in both World IPv6 Day and subsequently in World IPv6 Launch Day. This allowed us to provide the LSU community with a context by which they could see the reach and importance of IPv6 at a worldwide level as well as to promote it in whatever capacity we could.
IPv6 address space is big. Really big
In the initial phases of our deployment, we were still not fully convinced about our addressing scheme. The main issue was that we were so used to working with IPv4 and being conservative that we hadn’t grasped the immensity of the IPv6 space. We knew we needed a more flexible scheme that would allow for easier deployment, management and identification. Once we understood our requirements, we came up with a better solution. Our IPv6 space was broken down into groups of IP spaces dedicated to specific areas of the campus, such as our residential network, research networks, remote sites, and future campus sites. Further subdivisions took growth into account and enabled uniformity. Finally, we embedded information into IPv6 prefixes so we could easily identify networks by router and VLAN ID. The result was a scheme that led to a more informative address space that was easier to deploy and troubleshoot.
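A minimal sketch of that kind of scheme, assuming a hypothetical /48 allocation with 4 bits for a router number and 12 bits for the VLAN ID packed into the 16 subnet bits (the prefix and field widths are illustrative, not LSU’s actual plan):

```python
import ipaddress

SITE_PREFIX = ipaddress.IPv6Network("2001:db8::/48")  # hypothetical allocation

def subnet_for(router_id: int, vlan_id: int) -> ipaddress.IPv6Network:
    """Embed a router number (4 bits) and VLAN ID (12 bits) into the 16
    subnet bits of a /48, yielding a /64 readable at a glance."""
    if not (0 <= router_id < 16 and 0 <= vlan_id < 4096):
        raise ValueError("router_id fits in 4 bits, vlan_id in 12 bits")
    subnet_bits = (router_id << 12) | vlan_id
    base = int(SITE_PREFIX.network_address) | (subnet_bits << 64)
    return ipaddress.IPv6Network((base, 64))

# Router 2, VLAN 0x310: the fourth hextet spells out both fields.
print(subnet_for(router_id=2, vlan_id=0x310))  # -> 2001:db8:0:2310::/64
```

With a scheme like this, an engineer reading `2001:db8:0:2310::/64` in a log or traceroute immediately knows which router and VLAN are involved.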
Workarounds. There will be some.
Realistically, there are many challenges with deploying IPv6. For example, early on at our university, we had a mechanism to control access to the network. This mechanism relied on both DHCP and DNS, and required users to register their MAC address via a registration portal. When we started implementing IPv6, we quickly realized we were going to automatically provide network access to our users even if they were not registered because this registration relied solely on IPv4 and could not be easily ported to IPv6. The workaround for this issue was to only provide user devices with an IPv4 DNS server address so we could serve both A and AAAA records and still control network access.
IPv6 global unicast prefixes are globally routable. Within the LSU network, we had subnets that were both publicly and privately addressed; some privately addressed subnets had access to the Internet via NAT, others didn’t. Since our initial approach was to assign a global prefix to every subnet (we did not want to use Unique Local Addresses), this opened up Internet access for some devices that previously did not have it. We addressed this by ensuring that the proper firewall rules were in place to prevent Internet access for devices that didn’t need it. In some instances, IPv6 was not enabled at all.
Even though IPv6 support has matured over the years, interesting bugs and caveats are still common. During one of our router implementations we ran into an interesting IPv6 issue. The problem was an apparent limitation on how to secure SNMP on IPv6. Specifically, it seemed as though the router was unable to support IPv6 ACLs to control SNMP access. After working with our vendor, the workaround was to create an ACL for IPv4 and IPv6 with the same name. This solved the issue.
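As an illustration, the workaround amounted to defining ACLs for both address families under the same name and referencing that name from the SNMP configuration. An IOS-style sketch (the names and addresses are hypothetical, not LSU’s actual configuration, and the exact syntax varies by vendor and release):

```
ip access-list standard SNMP-MGMT
 permit 192.0.2.0 0.0.0.255
!
ipv6 access-list SNMP-MGMT
 permit ipv6 2001:db8:100::/48 any
!
snmp-server community example-ro RO SNMP-MGMT
```

Because the router resolved the ACL reference by name, supplying matching IPv4 and IPv6 definitions under one name restricted SNMP access over both protocols.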
Who needs to be involved?
The initiative and vision to support IPv6 at an early stage started with the networking team because we had knowledge of the technology and understood its importance and impact. The challenge was translating that knowledge into something that key university stakeholders understood and were able to support. When you bring something like IPv6 to IT executives, it can be difficult to demonstrate tangible benefits, especially when there are other priorities with a wider and more palpable economic impact. When you raise concerns about products and vendors lacking IPv6 support with people who don’t understand its importance, they tend to push it aside and not consider it a priority. IPv6 is an “under-the-hood” technology. Some stakeholders don’t care much about the technicalities of the engine specs; they just want the engine to move things from one place to another, fast and reliably. This is especially true when there is already an older engine (IPv4) that has been running for quite a while and seems to be doing fine. However, it is incumbent upon network engineers to fully understand the issues and concerns with IPv4 (depletion, NAT) and to get buy-in at the highest level of the IT organization. That way, a strategic plan and roadmap can be developed that ensures everybody is on board and follows it.
It is important to understand that IPv6 is not a switch that can be simply turned on when needed. It requires careful planning and it affects everything from the very applications that users interact with to the devices that process and move packets. Ignoring it will not only work against you but could potentially put you at a disadvantage. Do you know if your critical applications currently support or have IPv6 in their roadmap? Talk to your vendors. Even today, we still meet vendors that have no understanding of IPv6 or have no plans to support it.
It’s not as hard as you think
Deploying IPv6 is not as hard as people think. Yes, there are many new things to learn and challenges to overcome. Engineers need proper training and practice, but sources for education abound, both free and paid. Unlike when LSU first got involved with IPv6, today a lot of technology, applications and services already support it. But you must do your homework and ensure you have a clear understanding of your requirements, and, more importantly, you must test those requirements to make sure they are actually met. In a way, here at LSU it felt like we were beta testers for a while, but we persevered by working with vendors and pressuring them to address our issues. If we were to implement IPv6 today, I know it would be a lot easier because the technology has had time to mature. Many solutions now come with built-in IPv6 support, in almost all instances at no additional cost. So there are fewer and fewer excuses not to support IPv6.
If you’re thinking about IPv6, I suggest you get your hands dirty. Start with what you have, see how it works, and determine what needs to be upgraded and what needs to be replaced. Determine who your stakeholders are and engage with them to provide education and get them on board to test with you. As you learn more, you’ll become more comfortable and will quickly realize that it’s not as hard as you think. At LSU we’ve been doing it for so long that it is now second nature. We look forward to the day when IPv4 will sunset. Adopt IPv6 today and you’ll make that happen that much faster.
Read more here:: teamarin.net/feed/
By Deepak Puri
Many industrial IoT systems have open doors that create unintended vulnerabilities.
What information could be exposed by open communications protocols? How do hackers identify vulnerable systems? What security resources are available? How do IoT firewalls protect against such threats?
TCP Port 502 vulnerabilities
Many industrial systems use TCP port 502 (the standard Modbus port), which allows two hosts to establish a connection and exchange streams of data. TCP guarantees that data will be delivered, and that packets will arrive on port 502 in the same order in which they were sent. This creates the risk that remote attackers could install arbitrary firmware updates via Modbus function code 125 sent to TCP port 502. Scans from services such as Shodan identify systems with an open TCP port 502 that could be vulnerable.
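A minimal sketch of how such scans identify candidates: they simply attempt a TCP connection to port 502 and note which hosts accept it. The snippet below is illustrative only, and should only ever be run against devices you are authorized to audit:

```python
import socket

def port_502_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts TCP connections on port 502 (Modbus).
    Only run this against devices you are authorized to audit."""
    try:
        with socket.create_connection((host, 502), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Example: audit a PLC on your own network (the address is illustrative).
# if port_502_open("192.0.2.10"):
#     print("Port 502 open -- verify the device is firewalled")
```

A host answering on port 502 is not necessarily vulnerable, but it is exposed; an IoT firewall or ACL that blocks unsolicited connections to this port closes off the attack path described above.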
Read more here:: www.networkworld.com/category/lan-wan/index.rss
Red Hat delivered an update to its Linux platforms that addresses how packets are transferred, container management, IoT devices, and securing IT environments.
Read more here:: www.itbusinessedge.com/feeds
Juniper Networks has found and mostly patched a flaw in the way the firmware on its routers processes IPv6 traffic, which allowed malicious users to mount denial of service attacks.
The vulnerability, which appears to be common to all devices processing IPv6 addresses, meant that purposely crafted neighbour discovery packets could be used to flood the routing engine from a remote, unauthenticated source, causing it to stop processing legitimate traffic and leading to a DDoS condition.
According to Juniper’s advisory report:
Read more here:: feeds.arstechnica.com/arstechnica/index?format=xml
Cisco today released a high-level alert warning about a vulnerability in IPv6 packet processing functions of multiple Cisco products that could allow an unauthenticated, remote attacker to cause an affected device to stop processing IPv6 traffic, leading to a denial of service (DoS) condition on the device.
Cisco states: “The vulnerability is due to insufficient processing logic for crafted IPv6 packets that are sent to an affected device. An attacker could exploit this vulnerability by sending crafted IPv6 Neighbor Discovery packets to an affected device for processing. A successful exploit could allow the attacker to cause the device to stop processing IPv6 traffic, leading to a DoS condition on the device.”
The company has also pointed out that the vulnerability is not Cisco specific and any IPv6 processing unit not capable of dropping such packets early in the processing path or in hardware is affected by this vulnerability.
There are no workarounds that address this vulnerability as of yet and customers are advised to rely on external mitigation techniques.
Follow CircleID on Twitter
Read more here:: feeds.circleid.com/cid_sections/news?format=xml
By Geoff Huston
Geoff returns to the subject of IP packet fragmentation, this time looking at how IPv6 has changed the behaviour of packet fragmentation and discussing the concern of whether IPv6 can handle big packets.
Read more here:: blog.apnic.net/feed/