10 Things Every CIO Must Know about Their Data Centers

By News Aggregator

By Tim Kittila

While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. They all have to do with data center outages, both past and future. These incidents carry significant risk of negative impact on the entire organization’s performance and profitability, both of which fall comfortably within a typical CIO’s scope of responsibilities.

CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:

  1. If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants will usually help, both in terms of tapping into their expertise and in terms of having a new set of eyes focus on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to potential ways to mitigate them.
  2. Has your company ever experienced a significant data center outage? How do you know it was significant? Key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once they are defined, begin to track, measure, and communicate against them within your organization.
  3. Which applications are the most critical to your organization, and how are you protecting them from outages? The lazy, uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down in a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
  4. How do you measure the cost of a data center outage? Having this story clear can help the business make better decisions. By developing a model for determining outage costs and weighing them against the cost of mitigating the risk, the business can make more informed decisions (a minimal cost-model sketch follows this list). Total outage cost can be nebulous, but spending the time to get as close to it as possible and getting executive buy-in on that story will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choices seem simpler than they are; sometimes an outage may just mean a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real.
  5. What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the more difficult-to-quantify costs, such as loss of productivity, loss of competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
  6. Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is to have clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
  7. Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One of the common problems we’ve observed is a corporate physical security system that needs to operate within the data center but under different usage requirements than the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
  8. Do you have a structured, ongoing process for determining which applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and the resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be decided for each application based on the reliability, performance, and security requirements of the business.
  9. What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of input devices to track and monitor. As we have seen over the years, many organizations monitor activity on the application stack while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that operates the organization’s IT stack. Leaving them unprotected increases the risk of data center outages.
  10. What is your Business Continuity/Disaster Recovery process? And the follow-up questions: Does your entire staff know where they need to be and what they need to do if you have a critical and unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: most often personnel don’t know about these processes, and if they do, they haven’t practiced them enough to be alert and cognizant of what to do when a major event actually happens.
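
To make question 4 concrete, here is a minimal sketch of an outage cost model. Every figure and parameter name below is a hypothetical placeholder, not a benchmark; the point is to plug in numbers your finance team will stand behind.

```python
# Minimal outage-cost sketch. All inputs are invented placeholders; the goal is a
# defensible estimate the business believes, not a precise one.

def outage_cost(hours_down,
                revenue_per_hour,        # top-line revenue lost while systems are down
                staff_affected,
                loaded_hourly_rate,      # average fully-loaded cost per affected employee
                productivity_loss=0.5,   # fraction of affected staff time actually lost
                recovery_cost=0.0,       # overtime, vendor call-outs, expedited parts
                soft_costs=0.0):         # churn, fines, reputation (estimate conservatively)
    lost_revenue = hours_down * revenue_per_hour
    lost_productivity = hours_down * staff_affected * loaded_hourly_rate * productivity_loss
    return lost_revenue + lost_productivity + recovery_cost + soft_costs

# Example: a 4-hour outage at a mid-size e-commerce operation (all numbers invented).
cost = outage_cost(hours_down=4, revenue_per_hour=25000,
                   staff_affected=120, loaded_hourly_rate=60,
                   recovery_cost=15000, soft_costs=20000)
print("Estimated outage cost: ${:,.0f}".format(cost))
```

Comparing that estimate against the price of the generator or UPS upgrade is what turns the engineering request into a business decision.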

Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

Read more here:: datacenterknowledge.com/feed/

The post 10 Things Every CIO Must Know about Their Data Centers appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

GigaSpaces Closes Analytics-App Gap With Spark

By News Aggregator

By George Leopold

Data analytics and cloud vendors are rushing to support enhancements to the latest version of Apache Spark that boost streaming performance while adding new features such as data set APIs and support for continuous, real-time applications.

In-memory computing specialist GigaSpaces this week joined IBM and others jumping on the Spark 2.1 bandwagon with the rollout of an upgraded transactional and analytical processing platform. The company said Wednesday (July 19) the latest version of its InsightEdge platform leverages Spark data science and analytics capabilities while combining the in-memory analytics juggernaut with its open source in-memory computing data grid.

The combination provides a distributed data store based on RAM and solid-state drives, the New York-based company added.

The upgrade was prompted by the new Spark capabilities along with growing market demand for real-time, scalable analytics as adoption of fast data analytics grows.

The in-memory computing platform combines analytical and transactional workloads in an open source software stack, and supports streaming applications such as Internet of Things (IoT) sensor data ingestion.
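
The article does not show InsightEdge’s own API, but as a rough illustration of the kind of Spark 2.1 Structured Streaming job such platforms build on, here is a generic sketch that ingests JSON sensor readings from a hypothetical Kafka topic and computes per-device averages. The broker address, topic name, and schema are all invented for the example.

```python
# Generic Spark 2.1 Structured Streaming sketch -- not the InsightEdge API.
# Assumes a Kafka broker at localhost:9092, a topic "sensors" carrying JSON records
# like {"device": "pump-1", "temp": 71.3}, and the spark-sql-kafka-0-10 package on
# the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col, avg
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("iot-sensor-stream").getOrCreate()

schema = StructType([
    StructField("device", StringType()),
    StructField("temp", DoubleType()),
])

# Read the raw Kafka stream and parse the JSON payload into columns.
readings = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "sensors")
            .load()
            .select(from_json(col("value").cast("string"), schema).alias("r"))
            .select("r.*"))

# Rolling per-device average temperature -- the sort of aggregate a predictive
# maintenance or anomaly detection job would consume downstream.
averages = readings.groupBy("device").agg(avg("temp").alias("avg_temp"))

query = (averages.writeStream
         .outputMode("complete")   # complete mode re-emits the full aggregate table
         .format("console")
         .start())
query.awaitTermination()
```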

The analytics company said it is working with Magic Software (NASDAQ and TASE: MGIC), an application development and business integration software vendor, on an IoT project designed to speed ingestion of telemetry data using Magic’s integration and intelligence engine.

The partners said the sensor data integration effort targets IoT applications such as predictive maintenance and anomaly detection where data is ingested, prepped, correlated and merged. Data is then transferred from the GigaSpaces platform to Magic’s xpi engine that serves as the orchestrator for predictive and other analytics tasks.

Along with the IoT partnership and combined transactional and analytical processing, the Spark-powered in-memory computing platform also offers machine learning and geospatial processing capabilities along with multi-tier data storage for streaming analytics workloads, the company said.

Ali Hodroj, GigaSpaces’ vice president of products and strategies, said the platform upgrade responds to the growing enterprise requirement to integrate applications and data science infrastructure.

“Many organizations are simply not large enough to justify spending valuable time, resources and money building, managing, and maintaining an on-premises data science infrastructure,” Hodroj asserted in a blog post. “While some can migrate to the cloud to commoditize their infrastructure, those who cannot are challenged with the high costs and complexity of cluster-sprawling big data deployments.”

To reduce latency, GigaSpaces and others are embracing Spark 2.1 fast data analytics, which was released late last year. (Spark 2.2 was released earlier this month.)

Vendors such as GigaSpaces are offering tighter collaboration between DevOps and data science teams via a unified application and analytics platform. Others, including IBM, are leveraging Spark 2.1 for Hadoop and stream processing distributions.

IBM (NYSE: IBM) said this week the latest version of its SQL platform targets enterprise requirements for data lakes by integrating Spark 2.1 on the Hortonworks Data Platform, the company’s Hadoop distribution. It also connects with Hortonworks DataFlow, the stream-processing platform.

Recent items:

IBM Bolsters Spark Ties with Latest SQL Engine

In-Memory Analytics to Boost Flight Ops For Major US Airline

The post GigaSpaces Closes Analytics-App Gap With Spark appeared first on Datanami.

Read more here:: www.datanami.com/feed/

The post GigaSpaces Closes Analytics-App Gap With Spark appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Smart thermostats gain traction in Europe and North America

By News Aggregator

By Sheetal Kumbhar

Berg Insight, the M2M/IoT market research provider, released new findings about the smart thermostat market. The number of North American and European homes with a smart thermostat grew by 67% to 10.1 million in 2016. The North American market recorded a 64% growth in the installed base of smart thermostats to 7.8 million. In Europe, […]

The post Smart thermostats gain traction in Europe and North America appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

The post Smart thermostats gain traction in Europe and North America appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Deploy360@IETF99, Day 3: IPv6 & TLS

By News Aggregator

By Kevin Meynell

After a packed first couple of days, Wednesday at IETF 99 in Prague is a bit quieter for us. Each day we’re bringing you blog posts pointing out what Deploy360 will be focusing on.

There are just three working groups to follow today, starting at 09.30 CEST/UTC+2 with TLS. A couple of very important drafts are up for discussion though, with both the TLS 1.3 and DTLS 1.3 specifications in last call. There are also a couple of other interesting drafts relating to the DANE record and DNSSEC authentication chain extension for TLS, and to Data Center use of Static DH in TLS 1.3.


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


Alternatively, there’s DMM, which will be discussing at least one IPv6-relevant draft on the Applicability of Segment Routing IPv6 to the user-plane of mobile networks.

During the first afternoon session at 13.30 CEST/UTC+2, there’s DHC. This will continue to discuss four DHCPv6 related drafts, as well as hear about the DHCPv6 deployment experiences at Comcast.

Don’t forget that from 17.10 CEST/UTC+2 onwards will be the IETF Plenary Session. This is being held in Congress Hall I/II.

For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.

Relevant Working Groups

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post Deploy360@IETF99, Day 3: IPv6 & TLS appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Nation Scale Internet Filtering — Do’s and Don’ts

By News Aggregator

By Paul Vixie

If a national government wants to prevent certain kinds of Internet communication inside its borders, the costs can be extreme and success will never be more than partial. VPN and tunnel technologies will keep improving as long as there is demand, and filtering or blocking out every such technology will be a never-ending game of one-upmanship. Everyone knows and will always know that determined Internet users will find a way to get to what they want, but sometimes the symbolic message is more important than the operational results. In this article, I will describe some current and prior approaches to this problem, and also make some recommendations for doing nation-state Internet filtering in the most responsible and constructive manner.

History, Background, and SOPA

For many years, China’s so-called Great Firewall has mostly stopped most law-abiding people including both citizens and visitors from accessing most of the Internet content that the Chinese government does not approve of. As a frequent visitor to China, I find it a little odd that my Verizon Wireless data roaming is implemented as a tunnel back to the USA, and is therefore unfiltered. Whereas, when I’m on a local WiFi network, I’m behind the Great Firewall, unable to access Facebook, Twitter, and so on. The downside of China’s approach is that I’ve been slow to expand my business there — I will not break the law, and I need my employees to have access to the entire Internet.

Another example is Italy’s filtering policy regarding unlicensed (non-taxpaying) online gambling, which was blocked not by a national “Great Firewall” but rather by SOPA-style DNS filtering mandated for Italian ISPs. The visible result was an uptick in the use of Google DNS (8.8.8.8 and 8.8.4.4) by Italian gamblers, and if there was also an increase in gambling tax revenue, that was not widely reported. The downside here is the visible cracks in Italian society — many Italians apparently do not trust their own government. Furthermore, in 2013 the European Union ruled that this kind of filtering was a violation of EU policy.

In Turkey up until 2016, the government had similar protections in place, not about gambling but rather pornography and terrorism and anti-Islamic hate speech. The filtering was widely respected, showing that the Turkish people and their government were more closely aligned at that time than was evident during the Italian experiment. It was possible for Turkish internet users to opt-out of the government’s Internet filtering regime, but such opt-out requests were uncommon. This fit the Internet’s cooperation-based foundation perfectly: where interests are aligned, cooperation is possible, but where interests are not aligned, unilateral mandates are never completely effective.

In the years since the SOPA debacle in the United States, I’ve made it my priority to discuss with the entertainment and luxury goods industries the business and technical problems posed to them by the Internet. Away from the cameras, most executives freely admit that it’s not possible to prevent determined users from reaching any part of the Internet they might seek, including so-called “pirate” sites which may even be “dedicated to infringement”. I learned, however, that there is a class of buyers of music, movies, and luxury goods who are not interested in infringement per se, and who are often simply misled by “pirate” Internet sites that pretend to be legitimate. One estimate was that only one-third of commercial music is bought legally, and the remaining two-thirds is roughly divided between dedicated (one-third) and accidental (one-third) infringement. If so, then getting the accidental infringers who comprise one-third of the market to buy their music legally wouldn’t change the cost of music for those buyers, but could raise the music industry’s revenues by 100%. We should all think of that as a “win-win-win” possibility.
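
A quick back-of-the-envelope check of that revenue claim, using only the one-third split quoted above:

```python
# Back-of-the-envelope check of the claim above, using the article's own split.
legal = 1 / 3        # share of the market currently buying legally
accidental = 1 / 3   # accidental infringers who might be converted to legal buyers
new_legal = legal + accidental
print("Revenue change: {:.0f}%".format(100 * (new_legal - legal) / legal))  # -> 100%
```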

Speaking for myself, I’d rather live and act within the law, respecting intellectual property rights, and using my so-called “dollar votes” to encourage more commercial art to be produced. I fought SOPA not because I believed that content somehow “wanted to be free”, but because this kind of filtering will only be effective where the end-users see it as a benefit — see it, in other words, as aligned with their interests. That’s why I co-invented the DNS RPZ firewall system back in 2010, which allows security policy subscribers to automatically connect to their providers in near-realtime, and to then cooperate on wide-scale filtering of DNS content based on a shared security policy. This is the technology that SOPA would have used, except, SOPA would have been widely bypassed, and where not bypassed, would have prohibited DNSSEC deployment. American Internet users are more like Italians than Turks — they don’t want their government telling them what they can’t do.

I think, though, that every government ought to offer this kind of DNS filtering, so that any Internet user in that country who wants to see only the subset of the Internet considered safe by their national government can get that behavior as a service. Some users, including me, would be happy to follow such policy advice even though we’d fight against any similar policy mandate. In my case, I’d be willing to pay extra to get this kind of filtering. My nation’s government invests a lot of time and money identifying illegal web sites, whether dedicated to terrorism, or infringement, or whatever. I’d like them to publish their findings in real time using an open and unencumbered protocol like DNS RPZ, so that those of us who want to avoid those varieties of bad stuff can voluntarily do so. In fact, the entertainment industry could do the same — because I don’t want to be an accidental infringer either.

Future, Foreground, and Specific Approaches

While human ingenuity can sometimes seem boundless, a nation-state exerting any kind of control over Internet reachability within its borders has only three broad choices available to them.

First, the Great Firewall approach. In this scenario, the government is on-path and can witness, modify, or insert traffic directly. This is costly in human resources, services, equipment, electric power, and prestige. It’s necessary for every in-country Internet Service Provider that wants an out-of-country connection to work directly with government agencies or agents to ensure that real-time visibility and control are among the government’s powers. This may require that all Internet border crossings occur in some central location, or it may require that the government’s surveillance and traffic modification capabilities be installed in multiple discrete locations. In addition to hard costs, there will be soft costs like errors and omissions which induce unexplained failures. The inevitable effects on the nation’s economy must be considered, since a “Great Firewall” approach must by definition wall the country off from mainstream human ideas, with associated chilling effects on outside investment. Finally, this approach, like all access policies, can be bypassed by a determined-enough end-user who is willing to ignore the law. The “Great Firewall” approach will maximize the bypass costs, having first maximized deployment costs.

Second, a distributed announcement approach using Internet Protocol address-level firewalls. Every user and every service on the Internet has to have one or more IP addresses from which to send, or at which to receive, packets to or from other Internet participants. While the user-side IP addresses tend to be migratory and temporary in nature due to mobile users or address-pool sharing, the server-side IP addresses tend to be well known, pre-announced, and predictable. If a national government can compel all of its Internet Service Providers to listen for “IP address firewall” configuration information from a government agency, and to program their own local firewalls in accordance with the government’s then-current access policies, then it would have the effect of making distant (out-of-country) services deliberately unreachable by in-country users. Like all policy efforts, this can be bypassed, either by in-country (user) effort, or by out-of-country (service) provider effort, or by middle-man proxy or VPN provider effort. Bypass will be easier than in the Great Firewall approach described above, but a strong advantage of this approach is that the government does not have to be on-path, and so everyone’s deployment costs are considerably lower.
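
As a rough sketch of how an ISP might consume such a distributed announcement, the following turns a hypothetical government-published list of server addresses into firewall drop rules. The feed URL and file format are invented; a real deployment would use an authenticated, operator-specific feed and the provider’s own firewall tooling rather than printed iptables strings.

```python
# Illustrative only: render drop rules from a (hypothetical) published blocklist.
import urllib.request

FEED_URL = "https://policy.example.gov/blocked-ips.txt"  # hypothetical feed

def fetch_blocklist(url):
    """Return a list of IP/CIDR strings, one per non-comment line."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return [l.strip() for l in lines if l.strip() and not l.startswith("#")]

def to_rules(addresses):
    """Render one DROP rule per address; printing keeps the sketch side-effect free."""
    return ["iptables -A FORWARD -d %s -j DROP" % addr for addr in addresses]

if __name__ == "__main__":
    for rule in to_rules(fetch_blocklist(FEED_URL)):
        print(rule)
```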

Third and finally, a distributed announcement approach using Domain Name System (DNS)-level firewalls. Every Internet access requires at least one DNS lookup, and these lookups can be interrupted according to policy if the end-user and Internet Service Provider (ISP) are willing to cooperate on the matter. A policy-based firewall operating at the DNS level can interrupt communications based on several possible criteria: either a “domain name” can be poisoned, or a “name server”, or an “address result”. In each case, the DNS element to be poisoned has to be discovered and advertised in advance, exactly as in the “address-level firewall” and “Great Firewall” approaches described above. However, DNS lookups are far less frequent than packet-level transmissions, and so the deployment cost of a DNS-level firewall will be far lower than for a packet-level firewall. A DNS firewall can be constructed using off-the-shelf “open source” software using the license-free “DNS Response Policy Zone” (DNS RPZ) technology first announced in 2010. The DNS RPZ system allows an unlimited number of DNS operators (“subscribers”) to synchronize their DNS firewall policy to one or more “providers” such as national governments or industry trade associations. DNS firewalls offer the greatest ease of bypass, so much so that it’s better to say that “end-user cooperation is assumed,” which could be a feature rather than a bug.
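
In production this is done with DNS RPZ zones consumed by resolver software such as BIND, but a toy model of the three trigger types described above (domain name, name server, address result) looks roughly like the sketch below. The policy entries are invented examples, not a real feed.

```python
# Toy model of DNS-firewall triggers: poison by qname, by name server, or by answer IP.
# Real deployments express these as RPZ zone records consumed by the resolver.
POLICY = {
    "qname":     {"bad.example.com."},        # block lookups for this name
    "nsdname":   {"ns1.rogue-dns.example."},  # block anything served by this name server
    "ip_answer": {"192.0.2.66"},              # block answers resolving to this address
}

def check(qname, nameservers, answers, policy=POLICY):
    """Return 'BLOCK' if any trigger matches, else 'PASS'."""
    if qname in policy["qname"]:
        return "BLOCK"
    if any(ns in policy["nsdname"] for ns in nameservers):
        return "BLOCK"
    if any(ip in policy["ip_answer"] for ip in answers):
        return "BLOCK"
    return "PASS"

print(check("bad.example.com.", ["ns.good.example."], ["198.51.100.7"]))   # BLOCK
print(check("shop.example.net.", ["ns.good.example."], ["198.51.100.7"]))  # PASS
```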

Conclusion

A national government that wants to make a difference in the lived Internet experience of its citizens should consider not just the hard deployment and operational costs, but also the soft costs to the overall economy and to national prestige, and especially what symbolic message is intended. If safety as defined by the government is to be seen as a goal it shares with its citizens and that will be implemented using methods and policies agreed to by its citizens, then ease of bypass should not be a primary consideration. Rather, ease of participation and transparency of operation will be the most important ingredients for success.

Written by Paul Vixie, CEO, Farsight Security

Follow CircleID on Twitter

More under: Access Providers, Censorship, DNS, Intellectual Property, Internet Governance, Networks, Policy & Regulation

Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml

The post Nation Scale Internet Filtering — Do’s and Don’ts appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

RFC 8200 – IPv6 has been standardized

By News Aggregator

By Aftab Siddiqui

On 14 July 2017, with the publication of RFC 8200, the IETF announced that Internet Protocol Version 6 (IPv6) had become the latest Internet Standard. Great news for IPv6, but if you’re surprised and confused by this, then you’re probably not the only one!

The IPv6 specification that we’ve been studying and religiously following for more than 18 years was defined in RFC 2460, along with several other RFCs [RFC 5095, RFC 5722, RFC 5871, RFC 6437, RFC 6564, RFC 6935, RFC 6946, RFC 7045, RFC 7112]. However, RFC 2460 was only ever a Draft Standard, and only now moves to being a full Internet Standard.

The IETF decided to make this change as there are various RFCs defining the IPv6 specification, and it’s good to combine these along with the Errata into a single RFC. To further understand this issue, it’s necessary to first review the IETF Standardisation process.

As per RFC 2026, all work starts with Internet Drafts (I-Ds), which were and still are intended to be rough sketches of ideas, contributed as raw inputs to the IETF process and having a lifetime of no longer than 6 months, although they may be updated several times. An I-D may also be adopted by a Working Group and further refined, progressing through a number of iterations until (and only if) consensus is reached, when the specification goes through a review phase before being published as a Request for Comments (RFC).

Again with reference to RFC 2026, specifications that were intended to become Internet Standards evolved through a set of maturity levels known as the “Standards Track”, which itself has three classifications:

Proposed Standard – This generally means a specification is stable, has resolved known design choices, is believed to be well-understood, has received significant community review, and appears to enjoy enough community interest to be considered valuable. However, further experience might result in a change or even retraction of the specification before it advances. Neither implementation nor operational experience is usually required prior to the designation of a specification as a Proposed Standard.

Draft Standard – A specification from which at least two independent and interoperable implementations from different code bases have been developed, and for which sufficient successful operational experience has been obtained, may be elevated to the “Draft Standard” level. A Draft Standard must be well-understood and known to be quite stable, both in its semantics and as a basis for developing an implementation.

Internet Standard – A specification for which significant implementation and successful operational experience has been obtained may be elevated to the Internet Standard level. An Internet Standard (or just Standard) is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community. A specification that reaches the status of Standard is assigned a number in the STD series while retaining its RFC number.

In 2011, RFC 6410 was published as an update to RFC 2026 that replaced the three-tier ladder with a two-tier ladder. In this update, specifications become Internet Standards through a set of two maturity levels known as “Proposed Standard” and “Internet Standard” and hence the former “Draft Standard” level was abandoned with criteria provided for reclassifying these RFCs.

Any protocol or service that is currently at the abandoned Draft Standard maturity level will retain that classification, absent explicit actions as follows:

  1. A Draft Standard may be reclassified as an Internet Standard provided there are no errata against the specification that would cause a new implementation to fail to interoperate with deployed ones.
  2. The IESG may choose to reclassify any Draft Standard document as a Proposed Standard.

As of 1 June 2017, there were 81 RFCs at the “Draft Standard” level on the “Standards Track”, and RFC 2460 – Internet Protocol, Version 6 (IPv6) Specification – was one of these.

The IPv6 Maintenance (6MAN) Working Group, which is responsible for the maintenance, upkeep and advancement of the IPv6 protocol specifications and addressing architecture, started working on advancing the IPv6 core specifications towards an Internet Standard at IETF 93 in July 2015. The working group identified multiple RFCs to update, including RFC 2460, and decided to revise and re-classify it by incorporating updates from 9 other RFCs and 2 errata.

The first draft was published in August 2015 as “draft-ietf-6man-rfc2460bis”, and after several further changes, the final version was submitted for review in May 2017 as “draft-ietf-6man-rfc2460bis-13”. All of the changes from RFC 2460 are summarized in Appendix B [Page 36] and are ordered by the Internet Draft that initiated the change. The document has gone through extensive scrutiny in the 6MAN working group and there is broad support for this version to be published as an Internet Standard.

So really nothing changes, as RFC 8200 is a combined version of RFC 2460 along with other relevant RFCs and Errata.

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post RFC 8200 – IPv6 has been standardized appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Deploy360@IETF99, Day 1: IoT, IPv6 & SIDR

By News Aggregator

By Kevin Meynell

It’s another busy week at IETF 99 in Prague, and we’ll be bringing you daily blog posts that highlight what Deploy360 will be focused on during that day. And Monday sees a packed agenda with three working groups on the Internet-of-Things, a couple on routing, one on encryption, and an important IPv6 Maintenance WG session.

The day kicks off at 09.30 CEST/UTC+2 with 6MAN, and the big development is the move of the IPv6 specification to Internet Standard Status, as despite being widely deployed, IPv6 has remained a ‘Draft Standard’ since its original publication in 1998. There are also two working group drafts on updating the IPv6 Addressing Architecture as currently defined in RFC 4291, and on IPv6 Node Requirements as currently defined in RFC 6434. Other existing drafts up for discussion include recommendations on IPv6 address usage and on Route Information Options in Redirect Messages.

There are three new drafts being proposed, including one that covers scenarios when IPv6 hosts might not be able to properly detect that a network has changed IPv6 addressing and proposes changes to the Default Address Selection algorithm defined in RFC6724; another that proposes a mechanism for IPv6 hosts to retrieve additional information about network access through multiple interfaces; whilst the remaining draft defines the AERO address for use by mobile networks with a tethered network of IoT devices requiring a unique link-local address after receiving a delegated prefix.


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


Running in parallel is ACE which is developing authentication and authorization mechanisms for accessing resources on network nodes with limited CPU, memory and power. Amongst the ten drafts on the agenda, there’s one proposing a DTLS profile for ACE.

Also at the same time is CURDLE which is chartered to add cryptographic mechanisms to some IETF protocols, and to make implementation requirements including deprecation of old algorithms. The agenda isn’t very comprehensive at the moment, but nine drafts were recently submitted to the IESG for publication, and what will certainly be discussed today is a draft on key change methods for SSH.

In the afternoon, Homenet is meeting from 13.30 CEST/UTC+2. This is developing protocols for residential networks based on IPv6, and will continue to discuss updated drafts relating to a name resolution and service discovery architecture for homenets, how the Babel routing protocol can be used in conjunction with the HNCP protocol in a Homenet scenario, and the use of .homenet as a special use top-level domain to replace .home. There are also three new drafts relating to the service discovery and registration aspects of Homenet.

Running in parallel is 6TiSCH. There will be summaries of the 1st F-Interop 6TiSCH Interoperability Event and OpenWSN Hackathon, followed by discussions on the updated drafts related to the 6top protocol that enables distributed scheduling, as well as a draft related to security functionality.

The later afternoon session sees SIDROPS meeting from 15.50 CEST/UTC+2. This is taking the technology developed by SIDR and is developing guidelines for the operation of SIDR-aware networks, as well as providing operational guidance on how to deploy and operate SIDR technologies in existing and new networks. One particularly interesting draft proposes to use blockchain technology to validate IP address delegation, whilst another describes an approach to validate the content of the RPKI certificate tree. A couple of other drafts aim to clarify existing approaches to RPKI validation.

Concluding the day is GROW during the evening session. This group looks at the operational problems associated with the IPv4 and IPv6 global routing systems, and whilst there’s no agenda for this meeting yet, four new and updated drafts were recently published on more gracefully shutting down BGP sessions, how to minimise the impact of maintenance on BGP sessions, and extensions to the BGP monitoring protocol.

For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.

Relevant Working Groups

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post Deploy360@IETF99, Day 1: IoT, IPv6 & SIDR appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

What Digital Transformation Means to Retailers | @ThingsExpo #DX #IoT #M2M

By News Aggregator

A recent BusinessWeek article titled “America’s Retailers Are Closing Stores Faster Than Ever” summarizes the epidemic that retailers are facing today (see Figure 1).
Retailers are closing stores at a record pace, and the driving force behind the acceleration in store closings is Amazon, which now accounts for over 50% of all online retail sales (see Figure 2).
What are retailers to do when the tricks and techniques that worked in the past just don’t work in today’s real-time, data- and analytics-driven business world? Business models that worked in a world that valued size and location quickly fall apart in a world where retailers are leveraging customer, product and operational data and analytics to provide a highly personalized shopping experience and anticipate customers’ shopping desires to provide a wider range of highly relevant, easily accessible products (think Amazon 1-Click®).

Read more here:: iot.sys-con.com/index.rss

The post What Digital Transformation Means to Retailers | @ThingsExpo #DX #IoT #M2M appeared on IPv6.net.

Read more here:: IPv6 News Aggregator

Our Hot Topics @ IETF 99

By News Aggregator

By Kevin Meynell

Next week is IETF 99 in Prague which will be the fourth time the IETF has been held in the city. The Deploy360 team will be represented by Megan Kruse and Dan York, along with ISOC’s Chief Internet Technology Officer Olaf Kolkman. We’ll once again be highlighting the latest IPv6, DNSSEC, Securing BGP and TLS related developments.

Our colleagues are planning to cover the following sessions, so please come and say hello!

Monday, 17 July 2017

Tuesday, 18 July 2017

Wednesday, 19 July 2017

Thursday, 20 July 2017

Friday, 21 July 2017

The Internet Society has also put together its latest Rough Guide to the IETF 99, and will again be covering wider developments over on the Tech Matters Blog. In particular, see:

  • Rough Guide to IETF 99: DNSSEC, DANE and DNS Security
  • Rough Guide to IETF 99: IPv6
  • Rough Guide to IETF 99: Internet Infrastructure Resilience
  • Rough Guide to IETF 99: The Internet of Things
  • Rough Guide to IETF 99: Trust, Identity, and Privacy

If you can’t get to Prague next week, you can attend remotely! Just visit the IETF 99 remote participation page or check out http://www.ietf.org/live/ for more options.

Read more here:: www.internetsociety.org/deploy360/blog/feed/

The post Our Hot Topics @ IETF 99 appeared on IPv6.net.

Read more here:: IPv6 News Aggregator