By Tim Kittila
While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. All of them have to do with data center outages, past and future. These incidents carry a significant risk of hurting the entire organization’s performance and profitability, both of which fall comfortably within a typical CIO’s scope of responsibilities.
CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:
- If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants usually helps, both in tapping into their expertise and in having a fresh set of eyes focus on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to potential ways to mitigate them.
- Has your company ever experienced a significant data center outage? How do you know it was significant? Key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once defined, begin to track, measure, and communicate these definitions within your organization.
- Which applications are the most critical ones to your organization, and how are you protecting them from outages? The lazy, uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down in a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
- How do you measure the cost of a data center outage? Having this story clear can help the business make better decisions. By developing a model for determining outage costs and weighing them against the cost of mitigating the risk, the business can make more informed decisions. Total outage cost can be nebulous, but spending the time to get as close to it as possible, and getting executive buy-in on that story, will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choices seem simpler than they are; sometimes an outage may just mean a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real.
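The weighing of expected outage cost against mitigation spend described above can be sketched as a simple model. Every figure and the crude decision rule below are hypothetical placeholders, not a prescribed methodology; substitute your organization’s own estimates:

```python
# A minimal outage-cost model. All numbers here are hypothetical
# placeholders for illustration only.

def expected_annual_outage_cost(outage_prob_per_year, revenue_loss_per_hour,
                                recovery_hours, soft_costs=0.0):
    """Expected annual cost of one outage scenario."""
    direct = revenue_loss_per_hour * recovery_hours
    return outage_prob_per_year * (direct + soft_costs)

def mitigation_is_justified(expected_cost, mitigation_annualized_cost):
    """Crude decision rule: mitigate when expected loss exceeds the spend."""
    return expected_cost > mitigation_annualized_cost

# Example: a 25% annual chance of a 4-hour outage at $50k/hour in lost
# sales, plus $100k in soft costs, weighed against an $80k/year UPS upgrade.
risk = expected_annual_outage_cost(0.25, 50_000, 4, soft_costs=100_000)
print(risk)                                   # 75000.0
print(mitigation_is_justified(risk, 80_000))  # False: upgrade costs more than expected loss
```

Note that the soft-cost term is exactly the part the warning above is about: inflate it, and the whole model loses credibility with the business.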
- What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the costs that are more difficult to quantify, such as loss of productivity, loss of competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
- Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is to have clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
- Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One of the common problems we’ve observed is when a corporate physical security system needs to operate within the data center but under different usage requirements than the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
- Do you have a structured, ongoing process for determining what applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be decided for each application based on reliability, performance, and security requirements of the business.
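One lightweight way to structure such a review is to score each placement option against the reliability, performance, and security requirements named above. The application, criteria, and scores below are invented purely for illustration:

```python
# A hypothetical scoring sketch for an annual application-placement review.
# The application name and all scores are illustrative only.

def best_placement(scores):
    """Pick the placement option with the highest total score.

    scores maps each option to its per-criterion scores, e.g.
    {"colocation": {"reliability": 4, "performance": 4, "security": 4}}.
    """
    return max(scores, key=lambda option: sum(scores[option].values()))

# Illustrative review of a single (fictional) ERP application:
erp_scores = {
    "on-premises":  {"reliability": 4, "performance": 5, "security": 5},
    "colocation":   {"reliability": 4, "performance": 4, "security": 4},
    "public cloud": {"reliability": 5, "performance": 3, "security": 3},
}
print(best_placement(erp_scores))  # on-premises
```

A real review would also weight the criteria by business impact and factor in cost; the point is only that the decision should be recorded and repeated on a schedule, not made once and forgotten.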
- What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of input devices to track and monitor. As we have seen over the years, many organizations are monitoring activity on the application stack, while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that operates the organization’s IT stack. Leaving them unprotected increases the risk of data center outages.
- What is your Business Continuity/Disaster Recovery process? And the follow-up questions: Does your entire staff know where they need to be and what they need to do if you have a critical and unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: Most often personnel don’t know about these processes, and if they do, they haven’t practiced them to be alert and cognizant of what to do when a major event actually happens.
Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.
About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.
Read more here:: datacenterknowledge.com/feed/
The post 10 Things Every CIO Must Know about Their Data Centers appeared on IPv6.net.
Read more here:: IPv6 News Aggregator
Data analytics and cloud vendors are rushing to support enhancements to the latest version of Apache Spark that boost streaming performance while adding new features such as data set APIs and support for continuous, real-time applications.
In-memory computing specialist GigaSpaces this week joined IBM and others jumping on the Spark 2.1 bandwagon with the rollout of an upgraded transactional and analytical processing platform. The company said Wednesday (July 19) the latest version of its InsightEdge platform leverages Spark data science and analytics capabilities while combining the in-memory analytics juggernaut with its open source in-memory computing data grid.
The combination provides a distributed data store based on RAM and solid-state drives, the New York-based company added.
The upgrade was prompted by the new Spark capabilities along with growing market demand for real-time, scalable analytics as adoption of fast data analytics grows.
The in-memory computing platform combines analytical and transactional workloads in an open source software stack, and then streams applications such as Internet of Things (IoT) sensor data.
The analytics company said it is working with Magic Software (NASDAQ and TASE: MGIC), an application development and business integration software vendor, on an IoT project designed to speed ingestion of telemetry data using Magic’s integration and intelligence engine.
The partners said the sensor data integration effort targets IoT applications such as predictive maintenance and anomaly detection where data is ingested, prepped, correlated and merged. Data is then transferred from the GigaSpaces platform to Magic’s xpi engine that serves as the orchestrator for predictive and other analytics tasks.
Along with the IoT partnership and combined transactional and analytical processing, the Spark-powered in-memory computing platform also offers machine learning and geospatial processing capabilities along with multi-tier data storage for streaming analytics workloads, the company said.
Ali Hodroj, GigaSpaces’ vice president of products and strategies, said the platform upgrade responds to the growing enterprise requirement to integrate applications and data science infrastructure.
“Many organizations are simply not large enough to justify spending valuable time, resources and money building, managing, and maintaining an on-premises data science infrastructure,” Hodroj asserted in a blog post. “While some can migrate to the cloud to commoditize their infrastructure, those who cannot are challenged with the high costs and complexity of cluster-sprawling big data deployments.”
To reduce latency, GigaSpaces and others are embracing Spark 2.1 fast data analytics, which was released late last year. (Spark 2.2 was released earlier this month.)
Vendors such as GigaSpaces are offering tighter collaboration between DevOps and data science teams via a unified application and analytics platform. Others, including IBM, are leveraging Spark 2.1 for Hadoop and stream processing distributions.
IBM (NYSE: IBM) said this week the latest version of its SQL platform targets enterprise requirements for data lakes by integrating Spark 2.1 on the Hortonworks Data Platform, the company’s Hadoop distribution. It also connects with Hortonworks DataFlow, the stream-processing platform.
Read more here:: www.datanami.com/feed/
Berg Insight, the M2M/IoT market research provider, released new findings about the smart thermostat market. The number of North American and European homes with a smart thermostat grew by 67% to 10.1 million in 2016. The North American market recorded a 64% growth in the installed base of smart thermostats to 7.8 million. In Europe, […]
The post Smart thermostats gain traction in Europe and North America appeared first on IoT Now – How to run an IoT enabled business.
Read more here:: www.m2mnow.biz/feed/
By Paul Vixie
If a national government wants to prevent certain kinds of Internet communication inside its borders, the costs can be extreme and success will never be more than partial. VPN and tunnel technologies will keep improving as long as there is demand, and filtering or blocking out every such technology will be a never-ending game of one-upmanship. Everyone knows and will always know that determined Internet users will find a way to get to what they want, but sometimes the symbolic message is more important than the operational results. In this article, I will describe some current and prior approaches to this problem, and also make some recommendations for doing nation-state Internet filtering in the most responsible and constructive manner.
History, Background, and SOPA
For many years, China’s so-called Great Firewall has mostly stopped law-abiding people, both citizens and visitors, from accessing most of the Internet content that the Chinese government does not approve of. As a frequent visitor to China, I find it a little odd that my Verizon Wireless data roaming is implemented as a tunnel back to the USA, and is therefore unfiltered. Whereas, when I’m on a local WiFi network, I’m behind the Great Firewall, unable to access Facebook, Twitter, and so on. The downside of China’s approach is that I’ve been slow to expand my business there — I will not break the law, and I need my employees to have access to the entire Internet.
Another example is Italy’s filtering policy regarding unlicensed (non-taxpaying) online gambling, which was blocked not by a national “Great Firewall” but rather by SOPA-style DNS filtering mandated for Italian ISPs. The visible result was an uptick in the use of Google DNS (8.8.8.8 and 8.8.4.4) by Italian gamblers, and if there was also an increase in gambling tax revenue, that was not widely reported. The downside here is the visible cracks in Italian society — many Italians apparently do not trust their own government. Furthermore, in 2013 the European Union ruled that this kind of filtering was a violation of EU policy.
In Turkey up until 2016, the government had similar protections in place, not about gambling but rather pornography and terrorism and anti-Islamic hate speech. The filtering was widely respected, showing that the Turkish people and their government were more closely aligned at that time than was evident during the Italian experiment. It was possible for Turkish internet users to opt-out of the government’s Internet filtering regime, but such opt-out requests were uncommon. This fit the Internet’s cooperation-based foundation perfectly: where interests are aligned, cooperation is possible, but where interests are not aligned, unilateral mandates are never completely effective.
In the years since the SOPA debacle in the United States, I’ve made it my priority to discuss with the entertainment and luxury goods industries the business and technical problems posed to them by the Internet. Away from the cameras, most executives freely admit that it’s not possible to prevent determined users from reaching any part of the Internet they might seek, including so-called “pirate” sites which may even be “dedicated to infringement”. I learned however that there is a class of buyers, of music, movies, and luxury goods, who are not interested in infringement per se, and who are often simply misled by “pirate” Internet sites that pretend to be legitimate. One estimate was that only 1/3rd of commercial music is bought legally, and the remaining 2/3rd is roughly evenly divided between dedicated (1/3rd) and accidental (1/3rd) infringement. If so, then getting the accidental infringers who comprise 1/3rd of the market to buy their music legally wouldn’t change the cost of music for those buyers, but could raise the music industry’s revenues by 100%. We should all think of that as a “win-win-win” possibility.
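The arithmetic behind that “win-win-win” claim is easy to check. The one-third splits below are the rough estimate quoted above, not measured data:

```python
# Back-of-the-envelope check of the music-market estimate quoted above.
legal = 1 / 3       # share of music bought legally today (quoted estimate)
accidental = 1 / 3  # accidental infringers, misled by "pirate" sites
dedicated = 1 / 3   # dedicated infringers, unlikely to convert

# If all accidental infringers converted to legal purchases, the legal
# share would double, i.e. legal revenues would rise by 100%.
uplift = (legal + accidental) / legal - 1
print(f"{uplift:.0%}")  # 100%
```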
Speaking for myself, I’d rather live and act within the law, respecting intellectual property rights, and using my so-called “dollar votes” to encourage more commercial art to be produced. I fought SOPA not because I believed that content somehow “wanted to be free”, but because this kind of filtering will only be effective where the end-users see it as a benefit — see it, in other words, as aligned with their interests. That’s why I co-invented the DNS RPZ firewall system back in 2010, which allows security policy subscribers to automatically connect to their providers in near-realtime, and to then cooperate on wide-scale filtering of DNS content based on a shared security policy. This is the technology that SOPA would have used, except, SOPA would have been widely bypassed, and where not bypassed, would have prohibited DNSSEC deployment. American Internet users are more like Italians than Turks — they don’t want their government telling them what they can’t do.
I think, though, that every government ought to offer this kind of DNS filtering, so that any Internet user in that country who wants to see only the subset of the Internet considered safe by their national government, can get that behavior as a service. Some users, including me, would be happy to follow such policy advice even though we’d fight against any similar policy mandate. In my case, I’d be willing to pay extra to get this kind of filtering. My nation’s government invests a lot of time and money identifying illegal web sites, whether dedicated to terrorism, or infringement, or whatever. I’d like them to publish their findings in real time using an open and unencumbered protocol like DNS RPZ, so that those of us who want to avoid those varieties of bad stuff can voluntarily do so. In fact, the entertainment industry could do the same — because I don’t want to be an accidental infringer either.
Future, Foreground, and Specific Approaches
While human ingenuity can sometimes seem boundless, a nation-state exerting any kind of control over Internet reachability within its borders has only three broad choices available to it.
First, the Great Firewall approach. In this scenario, the government is on-path and can witness, modify, or insert traffic directly. This is costly in human resources, services, equipment, electric power, and prestige. It’s necessary for every in-country Internet Service Provider who wants an out-of-country connection to work directly with government agencies or agents to ensure that real-time visibility and control are among the government’s powers. This may require that all Internet border crossings occur in some central location, or it may require that the government’s surveillance and traffic modification capabilities be installed in multiple discrete locations. In addition to hard costs, there will be soft costs like errors and omissions which induce unexplained failures. The inevitable effects on the nation’s economy must be considered, since a “Great Firewall” approach must by definition wall the country off from mainstream human ideas, with associated chilling effects on outside investment. Finally, this approach, like all access policies, can be bypassed by a determined-enough end-user who is willing to ignore the law. The “Great Firewall” approach will maximize the bypass costs, having first maximized deployment costs.
Second, a distributed announcement approach using Internet Protocol address-level firewalls. Every user and every service on the Internet has to have one or more IP addresses from which to send, or to which receive, packets to or from other Internet participants. While the user-side IP addresses tend to be migratory and temporary in nature due to mobile users or address-pool sharing, the server-side IP addresses tend to be well known, pre-announced, and predictable. If a national government can compel all of its Internet Service Providers to listen for “IP address firewall” configuration information from a government agency, and to program its own local firewalls in accordance with the government’s then-current access policies, then it would have the effect of making distant (out-of-country) services deliberately unreachable by in-country users. Like all policy efforts, this can be bypassed, either by in-country (user) effort, or by out-of-country (service) provider effort, or by middle-man proxy or VPN provider effort. Bypass will be easier than in the Great Firewall approach described above, but a strong advantage of this approach is that the government does not have to be on-path, and so everyone’s deployment costs are considerably lower.
Third and finally, a distributed announcement approach using Domain Name System (DNS)-level firewalls. Every Internet access requires at least one DNS lookup, and these lookups can be interrupted according to policy if the end-user and Internet Service Provider (ISP) are willing to cooperate on the matter. A policy-based firewall operating at the DNS level can interrupt communications based on several possible criteria: either a “domain name” can be poisoned, or a “name server”, or an “address result”. In each case, the DNS element to be poisoned has to be discovered and advertised in advance, exactly as in the “address-level firewall” and “Great Firewall” approaches described above. However, DNS lookups are far less frequent than packet-level transmissions, and so the deployment cost of a DNS-level firewall will be far lower than for a packet-level firewall. A DNS firewall can be constructed using off-the-shelf “open source” software using the license-free “DNS Response Policy Zone” (DNS RPZ) technology first announced in 2010. The DNS RPZ system allows an unlimited number of DNS operators (“subscribers”) to synchronize their DNS firewall policy to one or more “providers” such as national governments or industry trade associations. DNS firewalls offer the greatest ease of bypass, so much so that it’s better to say that “end-user cooperation is assumed,” which could be a feature rather than a bug.
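To make the three poisoning criteria concrete, here is a minimal sketch of what an RPZ policy zone might look like in BIND-style zone-file syntax. All names and addresses are placeholders, not a real policy feed:

```
; Hypothetical RPZ policy zone (placeholder names and addresses throughout).
$TTL 300
@   SOA  rpz.example.gov. hostmaster.example.gov. 1 3600 600 86400 300
@   NS   ns.rpz.example.gov.
; Poison a domain name (and its subdomains): subscribers return NXDOMAIN.
badsite.example                   CNAME .
*.badsite.example                 CNAME .
; Poison by name server: block everything delegated to this server.
ns1.badhost.example.rpz-nsdname   CNAME .
; Poison by address result: block names that resolve into 192.0.2.0/24.
24.0.2.0.192.rpz-ip               CNAME .
```

Subscribers transfer a zone like this from the provider using standard zone-transfer mechanisms, with NOTIFY giving near-realtime updates, and each resolver applies the policy locally — which is exactly what keeps participation voluntary.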
A national government that wants to make a difference in the lived Internet experience of its citizens should consider not just the hard deployment and operational costs, but also the soft costs to the overall economy, and in prestige, and especially, what symbolic message is intended. If safety as defined by the government is to be seen as a goal it shares with its citizens and that will be implemented using methods and policies agreed to by its citizens, then ease of bypass should not be a primary consideration. Rather, ease of participation and transparency of operation will be the most important ingredients for success.
Written by Paul Vixie, CEO, Farsight Security
Follow CircleID on Twitter
Read more here:: feeds.circleid.com/cid_sections/blogs?format=xml
On 14 July 2017, with the publication of RFC 8200, the IETF announced that the Internet Protocol Version 6 (IPv6) had become the latest Internet Standard. Great news for IPv6, but if you’re surprised and confused by this, then you’re probably not the only one!
The IPv6 specification that we’ve been studying and religiously following for more than 18 years was defined in RFC 2460, along with several other RFCs [RFC 5095, RFC 5722, RFC 5871, RFC 6437, RFC 6564, RFC 6935, RFC 6946, RFC 7045, RFC 7112]. However, RFC 2460 was only ever a Draft Standard, and only now moves to being a full Internet Standard.
The IETF decided to make this change as there are various RFCs defining the IPv6 specification, and it’s good to combine these along with the Errata into a single RFC. To further understand this issue, it’s necessary to first review the IETF Standardisation process.
As per RFC 2026, all work starts with an Internet-Draft (I-D), which was and still is intended to be a rough sketch of ideas, contributed as raw input to the IETF process and having a lifetime of no longer than 6 months, although it may be updated several times. An I-D may also be adopted by a Working Group and further refined, progressing through a number of iterations until (and only if) consensus is reached, when the specification goes through a review phase before being published as a Request for Comments (RFC).
Again with reference to RFC 2026, specifications that were intended to become Internet Standards evolved through a set of maturity levels known as the “Standards Track”, which itself has three classifications:
Proposed Standard – This generally means a specification is stable, has resolved known design choices, is believed to be well-understood, has received significant community review, and appears to enjoy enough community interest to be considered valuable. However, further experience might result in a change or even retraction of the specification before it advances. Neither implementation nor operational experience is usually required prior to the designation of a specification as a Proposed Standard.
Draft Standard – A specification from which at least two independent and interoperable implementations from different code bases have been developed, and for which sufficient successful operational experience has been obtained, may be elevated to the “Draft Standard” level. A Draft Standard must be well-understood and known to be quite stable, both in its semantics and as a basis for developing an implementation.
Internet Standard – A specification for which significant implementation and successful operational experience has been obtained may be elevated to the Internet Standard level. An Internet Standard (or just Standard) is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community. A specification that reaches the status of Standard is assigned a number in the STD series while retaining its RFC number.
In 2011, RFC 6410 was published as an update to RFC 2026 that replaced the three-tier ladder with a two-tier ladder. In this update, specifications become Internet Standards through a set of two maturity levels known as “Proposed Standard” and “Internet Standard” and hence the former “Draft Standard” level was abandoned with criteria provided for reclassifying these RFCs.
Any protocol or service that is currently at the abandoned Draft Standard maturity level will retain that classification, absent explicit actions as follows:
- A Draft Standard may be reclassified as an Internet Standard provided there are no errata against the specification that would cause a new implementation to fail to interoperate with deployed ones.
- The IESG may choose to reclassify any Draft Standard document as a Proposed Standard.
As of 1 June 2017 there were 81 RFCs with “Draft Standard” status under the “Standards Track”, and RFC 2460 – Internet Protocol, Version 6 (IPv6) Specification – was one of these.
The IPv6 Maintenance (6MAN) Working Group, which is responsible for the maintenance, upkeep and advancement of the IPv6 protocol specifications and addressing architecture, started working on advancing the IPv6 core specifications towards an Internet Standard at IETF 93 in July 2015. The working group identified multiple RFCs to update, including RFC 2460, and decided to revise and re-classify it by incorporating updates from 9 other RFCs and 2 Errata.
The first draft was published in August 2015 as “draft-ietf-6man-rfc2460bis” and after several further changes, the final version was submitted for review in May 2017 as “draft-ietf-6man-rfc2460bis-13”. All of the changes from RFC 2460 are summarized in Appendix B [Page 36] and are ordered by the Internet Draft that initiated the change. The document has gone through extensive scrutiny in the 6MAN working group and there is broad support for this version to be published as an Internet Standard.
So really nothing changes, as RFC 8200 is a combined version of RFC 2460 along with other relevant RFCs and Errata.
Read more here:: www.internetsociety.org/deploy360/blog/feed/
It’s another busy week at IETF 99 in Prague, and we’ll be bringing you daily blog posts that highlight what Deploy360 will be focused on during that day. And Monday sees a packed agenda with three working groups on the Internet-of-Things, a couple on routing, one on encryption, and an important IPv6 Maintenance WG session.
The day kicks off at 09.30 CEST/UTC+2 with 6MAN, and the big development is the move of the IPv6 specification to Internet Standard Status, as despite being widely deployed, IPv6 has remained a ‘Draft Standard’ since its original publication in 1998. There are also two working group drafts on updating the IPv6 Addressing Architecture as currently defined in RFC 4291, and on IPv6 Node Requirements as currently defined in RFC 6434. Other existing drafts up for discussion include recommendations on IPv6 address usage and on Route Information Options in Redirect Messages.
There are three new drafts being proposed, including one that covers scenarios when IPv6 hosts might not be able to properly detect that a network has changed IPv6 addressing and proposes changes to the Default Address Selection algorithm defined in RFC6724; another that proposes a mechanism for IPv6 hosts to retrieve additional information about network access through multiple interfaces; whilst the remaining draft defines the AERO address for use by mobile networks with a tethered network of IoT devices requiring a unique link-local address after receiving a delegated prefix.
NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.
Running in parallel is ACE which is developing authentication and authorization mechanisms for accessing resources on network nodes with limited CPU, memory and power. Amongst the ten drafts on the agenda, there’s one proposing a DTLS profile for ACE.
Also at the same time is CURDLE which is chartered to add cryptographic mechanisms to some IETF protocols, and to make implementation requirements including deprecation of old algorithms. The agenda isn’t very comprehensive at the moment, but nine drafts were recently submitted to the IESG for publication, and what will certainly be discussed today is a draft on key change methods for SSH.
In the afternoon, Homenet is meeting from 13.30 CEST/UTC+2. This is developing protocols for residential networks based on IPv6, and will continue to discuss updated drafts relating to a name resolution and service discovery architecture for homenets, how the Babel routing protocol can be used in conjunction with the HNCP protocol in a Homenet scenario, and the use of .homenet as a special use top-level domain to replace .home. There are also three new drafts relating to the service discovery and registration aspects of Homenet.
Running in parallel is 6TiSCH. There will be summaries of the 1st F-Interop 6TiSCH Interoperability Event and OpenWSN Hackathon, followed by discussions on the updated drafts related to the 6top protocol that enables distributed scheduling, as well as a draft related to security functionality.
The later afternoon session sees SIDROPS meeting from 15.50 CEST/UTC+2. This is taking the technology developed by SIDR and is developing guidelines for the operation of SIDR-aware networks, as well as providing operational guidance on how to deploy and operate SIDR technologies in existing and new networks. One particularly interesting draft proposes to use blockchain technology to validate IP address delegation, whilst another describes an approach to validate the content of the RPKI certificate tree. A couple of other drafts aim to clarify existing approaches to RPKI validation.
Concluding the day is GROW during the evening session. This group looks at the operational problems associated with the IPv4 and IPv6 global routing systems, and whilst there’s no agenda for this meeting yet, four new and updated drafts were recently published on more graceful shutting down of BGP sessions, how to minimise the impact of maintenance on BGP sessions, and extensions to the BGP monitoring protocol.
For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.
Relevant Working Groups
- 6man (IPv6 Maintenance) WG
Monday, 17 July 2017 09.30-12.00 CEST/UTC+2, Grand Hilton Ballroom
- ace (Authentication and Authorization for Constrained Environments)
Monday, 17 July 2017 09.30-12.00 CEST/UTC+2, Congress Hall I
- curdle (CURves, Deprecating and a Little more Encryption) WG
Monday, 17 July 2017 09.30-12.00 CEST/UTC+2, Congress Hall III
- homenet (Home Networking) WG
Monday, 17 July 2017 13.30-15.30 CEST/UTC+2, Grand Hilton Ballroom
- 6TiSCH (IPv6 over the TSCH mode of IEEE 802.15.4e) WG
Monday, 17 July 2017 13.30-15.30 CEST/UTC+2, Karlin I/II
- sidrops (SIDR Operations) WG
Monday, 17 July 2017 15.50-17.20 CEST/UTC+2, Congress Hall III
- grow (Global Routing Operations) WG
Monday, 17 July 2017 17.40-18.40 CEST/UTC+2, Congress Hall III
Read more here: www.internetsociety.org/deploy360/blog/feed/
Read more here: IPv6 News Aggregator
A recent BusinessWeek article titled “America’s Retailers Are Closing Stores Faster Than Ever” summarizes the epidemic that retailers are facing today (see Figure 1).
Retailers are closing stores at a record pace, and the driving force behind the acceleration in store closings is Amazon, which now accounts for over 50% of all online retail sales (see Figure 2).
What are retailers to do when the tricks and techniques that worked in the past just don’t work in today’s real-time, data- and analytics-driven business world? Business models that worked in a world that valued size and location quickly fall apart in a world where retailers are leveraging customer, product and operational data and analytics to provide a highly personalized shopping experience and anticipate customers’ shopping desires with a wider range of highly relevant, easily accessible products (think Amazon 1-Click®).
Read more here: iot.sys-con.com/index.rss
The post What Digital Transformation Means to Retailers | @ThingsExpo #DX #IoT #M2M appeared on IPv6.net.
Next week is IETF 99 in Prague which will be the fourth time the IETF has been held in the city. The Deploy360 team will be represented by Megan Kruse and Dan York, along with ISOC’s Chief Internet Technology Officer Olaf Kolkman. We’ll once again be highlighting the latest IPv6, DNSSEC, Securing BGP and TLS related developments.
Our colleagues are planning to cover the following sessions, so please come and say hello!
Monday, 17 July 2017
- IPv6 Maintenance – Grand Hilton Ballroom @ 09.30-12.00 UTC+2
- Authentication and Authorization for Constrained Environments – Congress Hall I @ 09.30-12.00 UTC+2
- CURves, Deprecating and a Little more Encryption – Congress Hall III @ 09.30-12.00 UTC+2
- Home Networking – Grand Hilton Ballroom @ 13.30-15.30 UTC+2
- IPv6 over the TSCH mode of IEEE 802.15.4e – Karlin I/II @ 13.30-15.30 UTC+2
- SIDR Operations – Congress Hall III @ 15.50-17.20 UTC+2
- Global Routing Operations – Congress Hall III @ 17.40-18.40 UTC+2
Tuesday, 18 July 2017
- IPv6 Operations (Part 1) – Congress Hall II @ 09.30-12.00 UTC+2
- DNS PRIVate Exchange – Congress Hall III @ 09.30-12.00 UTC+2
- Privacy Enhanced RTP Conferencing – Berlin/Brussels @ 09.30-12.00 UTC+2
- Thing-to-Thing – Grand Hilton Ballroom @ 13.30-15.30 UTC+2
- Domain Name System Operations (Part 1) – Congress Hall II @ 15.50-17.50 UTC+2
- Crypto Forum – Congress Hall I @ 15.50-17.50 UTC+2
- IPv6 over Networks of Resource-constrained Nodes – Karlin I/II @ 15.50-17.50 UTC+2
Wednesday, 19 July 2017
- Transport Layer Security – Grand Hilton Ballroom @ 09.30-12.00 UTC+2
- Distributed Mobility Management – Berlin/Brussels @ 09.30-12.00 UTC+2
- Dynamic Host Configuration – Athens/Barcelona @ 13.30-15.00 UTC+2
Thursday, 20 July 2017
- IPv6 Operations (Part 2) – Grand Hilton Ballroom @ 13.30-15.30 UTC+2
- Routing Over Low power and Lossy networks – Karlin I/II @ 13.30-15.30 UTC+2
- IP Wireless Access in Vehicular Environments – Athens/Barcelona @ 15.50-17.50 UTC+2
- Using TLS in Applications – Berlin/Brussels @ 18.10-19.10 UTC+2
- Domain Name System Operations (Part 2) – Grand Hilton Ballroom @ 18.10-19.10 UTC+2
Friday, 21 July 2017
- Automated Certificate Management Environment – Athens/Barcelona @ 09.30-11.30 UTC+2
- IPv6 over Low Power Wide-Area Networks – Karlin I/II @ 09.30-11.30 UTC+2
- Rough Guide to IETF 99: DNSSEC, DANE and DNS Security
- Rough Guide to IETF 99: IPv6
- Rough Guide to IETF 99: Internet Infrastructure Resilience
- Rough Guide to IETF 99: The Internet of Things
- Rough Guide to IETF 99: Trust, Identity, and Privacy
Read more here: www.internetsociety.org/deploy360/blog/feed/
IPv6 Case Study from Washington and Jefferson College in Washington, Pennsylvania
Washington and Jefferson College is ready for IPv6. When we first started to look into doing IPv6, our initial thinking was that we could get a range of addresses, and then take our time with the rollout. However, once we got our ARIN assignment, we took off with it. We knew more and more systems are going to be IPv4 and IPv6, so we wanted to be able to access all the resources in the world, and we wanted people to be able to access all of our resources too. We did not want to be in a situation where we were limiting ourselves.
Brace yourself for impact (or not)
In the past when I have gone to conferences and heard people talking about IPv6, they would paint this picture that IPv6 is very complicated so brace yourself for impact. Then after we got our addresses, I thought to myself, this is not as complicated as I thought it was going to be. Once I got the proper materials, started researching and planning, it was not a nightmare like people had made it out to be. Some resources that we used to get up to speed were online tech documents, message boards, and books like Cisco IPv6 Fundamentals or Cisco IPv6 for Enterprise Networks. Fear of the unknown has led some to ignore IPv6, but really, we are running out of excuses.
Washington and Jefferson College acquired IPv6 from ARIN in 2013. Later that year, we started an in-depth planning stage. In early 2014, we enabled and configured IPv6 in our network core and on our critical servers like domain controllers, LDAP servers, DNS servers, etc. Then as the year went on, we did more testing. We decided to go with Stateful DHCPv6 for clients, so once we had everything set up on our servers and in our network core, we configured a test VLAN with just IPv6 to see if we could get internal communication to work. Once we could do that with IPv6, we set up an admin VLAN for our department that was dual stack to test if we were communicating with different servers over IPv4 and IPv6. The next year, after we switched Internet Service Providers, we expanded out to our ISP so we could communicate with the rest of the world via IPv6. Once we verified that everything was working on our critical servers and admin VLAN, we then moved on to internal servers and services people would access from off campus like our VPN servers.
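The stateful DHCPv6 approach described above pairs a DHCPv6 pool with a VLAN interface that tells clients to request an address. As a rough illustration only (the VLAN number, pool name, and 2001:db8 documentation prefixes are hypothetical, not the college’s actual configuration), a Cisco IOS setup along these lines might look like:

```
ipv6 unicast-routing
!
! Hypothetical DHCPv6 pool using documentation address space
ipv6 dhcp pool TEST-VLAN
 address prefix 2001:db8:10::/64
 dns-server 2001:db8::53
!
interface Vlan10
 description IPv6-only test VLAN
 ipv6 address 2001:db8:10::1/64
 ! M-flag in router advertisements: clients use stateful DHCPv6
 ipv6 nd managed-config-flag
 ipv6 dhcp server TEST-VLAN
```

Starting with a single IPv6-only test VLAN like this, before going dual stack, keeps any misconfiguration contained while internal communication is verified.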
In our server room infrastructure out to the Internet, we are set up and live with IPv6. Our plan now is to expand out to our faculty, staff, and computer labs. When we were initially deciding to implement IPv6, our Server Manager, Jason Pergola, and I talked to our CIO, Dan Faulk, about moving forward with acquiring an IPv6 range. Because of the uncertainties with IPv6 and not many, if any, colleges in our area even thinking about IPv6 yet, our CIO asked that we first put together an initial plan. After discussing our plan and the benefits, we were given the green light to move forward. Laying out a road map is what helped us make a convincing case.
We hit a few snags along the way, but for the most part everything seemed to run smoothly. At first, we found all Windows systems would pull an IPv6 address via DHCPv6, but we could not get any Mac computers to pull one. If we took the same Mac to our home ISP, it would work. It turned out a colon in the wrong place caused a lot of confusion (think a link-local address format of FE80:7::1 instead of FE80::7:1 – a result of us trying to be fancy with manually configuring the link-local address instead of allowing IPv6 to automatically generate it). Once we figured it out and assigned the same link-local address to every VLAN interface with the FE80::#:# format, we were good to go.
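The shifted-colon mistake is easy to reproduce: both strings parse as valid IPv6 addresses, but the misplaced `::` puts the 7 in a different group, and only one of the two falls inside the fe80::/64 range that RFC 4291 requires for link-local addresses. A quick check with Python’s standard ipaddress module (our own illustration, not part of the original troubleshooting):

```python
import ipaddress

# The intended link-local address vs. the mistyped one
intended = ipaddress.ip_address("fe80::7:1")
mistyped = ipaddress.ip_address("fe80:7::1")

# The misplaced "::" shifts the 7 from the seventh group to the second
print(intended.exploded)  # fe80:0000:0000:0000:0000:0000:0007:0001
print(mistyped.exploded)  # fe80:0007:0000:0000:0000:0000:0000:0001

# RFC 4291 requires the 54 bits after the fe80 prefix to be zero,
# i.e. link-local addresses live in fe80::/64; the typo falls outside it
lla = ipaddress.ip_network("fe80::/64")
print(intended in lla)    # True
print(mistyped in lla)    # False
```

A client that validates its router’s link-local address strictly will reject the mistyped form, which is consistent with the Mac-only symptom we saw.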
As a smaller institution, we lucked out because we had just upgraded our firewall and Internet management appliance, and we just did an update on our core switch. All of our backbone network equipment and server infrastructure supported IPv6. Everything fell into place. We currently have some parts of our campus that have older networking equipment that do not support IPv6, which we are in the process of phasing out. With recent upgrades, we are now ready to continue the rollout.
Our Server Manager said he noticed a huge speed boost on our email servers once we opened IPv6 to the rest of the world. At first, with IPv4 only, there was a lag, but once our servers were also communicating with the outside world via IPv6, those processes were not taking nearly as long. It appeared that those servers were trying to use IPv6 first and, when they could not, would roll over to IPv4. We have also noticed that some Internet activity seems to respond quicker than it did over IPv4, like accessing certain websites or videos. It could be the way different content loads, or it could be something with the way our ISP has its IPv6 network infrastructure set up.
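That try-IPv6-first, roll-over-to-IPv4 behaviour is what getaddrinfo-based clients typically do: on a dual-stack host, resolved addresses come back ordered per RFC 6724, usually with IPv6 first, and the client walks the list until a connection succeeds. A minimal Python sketch of the pattern (our illustration of the general mechanism, not the mail servers’ actual code):

```python
import socket

def connect_with_fallback(host, port, timeout=3.0):
    """Try each resolved address in order, falling back to the next
    on failure. On a dual-stack host getaddrinfo typically lists
    IPv6 candidates first (RFC 6724 ordering), so IPv6 is attempted
    before the connection rolls over to IPv4."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock  # first address that answers wins
        except OSError as err:
            last_err = err  # e.g. IPv6 unreachable: try the next address
    raise last_err or OSError("no addresses resolved for %s" % host)
```

When every IPv6 attempt has to time out before IPv4 is tried, the lag we saw adds up quickly, which is why enabling IPv6 end to end removed it.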
From a recruiting standpoint, if potential students researching your college are not able to connect to you because they are running IPv6-only, then I feel like you are hurting the institution. And if currently enrolled students trying to do research can’t connect to the proper resources, you are keeping them from completing their classes and getting good grades.
IPv6 is not as scary as many people make it out to be. Like anything else in the IT industry, you have to put in the time to actually learn it. By not doing so, you limit your institution’s ability to communicate with everything in the world.
Do proper research and planning. Even to this day, we are still learning. Do not be afraid to ask questions. No matter how bizarre the issue is, you never know who will give you the right answer that may lead somewhere else. Or you may be helping out someone else without even knowing it.
Read more here: teamarin.net/feed/
One of the main issues when an ISP is planning to deliver IPv6 services is deciding how to address its customers.
Read more here: labs.ripe.net/RSS