Deploy360@IETF99, Day 5: Kdo se moc ptá, moc se dozví ("He who asks a lot, learns a lot")

By Kevin Meynell

There are a couple of sessions of interest on the last day of IETF 99 before we say na shledanou (goodbye) to the City of a Hundred Spires.

Both sessions are running in parallel on Friday morning, starting at 09.30 CEST/UTC+2. ACME will continue to discuss the ACME specification, as well as the addition of CAA checking for compliance with CA/B Forum guidelines. There are also new drafts specifying how to issue certificates for telephone numbers, how to issue certificates to VoIP service providers for Secure Telephone Identity (STIR), and ACME extensions to enable the issuance of short-term, automatically renewed certificates; certificates for e-mail recipients that want to use S/MIME; and certificates for use by TLS e-mail services.
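Of these, CAA checking is easy to picture concretely. Below is a minimal sketch of the tree-climbing CAA lookup a CA performs before issuance (RFC 8659), using the third-party dnspython library. The domain and CA identifier are illustrative placeholders, and a real CA's logic also handles wildcards, `issuewild` tags, and critical flags that this toy omits.

```python
import dns.resolver  # third-party: dnspython >= 2.0 (pip install dnspython)

def caa_permits(domain: str, ca_id: str = "letsencrypt.org") -> bool:
    """Check whether CAA records allow ca_id to issue for domain.

    Climbs from the domain toward the root; the first name that has any
    CAA records decides the outcome (RFC 8659, section 4).
    """
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        try:
            answers = dns.resolver.resolve(name, "CAA")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            continue  # no CAA here; keep climbing toward the root
        issuers = [rr.value.decode() for rr in answers
                   if rr.tag.decode() == "issue"]
        # A CAA RRset with no "issue" tags places no restriction on CAs.
        return not issuers or any(ca_id in v for v in issuers)
    return True  # no CAA records anywhere: any CA may issue

print(caa_permits("example.com"))  # placeholder domain
```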


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


Alternatively, you can check out LPWAN, which is working on enabling IPv6 connectivity at very low wireless transmission rates between battery-powered devices spread across multiple kilometres. The session will discuss five drafts related to IPv6 header fragmentation and compression, as well as ICMPv6 usage over LPWANs.
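To see why header compression matters at these bit rates: a full 40-byte IPv6 header can dwarf an entire LPWAN frame, so the drafts' shared-context approach replaces predictable header fields with a small rule identifier both ends already know. The toy below illustrates only that core idea, not any draft's actual wire format; the rule table and field names are invented for the example.

```python
# Toy illustration of the shared-context idea behind the LPWAN compression
# drafts (not any draft's real encoding). Both endpoints hold the same
# static rule table, so predictable IPv6 header fields never cross the air.
CONTEXT = {
    0x01: {"src": "2001:db8::1", "dst": "2001:db8::2", "next_header": 17},
}

def compress(packet: dict) -> bytes:
    """Replace header fields matched by a rule with a one-byte rule ID."""
    for rule_id, fields in CONTEXT.items():
        if all(packet.get(k) == v for k, v in fields.items()):
            return bytes([rule_id]) + packet["payload"]
    raise ValueError("no matching rule: send the packet uncompressed")

def decompress(frame: bytes) -> dict:
    """Rebuild the elided header fields from the shared rule table."""
    return {**CONTEXT[frame[0]], "payload": frame[1:]}

frame = compress({"src": "2001:db8::1", "dst": "2001:db8::2",
                  "next_header": 17, "payload": b"\x17"})
assert decompress(frame)["dst"] == "2001:db8::2"
print(f"{len(frame)} bytes on the air instead of a 40-byte IPv6 header")
```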

That brings this IETF to an end, so it’s goodbye from us in Prague. Many thanks for reading along this week… please do read our other IETF 99-related posts… and we’ll see you at IETF 100 on 12-17 November 2017 in Singapore!

Relevant Working Groups

Read more here: www.internetsociety.org/deploy360/blog/feed/


10 Things Every CIO Must Know about Their Data Centers

By Tim Kittila

While data centers aren’t necessarily something CIOs think about on a daily basis, there are some essential things every executive in this role must know about their organization’s data center operations. They all have to do with data center outages, both past and future. These incidents carry significant risk of negative impact on the entire organization’s performance and profitability, which fall comfortably within a typical CIO’s scope of responsibilities.

CIOs need to know answers to these questions, and those answers need to be updated on a regular basis. Here they are:

  1. If you knew that your primary production data center was going to take an outage tomorrow, what would you do differently today? This is the million-dollar question, although not knowing the answer usually costs the CIO a lot more. Simply put, if you don’t know your data center’s vulnerabilities, you are more likely to take an outage. Working with experienced consultants will usually help, both in terms of tapping into their expertise and in terms of having a new set of eyes focus on the matter. At least two things should be reviewed: 1) how your data center is designed; and 2) how it operates. This review will help identify downtime risks and point to potential ways to mitigate them.
  2. Has your company ever experienced a significant data center outage? How do you know it was significant? The key here is defining “significant outage.” The definition can vary from one organization to another, and even between roles within a single company. It can also vary by application. Setting common definitions around this topic is essential to identifying and eliminating unplanned outages. Once they are defined, begin to track, measure, and communicate them within your organization.
  3. Which applications are the most critical ones to your organization, and how are you protecting them from outages? The lazy, uniform answer would be, “Every application is important.” But every organization has applications and services that are more critical than others. A website going down in a hospital doesn’t stop patients from being treated, but a website outage for an e-commerce company means missed sales. Once you identify your most critical apps and services, determine who will protect them and how, based on your specific business case and risk tolerance.
  4. How do you measure the cost of a data center outage? Having this story clear helps the business make better decisions: develop a model for estimating outage costs and weigh them against the cost of mitigating the risk (a minimal sketch of such a model follows this list). Total outage cost can be nebulous, but spending the time to get as close to it as possible and getting executive buy-in on that story will help the cause. We have witnessed generator projects and UPS upgrades turned down simply because the manager couldn’t tell this story to the business. A word of warning: the evidence and the costs for the outage have to be realistic. Soft costs are hard to calculate and can make the choices seem simple, but sometimes an outage may just mean a backlog of information that needs to be processed, without significant top-line or bottom-line impact. Even the most naïve business execs will sniff out unrealistic hypotheticals. Outage cost estimates have to be real.
  5. What indirect business costs will a data center outage result in? This varies greatly from organization to organization, but these are the more difficult costs to quantify, such as loss of productivity, loss of competitive advantage, reduced customer loyalty, regulatory fines, and many other types of losses.
  6. Do you have documented processes and procedures in place to mitigate human error in the data center? If so, how do you know they are being precisely followed? According to recent Uptime Institute statistics, around 73% of data center outages are caused by human error. Until we can replace all humans with machines, the only way to address this is to have clearly defined processes and procedures. The fact that this statistic hasn’t improved over time indicates that most organizations still have a lot of work to do in this area. Enforcement of these policies is just as critical: many organizations do have sound policies but don’t enforce them adequately.
  7. Do your data center security policies gel with your business security policies? We could write an entire article on this topic (and one is in the works), but in short, now that IT and facilities are figuring out how to collaborate better inside the data center, it’s time for IT and security departments to do the same. One common problem we’ve observed is a corporate physical security system that needs to operate within the data center but under different usage requirements than the rest of the company. Getting corporate security and data center operations to integrate, or at least share data, is usually problematic.
  8. Do you have a structured, ongoing process for determining which applications run in on-premises data centers, in a colo, or in a public cloud? As your business requirements change, so do your applications and the resources needed to operate them. All applications running in the data center should be assessed and reviewed at least annually, if not more often, and the best type of infrastructure should be chosen for each application based on the business’s reliability, performance, and security requirements.
  9. What is your IoT security strategy? Do you have an incident response plan in place? Now that most organizations have solved or mitigated BYOD threats, IoT devices are likely the next major category of devices to track and monitor. As we have seen over the years, many organizations monitor activity on the application stack while IoT devices are left unmonitored and often unprotected. These devices play a major role in the physical infrastructure (such as power and cooling systems) that operates the organization’s IT stack. Leaving them unprotected increases the risk of data center outages.
  10. What is your Business Continuity/Disaster Recovery process? And the follow-up questions: Does your entire staff know where they need to be and what they need to do if you have a critical and unplanned data center event? Has that plan been tested? Again, processes are key here. Most organizations we consult with do have these processes architected, implemented, and documented. The key issue is once again the human factor: most often personnel don’t know about these processes, and if they do, they haven’t practiced them enough to know what to do when a major event actually happens.
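
As a starting point for the cost model in question 4, here is a minimal sketch. Every figure in it is a made-up placeholder that an organization would replace with its own revenue, staffing, and recovery numbers; indirect costs (question 5) are deliberately excluded.

```python
def outage_cost(duration_hours: float,
                revenue_per_hour: float,
                affected_fraction: float,
                staff_count: int,
                loaded_hourly_rate: float,
                recovery_cost: float) -> float:
    """Rough direct cost of one outage: lost revenue + idle staff + recovery."""
    lost_revenue = duration_hours * revenue_per_hour * affected_fraction
    lost_productivity = duration_hours * staff_count * loaded_hourly_rate
    return lost_revenue + lost_productivity + recovery_cost

# Placeholder figures only: a 4-hour outage hitting half of revenue.
cost = outage_cost(duration_hours=4, revenue_per_hour=50_000,
                   affected_fraction=0.5, staff_count=200,
                   loaded_hourly_rate=60, recovery_cost=25_000)
print(f"Estimated direct cost: ${cost:,.0f}")
# Weigh this against mitigation: a $150k UPS upgrade pays for itself
# if it prevents roughly one such outage.
```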

Many other questions could (and should) be asked, but we believe that these represent the greatest risk and impact to an organization’s IT operations in a data center. Can you thoroughly answer all of these questions for your company? If not, it’s time to look for answers.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data centers, whether privately owned, colocation facilities, or a combination of the two. Earlier in his career at Parallel Technologies, Kittila served as Director of Data Center Infrastructure Strategy, where he was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

Read more here: datacenterknowledge.com/feed/



Deploy360@IETF99, Day 4: IoT, IPv6, DNSSEC & TLS

By Kevin Meynell

Thursday at IETF 99 in Prague is a mixture of overflow sessions, the Internet-of-Things, and encryption. Each day we’re bringing you blog posts pointing out what Deploy360 will be focusing on.

Our day doesn’t actually start until 13.30 CEST/UTC+2, with the second part of V6OPS, which will pick up the ten drafts from wherever discussion left off on Tuesday morning (see our Day 2 post for more information).

If you have V6OPS fatigue, check out ROLL instead, which focuses on routing for the Internet-of-Things and has six drafts up for discussion.


NOTE: If you are unable to attend IETF 99 in person, there are multiple ways to participate remotely.


The second afternoon session at 15.50 CEST/UTC+2 features IPWAVE. This will be discussing two drafts on transmitting IPv6 over IEEE 802.11-OCB in Vehicle-to-Internet and Vehicle-to-Infrastructure networks, and on a problem statement for IP Wireless Access in Vehicular Environments. A further draft summarises a survey on IP-based Vehicular Networking for Intelligent Transportation Systems.

There are two working groups during the evening session starting at 18.10 CEST/UTC+2. UTA is discussing three drafts: one related to the compulsory use of TLS for SMTP, an interesting one proposing to obsolete clear-text transfer for e-mail, and one proposing an SMTP service extension.
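For a sense of what mandatory TLS for SMTP means in practice, here is a minimal Python sketch of a submission client that refuses to proceed in clear text, using only the standard library's smtplib; the host name and credentials are placeholders.

```python
import smtplib
import ssl

context = ssl.create_default_context()  # verifies the server certificate

# Placeholder host/credentials; port 587 is the standard submission port.
with smtplib.SMTP("mail.example.com", 587, timeout=10) as smtp:
    smtp.ehlo()
    if not smtp.has_extn("starttls"):
        raise RuntimeError("server offers no STARTTLS; refusing clear text")
    smtp.starttls(context=context)  # upgrade the session before any mail
    smtp.ehlo()  # re-identify over the now-encrypted channel
    smtp.login("user@example.com", "app-password")
    smtp.sendmail("user@example.com", "rcpt@example.net",
                  "Subject: test\r\n\r\nSent only over TLS.")
```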

Finally, there’s the second part of DNSOP. There appears to be just the one DNSSEC-related draft in this session, on algorithm negotiation.

For more background, please read the Rough Guide to IETF 99 from Olaf, Dan, Andrei, Mat, Karen and myself.

Relevant Working Groups

Read more here: www.internetsociety.org/deploy360/blog/feed/

GigaSpaces Closes Analytics-App Gap With Spark

By George Leopold

Data analytics and cloud vendors are rushing to support enhancements to the latest version of Apache Spark that boost streaming performance while adding new features such as data set APIs and support for continuous, real-time applications.

In-memory computing specialist GigaSpaces this week joined IBM and others jumping on the Spark 2.1 bandwagon with the rollout of an upgraded transactional and analytical processing platform. The company said Wednesday (July 19) the latest version of its InsightEdge platform leverages Spark data science and analytics capabilities while combining the in-memory analytics juggernaut with its open source in-memory computing data grid.

The combination provides a distributed data store based on RAM and solid-state drives, the New York-based company added.

The upgrade was prompted by the new Spark capabilities, along with growing market demand for real-time, scalable analytics.

The in-memory computing platform combines analytical and transactional workloads in an open source software stack and supports streaming applications such as Internet of Things (IoT) sensor data processing.
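As an illustration of the kind of continuous workload involved, the PySpark sketch below runs a sliding-window average over Spark's built-in `rate` source, which stands in here for an IoT sensor feed. It uses the Spark 2.x Structured Streaming API and is not GigaSpaces' actual API.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, window

spark = SparkSession.builder.appName("iot-stream-sketch").getOrCreate()

# The built-in "rate" source stands in for an IoT feed: one row per event,
# with a timestamp column and a monotonically increasing value column.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Sliding one-minute average, the typical shape of a sensor-telemetry job.
# The watermark bounds state so late events older than 30s are dropped.
agg = (events
       .withWatermark("timestamp", "30 seconds")
       .groupBy(window(col("timestamp"), "1 minute", "30 seconds"))
       .agg(avg("value").alias("avg_value")))

query = (agg.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination(30)  # run for ~30 seconds in this demo
```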

The analytics company said it is working with Magic Software (NASDAQ and TASE: MGIC), an application development and business integration software vendor, on an IoT project designed to speed ingestion of telemetry data using Magic’s integration and intelligence engine.

The partners said the sensor data integration effort targets IoT applications such as predictive maintenance and anomaly detection, where data is ingested, prepped, correlated and merged. Data is then transferred from the GigaSpaces platform to Magic’s xpi engine, which serves as the orchestrator for predictive and other analytics tasks.

Along with the IoT partnership and combined transactional and analytical processing, the Spark-powered in-memory computing platform also offers machine learning and geospatial processing capabilities along with multi-tier data storage for streaming analytics workloads, the company said.

Ali Hodroj, GigaSpaces’ vice president of products and strategies, said the platform upgrade responds to the growing enterprise requirement to integrate applications and data science infrastructure.

“Many organizations are simply not large enough to justify spending valuable time, resources and money building, managing, and maintaining an on-premises data science infrastructure,” Hodroj asserted in a blog post. “While some can migrate to the cloud to commoditize their infrastructure, those who cannot are challenged with the high costs and complexity of cluster-sprawling big data deployments.”

To reduce latency, GigaSpaces and others are embracing Spark 2.1 fast data analytics, which was released late last year. (Spark 2.2 was released earlier this month.)

Vendors such as GigaSpaces are offering tighter collaboration between DevOps and data science teams via a unified application and analytics platform. Others, including IBM, are leveraging Spark 2.1 for Hadoop and stream processing distributions.

IBM (NYSE: IBM) said this week the latest version of its SQL platform targets enterprise requirements for data lakes by integrating Spark 2.1 on the Hortonworks Data Platform, the company’s Hadoop distribution. It also connects with Hortonworks DataFlow, the stream-processing platform.

Recent items:

IBM Bolsters Spark Ties with Latest SQL Engine

In-Memory Analytics to Boost Flight Ops For Major US Airline

The post GigaSpaces Closes Analytics-App Gap With Spark appeared first on Datanami.

Read more here: www.datanami.com/feed/


Internet Trends Report: Five ITSM Takeaways

By Industry Perspectives

Aparna TA is a Product Analyst for ManageEngine.

There’s an interesting time ahead for ITSM as it moves into the cloud and evolves to support a mobile workforce. Help desks will have to adapt as end users’ expectations of ITSM solutions start to mirror those of consumer applications.

Every year, Mary Meeker, a partner at venture capital firm Kleiner Perkins Caufield & Byers, produces a report on global internet trends. This year’s report, like its predecessors, is highly sought after by technology companies and enthusiasts: it sets the stage for the next big thing and sheds light on consumer technology adoption.

This year it covers a wide range of topics — from the increasing measurability of online advertising and the growing internet base in China to technological advancements in healthcare services. While the report doesn’t explicitly talk about the impact of global trends on ITSM, it leaves a lot of breadcrumbs that set expectations for the future of the ITSM industry. Here are a few things that might be relevant to IT service management.

  1. Mobile is driving the consumerization of enterprise IT – People spent more than twice as much time on mobile, desktop and other connected devices in 2016 as they did in 2008. As the wall between personal life and work wears down, customer expectations on an enterprise level are mirroring those of consumer apps.

IT service desk vendors are starting to adopt a mobile-first strategy to stay relevant to the mobile workforce. The ability to put a service desk in the palms of end users helps to drastically increase self-service adoption and improve user satisfaction.

  2. The cloud is accelerating change across enterprises – Cloud adoption has increased to new heights and is creating opportunities for new methods of software delivery such as APIs, microservices, elastic databases, etc. The shift from costlier perpetual licenses to cheaper subscription models has contributed to the rapid increase in cloud adoption, as the time and cost of setting up a cloud infrastructure are minimized.

As customers start to move toward a cloud-only IT infrastructure, SaaS has become a de facto model for many new vendors. With simple integrations, ITSM will be able to adopt newer technologies more quickly now than ever. The cloud also provides mobility and has the potential to take service desk operations beyond four walls to remote locations across the globe.

  3. Rising security concerns are dictating the need for more compliance – As enterprises adopt cloud infrastructure, they are more wary about their applications’ security and compliance. The increased adoption of public and private clouds has led to an exponential increase in the severity of malicious threats.

Cloud vendors are warming up to new data protection and security policies, especially after the EU’s announcement of the General Data Protection Regulation (GDPR). This illustrates the greater need for the ITSM community and cloud vendors to work together to keep vulnerabilities at bay.

  4. Gaming can help optimize learning and engagement – There are about 2.6 billion gamers now compared to just 100 million in 1997, and gaming is still evolving. Gaming provides an intuitive interface to learn, and many organizations now use gamification to provide an engaging learning platform.

Many help desks have already implemented gamification in their tools to increase IT technician productivity. This can also be used to align IT technicians’ day-to-day activities to business goals, thus creating a sense of accomplishment. Gaming can also be used to help end users adopt self-service portals and IT service desks faster.

  5. Social media can provide an opportunity to improve customer service – In a survey by Ovum, more than 60 percent of organizations expressed the need to provide easier access to online support channels. The growth of new tools like APIs and browser extensions has paved the way for innovative service delivery models which integrate enterprise applications (such as help desks) with consumer applications, including social media. Many companies are actively using social media as a channel to address customer concerns and resolve issues.

Many help desks have built-in integrations with social channels that automatically convert tweets or posts into tickets, thus utilizing popular social channels to widen the reach of online support. Social media channels provide a unique opportunity to go the extra mile to delight customers while gaining trust and brand equity.
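
A hedged sketch of what such an integration reduces to: take the social post payload a webhook delivers and map it onto a ticket for the help desk's REST API. The field names and the `helpdesk.example.com` endpoint are invented for illustration; real help desk and social-media APIs (and their authentication) differ.

```python
import json
import urllib.request

def tweet_to_ticket(tweet: dict) -> dict:
    """Map a webhook-delivered social post onto a generic ticket shape."""
    return {
        "subject": f"Twitter mention from @{tweet['user']}",
        "description": tweet["text"],
        "channel": "social",
        "requester": tweet["user"],
    }

def create_ticket(ticket: dict) -> None:
    # Hypothetical REST endpoint; a real help desk API will differ.
    req = urllib.request.Request(
        "https://helpdesk.example.com/api/v1/tickets",
        data=json.dumps(ticket).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("created ticket:", resp.status)

ticket = tweet_to_ticket(
    {"user": "jdoe", "text": "Can't log in to the portal since this morning"})
print(json.dumps(ticket, indent=2))
# create_ticket(ticket)  # would POST to the (hypothetical) help desk API
```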

These are just some of the key internet trends that coincide or overlap with the trajectory of ITSM and related technologies. While this report focuses on major internet trends, there are several technologies like AI, machine learning, analytics and IoT that are expected to be big game changers in the future of ITSM.

Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

Read more here: datacenterknowledge.com/feed/
