What’s Keeping Deep Learning In Academia From Reaching Its Full Potential?

By Scott Clark

Deep learning is gaining a foothold in the enterprise as a way to improve the development and performance of critical business applications. It first gained traction at companies that optimize advertising and recommendation systems, such as Google, Yelp, and Baidu. But the space has seen a surge of innovation over the past few years, driven by open-source deep learning frameworks such as TensorFlow, MXNet, and Caffe2 that democratize access to powerful deep learning techniques for companies of all sizes. Additionally, the rise of GPU-enabled cloud infrastructure on platforms like AWS and Azure has made it easier than ever for firms to build and scale these pipelines quickly and cheaply.

Now, its use is extending to fields like financial services, oil and gas, and many other industries. Tractica, a market intelligence firm, predicts that deep learning enterprise software spending will surpass $40 billion worldwide by 2024. Companies that handle large amounts of data are tapping into deep learning to strengthen areas like machine perception, big data analytics, and the Internet of Things.

In academic fields outside of computer science, though, from physics to public policy, deep learning is being rapidly adopted and could be hugely beneficial, yet it is often used in a way that leaves performance on the table.

Where academia falls short

Getting the most out of machine learning or deep learning frameworks requires optimizing the configuration parameters that govern these systems: the tunable parameters that must be set before any learning actually takes place. Finding the right configuration can yield improvements of several orders of magnitude in accuracy, performance, or efficiency. Yet most professors and students who use deep learning outside of computer science, where these techniques are developed, rely on one of three traditional, suboptimal methods to tune these parameters: manual search, trying to optimize high-dimensional problems by hand or intuition via trial and error; grid search, building an exhaustive set of possible parameter combinations and testing each one individually at great cost; or randomized search, the most effective of the three in practice, but unfortunately the equivalent of trying to climb a mountain by jumping out of an airplane and hoping you eventually land on the peak.
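To make the contrast concrete, here is a minimal sketch of grid search versus randomized search on a toy two-parameter problem. The objective function is a stand-in for an expensive training run, and the parameter names and ranges are illustrative assumptions, not taken from the article.

```python
import itertools
import random

# Toy stand-in for validation accuracy: peaks at lr=0.1, reg=0.01.
# A real objective would train a model and score it on held-out data.
def objective(lr, reg):
    return -((lr - 0.1) ** 2 + (reg - 0.01) ** 2)

# Grid search: exhaustively evaluate every combination; the cost grows
# exponentially with the number of parameters.
grid = list(itertools.product([0.001, 0.01, 0.1, 1.0],
                              [0.0001, 0.001, 0.01, 0.1]))
best_grid = max(grid, key=lambda p: objective(*p))

# Randomized search: spend the same budget on random log-uniform samples.
random.seed(0)
samples = [(10 ** random.uniform(-3, 0), 10 ** random.uniform(-4, -1))
           for _ in range(len(grid))]
best_random = max(samples, key=lambda p: objective(*p))

print("grid best:   ", best_grid)
print("random best: ", best_random)
```

In high dimensions randomized search often beats grid search because it does not waste evaluations on redundant values of unimportant parameters, but neither method learns anything from previous results.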

While these methods are easy to implement, they often fall short of the best possible solution and waste computational resources that are scarce in academic settings. Experts often do not apply more advanced techniques because tuning is orthogonal to their core research, and finding, administering, and configuring more sophisticated optimization methods consumes valuable expert time. This friction can also push experts toward less powerful but easier-to-tune methods, or deter them from attempting deep learning at all. Researchers have used these traditional methods for years, but they are rarely the most effective way to conduct research.

The need for Bayesian optimization

Bayesian optimization automatically fine-tunes the parameters of these algorithms and machine learning models without needing access to the underlying data or model internals. The process probes the underlying system, observes the resulting outputs, and uses how previous configurations performed to decide the most promising configuration to try next. This helps researchers and domain experts arrive at the best possible model and frees up time to focus on more pressing parts of their research.
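The probe-observe-decide loop described above can be sketched in a few dozen lines. The example below is a deliberately minimal, illustrative implementation, not SigOpt’s or any production system: it fits a Gaussian-process surrogate (RBF kernel) to the configurations tried so far and picks the next one by expected improvement. The 1-D objective, kernel length-scale, and iteration budget are all assumptions chosen for readability.

```python
import math
import numpy as np

def loss(x):
    # Stand-in for an expensive experiment (e.g. training a model).
    return np.sin(3 * x) + 0.2 * x ** 2

def rbf(a, b, length_scale=0.3):
    # Squared-exponential kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def gp_posterior(x_obs, y_obs, x_query):
    # Gaussian-process posterior mean and std dev at the query points.
    K_inv = np.linalg.inv(rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs)))
    K_s = rbf(x_obs, x_query)
    mu = K_s.T @ K_inv @ y_obs
    var = np.clip(1.0 - np.einsum('ij,ik,kj->j', K_s, K_inv, K_s), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    # How much better than `best` (a running minimum) each candidate looks.
    z = (best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    return (best - mu) * cdf + sigma * pdf

rng = np.random.default_rng(0)
x_obs = rng.uniform(-2, 2, size=3)       # a few random initial probes
y_obs = loss(x_obs)
candidates = np.linspace(-2, 2, 400)     # configurations we may try

for _ in range(10):                      # 10 sequential "experiments"
    mu, sigma = gp_posterior(x_obs, y_obs, candidates)
    nxt = candidates[np.argmax(expected_improvement(mu, sigma, y_obs.min()))]
    x_obs = np.append(x_obs, nxt)
    y_obs = np.append(y_obs, loss(nxt))

print(f"best x = {x_obs[y_obs.argmin()]:.3f}, loss = {y_obs.min():.3f}")
```

Each iteration spends one expensive evaluation where the surrogate predicts the best trade-off between a promising mean and high uncertainty, which is what lets Bayesian optimization outperform grid and random search under a fixed budget.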

Bayesian optimization has already been applied outside of deep learning to other problems in academia, from gravitational lensing to polymer synthesis to materials design and beyond. Additionally, a number of professors and students at universities like MIT, the University of Waterloo, and Carnegie Mellon are already using this method to optimize their deep learning models and conduct life-changing research. George Chen, assistant professor at Carnegie Mellon’s Heinz College of Public Policy and Information Systems, uses Bayesian optimization to fine-tune the machine learning models in his experiments. His research in medical image analysis automates the process of locating a specific organ in the human body, which could help prevent unnecessary procedures in patients with congenital heart defects and other conditions. Before applying Bayesian optimization, Chen had to guess and check the best parameters for his models. Now he can automate the process and receive updates on his mobile phone, freeing him to complete other necessary parts of the research process.

Unfortunately, the vast majority of researchers leveraging deep learning outside of computer science are not using these powerful techniques. This costs them time and resources, and can even prevent them from achieving their research goals with deep learning at all. When experts fall back on guess-and-check search through a high-dimensional parameter space, they spend valuable computational resources on suboptimal models and settle for suboptimal results. Deploying Bayesian optimization can accelerate the research process, free up time for other important tasks, and unlock better outcomes.

Scott Clark is the co-founder and CEO of SigOpt, which provides its services for free to academics around the world. He has been applying optimal learning techniques in industry and academia for years, from bioinformatics to production advertising systems. Before SigOpt, Scott worked on the Ad Targeting team at Yelp, leading the charge on academic research and outreach with projects like the Yelp Dataset Challenge and the open sourcing of MOE. Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell University, and BS degrees in Mathematics, Physics, and Computational Physics from Oregon State University. Scott was chosen as one of Forbes’ 30 Under 30 in 2016.

Related Items:

Getting Hyped for Deep Learning Configs

Dealing with Deep Learning’s Big Black Box Problem

Machine Learning, Deep Learning, and AI: What’s the Difference?

The post What’s Keeping Deep Learning In Academia From Reaching Its Full Potential? appeared first on Datanami.

Read more here:: www.datanami.com/feed/

The post What’s Keeping Deep Learning In Academia From Reaching Its Full Potential? appeared on IPv6.net.

Interoute grows its global networked cloud platform in Asia Pacific

By Zenobia Hegde

Interoute, a global cloud and network provider, has opened a new point-of-presence (PoP) in Sydney, Australia. The PoP will host the 18th Interoute Virtual Data Centre (VDC) globally, the company’s third in the Asia Pacific region.

The announcement comes in response to customer demand for expanded global cloud coverage in the Asia Pacific region and extends the reach of Interoute’s new Edge SD-WAN services. Interoute Edge is used by customers to accelerate their enterprise data traffic around the world.

Using the global Interoute Cloud Fabric, public and private access networks blend into one dynamic digital platform in which application traffic travels over the fastest available routes. The Interoute Cloud Fabric binds together all Interoute VDC cloud zones, co-location facilities, PoPs, and third-party cloud providers via Interoute’s ultra-low-latency private network backbone.

Mark Lewis, EVP Products and Development at Interoute, said: “Sydney is a global city and businesses are demanding better local connectivity and compute capability to support their growth. As we continue to grow our footprint in the Asia Pacific region, opening a PoP in Sydney is an important step to providing businesses in this theatre with faster, more reliable and secure access to cloud and IT services.”

The Sydney location will be a core PoP on the Interoute network with local peering and resilient connectivity to Interoute’s Singapore and Hong Kong locations. It will host the 18th Interoute VDC global IaaS zone, giving customers the ability to launch virtual machines as well as Interoute Edge gateways.

With these services in Sydney set to go live in the first half of 2018, Interoute’s global network will expand to connect 127 major cities across 30 countries, with 18 global cloud zones. Interoute offers customers a global ICT infrastructure platform that supports the integration of enterprise legacy IT with digital environments.

The post Interoute grows its global networked cloud platform in Asia Pacific appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

PTC to accelerate customers’ connected service strategy with launch of ThingWorx Asset Advisor

By Zenobia Hegde

At PTC Forum Europe in Stuttgart, Germany, PTC announced the launch of the ThingWorx® Asset Advisor app for service, designed to accelerate its customers’ service transformation initiatives. ThingWorx Asset Advisor for service enables remote monitoring and servicing of assets deployed in the field.

Built on PTC’s leading ThingWorx industrial innovation platform, ThingWorx Asset Advisor for service is a role-based app for service managers and technicians that is fast to deploy, scalable, flexible, and customisable. It gives visibility into connected assets with role-relevant information, offering insight into each asset’s operating condition, alerts on operating anomalies, and remote service for the connected assets.

ThingWorx Asset Advisor for service follows PTC’s launch of the ThingWorx Asset Advisor app for manufacturing this past June at LiveWorx®17 and continues PTC’s commitment to helping industrial companies simplify their digital transformation efforts.

As industrial companies focus on improving efficiency and reducing downtime of their operations, being able to connect and monitor assets to capture critical alerts in real-time is key to an effective connected service strategy. PTC has a long history of enabling customers to connect assets and remotely monitor, diagnose, and resolve service issues.

Customers adopting a connected service strategy helped PTC’s IoT business outpace the market growth rate of 30 to 40 percent in fiscal year 2017. By being able to remotely monitor and service connected assets, companies like Elekta, Diebold, Sysmex, and McKinley Elevator are achieving first-time fix rates 30 percent above industry averages, improving mean time to repair by 6X, and increasing equipment uptime by 20 percent.

The ThingWorx Asset Advisor app enables customers to accelerate the time to value by providing them with an even easier and faster path to connected service capabilities.

“The capabilities enabled through ThingWorx technology will help us deliver the machine uptime required by our customers in production environments,” said Antonio Lopez, vice president, global customer services, 3D Systems.

“Connected Service is a key use case for digital transformation by asset owners and operators. In fact, IDC believes that by 2020, 50% of global OEMs with connected service offerings will have incorporated augmented service execution and/or remote management, thus improving service margins by up to 30%. Using an IoT platform to enable this capability is a critical ingredient to success,” said Heather Ashton, research manager, Service Innovation and Connected Product Strategies, IDC.

The post PTC to accelerate customers’ connected service strategy with launch of ThingWorx Asset Advisor appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Axway empowers organisations to meet evolving customer expectations

By Zenobia Hegde

Axway, a catalyst for digital transformation, announced significant enhancements across AMPLIFY™, a data integration and engagement platform, that enable business managers, IT professionals and developers to innovate faster, increase efficiency and reduce business risk.

The latest release of Axway AMPLIFY includes more than 40 new innovations that span API management, file transfer, application development and analytics. With the new innovations, organisations can quickly and easily unlock business value from a vast array of data sources to transform the customer experience.

The new innovations include significant expansions to the AMPLIFY Marketplace, a toolbox where customers can find, try, buy, and manage subscriptions. These offerings act as service accelerators, making it easier for platform users to discover, promote, and consume composable integration and engagement assets.

These assets include more than 30 connectors that enable applications to access data from different sources ranging from popular CRM systems to marketing automation, databases and Big Data analytics. In addition, new support for Microsoft Azure enhances choice, flexibility, agility and security by enabling organisations to benefit from the productivity, intelligence and hybrid capabilities of the trusted Azure cloud platform.

“Organisations today are experiencing a tsunami of technological changes that have resulted in a dramatic and rapid shift in what it takes to deliver a great customer experience, forcing organisations to rethink traditional customer engagement strategies,” said Vince Padua, VP of Platform and Products, Axway. “At Axway, we are focused on helping organisations embrace these changes and transform the customer experience. The latest innovations in AMPLIFY help deliver on that vision by enabling organisations to innovate faster, reduce risk and drive efficiencies across their business.”

The latest innovations within AMPLIFY enable IT professionals, business managers and developers to accelerate delivery of digital services by making integration of data sources and aggregating services across hybrid platforms faster and easier.

New innovations include:

API management innovations: Transform the creation and deployment of APIs to meet the highest-level security needs of enterprise customers. New API Builder Orchestration capabilities deliver collaboration and visual composition features that simplify building and orchestrating API integration and connectivity. Enhanced API Manager capabilities enable organisations to auto-scale resources up and down to better manage changing API requirements, and a new Connector Builder simplifies building connectors to use with APIs. Further, a new API Central Service makes it possible to harness the collective strength of an entire ecosystem of partners, customers, developers and suppliers. A new open-source MQTT connector simplifies and secures working with the IoT.
File transfer innovations: The latest release of AMPLIFY includes greater integration with Axway Syncplicity, a leading enterprise file sync and share (EFSS) solution that provides users with the experience and tools they need for secure collaboration. Enhancements include new data loss prevention and a new Syncplicity connector that enables organisations working with Axway MFT to deliver a variety of hybrid MFT use cases.
Application development innovations: New mobile capabilities enable organisations to create exceptional and immersive cross-platform native mobile apps. Titanium SDK includes new Hyperloop technology and support for the latest versions […]

The post Axway empowers organisations to meet evolving customer expectations appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Gemalto and Huawei claim a ‘cost-effective’ narrowband solution for mass market IoT

By Zenobia Hegde

To help device manufacturers meet a growing demand for long-lasting low-power NarrowBand (NB) IoT modules, Gemalto and Huawei – via Huawei’s semiconductor arm, HiSilicon – are working together to develop the next generation of modules that combine an extra level of security with very low power consumption. By combining the expertise of both companies, these NB-IoT modules will help manufacturers reduce the cost and size of their devices and extend device battery life to up to ten years.

NB-IoT has been developed to address lower bit rates and lower cost segments, and works virtually anywhere. It offers ultra-low power consumption, enabling devices to be battery operated for periods of up to ten years. Applications include smart parking sensors, intruder and fire alarms, personal healthcare appliances, tracking devices, and street lamps, to name a few. According to ABI Research, NB-IoT modules connecting objects to networks are forecast to represent over 20% of all cellular shipments by 2021.

“2017 is the year of commercial NB-IoT rollouts for us, and we will be building 30 such networks in 20 countries worldwide by the end of the year. Huawei has been a major player in this arena, and we continue to capitalise on this vast opportunity,” said Xiong Wei, president of LTE solutions, Huawei. “We look to supply the market with solutions that provide stable connectivity, low energy consumption, and cost efficiency. The network roll-out will now come with enhanced integration and flexibility thanks to this collaboration with Gemalto.”

“The combination of our expertise in IoT cellular connectivity and digital security with Huawei’s high-performance NB-IoT chipsets will help device manufacturers and service providers take the plunge into cellular IoT mass deployment, thanks to a standardised solution,” said Suzanne Tong-Li, SVP Greater China and Korea for Mobile Services and IoT and China president, Gemalto. “Our collaboration simplifies the implementation of NB-IoT projects, combining solid security and flexibility.”

Comment on this article below or via Twitter: @IoTNow_OR @jcIoTnow

The post Gemalto and Huawei claim a ‘cost-effective’ narrowband solution for mass market IoT appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/
