AI Investment Up, ROI Remains Iffy

By George Leopold

Real-world applications for artificial intelligence are emerging in areas such as boosting the productivity of dispersed workforces. However, early adopters are still struggling to determine the return on initial AI investments, according to a pair of new vendor reports.

Red Hat released research this week indicating that AI deployments have yielded some tangible results in areas such as transportation and utilities that rely heavily on field workers. A separate forecast released Wednesday (Jan. 17) by Narrative Science found growing enterprise adoption of AI technologies but little in the way of investment returns.

Chicago-based Narrative Science, which sells natural language generation technology, found that 61 percent of the companies it surveyed deployed AI technologies in 2017. Early deployments focused on business intelligence, finance and product management. “In 2018, the focus will be on ensuring enterprises get value from their AI investments,” company CEO Stuart Frankel noted in releasing the survey.

Early adopters are also encountering many of the hurdles associated with a “first mover” advantage. “More and more organizations are deploying AI-powered technologies, with goals such as improving worker productivity and enhancing the customer experience that are not only laudable, but achievable,” Narrative Science concluded. “A focus on realistic deployment timeframes and accurately measuring the effectiveness and [return on investment] of AI is critical to keeping the current momentum around the technology moving forward.”

Meanwhile, the Red Hat (NYSE: RHT) survey also found an uptick in AI deployments, with 30 percent of respondents planning to implement AI for “field service workers” this year. Other applications include predictive analytics, machine learning and robotics.

While issues such as securing data access and a lack of standards persist, Red Hat found that field workers are “now at the forefront of digital transformation where artificial intelligence, smart mobile devices, the Internet of Things (IoT) and business process management technologies have created new opportunities to better streamline and transform traditional workflows and workforce management practices.”

A predicted 25 percent increase in AI investment through November 2018 is seen transforming field service operations, Red Hat noted in a blog posted on Thursday (Jan. 18). Early movers cited increased field worker productivity (46 percent), streamlined field operations (40 percent) and improved customer service (37 percent) as the top business factors for investing in AI.

Along with a lack of standards, respondents said deployment challenges include keeping pace with technological change and integrating AI deployments with legacy systems. The survey notes that industry groups are focusing on standards and interoperability among IoT devices along with data security while improving integration technologies.

Earlier vendor surveys have also identified barriers to implementation, ranging from a lack of IT infrastructure suited to AI applications to a lack of AI expertise. For instance, a survey released last fall by data analytics vendor Teradata Corp. (NYSE: TDC) found that 30 percent of those it polled said greater investments would be required to expand AI deployments.

Despite the promise and pitfalls of AI—ranging from freeing workers from drudgery to displacing those same workers—early AI deployments appear to underscore the reality that the technology remains a solution in search of a problem.

Recent items:

AI Seen Better Suited to IoT Than Big Data

AI Adopters Confront Barriers to ROI

The post AI Investment Up, ROI Remains Iffy appeared first on Datanami.

Read more here:: www.datanami.com/feed/

Timing-Architects becomes a part of Vector Informatik

By Zenobia Hegde

Timing-Architects Embedded Systems GmbH (TA) has been acquired by Vector Informatik GmbH. The Stuttgart-based specialist in automotive embedded electronics says it will now offer its customers a more comprehensive portfolio in the field of multi-core real-time systems.

The two IT companies have been cooperating for a number of years. Their relationship was strengthened in 2016, when Vector acquired a 49% share in Timing-Architects. This cooperation enabled the companies to optimise the interplay between the TA Tool Suite and MICROSAR, Vector’s multi-core-capable AUTOSAR basic software. In addition, Vector successfully marketed the TA Tool Suite throughout the world.

For Dr. Michael Deubzer, managing director and co-founder of Timing-Architects, the integration of TA into Vector was a logical step: “It’s good to see that the TA Tool Suite will continue to be developed with a strong focus on AUTOSAR. I look forward to continuing on this path with the TA Team and Vector.”

Dr. Thomas Beck, managing director of Vector Informatik GmbH, adds: “With its expert knowledge, Timing-Architects has created an ideal support tool for multi-core systems. Thanks to the integration of TA, ECU developers and vehicle manufacturers will have the advantage of a comprehensive solution from a single source.”

TA’s team of nearly 40 people will continue to expand the TA Tool Suite and integrate it with Vector tools like DaVinci Configurator Pro and PREEvision. The team will also work on ways to speed up the integration of software in high-performance real-time platforms.

Timing-Architects will remain at its location at TechBase Regensburg, near Vector’s office in Regensburg.

Due to their higher computing power, multi-core processors offer ideal conditions for innovative software applications in vehicles, such as advanced driver assistance systems (ADAS). However, when applications run on multiple cores, runtime losses are incurred by data communication between the cores.

For time-critical applications, the challenge is to find an optimal distribution of the application software. With the TA Tool Suite, a network of multi-core ECUs can be analysed and optimised consistently, resulting in a reliable and efficient system. In line with the AUTOSAR vision, developers will now have new degrees of freedom in distributing software functions across real-time multi-core processors.
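To illustrate the kind of trade-off involved (a toy Python sketch, not the TA Tool Suite’s actual algorithm), the snippet below greedily assigns tasks to cores, weighing the communication volume saved by co-locating communicating tasks against keeping core load balanced. All task names, runtimes and communication volumes are invented for illustration.

# Toy greedy placement of software tasks onto ECU cores: co-locating tasks
# that communicate heavily saves inter-core traffic, at the cost of load
# balance. All names, runtimes and communication volumes are invented.

tasks = {"sensor_fusion": 30, "path_planning": 25, "diagnostics": 10, "logging": 5}
comm = {("sensor_fusion", "path_planning"): 20, ("diagnostics", "logging"): 4}

NUM_CORES = 2
COMM_WEIGHT = 2   # how strongly saved communication outweighs load balance
load = [0] * NUM_CORES
placement = {}

def saved_comm(task, core):
    """Communication volume the task saves by joining its partners on this core."""
    saved = 0
    for (a, b), volume in comm.items():
        partner = b if a == task else a if b == task else None
        if partner is not None and placement.get(partner) == core:
            saved += volume
    return saved

# Place the heaviest tasks first; score each core by saved traffic minus load.
for task, runtime in sorted(tasks.items(), key=lambda kv: -kv[1]):
    best = max(range(NUM_CORES),
               key=lambda c: COMM_WEIGHT * saved_comm(task, c) - load[c])
    placement[task] = best
    load[best] += runtime

print(placement)          # with these weights, communicating pairs co-locate
print("core loads:", load)

With these particular weights the communicating pairs end up on the same cores; raising the load penalty instead pushes the heuristic toward balance, which is exactly the tension a timing analysis tool has to resolve systematically.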

Comment on this article below or via Twitter: @IoTNow_OR @jcIoTnow

The post Timing-Architects becomes a part of Vector Informatik appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Saving cities US$5 trillion with smart, sharing technologies

By Zenobia Hegde

Global urbanisation is accelerating rapidly. But while living in cities offers both social and economic opportunities, the rising cost of living threatens to increase social inequality, slow economic growth, and raise levels of crime.

Smart cities, and the technologies that underpin them, are hailed as a significant solution to this problem, and are set to reduce costs for governments, citizens and enterprises alike. In fact, a recent report published by ABI Research, in partnership with Chordant and CA Technologies, reveals smart city and IoT technologies have the potential to save governments, enterprises and citizens globally over $5 trillion (€4.09 trillion) by 2022, says Jim Nolan, executive vice president, Chordant at InterDigital.

Specifically, it is new sharing and service economy paradigms, and the “Internet of Things” along with artificial intelligence (AI) and automation, that will play a leading role in driving these cost savings.

Cutting costs for governments

Governments can benefit tremendously from the implementation of IoT technology and sharing economy business models in energy, water utilities, transportation, and crime and vandalism.

Energy savings are perhaps the first, and most obvious, cost benefit of IoT and smart city technology. Turning street lights into smart, connected systems with intelligent on/off cycles, for example, could yield a 30% cost saving for governments.

When it comes to water utilities, advanced leak detection systems can drive direct cost savings by removing the need for manual inspection, while opportunity cost savings can be made through water waste management and waste prevention systems. These cost savings, in turn, help to reduce end-user prices.

Transportation is a major cost centre in government budgets, but adding smart technology such as electronic toll collection (ETC) and vehicle-to-infrastructure (V2I) technology, as well as intelligent traffic light systems, can optimise the use of existing road capacity.

In regard to government services such as waste collection, mobile resource management (MRM) technology can dispatch, manage and monitor field workers, while the deployment of smart garbage bins can enable real-time, remote fill-level monitoring, and therefore the timely dispatch of garbage collection trucks. This isn’t a fantasy; such bins are already in use in Dubai. This enables waste collection fleets to run more efficiently, with fewer trucks on the road. In fact, this form of smart waste collection has the potential to deliver cost savings of 30%.
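As a minimal Python sketch of that dispatch logic (a hypothetical illustration, not any city’s actual system), bins report fill levels and only those above a threshold are routed to a truck, fullest first:

# Hypothetical fill-level readings and threshold; real systems would pull
# these from the bins' sensors over a network.
from dataclasses import dataclass

@dataclass
class SmartBin:
    bin_id: str
    fill_percent: float  # latest reading from the bin's fill-level sensor

DISPATCH_THRESHOLD = 80.0  # only visit bins that are nearly full

def plan_collection_route(bins):
    """Return the bins worth visiting, fullest first, so trucks skip the rest."""
    due = [b for b in bins if b.fill_percent >= DISPATCH_THRESHOLD]
    return sorted(due, key=lambda b: b.fill_percent, reverse=True)

readings = [
    SmartBin("bin-001", 92.0),
    SmartBin("bin-002", 35.0),   # skipped: well below threshold
    SmartBin("bin-003", 81.5),
]

for b in plan_collection_route(readings):
    print(f"dispatch truck to {b.bin_id} ({b.fill_percent:.0f}% full)")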

Finally, AI-based automation for surveillance cameras, along with data optimisation, can reduce the costs associated with monitoring and analysing video footage in support of crime reduction. AI technology can also be used to complement surveillance cameras with crowd sourced intelligence such as data captured from social sites, as well as smartphone footage from citizens.

By taking advantage of these different technologies, city governments in megacities (metropolitan areas with total populations in excess of ten million people) could save up to $58 billion (€47.40 billion) annually worldwide.

Affordable services for citizens

Smart city technologies are not only key for driving cost savings for governments – they play just as important a role in reducing costs for citizens. After housing, mobility represents the second-largest item in family […]

The post Saving cities US$5 trillion with smart, sharing technologies appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Europe falls behind US in adoption of AI-led security, as half of firms surveyed say IoT is making it harder to stay secure

By Zenobia Hegde

IoT (Internet of Things) strategies are hampering security management, with almost half (47%) of executives in a new survey saying it has become more difficult to stay secure in the last year. This is one of the key findings of the 2017-2018 Global Application and Network Security Report, just released by Radware®, a provider of cyber security and application delivery solutions.

Adding to the problem is the complex issue of who is responsible for IoT security. When asked who needs to take responsibility, security executives showed no clear consensus. Responses pinned responsibility on everyone from the organisations managing the network to the device manufacturers, but the majority (56%) said it was down to the consumers using these devices.

Andrew Foxcroft, regional director for Radware UK, Ireland and Nordics, says it’s time companies closed the debate and assumed responsibility themselves: “Everything that is attached to the network is a threat to security. The longer we debate who is responsible, the more advantage we hand to the hackers, who will do everything they can to exploit weaknesses.

“Governments of the world are taking more and more interest in IoT and if companies fail to be decisive, take responsibility and collaborate on security, legislation will make the decision for them – look at Germany’s decision to ban smart toys.

Lazy assumption

“It’s lazy to assume consumers will think about security. We already know people find it challenging to keep up with software updates and are unlikely to think through the risks regardless of the terms and conditions they sign up to. The network is only as strong as its weakest link and the sooner companies realise IoT devices are the weakest link, and that the buck will always stop with them, the better.”

The study also found that the percentage of companies reporting financially motivated cyber-attacks has doubled over the past two years, with 50% of surveyed companies experiencing a cyber-attack motivated by ransom in the past year. As the value of bitcoin and other cryptocurrencies – often the preferred form of payment among hackers – has appreciated, ransom attacks provide an opportunity for hackers to cash out for lucrative gains months later.

Cryptocurrencies help hackers

“The rapid adoption of cryptocurrencies and their subsequent rise in price has presented hackers with a clear upside that goes beyond cryptocurrencies’ anonymity,” adds Foxcroft. “Paying a hacker in these situations not only incentivises further attacks, but it provides criminals with the vital funds they need to continue their operations.”

The number of companies that reported ransom attacks – in which hackers use malware to encrypt data, systems, and networks until a ransom is paid – surged in the past year, increasing 40% from the 2016 survey. Companies don’t expect this threat to go away in 2018 either. One in four executives (26%) see ransom as the largest threat to their business sector in the coming year.

“Criminals used various exploits and hacks this year to encrypt vital systems, steal intellectual property, and shut down business operations, all with ransom demands attached to these actions,” Foxcroft said. […]

The post Europe falls behind US in adoption of AI-led security, as half of firms surveyed say IoT is making it harder to stay secure appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

More than 40% of U.S. broadband households plan to buy a smart home device in next 12 months

By Parks Associates

Parks Associates recently announced new research showing 41% of U.S. broadband households plan to purchase a smart home product in the next 12 months, including 27% with high purchase intentions. The most popular devices include smart smoke/CO detectors, thermostats, and lightbulbs. The international research firm, which predicts U.S. broadband households will buy almost 55 million […]

The post More than 40% of U.S. broadband households plan to buy a smart home device in next 12 months appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

On the Road Again: Highlights from ARIN’s Outreach

By Susan Hamlin

We are celebrating our 20th anniversary this December and have found ourselves reflecting over these last two wonderful decades. One of the most important organizational objectives we have here at ARIN is our community outreach efforts. We make it a priority to reach out to you, our community, to provide the tools and advice you need when it comes to Internet number resources. We have hosted and attended an incredible number of events over the years, and thought it would be fun to look back and share where we’ve been and what we’ve accomplished with our community.

What kind of outreach do we do?

Each year, we host or attend a number of different events. Twice annually we hold our Public Policy and Members Meetings in the second and fourth quarters in various locations throughout our region. These meetings provide an opportunity for the entire Internet community to engage in policy discussions, network with colleagues, and attend workshops and tutorials. Everyone with an interest in Internet number resources is welcome to attend the Public Policy and Members Meetings, and registration is free!

We also host many ARIN on the Road events around our region throughout the year. These free events provide local communities with the latest news from ARIN, covering everything from requesting IP addresses and Autonomous System Numbers (ASNs) to the status of IPv6 adoption, to current policy discussions, and updates about our technical services. Did you know that you can request an ARIN on the Road in your city, town, or metro area? I encourage you to send an email to info@arin.net if you believe your local Internet community would be interested in participating.

While we do discuss IPv6 at ARIN on the Road, that is not the only way we continue to spread the word in support of IPv6 deployment. Our message has evolved since we started actively promoting IPv6 in 2007, when we set up our TeamARIN site and began exhibiting at major industry shows. Today we exhibit at fewer tradeshows, but we do send speakers to many events across a wide range of industries, where we encourage organizations to prepare for the future by enabling IPv6 on their websites.

Additionally, members of our team attend community events around the world. Whether it be other RIR meetings, Internet Governance events, or partners such as NANOG or CARIBNOG, we believe it’s important to show our support to the wider Internet community. For a full list of events we host or attend, check out our events page.

Where was our first meeting?

Our first members meeting took place in Chantilly, Virginia on 20 March 1998. Since then, we’ve held a total of 40 meetings over the last 20 years!

Where was our first AOTR?

Our first ARIN on the Road event was held in Phoenix, Arizona on 17 August 2010. Since then we have held an additional 46 AOTR events and counting!

How can you get involved?

Phew! As you can see, we’ve done a lot over the last 20 years, but we’ve only just begun. We plan to continue expanding our outreach efforts around our region, including a continued focus on the Caribbean, and it is all possible thanks to our wonderful community.

There are so many ways you can continue to get involved with ARIN, including:

  • Subscribe to our mailing lists to discuss Internet number resource policy development and keep up with ARIN services and activities
  • Attend an ARIN meeting – We have great remote participation capabilities if needed
  • Don’t forget you can apply for a fellowship! We are accepting fellowship applications to ARIN 41 in Miami 15-18 April 2018
  • Attend our ARIN on the Road events
  • Member organizations, get involved in our election process

The post On the Road Again: Highlights from ARIN’s Outreach appeared first on Team ARIN.

Read more here:: teamarin.net/feed/

Bolt IoT platform raises $40K in crowdfunding from its Kickstarter campaign

Bolt, an integrated IoT hardware kit comprising the Bolt WiFi chip and module, cloud platform, mobile apps, and APIs, launched its Kickstarter campaign and raised $40,000 against its $10K goal from 875 backers.

Bolt Hardware

A key differentiator of Bolt from other IoT kits is that its cloud platform offers machine learning algorithms that users can run on their sensor data. The other aspect is that Bolt has ‘packaged’ the IoT development hardware and software in a way that lets developers launch a project from scratch using the Bolt solution alone.

Bolt’s Kickstarter tiers start with the basic $17 Bolt IoT platform and run up to a $650 ‘legendary’ kit. Typical use cases of the Bolt platform include home automation, temperature monitoring, soil monitoring, and other real-time monitoring applications.

Smartphone App

A complete feature list of the platform can be accessed on Bolt’s campaign page. The product is primarily aimed at makers, hobbyists, and developers planning to build an IoT product (with ML capabilities baked in). Third-party services like Twilio, Mailgun, Zapier, and IFTTT can be connected to Bolt, letting developers build custom alert/notification systems, as in the sketch below.
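As a hedged Python sketch of such an alert pipeline: the script polls a temperature reading and fires an IFTTT Maker Webhooks event when a threshold is crossed. The Bolt read endpoint, parameter names and response shape below are placeholders rather than Bolt’s documented API; the IFTTT trigger URL follows IFTTT’s published webhook pattern.

import requests

# Hypothetical Bolt Cloud endpoint and credentials -- placeholders, not the
# documented API; consult Bolt's docs for the real routes and parameters.
BOLT_READ_URL = "https://cloud.example-bolt.com/read"
BOLT_API_KEY = "YOUR_BOLT_API_KEY"
DEVICE_ID = "YOUR_DEVICE_ID"

# IFTTT Maker Webhooks trigger URL (IFTTT's documented pattern).
IFTTT_EVENT = "temperature_alert"
IFTTT_KEY = "YOUR_IFTTT_KEY"
IFTTT_URL = f"https://maker.ifttt.com/trigger/{IFTTT_EVENT}/with/key/{IFTTT_KEY}"

TEMP_THRESHOLD = 40.0  # degrees Celsius

def read_temperature():
    """Fetch the latest sensor value (hypothetical request/response shape)."""
    resp = requests.get(
        BOLT_READ_URL,
        params={"api_key": BOLT_API_KEY, "device": DEVICE_ID, "pin": "A0"},
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["value"])

temperature = read_temperature()
if temperature > TEMP_THRESHOLD:
    # Fire the IFTTT event; the connected applet sends the actual notification.
    requests.post(IFTTT_URL, json={"value1": temperature}, timeout=10)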

Read more here:: feeds.feedburner.com/iot

Accenture and Pivotal launch new business group to help enterprises accelerate and speed up their software development

By Zenobia Hegde

Accenture and Pivotal Software, Inc., have formed a new business group to help Fortune Global 500 companies and other large enterprises accelerate their software development and innovate at startup speed. The Accenture Pivotal Business Group (APBG) will help enterprises migrate legacy applications to the cloud and accelerate cloud-native application development on Pivotal Cloud Foundry® (PCF), one of the world’s most powerful cloud-native platforms. Operated and managed by Accenture and powered by Pivotal’s next-generation software development methodology, the APBG will offer clients new capabilities for developing modern cloud-native products and services that leverage artificial intelligence and other innovative technologies in areas such as the Internet of Things and connected cars and homes.

Accenture and Pivotal will launch two APBG locations, one in Columbus, Ohio, and another in New York City. The joint facilities will be dedicated spaces where APBG will help large enterprises rapidly migrate their businesses onto PCF and prototype innovative products and services. The group has already begun work with companies in the banking and insurance sectors.

While information technology (IT) teams are tasked with building customer-facing products and services that can evolve with changing buyer preferences, many enterprises still face challenges operating their business on modern cloud technologies. This can impede their ability to respond quickly to market opportunities. The APBG can bridge this gap by bringing together the skills, capabilities and experience needed to help clients redesign and modernise their legacy IT applications and infrastructures.

“By combining our cloud services expertise with Pivotal’s software development methodology and platform, the Accenture Pivotal Business Group will help clients accelerate the pace of new innovations and fast-track their digital transformation,” said Paul Daugherty, Accenture’s chief technology & innovation officer. “Together we will help clients adopt cloud-native technology, build software at scale, and use an iterative, high-speed model, enabling them to be more agile, disruptive and competitive.”

Rob Mee, Pivotal’s CEO, said, “The Accenture Pivotal Business Group’s vision is to help the world’s largest enterprises move at startup speed to respond to customer expectations and bring new ideas to market faster. We will help clients continuously improve the software applications that run their businesses, freeing them to focus on higher-value aspects of their businesses while dramatically increasing developer productivity and operational efficiencies.”

APBG’s integrated product teams will enable clients to work side-by-side with Accenture experts, including product managers, designers and engineers trained in Pivotal’s unique methodology. Using a development process guided by business goals, these collaborative teams can take advantage of comprehensive Agile transformation capabilities. More than one-third of the Fortune Global 100 run their businesses on PCF, and many report significant increases in developer productivity and lower IT costs.

Accenture and Pivotal will invest significant resources in APBG over the next several years, with plans to expand to additional locations and scale the group’s software development, application transformation and training capabilities.

Accenture brings more than two decades of experience building open systems; significant Java capabilities through a global team of 40,000 Java professionals; and experience executing more than 20,000 cloud projects for three-quarters of the Fortune […]

The post Accenture and Pivotal launch new business group to help enterprises accelerate and speed up their software development appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

What’s Keeping Deep Learning In Academia From Reaching Its Full Potential?

By Scott Clark

Deep learning is gaining a foothold in the enterprise as a way to improve the development and performance of critical business applications. It first gained traction at companies optimizing advertising and recommendation systems, like Google, Yelp, and Baidu. But the space has seen a huge wave of innovation over the past few years thanks to open-source deep learning frameworks – like TensorFlow, MXNet, or Caffe2 – that democratize access to powerful deep learning techniques for companies of all sizes. Additionally, the rise of GPU-enabled cloud infrastructure on platforms like AWS and Azure has made it easier than ever for firms to build and scale these pipelines quickly and cheaply.

Now, its use is extending to fields like financial services, oil and gas, and many other industries. Tractica, a market intelligence firm, predicts that deep learning enterprise software spending will surpass $40 billion worldwide by 2024. Companies that handle large amounts of data are tapping into deep learning to strengthen areas like machine perception, big data analytics, and the Internet of Things.

In academic fields outside of computer science, though, from physics to public policy, deep learning is rapidly being adopted and could be hugely beneficial – yet it’s often used in a way that leaves performance on the table.

Where academia falls short

Getting the most out of machine learning or deep learning frameworks requires optimization of the configuration parameters that govern these systems. These are the tunable parameters that must be set before any learning actually takes place, and finding the right configuration can yield orders-of-magnitude improvements in accuracy, performance or efficiency. Yet the majority of professors and students who use deep learning outside of computer science, where these techniques are developed, rely on one of three traditional, suboptimal methods to tune the configuration parameters of these systems. They may use manual search – trying to optimize high-dimensional problems by hand or intuition via trial and error; grid search – building an exhaustive set of possible parameter combinations and testing each one individually at great cost; or randomized search – the most effective of the three in practice, but the equivalent of trying to climb a mountain by jumping out of an airplane and hoping you land on the peak.
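To make the contrast between grid and randomized search concrete, here is a minimal Python sketch on a toy two-parameter problem. The objective function and parameter ranges are invented for illustration and stand in for a real validation-accuracy measurement.

import itertools
import random

def toy_objective(learning_rate, dropout):
    """Stand-in for a validation-accuracy measurement (hypothetical)."""
    return -(learning_rate - 0.01) ** 2 - (dropout - 0.3) ** 2

# Grid search: exhaustively test every combination (cost grows multiplicatively
# with each additional parameter).
lr_grid = [0.001, 0.01, 0.1]
dropout_grid = [0.1, 0.3, 0.5]
best_grid = max(
    itertools.product(lr_grid, dropout_grid),
    key=lambda cfg: toy_objective(*cfg),
)

# Randomized search: sample the same number of configurations at random.
random.seed(0)
candidates = [
    (random.uniform(0.001, 0.1), random.uniform(0.1, 0.5))
    for _ in range(len(lr_grid) * len(dropout_grid))
]
best_random = max(candidates, key=lambda cfg: toy_objective(*cfg))

print("grid search best:  ", best_grid)
print("random search best:", best_random)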

While these methods are easy to implement, they often fall short of the best possible solution and waste precious computational resources that are often scarce in academic settings. Experts often do not apply more advanced techniques because those techniques are orthogonal to their core research, and finding, administering, and tuning more sophisticated optimization methods eats up expert time. This challenge can also push experts toward less powerful but easier-to-tune methods, or away from attempting deep learning at all. Researchers have used these simple methods for years, but they are not always the most effective way to conduct research.

The need for Bayesian Optimization

Bayesian optimization automatically fine-tunes the parameters of these algorithms and machine learning models without accessing the underlying data or model itself. The process probes the underlying system and observes the outputs, using how previous configurations performed to determine the most promising configuration to try next. This helps researchers and domain experts arrive at the best possible model and frees up time to focus on more pressing parts of their research.
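As a rough illustration of this probe-and-observe loop (a generic sketch, not SigOpt’s product or any specific researcher’s setup), the open-source scikit-optimize package exposes Gaussian-process-based Bayesian optimization through its gp_minimize function. The objective below is a hypothetical stand-in for training a model and reporting its validation loss.

# pip install scikit-optimize
from skopt import gp_minimize

def validation_loss(params):
    """Hypothetical stand-in for training a model and returning its loss."""
    learning_rate, dropout = params
    return (learning_rate - 0.01) ** 2 + (dropout - 0.3) ** 2

# Each call fits a probabilistic surrogate to the past (configuration, loss)
# observations and picks the most promising configuration to try next.
result = gp_minimize(
    validation_loss,
    dimensions=[(1e-4, 1e-1), (0.0, 0.5)],  # search ranges for the two parameters
    n_calls=25,        # total number of configurations to evaluate
    random_state=0,
)

print("best configuration:", result.x)
print("best loss:", result.fun)

Compared with the grid and random baselines above, each of the 25 evaluations here is chosen using everything learned from the earlier ones, which is what lets the method converge with far fewer expensive training runs.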

Bayesian optimization has already been applied outside of deep learning to other problems in academia, from gravitational lensing to polymer synthesis to materials design and beyond. Additionally, a number of professors and students are already using this method at universities like MIT, the University of Waterloo and Carnegie Mellon to optimize their deep learning models and conduct life-changing research. George Chen, assistant professor at Carnegie Mellon’s Heinz College of Public Policy and Information Systems, uses Bayesian optimization to fine-tune the machine learning models he uses in his experiments. His research applies medical imaging analysis to automate the process of locating a specific organ in the human body, work that could help prevent unnecessary procedures in patients with congenital heart defects and others. Before applying Bayesian optimization to his research, Chen had to guess and check the best parameters for his models. Now he’s able to automate the process and receive updates on his mobile phone, freeing him to complete other necessary parts of the research process.

Unfortunately, the vast majority of researchers leveraging deep learning outside of computer science are not using these powerful techniques. This costs them time and resources, or even prevents them from achieving their research goals via deep learning at all. When experts are forced to do multidimensional guess-and-check optimization in their heads, they usually spend valuable computational resources on modeling and settle for sub-optimal results. Deploying Bayesian optimization can accelerate the research process, free up time for other important tasks and unlock better outcomes.

Scott Clark is the co-founder and CEO of SigOpt, which provides its services for free to academics around the world. He has been applying optimal learning techniques in industry and academia for years, from bioinformatics to production advertising systems. Before SigOpt, Scott worked on the Ad Targeting team at Yelp, leading the charge on academic research and outreach with projects like the Yelp Dataset Challenge and the open sourcing of MOE. Scott holds a PhD in Applied Mathematics and an MS in Computer Science from Cornell University, and BS degrees in Mathematics, Physics, and Computational Physics from Oregon State University. He was chosen as one of Forbes’ 30 Under 30 in 2016.

Related Items:

Getting Hyped for Deep Learning Configs

Dealing with Deep Learning’s Big Black Box Problem

Machine Learning, Deep Learning, and AI: What’s the Difference?

The post What’s Keeping Deep Learning In Academia From Reaching Its Full Potential? appeared first on Datanami.

Read more here:: www.datanami.com/feed/
