IPv6 News

Autotalks raises $30m in Round D funding to speed global deployment of technologies for improving road safety

By Sheetal Kumbhar

Autotalks, a global provider of V2X (Vehicle-to-Everything) communication solutions, announced the completion of its $30 million Series D round, which will fund the expansion of its worldwide operations and accelerate deployment of technologies for safer and smarter autonomous vehicles. The new funding round includes the company’s existing investors: Magma Venture Capital, Gemini Israel Fund, Amiti Fund, Mitsui […]

The post Autotalks raises $30m in Round D funding to speed global deployment of technologies for improving road safety appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Marc Sollars, Teneo’s CTO, is glad he’s not a pilot

By Sheetal Kumbhar

As a child what job did you want to have when you grew up? I wanted to be a TV cameraman: the thought of being involved with the creation of something that impacts people’s lives so much, and the fact that it wasn’t sat behind a desk all day, which is something I never wanted […]

The post Marc Sollars, Teneo’s CTO, is glad he’s not a pilot appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Big Data’s Relentless Pace Exposes Old Tensions and New Risks in the Enterprise

By Alex Woodie

Over the past two weeks, we’ve explored some of the difficulties that enterprises have experienced in trying to adopt the Hadoop stack of big data technologies. One area that demands further attention is how the rapid pace of development of open source data science technology in general, and the new business opportunities it unlocks, is simultaneously exposing old fault lines between business and IT and opening enterprises to new risks.

Events like Cloudera and O’Reilly’s recent Strata + Hadoop World conference and Hortonworks’ upcoming DataWorks Summit 2017 are showcases for the burgeoning market for big data technology. While Hadoop itself may not be the center of gravity that it once was, there is no doubt that we’re in the midst of a booming marketplace for distributed computing technologies and data science techniques, and it’s not going to let up anytime soon.

The rapid pace of technological evolution has plusses and minuses. On the plus side, users are getting new technologies to play with all the time. Apache Spark has captured people’s imaginations, but already a replacement is on the horizon for those who think Spark is too slow. Enter Ray, a new technology that RISELab director Michael Jordan discussed during a keynote at last week’s Strata (and which we’ll cover here at Datanami).

Data scientists and developers are having a veritable field day with new software. Meanwhile, new hardware innovations from Intel, IBM, Nvidia, and ARM promise to unleash another round of disruptive innovation just in time for the IoT revolution.

This is a great time to be a data scientist or a big data developer. Like kids in a candy store with $100 to spend — and no parents to tell them what to do — it’s a technological dream come true in many respects.

Too Much, Too Fast?

And therein lies the rub: the kid in the candy store with eyes as big as dinner plates will invariably have a stomach ache of similar proportion.

“We’ve never seen technology change so rapidly,” says Bill Schmarzo, the chief technology officer of the big data practice at Dell EMC and the Dean of Big Data. “I don’t think we know what we’re doing with it yet.”

CIOs are struggling to keep up with the pace of change while retaining the order and organizational structure that their bosses demand, Schmarzo says. “They’ve got the hardest job in the world because the world around them has changed so dramatically from what they were used to,” he says. “Only the most agile and the most business-centric companies are the ones who are going to survive.”

How exactly we got to this point in business technology will be fodder for history books. Suffice it to say, the key driver today is the open source development method, which allows visionaries like Doug Cutting, Jay Kreps, Matei Zaharia and others to share their creations en masse, creating a ripple effect of faster and faster innovation cycles.

As you ogle this technological bounty that seemingly came out of nowhere, keep this key point in mind: All this awesome new open source big data technology was designed by developers for other developers to use.

This is perhaps the main reason why regular companies — the ones in non-tech fields like manufacturing, distribution, and retail that are accustomed to buying their technology as shrink-wrapped products, fully backed and supported by a vendor — are having so much difficulty using it effectively.

The partnership between business leaders and IT is a rocky one (kentoh/Shutterstock)

So, where are the software vendors? While many are working to create end-to-end applications that mask the complexity, many of the players in big data are hawking tools, such as libraries or frameworks, that help developers become more productive. We’re not seeing a mad rush of fully shrink-wrapped products, in large part because software vendors are hesitant to get off the merry-go-round and plant a stake in the ground to make the tech palatable to the average Joe, for fear of being left behind by what’s coming next.

The result is we have today’s culture of roll-your-own big data tech. Instead of buying big data applications, companies hire data scientists, analysts, and data engineers to stitch together various frameworks and use the open source tools to build one-off big data analytics products that are highly tailored to the needs of the business itself.

This is by far the most popular approach, although there are a few exceptions. We’re seeing Hortonworks building Hadoop bundles to solve specific tasks, like data warehousing, cybersecurity, and IoT, while Cloudera is going upstream and competing with the data science platform vendors with its new Data Science Workbench. But homegrown big data analytics is the norm today.

Don’t Lock Me In

While this open source approach works with enough time and money (and blood, sweat, and tears), it’s generally at odds with traditional IT organizations that value things like stability and predictability and 24/7 tech hotlines.

All this new big data technology sold under the “Hadoop” banner has run headlong into IT’s sensibility and organizational momentum, says Peter Wang, the CTO and co-founder of Continuum Analytics.

“One of the points of open source tools is to provide innovation and to avoid vendor lock-in, and part of that innovation is agility,” he tells Datanami. “When new innovation comes out, you consume it. What enterprise IT has tended to do, once it deploys some of these open source things, is lock them down and make them less agile.”

Some CIOs gravitated toward Hadoop because they didn’t want to go through a six-month data migration for some classic data warehouse, Wang says. “Now they’re finding that the IT teams make them go through the same [six-month] process for their Hadoop data lake,” he says.

That’s the source of some of the Hadoop pain enterprises are feeling. They were essentially expecting to get something for nothing with Hadoop and friends, which can be downloaded and used without paying any licensing fees. Even if they understood that it would require investing in people who had the skills to develop data applications using the new class of tools, they vastly underestimated the DevOps costs of creating it and operating it.

There is necessary complexity in big data, says Continuum Analytics CTO and co-founder Peter Wang

In the wider data science world, a central tenet holds that data scientists must be free to seek out and discover new data sources that are of value, and find new ways to extract additional value from existing sources. But even getting that level of agility is anathema to traditional IT’s approach, Wang says.

“All of data science is about being fast, both with the algorithms as well as new kinds of data sets and being able to explore ideas quickly and get them into production quickly,” Wang explains. “There’s a fundamental tension there.”

This tension surprised enterprises looking to adopt Hadoop, which, in its raw Apache form, is largely unworkable for companies that just want to use the product rather than hire a team of developers to learn how to use it. Over the past few years, the Hadoop distributors have worked out the major kinks, filled in the functionality gaps, and now have something resembling a working platform. It wasn’t easy (don’t forget the battles fought over Hortonworks’ attempts to standardize the stack with its Open Data Platform Initiative), but today you can buy a functioning stack.

The problem is, just as Hadoop started to harden, the market shifted, and new technology emerged that wasn’t tied to Hadoop (although much of it was shipped in Hadoop distributions). Companies today are hearing about things like deep learning and wondering if they should be using Google’s TensorFlow, which has no dependencies on Hadoop, although an organization may use Hadoop to store the huge amount of training data needed to train the neural networks its data scientists will build with TensorFlow.
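
To make that decoupling concrete, here is a minimal sketch (not from the article, and using today’s tf.data API) of TensorFlow reading training data that happens to live on HDFS. The namenode address and file path are hypothetical, and reading hdfs:// paths assumes a TensorFlow build with HDFS support and the Hadoop client libraries available on the machine.

```python
# Minimal sketch with hypothetical names: TensorFlow training data
# stored on a Hadoop cluster, read directly via an hdfs:// URI.
import tensorflow as tf

# TFRecordDataset accepts hdfs:// paths when HDFS support is available
# (e.g., CLASSPATH pointing at the Hadoop jars).
dataset = (
    tf.data.TFRecordDataset("hdfs://namenode:8020/training/images.tfrecord")
    .shuffle(buffer_size=10_000)
    .batch(32)
)

for batch in dataset.take(1):
    # Each element is a batch of raw serialized records; parse them
    # with tf.io.parse_example before feeding a model.
    print(batch.shape)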

Necessary Vs. Unnecessary Complexity

The complexity of big data tech will increase, Wang says. And while software vendors may eventually take all of this technology and deliver shrink-wrapped products that remove the developer-level complexity, any company that wants to take advantage of the current data science movement will need to stiffen up, accept the daunting complexity, and just try to make the most of it.

“People are going to have to hire very talented individuals who can draw from this giant pile of parts and build extremely vertically integrated, targeted apps or cloud services or whatever, and have to own, soup-to-nuts, the whole thing,” Wang says. “Before you could rely on Red Hat or Microsoft to provide you an operating system. You could get a database from some vendor or get a Java runtime and Java tooling from somebody else.

Complexity in big data can cause project failure, but it can also lead to technological flexibility (Sergey Nivens/Shutterstock)

“At the end of the day,” Wang says, “you now have six or seven layers of an enterprise software development stack, and then you hire some software developers to sprinkle some magic design pattern stuff and write some things, and you’ve got an app.”

Not all complexity is evil, according to Wang, who differentiates between necessary complexity and unnecessary complexity.

“There’s a central opportunity available in this space right now, and that essential opportunity is ultimately the oxygen that’s driving all these different kinds of innovation,” Wang says. “The insight that’s available with the data we have – that is the oxygen causing everything to catch fire.”

We’re experiencing a Gold Rush mentality at the moment with regard to data and the myriad ways organizations can monetize it or otherwise do something productive with it. If you can get past the complexity and get going with the data, you have the potential to shake up an industry and get rich in the process, which is ultimately what’s driving the boom.

“There’s a concept of the unreasonable effectiveness of data, where you just have a [big] ton of data in every category,” Wang says. “You don’t have to be really smart, but if you can get the right data and harness it and do some fairly standard thing with it, you are way ahead of the competition.”

Hedging Tech Dynamism

There is a lot of uncertainty around what technologies will emerge and become popular, and companies don’t want to make bad bets on losing tech. One must have the stomach to accept relentless technological change, which Hadoop creator Doug Cutting likened to Darwinian evolution through random digital mutations.

One hedge against technology irrelevancy is flexibility, and that’s generally what open source provides, Schmarzo says.

“We think we have the right architecture, but we really don’t know what will change,” he says. “So how do I give myself an architecture that gives me as much agility and flexibility as possible, so when things change I haven’t locked myself in?”

Adopting an open source platform theoretically gives you the most flexible environment, he says, even if it runs counter to the prevailing desire in organizations to rely on outside vendors for technology needs. Investing in open source also makes you more attractive to prospective data scientists, who are eager to use the latest and greatest tools.

The tsunami of data and relentless pace of technological evolution threatens to leave tech executives all wet (Couperfield/Shutterstock)

“Our approach so far has been, on the data science side, to let them use every tool they want to do their exploration and discovery work,” Schmarzo says. “So if they come out of university with experience in R or Python, we let them use that.”

Organizations may want the best of all worlds, but they will be forced to make tradeoffs at some point. “There is no silver bullet. Everything’s a trade-off in life,” Schmarzo says. “You’ve got to build on something. You’ve got to pick something.”

The key is to try and retain that flexibility as much as possible so you’re able to adapt to new opportunities that data provides. The fact that open source is both the source of the flexibility and the source of the complexity is something that technology leaders will simply have to deal with.

“The IT guys want everything locked down. Meanwhile the business opportunity is passing you by,” he adds. “I would hate to be a CIO today. It was easy when you had to buy SAP and Oracle [ERP systems]. You bought them and it took you 10 years to put the stupid things in but it didn’t matter because it’s going to last 20 years. Now we’re worried if it doesn’t go in in a couple of months because in two months, it may be obsolete.”

While there’s a risk in betting on the wrong big data technology, getting flummoxed by Hadoop, or making poor hiring decisions, the cost of not even trying is potentially even bigger.

“Enterprises really need to understand the business risks around that,” Wang says. “I think most of them are not cognizant yet of what that means. You’re going to tell your data scientists ‘No you can’t look at those five data sets together, just because.’ Because the CIO or the CDO making that decision or that call does not recognize the upside for them. There’s only risk.”

Related Items:

Hadoop Has Failed Us, Tech Experts Say

Hadoop at Strata: Not Exactly ‘Failure,’ But It Is Complicated

Anatomy of a Hadoop Project Failure

Cutting On Random Digital Mutations and Peak Hadoop

The post Big Data’s Relentless Pace Exposes Old Tensions and New Risks in the Enterprise appeared first on Datanami.

Read more here:: www.datanami.com/feed/

2017 North American IPv6 Summit to Be Held at LinkedIn Headquarters

By CircleID Reporter

The collective North American IPv6 Task Forces announced that the 2017 North American IPv6 Summit will be held at LinkedIn headquarters in Sunnyvale, CA. The two-day event (April 25-26), designed to educate network professionals on the current state of IPv6 adoption, will feature a variety of speakers from leading organizations, including LinkedIn, ARIN, Google Fiber, Microsoft, Cisco, Comcast, and others. The North American IPv6 Summit, first held in 2007, will cover such topics as exemplary IPv6 adoption, best practices in IPv6 deployment, methods for driving increased usage of IPv6, current IPv6 adoption trends, and future IPv6 growth projections. Awards will be presented to the top 10 North American service providers that have connected over 20% of their subscribers via IPv6.

Read more here:: feeds.circleid.com/cid_sections/news?format=xml

Google Cloud Platform gets IPv6 support

By Kevin Meynell

The Google Cloud Platform (GCP) is now able to support IPv6 clients via HTTP(S), SSL proxy, and TCP proxy load balancing. The load balancer accepts IPv6 connections from users and proxies those connections over IPv4 to virtual machines (i.e., instances). This allows instances to appear as IPv6 services to IPv6 clients.

At the moment, this functionality is an alpha release and is not currently recommended for production use, but it demonstrates a commitment to supporting IPv6 services. GCP allocates a /64 address range for forwarding purposes.

Google Cloud Platform is a cloud computing service offering website and application hosting, data storage and compute facilities on Google’s infrastructure.

More information on how to set up IPv6 support is available on the GCP website.
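
As an illustration of what such a setup involves, here is a minimal sketch using the Google API Python client to give an existing HTTP(S) load balancer an IPv6 frontend via a global forwarding rule. This is an assumption-laden example, not GCP documentation: the project, proxy, and rule names are hypothetical, and exact fields may differ while the feature is in alpha.

```python
# Minimal sketch (hypothetical names): create a global forwarding rule
# with ipVersion=IPV6 pointing at an existing target HTTP proxy.
# Requires Application Default Credentials for authentication.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

rule_body = {
    "name": "ipv6-http-rule",
    "ipVersion": "IPV6",   # accept IPv6 clients on the frontend
    "IPProtocol": "TCP",
    "portRange": "80",
    # Backends are still reached over IPv4; the proxy translates.
    "target": "global/targetHttpProxies/my-http-proxy",
}

request = compute.globalForwardingRules().insert(
    project="my-project", body=rule_body
)
print(request.execute())
```

The same effect should be achievable from the gcloud CLI, which exposes an --ip-version flag when creating forwarding rules.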

Read more here:: www.internetsociety.org/deploy360/blog/feed/

Innovation at CeBIT Leans Heavily Toward the Internet of Things

Some of the most popular exhibits at the 2017 CeBIT show are from startup companies that are demonstrating some highly creative thinking.

Read more here:: www.eweek.com/rss.xml

Technology for a Cashless Society | @ThingsExpo #IoT #M2M #Sensors

I happened to be in India last November when Prime Minister Modi announced the demonetization program, under which 86% of the currency, in the form of two paper bills (Rs. 500 and Rs. 1,000 denominations), was made defunct. People were given time to deposit their existing currency in the bank. Those who held unusually high volumes of such currency were supposed to declare the legal source or face stiff penalties, such as a 60-75% tax. The goal was to catch the money hoarders and black marketers who avoid paying taxes on such undeclared money.

Read more here:: iot.sys-con.com/index.rss

13th edition of Asia IoT Business Platform

By IoT Now Magazine

Event date: July 24-25, 2017
Venue: Renaissance Bangkok Ratchaprasong, Bangkok, Thailand

IoT Thailand is held annually in Bangkok. With a focus on local telecommunication companies and enterprises in key verticals, the program offers Southeast Asia’s most comprehensive platform for solution providers targeting enterprise adoption of Internet of Things (IoT) and Machine-to-Machine (M2M) technologies in Thailand. IoT […]

The post 13th edition of Asia IoT Business Platform appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

How police unmasked suspect accused of sending seizure-inducing tweet

By David Kravets

The man accused of sending a Newsweek writer a seizure-inducing tweet left behind a digital trail that the Dallas Police Department traced—beginning with the @jew_goldstein Twitter handle, leading to a burner mobile phone SIM card, and ending with an Apple iCloud account, according to federal court documents unsealed in the case.

Rivello with driver’s license. (credit: Court documents)

John Rayne Rivello was arrested Friday at his Maryland residence and is believed to be the nation’s first defendant to face federal cyberstalking charges for allegedly victimizing an epileptic with a strobing, epileptogenic online image—in this instance, a GIF sent via Twitter.

According to court documents, when Newsweek writer Kurt Eichenwald of Dallas, Texas, opened his Twitter feed on December 15, he was met with a strobing message that read, “you deserve a seizure for your post.” Eichenwald, who has written that he has epilepsy, went into an eight-minute seizure during which he lost control of his bodily functions and mental faculties. His wife found him, placed him on the floor, called 911, and took a picture of the offending tweet, according to court records.

Read more here:: feeds.arstechnica.com/arstechnica/index?format=xml

AT&T & IBM Partner for New IoT Analytics Tech

By Kelsey Kusterer Ziser

AT&T and IBM partner on delivering new IoT analytics technology to enterprise customers.

Read more here:: www.lightreading.com/rss_simple.asp?f_n=1249&f_sty=News%20Wire&f_ln=IPv6+-+Latest+News+Wire
