KSK Rollover Postponed

The Internet Corporation for Assigned Names and Numbers (“ICANN”) today announced that the plan to change the cryptographic key that helps protect the Domain Name System (DNS) is being postponed.

Changing the key involves generating a new cryptographic key pair and distributing the new public component to Domain Name System Security Extensions (DNSSEC)-validating resolvers. Based on the estimated number of Internet users whose resolvers perform DNSSEC validation, roughly one in four global Internet users, or 750 million people, could be affected by the KSK rollover.

The changing or “rolling” of the KSK was originally scheduled to occur on 11 October, but it is being delayed because recently obtained data show that a significant number of resolvers used by Internet Service Providers (ISPs) and network operators are not yet ready for the rollover. This new data is available thanks to a very recent DNS protocol feature that lets a resolver report back to the root servers which keys it has configured.
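Operationally, each DNSKEY is referred to by a 16-bit “key tag” computed from its wire-format data (RFC 4034 Appendix B); the new KSK, for example, is widely known by its tag, 20326. As a minimal sketch of how that tag is derived (the sample bytes below are illustrative, not a real key):

```python
def dnskey_key_tag(rdata: bytes) -> int:
    """Key tag of a DNSKEY record, per RFC 4034 Appendix B.

    `rdata` is the record's wire-format RDATA: flags (2 bytes),
    protocol (1 byte), algorithm (1 byte), then the public key.
    """
    acc = 0
    for i, byte in enumerate(rdata):
        # Even-indexed bytes are the high octet of a 16-bit word.
        acc += (byte << 8) if i % 2 == 0 else byte
    acc += (acc >> 16) & 0xFFFF  # fold the carry back in
    return acc & 0xFFFF

# Illustrative bytes only, not a real key's RDATA.
print(dnskey_key_tag(bytes([0x01, 0x01, 0x03, 0x08])))  # → 1033
```

A resolver that reports, or is configured with, tag 20326 has picked up the new key.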

There may be multiple reasons why operators do not have the new key installed in their systems. Some may not have their resolver software properly configured, and a recently discovered issue in one widely used resolver program appears not to be updating the key automatically as it should, for reasons that are still being explored.

ICANN is reaching out to its community, including its Security and Stability Advisory Committee, the Regional Internet Registries, Network Operator Groups and others to help explore and resolve the issues.

In the meantime, ICANN believes it prudent to follow its process and to delay the changing of the key rather than run the risk of a significant number of Internet users being adversely affected by the changing of the key. ICANN is committed to continuing its education, communication and engagement with the relevant technical organizations to ensure readiness for the key change.

“The security, stability and resiliency of the domain name system is our core mission. We would rather proceed cautiously and reasonably than continue with the roll on the announced date of 11 October,” said Göran Marby. “It would be irresponsible to proceed with the roll after we have identified these new issues that could adversely affect its success and could adversely affect a significant number of end users.”

A new date for the Key Roll has not yet been determined. ICANN’s Office of the Chief Technology Officer says it is tentatively hoping to reschedule the Key Roll for the first quarter of 2018, but that it will be dependent on more fully understanding the new information and mitigating as many potential failures as possible.

ICANN will provide additional information as it becomes available and the new Key Roll date will be announced as appropriate.

“It’s our hope that network operators will use this additional time period to be certain that their systems are ready for the Key Roll,” said Marby. “Our testing platform will help operators ensure that their resolvers are properly configured with the new key, and we will continue our engagement and communications with these operators.”


To easily identify resources on the Internet, the underlying numerical addresses for these resources are represented by human-readable strings. The conversion of these strings to numbers is done by the distributed, hierarchical Domain Name System (DNS). Increased sophistication in computing and networking since the system’s design in 1983 has made this “phone book” vulnerable to attack. In response to these threats, the Internet Engineering Task Force (IETF), an international standards organization, developed DNSSEC to cryptographically ensure that DNS content cannot be modified from its source without being detected. Once fully deployed, DNSSEC will stop attackers from redirecting users via the DNS.


To keep informed about KSK rollover developments, go here:

On social media use: #Keyroll

Read more here:

Book Review: Implementing the Internet of Things

By Jeremy Cowan

Implementing the Internet of Things by Boban Vukicevic and Bob Emmerson is not a technical manual, although the title might suggest otherwise. It’s a business-focused and practical guide to “Strategy, Implementation and Considerations”, as the sub-heading puts it, for those considering an Internet of Things (IoT) solution for their organisation. It describes the “profound impact […]

The post Book Review: Implementing the Internet of Things appeared first on IoT Now – How to run an IoT enabled business.

Read more here:

The Next Generation Analytics Database – Accelerated by GPUs

By Ana Vasquez

As organizations demand more and more from their analytics and data science teams, processing power looms as one of the fundamental roadblocks to easy success.

Organizations are facing a variety of processing-related data challenges: data centers sprawling to hundreds of nodes to provide processing power; data architects turning to convoluted pipelines to accommodate specialized tools; and business users frustrated by slow BI tools and latent results from batch queries.

A new generation of databases, accelerated by NVIDIA GPUs, is providing the way forward.

GPUs offer thousands of processing cores and are ideal for general-purpose parallelized computation, not just video processing. GPUs differ significantly from standard CPUs: today’s GPUs have around 4,500 cores (computational units) per device, compared with the 8 or 16 cores of a typical CPU. GPUs are now exploding in popularity in areas such as self-driving cars, medical imaging, computational finance, and bioinformatics, to name a few.

Analytics and data science tasks in particular benefit from parallelized compute on the GPU. Kinetica’s GPU-accelerated database vectorizes queries across many thousands of GPU cores and can produce results in a fraction of the time compared to standard CPU-constrained databases. Analytics queries, such as SQL aggregations and GROUP BYs, are often reduced to seconds—down from several minutes with other analytics systems. Business users, working with tools such as Tableau or PowerBI, see dashboards reload in an instant—no time to even think about getting coffee!
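The parallel pattern behind such aggregations is simple: each core aggregates its own shard of the data, and the partial results are then merged. A toy Python sketch of that split-aggregate-merge shape (an illustration of the general technique, not Kinetica’s engine, which vectorizes it across thousands of GPU cores):

```python
from collections import defaultdict

def group_sum(pairs):
    """Sequential GROUP BY key, SUM(value) over (key, value) pairs."""
    out = defaultdict(float)
    for key, value in pairs:
        out[key] += value
    return dict(out)

def parallel_group_sum(pairs, n_shards=4):
    """Same result, computed as independent partial aggregations
    (one per shard, as each block of cores would do) plus a cheap merge."""
    shards = [pairs[i::n_shards] for i in range(n_shards)]
    partials = [group_sum(shard) for shard in shards]
    merged = defaultdict(float)
    for partial in partials:
        for key, value in partial.items():
            merged[key] += value
    return dict(merged)

rows = [("US", 10.0), ("DE", 5.0), ("US", 2.5), ("FR", 1.0), ("DE", 4.0)]
print(parallel_group_sum(rows))  # same answer as group_sum(rows)
```

Because the partial aggregations are independent, adding cores speeds up the first phase almost linearly; only the small merge step is serial.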

Solving the compute bottleneck also results in substantially smaller hardware needs than before. Organizations are replacing 300-node Spark clusters with just 30 nodes of a GPU-accelerated database. They’re running models and queries significantly faster than with any other analytics solution and also benefiting from huge savings on datacenter and data management overhead.

Many financial organizations have been innovating with GPUs for more than five years and are among the first businesses to realize their value. Some of these financial companies are deploying thousands of GPUs to run algorithms — including Monte Carlo simulations — on rapidly changing, streaming trading data to compute risk, for example—essential for regulatory compliance.

A GPU-accelerated database that can natively perform custom computation on data distributed across multiple machines makes life easier for data science teams. Kinetica’s in-database analytics framework provides an API that makes it possible to compute on the GPU using familiar languages such as Python. Customized data science workloads can now run alongside business analytics in a single solution and on a single copy of the data.

Enterprises can run sophisticated data science workloads in the same database that houses the rich information needed to run the business and drive day-to-day decisions. This neatly solves the data movement challenge because there isn’t any data movement, which leads to simpler Lambda architectures and more efficient Machine Learning and AI workloads. Quants, data scientists, and analysts can deploy a model from a deep learning framework via a simple API call, or train their models on the latest data; users can experience the GPU’s benefits and in-memory processing without needing to learn new programming languages.

With Kinetica, in-database compute can be extended to machine learning libraries such as TensorFlow, Caffe (a deep learning framework), and Torch (a machine learning and neural-network framework). These libraries can be extremely compute-hungry, especially on massive datasets, and benefit greatly from GPU horsepower and distributed compute capabilities.

GPU-accelerated database solutions can be found in utilities and energy, healthcare, genomics research, automotive, retail, telecommunications, and many other industries. Adopters are combining traditional and transactional data, streaming data, and data from blogs, forums, social media sources, orbital imagery, and other IoT devices in a single, distributed, scalable solution.

Learn more: Advanced In-Database Analytics on the GPU

Download the eBook: How GPUs are Defining the Future of Data Analytics

Pick up your free copy of the new O’Reilly book “Introduction to GPUs for Data Analytics” from Kinetica booth #825 at Strata Data Conference in NY next week.

The post The Next Generation Analytics Database – Accelerated by GPUs appeared first on Datanami.

Read more here:

Hitting IoT’s heights

By Sheetal Kumbhar

I’m finally reading Wuthering Heights, says Donna Prlich, chief product officer, Pentaho, a book which at the time of publishing was considered ground-breaking and highly controversial for its dark, twisted storyline and cruel characters. Emily Brontë chose these story elements to make a point about hypocrisy in the context of Britain’s Victorian era, with its […]

The post Hitting IoT’s heights appeared first on IoT Now – How to run an IoT enabled business.

Read more here:

What is the ‘First Receiver’ concept? Is it the ultimate IoT deliverable? Part 1

By Sheetal Kumbhar

IoT is progressing faster than many could have imagined so we shouldn’t be surprised when it throws up a new term. I came across the “first receiver” when reviewing a book titled “The Future of IoT” that outlined and then explained the concept in detail, but it took a while before the penny finally dropped. BTW: […]

The post What is the ‘First Receiver’ concept? Is it the ultimate IoT deliverable? Part 1 appeared first on IoT Now – How to run an IoT enabled business.

Read more here:

The Future of IoT

By Sheetal Kumbhar

This is a big book: 256 pages, 77,000 words. It’s sub-titled “Leveraging the shift to a data centric world” and it targets the executives and senior managers of both enterprise organisations and technology companies. The book does not assume prior knowledge of the subject so it’s an easy way to start thinking about the IoT, […]

The post The Future of IoT appeared first on IoT Now – How to run an IoT enabled business.

Read more here:

Datafication of Everything: Who Owns All The Data?

By Maria Fonseca

The full digitalisation of everything surrounding us has transformed our lives. At this very moment, more and more parts of the world are being digitised, resulting in escalating amounts of data being produced every day. If digitisation has had a major impact on the world, datafication is the next step.

Digitisation began in the late 1950s, with the birth of the semiconductor industry. It refers to the conversion of pieces of information into digital formats: text into HTML pages, music into MP3s, images into JPEGs, and so on. Datafication, on the other hand, is a more recent process, in which everything – from how much energy and water one uses, to one’s food purchasing habits, to the interests one displays on social networks, the air quality of the local neighbourhood, whether one is anxious or stressed, how many cups of tea one drinks each day – in sum, EVERYTHING, can be and is being transformed into data that can be measured, quantified and compared.

What is Datafication?

Datafication is a modern technological trend turning many aspects of our life into computerised data and transforming this information into new forms of value.
Datafication adds meaning and value to those bits of information being gathered each second.

In 2014, Ericsson wrote a report about datafication in which it explores the profound impact of this novel trend:

“A new process of datafication is emerging across the world. In contrast to digitalisation, which enabled productivity improvements and efficiency gains on already existing processes, datafication promises to completely redefine nearly every aspect of our existence as humans on this planet. Significantly beyond digitalisation, this trend challenges the very foundations of our established methods of measurement and provides the opportunity to recreate societal frameworks, many of which have dictated human existence for over 250 years.”

Datafication appears in society in multiple ways, partly as a result of the ubiquitous use of sensors/actuators and the emerging Internet of Things (IoT). Over the last decade, the cost of sensors has decreased immensely, while processing capacity and the availability of low-cost bandwidth have grown. Sensing has thus become able to reach into business processes that until recently went unmonitored.

Datafication of personalities

An interesting area of datafication is the datafication of personality: the capability to accurately predict personality types from mobile phone data. The commercial potential for this is enormous. According to a study by de Montjoye et al. (2013), it is now possible to proceed with “cost-effective, questionnaire-free investigation of personality-related questions at a scale never seen before”.

There is a question that now needs to be asked: who owns, and takes economic advantage of, your data?

Who Owns and Is Making Money With Your Data?

The use of datafication in social and communication media is widespread: just look at how Twitter datafies stray thoughts, or at the datafication of human resources by LinkedIn and others. An interesting article in Wired gives a great example that helps us understand this issue. New cars usually carry a kind of “black box” and various other computers. The black box can gather information about exactly where you are, how fast you are driving, and many other details of your driving habits. The crucial question is who owns that data. The insurance company? The auto manufacturer? The driver?

Another example is how companies like Facebook and Google sell our information so that advertisers know the tastes of certain groups. With such specific data, advertising companies can study in detail various age groups, locations and tastes to advertise products.

Every day, people constantly generate streams of metadata about themselves through their interactions on social media, as described previously. Who is collecting and controlling this constant stream of personal data? And who is doing business and making a profit with it?

The issue at stake with data ownership is that data is radically different from a physical asset. Data is made of bits, not atoms. In the past, laws and regulation focused on physical assets that could not be duplicated. We now live in a world of bits, where it is possible to make and distribute, say, a million copies of a virtual book, each as good as the original, at nearly zero cost.

Data law is confusing: we do not yet know how to regulate data or who owns what. Big Data, on the other hand, follows a clearer tendency. The output of collecting and analysing massive amounts of public and private data – which itself produces even more data – is generally considered to belong to whoever performed the analysis, which tends to be the large online companies. As expected, these companies have been highly criticised for hoarding data about people and refusing to make it available publicly.

DECODE: Giving People Ownership of Their Personal Data through Blockchain

A very recent project trying to experiment with data ownership in novel and fruitful ways, benefiting all, is the Decentralised Citizen Owned Data Ecosystem (DECODE). DECODE is a three-year EU-funded project that plans to launch four pilot trials in Barcelona and Amsterdam at the end of 2017. In each city, 1,000 people will be given an app through which they can share data about themselves to help companies or government groups create products or services that improve the city. The project is due to end in 2019. The app is built on blockchain technology: a digital ledger that securely stores data across a network of computers in a decentralised way, and the technology that underpins bitcoin transactions.
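The property that makes a blockchain usable as a shared ledger is hash-chaining: each block commits to the hash of the previous block, so tampering with any entry invalidates every later link. A toy Python sketch of that mechanism (a generic illustration, not DECODE’s or bitcoin’s actual implementation):

```python
import hashlib
import json

def _digest(data, prev_hash):
    # Canonical serialization so the hash is deterministic.
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, data):
    """Append a block that commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev_hash,
                  "hash": _digest(data, prev_hash)})

def verify_chain(chain):
    """Recompute every hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != _digest(block["data"], block["prev"]):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, {"user": 1, "share": "park-visits"})
append_block(ledger, {"user": 2, "share": "energy-use"})
print(verify_chain(ledger))       # True
ledger[0]["data"]["share"] = "x"  # tamper with an early entry...
print(verify_chain(ledger))       # ...and verification fails: False
```

A real system adds signatures and consensus across many machines, but tamper-evidence is the core idea a citizen-data platform like DECODE relies on.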

With DECODE, each citizen will be able to decide exactly how much of their data is uploaded to the platform and how it should be used. For example, a person may decide that location-tracking data about parks they visit can be used by the city council but not by private companies.

In the UK, Nesta, the country’s innovation charity, is the organisation working in cooperation with 13 partner agencies across the European Union on the DECODE project.


Tom Symons, a Nesta researcher leading the work on DECODE, hopes that the project will end up channelling public data into a greater number of socially beneficial projects. The project is still in its inception, so Nesta is consulting with local governments, entrepreneurs and other groups to understand what kind of data they would like to have access to through the DECODE platform.

According to Symons, it could be possible to combine publicly available data such as social media posts with location information to better understand how people feel about different parts of the city. The DECODE project also plans to launch a website or app that lets citizens share things with other people in the city. People might use the platform to offer up spare power tools or even their car for other residents to borrow, which could eventually end up being a better alternative to the current centralized sharing economy companies such as Uber and others.

Finally, residents will be able to use the platform to comment on city legislation, put forward their own ideas and vote on proposals. This has already been tried on other platforms, such as Decidim Barcelona, which was put in place to encourage more open and collaborative decision-making in the city.

Even if DECODE does not address the monetisation of our data, the project is a first experiment in using common data for the public benefit of all. Blockchain technology can play an important role in the development of platforms that foster a fairer use of datafication.

Article powered by Humaniq

Launched in 2016, Humaniq aims to provide mobile finance to the 2 billion unbanked people through its mobile app for good, which uses biometric authentication to replace traditional methods of ID and security. Humaniq’s open-source stack and API will be available for startups and other businesses to build services on its core technology, making it easy to adapt their service and plug it into Humaniq’s network to reach a huge, untapped audience.

The post Datafication of Everything: Who Owns All The Data? appeared first on Intelligent Head Quarters.

Read more here:

The Outcome Economy

By Sheetal Kumbhar

Book Review: A review of a timely publication sub-titled “How the Industrial Internet of Things is Transforming Every Business”, written by Joe Barkai, a consultant, speaker, author and blogger. It’s timely because while the business case for the IIoT is compelling, all too often it is overlaid with noise, buzzwords and tired clichés. Therefore it has […]

The post The Outcome Economy appeared first on IoT Now – How to run an IoT enabled business.

Read more here:

Why the “Internet of People” Is Going to Replace the “Internet of Things”

By admin

Why the “Internet of People” Is Going to Replace the “Internet of Things”

This is a guest post by Chris Richards, a 34-year-old Global Marketing Manager who works for Barclays – Group Innovation. A self-described social media ninja and IoT lover, he is also an influencer, a speaker and a wine lover.

Within the technology space, there’s perhaps no bigger buzzword than the Internet of Things (IoT). Go to any big tech trade show, and all the biggest tech vendors are pushing their own version of the IoT. Talk to any top-level business executive, and there’s likely some kind of showcase IoT project in the works.

Manufacturers are coming up with IoT solutions, politicians are talking up the advantages of “smart cities,” car makers are pushing autonomous cars hooked up to the Internet, and technology companies are pushing connected homes.

That combination of the IoT being embraced by both the B2C and B2B space has led to a single dominant paradigm in the mainstream media: billions of digital devices, seamlessly integrated, and all talking to each other via the Internet.

That’s what people typically have in mind when they talk about the “Internet of Things.”

As a result, the rush is on to create new connectivity protocols to link up all those devices. There’s also a rush to solve all the security problems the Internet of Things creates – like the massive botnet attack of 2016, in which hackers used internet-connected devices to bring systems down. There’s even talk of a 5G network connecting devices by 2018 that would make this giant IoT even faster and more powerful.

IOT applied to automobile industry

The Internet of People

But that’s missing the big picture. The one variable missing in that vision of the future is people. Yes, people.

But aren’t we already connected? Aren’t billions of people connected via the Internet? Aren’t Facebook’s 1.8 billion global users connected into one giant hive mind? Aren’t Wikipedia’s millions of users all connected via the crowd?

Yes and no. You can think of that as a foundational or transitional phase. What’s coming within the next two decades is a world in which people and devices are interconnected, and where man and machine are one.

In the future, we may no longer differentiate between devices and people. Think of a social network involving you, your friends, a few AI-powered chatbots, and your AI-powered digital assistant like Amazon’s Alexa or Apple’s Siri. We’d all be connected together via the Internet, and we’d all have the option of connecting our brains to an even bigger cloud involving every single member of humanity.

The Internet of People by Chris Richards

The new Singularity

If that vision sounds familiar, it’s because futurists have been predicting a version of such a future for decades. They typically refer to this as the Singularity – the moment when man and machine become one. Until recently, the conventional thinking was that the Singularity wasn’t going to be here for at least a few decades. Futurist Ray Kurzweil, now the Director of Engineering at Google, had famously predicted the date of the Singularity as 2045 – still more than two decades into the future.

But then, at this year’s SXSW event in Texas, Kurzweil moved that timetable up considerably. He suggested that the Singularity would be here by 2029 – little more than a decade away. Kurzweil isn’t sure exactly how it’s going to happen, but he thinks it will involve connecting each person’s neocortex (the part of the brain that does the thinking) to the cloud.

In doing so, says Kurzweil, we’ll become funnier, smarter and more talented in just about every field of human endeavor. Presumably, once our brains are connected to the cloud, we’d become part of a massive superintelligence. We’d learn languages immediately, and we’d download new skills as easily as we download apps today. If you think Google makes you smart today, just wait until your neocortex is wired right into it!

Singularity and The Internet of People by Chris Richards

How do we get to the Internet of People?

That vision of the future, of course, involves a blurring of the line between artificial intelligence (AI) and human intelligence. And, as you might imagine, when it comes to artificial intelligence, there are multiple paths to that great superintelligence in the cloud, aka the Internet of People. In his highly-acclaimed 2014 book Superintelligence, philosopher Nick Bostrom of the University of Oxford laid out several different paths to superintelligence.

One of these paths – you guessed it – was enhancing humanity’s own biological cognition through the use of genetic engineering. (In other words, we’d tinker with the genetic code of the brain.) But other paths involved futuristic scenarios like “whole brain emulation”, in which biologists and computer scientists work side by side to create a digital copy of the human mind. You’d basically slice and dice a human brain into pieces thin enough that you could stick it on a microchip, or somehow create a purely digital representation of a biological phenomenon.

Singularity and The Internet of People by Chris Richards

People are more important than things

There’s just one problem here, says Bostrom. If machine brains eventually surpass human brains in general intelligence, then any new machine-only superintelligence could replace humans as the dominant species on the planet. The way we think about apes and monkeys now is the way this new superintelligence would think about us poor humans in the future. If this superintelligence were imbued with some sort of moral values, then it might decide to go easy on us, and maybe keep us around in some kind of Matrix. But the more likely scenario, says Bostrom, is that this superintelligence would pose an existential risk to us mere humans and decide to get rid of us before we finish destroying the planet.

That’s why there’s a real imperative to create an “Internet of People” before the “Internet of Things” becomes too powerful. The important point here is that the focus has to be on people, not on machines. That’s why some of the smartest people on the planet – including Bill Gates of Microsoft and Elon Musk of SpaceX and Tesla – have suggested that humans must merge with machines or risk becoming irrelevant.

People are more important than things by Chris Richards

The future

But just think of what might happen if we create the Internet of People. We’d break down geopolitical barriers, creating a massive Internet hive-mind of humanity all united around common goals – such as solving the problems of climate change, eradicating disease, and solving all the problems that are too computationally challenging, even for today’s supercomputers.

There’s a lot of momentum already behind this vision of the world. Softbank CEO Masayoshi Son has already announced that he’s preparing to launch a $100 billion venture capital fund to invest in the Singularity. Philosophers like Nick Bostrom are working with some of the smartest AI minds on the planet for the ethical creation of superintelligence.

And technologists, of course, are already dreaming up some big sky ideas for connecting everyone. Literally. The European Space Agency, for example, is working on an Internet of Things connected by satellites. Imagine if the ESA takes that one step further – an Internet of People powered by AI and hooked up in outer space. That’s a big, big picture. And if futurists like Ray Kurzweil are right, it could be here before your kids finish growing up.

People are more important than things, IOT, Singularity and the Future by Chris Richards

The post Why the “Internet of People” Is Going to Replace the “Internet of Things” appeared first on Intelligent Head Quarters.

Read more here:

OT and Octo Telematics Revolutionize Car Sharing With EasyOpen by Omoove

By IoT – Internet of Things

OT and Omoove are joining forces to transform user experience and increase security for car sharing services. The two companies will present EasyOpen, a service powered by OT’s Secure IOT Cloud that transforms smartphones into car keys, at the Mobile World Congress in Barcelona.

EasyOpen combines OT’s expertise in secure service enablement for smartphones and wearable devices with Omoove’s multiple years of experience in car sharing platforms and on-board technology.

Until now, the majority of car sharing fleet systems have relied on direct communication between components installed in vehicles and a central management system. These purely online systems tended to create latency for users and potentially left vehicles vulnerable to illicit use. To offer a superior level of reliability and solve the latency issues, EasyOpen introduces the use of a smartphone with Near Field Communication (NFC) technology to quickly and safely manage access to vehicles, enabling users to interact directly with the vehicle using a device they already own, even if the smartphone is out of network coverage or runs out of battery. The service can also be extended to any wearable device equipped with an eSE (embedded Secure Element) supporting NFC technology.

Relying on OT’s Secure IOT Cloud, EasyOpen ensures strong authentication of the user’s smartphone and secure provisioning and storage of the digital keys in its eSE from the enrolment to the effective use of the service. In practice, it will provide an extremely simple, intuitive and fast user experience. After initially registering to the service through the EasyOpen application provided by Omoove, users will be able to immediately find vehicles in the area using their smartphone and book the one of their choice. Digital keys will be sent remotely and in a secure way via OT’s Secure IOT Cloud to the user’s device. The user will simply have to place the smartphone close to the windshield to unlock the door. OT’s eSE equipping the NFC receiver installed in the cars will then ensure the secure communication between the user’s device and the car and will also enable the remote management of the access rights at the car level. In case of device loss or theft, the digital key will be disabled remotely to avoid misuse.
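OT’s actual protocol is not public, but NFC digital-key schemes of this kind typically rest on a shared-secret challenge-response: the car issues a fresh random challenge, and the phone’s secure element answers with a MAC that only a device holding the provisioned key can produce. A hypothetical sketch of that idea (the function names and the HMAC-SHA256 choice are illustrative assumptions, not OT’s design):

```python
import hashlib
import hmac
import os

def phone_respond(provisioned_key: bytes, challenge: bytes) -> bytes:
    """What the secure element would compute over the car's challenge."""
    return hmac.new(provisioned_key, challenge, hashlib.sha256).digest()

def car_accepts(provisioned_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The car recomputes the MAC and compares in constant time."""
    expected = hmac.new(provisioned_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)        # digital key provisioned into the phone's eSE
challenge = os.urandom(16)  # fresh nonce from the car's NFC receiver
print(car_accepts(key, challenge, phone_respond(key, challenge)))  # True
# A device without the provisioned key cannot produce a valid response:
print(car_accepts(os.urandom(32), challenge, phone_respond(key, challenge)))  # False
```

Because only a short challenge and response cross the NFC link, the exchange works offline, and remote revocation amounts to deleting or replacing the key held in the secure element.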

“OT is a strategic supplier both to mobile device makers and to the car industry. As such, we are naturally positioned to build the trusted and secure link between smartphones and cars. Octo Telematics has already established its capability and legitimacy to deploy car services at scale in the Usage-Based Insurance sector, and we are really pleased to team up with them as they now aim to reinvent the car sharing experience,” said Viken Gazarian, Deputy Managing Director of Connected Device Makers business at OT.

“As a pioneer and leader of Insurance Telematics, Octo places innovation at the heart of its growth. The agreement between Octo and Oberthur Technologies allows us to leverage the mutual expertise in automotive solutions and IoT applications, where both companies are investing, and to cover the needs of the emerging sharing mobility market for fleet managers or car sharing operators.

Digitalizing and securing the keys in end-users’ devices is the first achievement of the agreement, led by Omoove, focused on shared mobility solutions, and OT, a field-proven expert in securing digital services covering a wide range of smartphones and wearable devices, as well as the automotive space. This is a key asset to offer a ready-to-use and fully-secured solution to Omoove customers, supported by cutting edge technologies, serving a strong and unique user experience” said Giuseppe Zuco, CEO at Omoove and Octo Telematics co-founder.

No keys, no card, you only need a smartphone to open the car and start the journey! Discover the service at the Mobile World Congress on OT’s booth, Hall 6 Stand 6H30.

The post OT and Octo Telematics Revolutionize Car Sharing With EasyOpen by Omoove appeared first on IoT – Internet of Things.

Read more here: