
DTS Play-Fi announces wireless speakers to support Works with Amazon Alexa

By Zenobia Hegde

DTS, a global provider of high-definition audio solutions and a wholly owned subsidiary of Xperi Corporation, has announced the first DTS Play-Fi®-enabled wireless speakers to support Works with Amazon Alexa functionality.

Initial products supported include the Pioneer Elite Smart Speaker F4, Onkyo Smart Speaker P3 and the Phorus PS10. Additional DTS Play-Fi-enabled products, including the Klipsch Stream wireless multi-room audio lineup and models from McIntosh Laboratories, MartinLogan and THIEL Audio, will add the capability by the end of Q1 2018.

Delivered as a Works with Amazon Alexa over-the-air firmware update, the capability lets consumers control audio playback on select DTS Play-Fi products from another room using an Amazon Echo, Echo Dot or Echo Show. Users can verbally ask Alexa to play a song in a specific room, a group of rooms or the whole house, adjust the volume, skip the track forward, and mute, pause or stop the music.

“We continue to expand our range of Alexa voice control solutions available to licensees, making DTS Play-Fi the first open wireless multi-room audio platform to offer both integrated Alexa Voice Service (AVS) and Works with Amazon Alexa,” said Dannie Lau, general manager, DTS Play-Fi, at Xperi. “We look forward to continuing to forge a strong relationship with Amazon, the most widely recognized and adopted voice service on the market.”

“We’re thrilled to offer the Pioneer Elite Smart Speaker F4 and Onkyo Smart Speaker P3 as the first speakers in the DTS Play-Fi ecosystem that can be controlled via Amazon Alexa,” said Nobuaki Okuda, director and CTO, Onkyo Corporation and president, Onkyo and Pioneer Technology Corporation. “With this update, consumers can control their whole-home music system using their voice not only with our speakers, but with a third-party voice control product as well.”

DTS Play-Fi technology enables lossless multi-room wireless audio streaming from the world’s most popular music services, including Amazon Music, Deezer, iHeartRadio, Juke, KKBox, Napster, Pandora, Qobuz, QQ Music, SiriusXM, Spotify and TIDAL, as well as thousands of Internet radio stations and personal music libraries, on any supported product. In addition, DTS Play-Fi offers advanced streaming features such as wireless surround sound, stereo pairing, music station presets, and audio/video synchronisation.

The DTS Play-Fi ecosystem features the largest collection of products in the whole-home wireless audio space, with more than 200 interoperable speakers, sound bars, set-top boxes, and A/V receivers from the top names in premium audio including Aerix, Anthem, Arcam, Definitive Technology, DISH TV, Elite, Integra, Fusion Research, Klipsch, MartinLogan, McIntosh, Onkyo, Paradigm, Phorus, Pioneer, Polk Audio, Rotel, Sonus faber, Soundcast, SVS Sound, THIEL Audio and Wren Sound.


The post DTS Play-Fi announces wireless speakers to support Works with Amazon Alexa appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Dutch Blockchain company, LegalThings aims to update criminal justice via smartphone

By Zenobia Hegde

The justice system is known for many things, but efficiency is not one of them. Neither is being up-to-speed with technology. One joke goes that the unofficial IT slogan of the courts is, “Yesterday’s technology, tomorrow!”

Into this space comes LegalThings, an Amsterdam-based digital contracts company that’s aiming to update how those accused of a crime move through the justice system by making the law accessible while making judicial record-keeping more open and secure.

After winning a “blockathon” competition in September hosted by the Dutch Ministry of Justice and Security, LegalThings began a pilot project with the Public Prosecution Service of the Netherlands, known as the “Openbaar Ministerie,” or OM, in Dutch. The project aims to build a system to process low-level criminal offenders quickly and with more transparency. If successful, it could be a huge time- and money-saving enterprise for the government.

“What you see now [in the justice system] is there is a lot of procedures, and those procedures are important to create a fair legal system, but they’re also really labor-intensive,” said Arnold Daniels, a co-founder of LegalThings and its chief software engineer. “What we’re trying to do is create an alternative to that.”

How might that work in practice? Imagine someone nabbed for possession of a small amount of illicit drugs, a crime that, in the Netherlands, can carry a fine of a few hundred euros. There are a number of parties involved in processing such a law enforcement action: the police who catch the alleged offender, the forensics expert that examines the drugs, and the OM.

Depending on whether the forensic expert is on-site to test the drugs, processing such an enforcement action can take anywhere from several hours to a couple days, said Sanne Giphart, innovation manager at OM. While some record-keeping systems have been made digital, that’s an ongoing process, Giphart explained. Things can move slowly.

[Photo: Arnold Daniels, co-founder of LegalThings]

By contrast, with the LegalThings application, the accused can get an explanation of the relevant law, choose whether to be represented by counsel, and agree to pay the relevant fine—all on their smartphone. All told, the actual processing of the offender takes about 30 minutes, and every step of the exchange is recorded, time-stamped, and made unchangeable using cryptography to ensure records can’t be fudged.
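
The article does not detail LegalThings’ implementation, but the tamper evidence described above is commonly achieved by hash-chaining records, so that altering any entry invalidates everything recorded after it. The following is a minimal, illustrative Python sketch of that general idea only, not LegalThings’ actual system; all field names are hypothetical.

```python
import hashlib
import json
import time

def append_record(chain, payload):
    """Append a time-stamped record whose hash covers the previous entry,
    so any later modification breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "timestamp": time.time(),
        "payload": payload,
        "prev_hash": prev_hash,
    }
    # Hash a canonical serialization of the record itself.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify(chain):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

# Example: each step of a case is appended and can later be audited.
case = []
append_record(case, {"step": "offense recorded", "officer": "badge-1234"})
append_record(case, {"step": "fine accepted", "amount_eur": 300})
print(verify(case))  # True; edit any field and verify() returns False
```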

So far, OM, which is comparable to a mashup of the Department of Justice and local district attorneys in the U.S., has only experimented with the technology on “dummy data” involving a drug offense and a domestic violence offense, Giphart said. “The next step is to let people get familiar with this type of technology within the [OM] and then hopefully we can implement on one stream of cases.”

The challenges to implementing such a system are not purely technological. It also will likely require some changes in both public and institutional attitudes toward judicial record-keeping, said Daniels. “With this system, there’s really no backsies,” he explained. “You can correct it, but you can always see your initial action.”

Unlike other blockchain systems that use a publicly distributed ledger, the LegalThings project with OM allows […]

The post Dutch Blockchain company, LegalThings aims to update criminal justice via smartphone appeared first on IoT Now – How to run an IoT enabled business.

Read more here:: www.m2mnow.biz/feed/

Wearable Data Analytics Bring Humans into the IoT

By IoT – Internet of Things

Data analytics provides companies, healthcare professionals, and consumers alike with further insight into the long stream of data that they receive from various sensors and devices, which increasingly includes wearable devices. Traditional wearables, such as fitness tracking devices, provide the user with raw information such as their heart rate or step count. By applying analytics […]

The post Wearable Data Analytics Bring Humans into the IoT appeared first on IoT – Internet of Things.

Read more here:: iot.do/feed

Embracing the Future – The Push for Data Modernization Today

By Steve Wilkes

There is growing recognition that businesses today need to be increasingly ‘data driven’ in order to succeed. Those businesses that can best utilize data are the ones that can better serve their customers, out-compete their competitors and increase their operational efficiency. However, to be data driven, you need to be able to access, manage, distribute and analyze all of your available data while it is still valuable; and to understand and harness new potential data sources.

Key to this is Data Modernization, which starts with the recognition that existing systems, architectures and processes may not be sufficient to handle the requirements of a data-driven enterprise, and that new, innovative technologies need to be adopted to succeed. While the replacement of legacy technology is not a new phenomenon, the particular set of pressures leading to the current wave of modernization is.

In this article we will delve into the very real pressures pushing enterprises down the path of data modernization, and approaches to achieving this goal in realistic time frames.

Under Pressure

Business leaders worldwide have to balance a number of competing pressures to identify the most appropriate technologies, architectures and processes for their business. While cost is always an issue, it has to be weighed against the rewards of innovation and the risks of failure versus maintaining the status quo.

This leads to technology adoption cycles, with early adopters potentially leap-frogging their more conservative counterparts, who may not then be able to catch up if they wait for full technological maturity. In recent years, the length of these cycles has been dramatically reduced, and formerly solid business models have been disrupted by insightful competitors or outright newcomers.

Data Management and Analytics are not immune to this trend, and the increasing importance of data has added to the risk of maintaining the status quo. Businesses are looking at Data Modernization to solve problems such as:

  • How do we move to scalable, cost-efficient infrastructures such as the cloud without disrupting our business processes?
  • How do we manage the expected or actual increase in data volume and velocity?
  • How do we work in an environment with changing regulatory requirements?
  • What will be the impact and use cases for potentially disruptive technologies like AI, Blockchain, Digital Labor, and IoT, and how do we incorporate them?
  • How can we reduce the latency of our analytics to provide business insights faster and drive real-time decision making?

It is clear to many that the prevalent, legacy Data Management technologies may not be up to the task of solving these problems, and that a new direction is needed to move businesses forward. But the reality is that many existing systems cannot simply be ripped out and replaced with shiny new things without severely impacting operations.

How We Got Here

From the 1980s to the 2000s, databases were the predominant source of enterprise data. The majority of this data came from human entry within applications, web pages, etc., with some automation. Data from many applications was collected and analyzed in Data Warehouses, providing the business with analytics. However, in the last 10 years or so, it was recognized that machine data (logs produced by web servers, networking equipment and other systems) could also provide value. This new unstructured data, with a great amount of variety, needed newer Big Data systems to handle it, and different technologies for analytics.

Both of these waves were driven by the notion that storage was cheap and, with Big Data, almost infinite, whereas CPU and memory were expensive. Outside of specific industries that required real-time actions, such as equipment automation and algorithmic trading, truly real-time processing was seen as out of reach.

However, in the past few years, the industry has been driven to rethink this paradigm. IoT has arrived very rapidly. Connected devices have been around for some time, and industries like manufacturing have been utilizing sensors and automation for years. But it is the consumerization of devices, coupled with the promise of cloud processing, that has really driven the new wave of IoT. And with IoT comes the realization that storage is not infinite, and another processing paradigm is required.

As I outlined in this article, the predicted rate of future data generation (primarily, but not solely, driven by IoT) will massively outpace our ability to store it. And if we can’t store all the data, yet need to extract value from it, we are left to conclude that it must be processed in-memory, in a streaming fashion. Fortunately, CPU and memory have become much more affordable, and what was unthinkable 10 years ago is now possible.

A Streaming Future

Real-time, in-memory stream processing of all data, not just IoT data, can now be a reality, and it should be part of any Data Modernization plan. This does not have to happen overnight; it can be applied use case by use case, without necessitating a rip-and-replace of existing systems.

The most important step enterprise companies can take today is to move towards a ‘streaming first’ architecture, one in which at least the collection of all data is performed in a real-time, continuous fashion. A company can’t modernize overnight, but achieving continuous, real-time data collection lets organizations keep integrating with legacy technologies while reaping the benefits of a modern data infrastructure that can meet the ever-growing business and technology demands within the enterprise.

In practical terms, this means:

  • Using Change Data Capture to turn databases into streams of inserts, updates and deletes;
  • Reading from files as they are written to, instead of shipping complete logs (a minimal sketch of this follows the list); and
  • Harnessing data from devices and message queues without storing it first.
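
As a concrete illustration of the second point, the sketch below tails a log file and yields events the moment they are appended, rather than waiting for a completed log to be shipped. It is a minimal Python example under assumed conditions (a single append-only file, simple polling rather than filesystem notifications) and not Striim’s implementation; the path and the downstream handler are hypothetical.

```python
import time

def follow(path, poll_interval=1.0):
    """Yield new lines from a log file as they are appended,
    instead of waiting for the file to be rotated and shipped."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the end of the file: only new data
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(poll_interval)  # wait for the writer to append more

# Hypothetical usage: push each event downstream as soon as it is written.
# for event in follow("/var/log/app/events.log"):
#     process(event)   # hypothetical downstream handler
```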

Once data is being streamed, the solutions to the problems stated previously become more manageable. Database change streams can help keep cloud databases synchronized with on-premises databases while moving to a hybrid cloud architecture. In-memory edge processing and analytics can scale to huge data volumes and can be used to extract the information content from data, massively reducing its volume prior to storage. Streaming systems with self-service analytics can be instrumental in remaining nimble and in continuously monitoring systems to ensure regulatory compliance. And new technologies become much easier to integrate if, instead of separate silos and data stores, you have a flexible streaming data distribution mechanism that provides low-latency capabilities for real-time insights.
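
To make the volume-reduction point concrete, here is a minimal Python sketch that collapses per-second sensor readings into one summary record per sensor per minute before storage. For brevity it runs over a finite batch; a real stream processor would emit windows continuously, and the field names and window size are illustrative assumptions.

```python
from collections import defaultdict

def aggregate_minutes(readings):
    """Collapse per-second sensor readings into one summary per sensor per
    minute, so only the reduced records need to be stored."""
    windows = defaultdict(list)  # (sensor_id, minute) -> list of values
    for r in readings:           # r = {"sensor": ..., "ts": epoch_secs, "value": ...}
        windows[(r["sensor"], int(r["ts"] // 60))].append(r["value"])
    for (sensor, minute), values in sorted(windows.items()):
        yield {
            "sensor": sensor,
            "minute": minute * 60,          # window start, epoch seconds
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "avg": sum(values) / len(values),
        }

# Sixty raw readings per sensor per minute become a single record each,
# while min/max/avg still carry the analytically useful signal.
```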

Data Modernization is becoming essential for businesses focused on operational efficiency, customer experience, and gaining a competitive edge. And a ‘streaming first’ architecture is a necessary component of Data Modernization. Collecting and analyzing data in a streaming fashion enables organizations to act on data while it still has operational value, and to store only the most relevant data. With data volumes predicted to grow exponentially, a streaming-first architecture is truly the next evolution in Data Management.

The world operates in real time; shouldn’t your business as well?

About the author: Steve Wilkes is co-founder and CTO of Striim. Prior to founding Striim, Steve was the senior director of the Advanced Technology Group at GoldenGate Software, focused on data integration. He continued in this role following the acquisition by Oracle, where he also took the lead for Oracle’s cloud data integration strategy. Earlier in his career, Steve served in senior technology and product roles at The Middleware Company, AltoWeb and Cap Gemini’s Advanced Technology Group. Steve holds a Master of Engineering degree in microelectronics and software engineering from the University of Newcastle-upon-Tyne in the UK.

Related Items:

Streaming Analytics Picks Up Where Hadoop Lakes Leave Off

Streaming Analytics Ready for Prime Time, Forrester Says

Investments in Fast Data Analytics Surge

The post Embracing the Future – The Push for Data Modernization Today appeared first on Datanami.

Read more here:: www.datanami.com/feed/

Backing Up Big Data? Chances Are You’re Doing It Wrong

By Peter Smails

The increasing pervasiveness of social networking, multi-cloud applications and Internet of Things (IoT) devices and services continues to drive exponential growth in big data solutions. As businesses become more data driven, larger and more current data sets become important to support online business processes, analytics, intelligence and decisions. Additionally, data availability and integrity become increasingly critical as more and more businesses and their partners rely on these (near) real-time analytics and insights to drive their business. These big data solutions are typically built upon a new class of hyper-scale, distributed, multi-cloud, data-centric applications.

While these NoSQL, semi-structured, highly distributed data stores are perfect for handling vast amounts of big data on a large number of systems, they can no longer be effectively supported by legacy data management and protection models. A different approach to backup and recovery is needed, not only because of the sheer data size and the vast number of storage and compute nodes, but also because of built-in data replication, data distribution, and data versioning capabilities. Even though these next-generation data stores have integrated high-availability and DR capabilities, events like logical data corruption, application defects, and simple user errors still require another level of recoverability.

To meet the requirements of these high-volume, real-time applications in scale-out, cloud-centric environments, a wave of new data stores and persistence models has emerged. Gone are the days of just files, objects and relational databases. The next-generation key-value stores, XML/JSON document stores, arbitrary-width column stores and graph databases (sometimes characterized as NoSQL stores) share several fundamental characteristics that enable big-data-driven IT. Almost without exception, big data repositories are based on a cloud-enabled, scale-out, distributed data persistence model that leverages commodity infrastructure while providing some form of integrated data replication, multi-cloud distribution and high availability. The big data challenges aren’t limited to data ingest, storage, processing, queries, result-set capture and visualization; they also pose increasing difficulties around data integrity, availability, recoverability, accessibility and mobility. Let’s see how this plays out in a couple of example case studies.


The first case study revolves around an Identity and Access Management service provider that uses Cassandra as its core persistence technology. The IDaaS (Identity as a Service) offering is a multi-tenant service with a mixture of large enterprise, SMB and development customers and partners. The Cassandra database provides a highly scalable, distributed, highly available data store that supports per-tenant custom user and group profiles (read: dynamic, extensible schemas). While the data set may not be very large in absolute storage size, the number of records will easily be in the tens, if not hundreds, of millions.

What drives the unique recoverability requirements are the multi-tenancy and the 100% availability targets of the service. Whether through user error, data integration defects and changes, or simply tenant migrations, it may be necessary to recover a single tenant’s data set without having to restore the whole Cassandra cluster (or a replica thereof). Similarly, the likelihood that the complete Cassandra cluster is corrupt is slim, and in order to maintain (close to) 100% availability for most tenant service instances, partial recovery would be required. This drives the need for some level of application-aware protection and recovery. In other words, the protection and recovery solution must establish and persist some semantic knowledge of the application data in order to recover specific, consistent Cassandra table instances or points in time.
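
As an illustration of what “application-aware” can mean in this Cassandra scenario, the sketch below exports a single tenant’s rows using the open-source DataStax Python driver, so one tenant can be restored without touching the rest of the cluster. It is a simplified sketch, not Datos IO’s product or the provider’s actual tooling; it assumes a table partitioned by a tenant_id column, and the keyspace, table and column names are hypothetical.

```python
import json
from cassandra.cluster import Cluster  # DataStax Python driver for Cassandra

def export_tenant(hosts, keyspace, table, tenant_id, out_path):
    """Export one tenant's rows to JSON so that tenant can be restored
    without recovering the whole cluster."""
    cluster = Cluster(hosts)
    session = cluster.connect(keyspace)
    # Assumes the table is partitioned by tenant_id, so the query touches
    # only that tenant's partitions instead of scanning the whole cluster.
    rows = session.execute(
        f"SELECT * FROM {table} WHERE tenant_id = %s", (tenant_id,)
    )
    with open(out_path, "w") as f:
        for row in rows:
            f.write(json.dumps(dict(row._asdict()), default=str) + "\n")
    cluster.shutdown()

# Hypothetical usage:
# export_tenant(["10.0.0.1"], "idaas", "user_profiles", "tenant-42",
#               "tenant-42-user_profiles.json")
```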

The second case study is centered on a Hadoop clustered storage solution, in which the enterprise application set persists time-series data from devices and their end-user activities in the Hadoop filesystem. The Hadoop storage acts as a de facto “data lake” fed from multiple diverse data sources in different formats, on which the enterprise can apply various forms of data processing and analysis through map-reduce batch processing, real-time analytics, streaming and/or in-memory queries and transformations. Even though a map-reduce job creates ephemeral intermediate and end results that could in principle be recreated by running the job once more in case of failure or corruption, the data set can be too large (and therefore too expensive to reprocess) and may be under constant update.

Even though Hadoop provides replication and erasure-coded redundancy (for high availability and scale-out), there really is no data versioning, or snapshots for that matter (given the originally ephemeral model of map-reduce processing). Any logical error, application or service failure, or plain user error could result in data corruption or data loss. Data loss or corruption could affect the original ingested data, any intermediate ephemeral data or data streams, as well as any resulting datasets or database instances and tables. Rather than creating a full copy of the Hadoop file system for backup and recovery of intermediate files and database tables (which would be cost prohibitive and/or too time consuming), a different approach is needed. That requires a better understanding of the application data sets and their schemas, semantics, dependencies and versioning.
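
One hedged illustration of such a different approach: instead of cloning the entire file system, copy only the directories holding data worth protecting, and only the files that have changed since the last run. The sketch below wraps Hadoop’s distcp -update in Python; the source and target paths are hypothetical, the target file system must already be configured on the cluster, and this still lacks the application-level schema and version awareness the article calls for.

```python
import subprocess

def incremental_copy(src, dst):
    """Copy only new or changed files under one dataset directory,
    instead of cloning the entire Hadoop file system."""
    # `distcp -update` skips files that already match at the destination.
    subprocess.run(
        ["hadoop", "distcp", "-update", src, dst],
        check=True,
    )

# Hypothetical usage: protect just the curated result tables, not the
# ephemeral intermediate map-reduce output.
# incremental_copy("hdfs://prod/warehouse/curated",
#                  "s3a://backup-bucket/warehouse/curated")
```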

Looking at both case studies, there is a common thread driving the need for a different approach to data management, and specifically to backup and recovery:

  • Both Cassandra and Hadoop provide integrated replication and high-availability support. Neither capability, however, provides sufficient, if any, protection against full or partial data corruption or data loss (human, software or system initiated). An application-data-centric, or at least application-aware, backup is needed to support recovery of specific files, tables, tenant data, intermediate results and/or versions thereof.
  • However, a storage-centric (file or object infrastructure) backup solution is not really feasible. The data set is either too large to be repeatedly copied in full, or recovering a full data set requires too large an infrastructure just to extract specific, granular application data items. In addition, storage-centric backups (file system snapshots, object copies, volume image clones or otherwise) do not provide any insight into the actual data sets or data objects that the application depends on. On top of the fully recovered storage repository, an additional layer of reverse-engineered application knowledge would be required as well.
  • Application downtime is more critical now than ever. In both case studies, multiple consuming services or clients depend on the scale-out service and its persistence. Whether it is a true multi-tenant usage pattern or a multitude of diverse data processing and analytics applications, the dataset needs to be available close to 100% of the time. A full data-set recovery would simply take too long, and the end users or clients would incur too much downtime; only specific, partial data recovery would support the required SLAs.

The requirement for an alternative data management and recovery solution is not limited to the Cassandra and Hadoop case studies described above. Most big data production instances ultimately require a data protection and recovery solution that supports incremental data backup and specific, partial or granular data recovery. More importantly, the data copy and recovery process must acquire semantic knowledge of the application data in order to capture consistent data copies with proper integrity and recoverable granularity. This would allow big data DevOps and/or production operations teams to recover just the data items that are needed, without having to do a full big data set recovery on alternate infrastructure. For example, the data recovery service must be able to expose the data items in the appropriate format (e.g. Cassandra tables, Hadoop files, Hive tables) and within a specific application context. At the same time, the protection copies must be distributable across on-premises infrastructure as well as public cloud storage, both to leverage cost-effective protection storage tiering and scaling and to support recovery onto alternate cloud infrastructure.
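
For that last point, tiering a granular protection copy to public cloud object storage can be sketched as follows, assuming an S3-compatible target and the boto3 client; the bucket, key layout and storage class are illustrative assumptions, and a real data management product would add cataloging, encryption and lifecycle policies.

```python
import boto3  # AWS SDK for Python; works with S3-compatible object stores

def tier_protection_copy(local_path, bucket, key):
    """Upload one granular protection copy (e.g. a single tenant's export)
    to object storage for cost-effective tiering and cross-cloud recovery."""
    s3 = boto3.client("s3")
    s3.upload_file(
        local_path, bucket, key,
        ExtraArgs={"StorageClass": "STANDARD_IA"},  # cheaper tier for backups
    )

# Hypothetical usage, pairing with the per-tenant export shown earlier:
# tier_protection_copy("tenant-42-user_profiles.json",
#                      "backup-bucket",
#                      "cassandra/idaas/tenant-42/2018-01-15.json")
```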

A solution that provides big data protection and recovery in a granular and semantically aware fashion not only addresses “Big Data Backup” appropriately, but also creates opportunities to extract and use data copies for other purposes. For example, the ability to extract application-specific data copies, or critical parts of the big data set, enables other users to efficiently obtain downstream datasets for test and dev, data integrity tests, in-house analytics, third-party analytics or potential data market offerings. Combining this with multi-cloud data distribution brings us closer to a multi-cloud data management solution that starts to address today’s and tomorrow’s needs for application and data mobility, as well as their full monetization potential.

About the author: Peter Smails is vice president of marketing and business development at Datos IO, provider of a cloud-scale, application-centric data management platform that enables organizations to protect, mobilize, and monetize their application data across private cloud, hybrid cloud, and public cloud environments. A Dell EMC veteran, Peter brings a wealth of experience in data storage, data protection and data management.

Related Items:

Big Data Begets Big Storage

Data Recovery Gets Speed, Security Boost

The post Backing Up Big Data? Chances Are You’re Doing It Wrong appeared first on Datanami.

Read more here:: www.datanami.com/feed/