Graph Databases Ascend to the Cloud

By George Leopold

Bucking the trend toward open source databases, cloud vendors continue to promote proprietary graph databases that combine the ability to handle multiple data models with the distribution of data across different cloud regions.

While Amazon Web Services (NASDAQ: AMZN) has embraced an open-source approach with its new Neptune graph database, cloud database competitors Microsoft and Oracle (NYSE: ORCL) are betting there is plenty of life—and revenues—in cloud-based proprietary approaches. Oracle is promoting its upcoming “self-driving database” that leverages AI features to automate administrative tasks. Automation makes the database cheaper to run in the Oracle cloud, Oracle CTO Larry Ellison claimed last month.

Microsoft is meanwhile zeroing in on the vibrant graph market with Azure Cosmos DB, a multi-model database that includes graph support.

Multi-model databases are designed to support several data models at once, such as a combination of document and key-value stores along with a graph capability. Observers note that one advantage of the approach is that graph and key-value queries can be run against the same data. The downside, they add, is that performance cannot match that of a dedicated, single-model database management system.
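
To make the multi-model idea concrete, here is a minimal, purely illustrative Python sketch (not any vendor's API) of a single data set served both by a key-value lookup and by a graph traversal:

    # Illustrative only: one logical data set, exposed as a key-value store
    # and as a graph of relationships over the same records.
    people = {
        "alice": {"name": "Alice", "age": 34},   # key-value view: fetch by key
        "bob":   {"name": "Bob",   "age": 29},
    }
    knows = {                                    # graph view: adjacency lists
        "alice": ["bob"],
        "bob":   [],
    }

    # Key-value query: direct lookup by key.
    print(people["alice"])

    # Graph query: traverse "knows" edges starting from the same record.
    print([people[friend]["name"] for friend in knows["alice"]])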

Along with Azure Cosmos DB, other multi-model databases with graph support include ArangoDB and Sqrrl.

Microsoft (NASDAQ: MSFT) released Azure Cosmos DB details in December, including specs on APIs used to access and query data. Emphasizing the ability to distribute data across different Azure cloud regions, the company released a “multi-homing API” designed to reduce latency by identifying customers’ nearest cloud region, then sending data queries to the closest datacenter.

In support of its multi-model approach, Microsoft also released a batch of APIs that support SQL, MongoDB, Cassandra, Graph (Apache Gremlin) and a key-value database service called Table API. The company said in December that it plans to add support for other data models.
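
For the graph side, queries go through the open-source Apache TinkerPop Gremlin driver. The following is a hedged sketch using the gremlinpython package; the account endpoint, database, graph, and key shown are placeholders, and the exact connection settings for a given Cosmos DB account may differ:

    # Hedged sketch: submitting a Gremlin traversal to a Cosmos DB graph
    # endpoint with the open-source gremlinpython driver. Endpoint, database,
    # graph, and key values are placeholders.
    from gremlin_python.driver import client, serializer

    g_client = client.Client(
        "wss://<account>.gremlin.cosmos.azure.com:443/",   # placeholder endpoint
        "g",
        username="/dbs/<database>/colls/<graph>",
        password="<primary-key>",
        message_serializer=serializer.GraphSONSerializersV2d0(),
    )

    # Ask for the names of all 'person' vertices in the graph.
    result_set = g_client.submit("g.V().hasLabel('person').values('name')")
    print(result_set.all().result())
    g_client.close()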

Microsoft is aiming Azure Cosmos DB at the emerging Internet of Things market along with web and mobile applications requiring “massive amounts of data, reads and writes at a global scale with near-real-time response times for a variety of data,” the company said.

The introduction of Azure Cosmos DB drew mixed reviews in a lively, detailed discussion on the Azure Cosmos DB website. After identifying several shortcomings, one long-time Azure storage table user concluded that the new database “is not production ready.”

Another user raised an issue that cloud vendors are likely to hear frequently in coming months: A globally distributed database may reduce latency, “but we need to give our users the choice of where their data is stored—particularly in regards to the [General Data Protection Regulation] that goes live in May 2018” within the European Union.

Recent items:

A Look at the Graph Database Landscape

AWS, Others Seen Moving Oracle Databases


Cloudera announces the upcoming beta release of Cloudera Altus Analytic DB

By Zenobia Hegde

Cloudera, Inc., the modern platform for machine learning and analytics, optimised for the cloud, announced the upcoming beta release of Cloudera Altus Analytic DB. Cloudera Altus Analytic DB is the first data warehouse cloud service that brings the warehouse to the data through a unique cloud-scale architecture that eliminates complex and costly data movement.

Built on the Cloudera Altus Platform-as-a-Service (PaaS) foundation, Altus Analytic DB delivers instant self-service BI and SQL analytics to anyone, easily, reliably, and securely. Furthermore, by leveraging the Shared Data Experience (SDX), the same data and catalog are accessible to analysts, data scientists, data engineers, and others using the tools they prefer – SQL, Python, R – all without any data movement.

For many enterprises, challenges with existing analytic environments have resulted in a number of limitations for both business analysts and IT. Constraints on resources mean critical reporting and SLAs are given priority while limiting self-service access for other queries and workloads.

To support additional workloads and access beyond SQL, data silos have proliferated, resulting in inefficiencies in managing the multiple data copies, difficulties in applying consistent security policies, and governance issues. Meanwhile, business users struggle to analyse data across these silos, which limits their ability to collaborate with groups including data scientists and data engineers.

Cloudera Altus Analytic DB removes those limitations through the speed and scale of the cloud. Central to Altus Analytic DB is its unique architecture that brings the warehouse to the data, enabling direct and iterative access to all data in cloud object storage.

This simple yet powerful design delivers dramatic benefits for IT and business analysts, as well as for non-SQL users.

  • IT benefits from simple PaaS operations to easily and elastically provision limitless isolated resources on demand, with simple multi-tenant management and consistent security policies and governance.
  • Business analysts get immediate self-service access to all data without risking critical SLAs, and with predictable performance no matter how many other reports or queries are running. Additionally, they can continue to leverage existing tools and skills, including integrations with leading BI and integration tools such as Arcadia Data, Informatica, Qlik, Tableau, Zoomdata, and others.
  • With no need to move data into the database, shared data and the associated data schemas and catalog are always available for iterative access beyond just SQL, so data scientists, data engineers, and others can seamlessly collaborate (a sketch of such in-place SQL access appears after this list).
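
Altus Analytic DB is built on the Apache Impala SQL engine (as noted below). The following is a hedged sketch of what querying data in place might look like, assuming a reachable Impala endpoint and the open-source impyla client; the host, bucket, and table names are hypothetical:

    # Hedged sketch: SQL over data already sitting in cloud object storage,
    # via the open-source impyla client for Apache Impala. Host, bucket, and
    # table names are hypothetical.
    from impala.dbapi import connect

    conn = connect(host="impala-coordinator.example.com", port=21050)
    cur = conn.cursor()

    # Define an external table over Parquet files in object storage
    # (no data movement into the warehouse).
    cur.execute("""
        CREATE EXTERNAL TABLE IF NOT EXISTS web_events (
            user_id STRING,
            event   STRING,
            ts      TIMESTAMP
        )
        STORED AS PARQUET
        LOCATION 's3a://example-bucket/web_events/'
    """)

    # Run an ad hoc BI-style query against the shared data.
    cur.execute("""
        SELECT event, COUNT(*) AS n
        FROM web_events
        GROUP BY event
        ORDER BY n DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)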

“With Cloudera’s unique architecture, we have helped our customers modernise their data warehouse both on-premises and in cloud environments,” said Charles Zedlewski, senior vice president, Product at Cloudera. “Cloudera Altus Analytic DB continues that trajectory, making it even easier for analysts to get dedicated, self-service access for BI and SQL analytics, all with an enterprise focus.

“With no need to move data into the Cloudera Altus platform, users can quickly spin up clusters for business reporting, exploration, and even use Altus Data Engineering to deploy data pipelines, all over the same data and Shared Data Experience without impacting performance or SLAs.”

Key capabilities of Cloudera Altus Analytic DB

Cloudera Altus Analytic DB, built with the leading high-performance SQL query engine, Apache Impala (recently graduated to a […]


Convergence of Big Data, IoT and AI to Drive Next Generation Applications

By A.R. Guess

A new press release reports, “Big Data Analytics is bringing a step change in innovation across all sectors of the economy for efficient data management. Disruptive technology innovations in the information and communication technology (ICT) space, such as artificial intelligence (AI), Internet of Things (IoT), self-service visualization and structured query language (SQL), have […]


AI Seen Better Suited to IoT Than Big Data

By George Leopold

More cold water is being thrown on the notion that AI might be the next big thing for big data. Rather, a new survey finds that AI branches such as machine learning are perceived as being more useful for applications such as Internet of Things deployments, where automation tools can be used to streamline business operations.

By contrast, the survey of a thousand or so IT professionals by market research firm GlobalData found continuing “heavy reliance” on business intelligence tools, with 40 percent ranking BI above all other tools for analyzing data. Hence, the researcher concludes, AI is likely to play a larger role in IoT deployments than in data analytics.

The downside, the survey finds, is that “with the broad market trend toward the democratization of data now well-established, such do-it-all BI software platforms have already given way to numerous smaller, more discrete ways of deriving value from enterprise data,” including direct SQL query, predictive data modelers and auto-generated data discovery visualizations.

Meanwhile, on the list of key benefits associated with IoT deployments, analytics applications such as “enhanced insight and decision-making” finished dead last among respondents, trailing considerations such as using AI to automate operations and help reduce costs.

The researcher concludes that much of the misalignment between AI and data analytics stems from centralization, which is seen as the basis of traditional BI analysis and reporting along with predictive modeling. “Where AI is most valuable, however, is at the edge,” GlobalData notes. “IoT deployments need to employ tools like [machine learning], not centrally, but at the edge, close to the device itself.

“And like today’s enterprise software, those analytics endeavors should be brief and to the point, and focused on solving specific challenges,” the researcher continued.

Other branches of AI such as deep learning may prove more broadly applicable to analytics as well as IoT. In the case of deep learning, the researcher argues that combinations of smaller decision-making algorithms can be used to create “a larger, seemingly intelligent system.”

Industry veterans sometimes lump IoT together with traditional applications such as predictive analytics and big data, generally. For instance, Tom Siebel, whose latest venture is called C3 IoT, described to Datanami earlier this year a list of emerging “vectors” that include AI, big data, cloud computing and IoT.

“Those are the horses that we’re riding, and those vectors appear to be converging,” Siebel said. “With those technologies, we’re able to solve classes of problems that were previously unsolvable.”

Recent items:

Q&A With C3 IoT’s Tom Siebel

Data, Security Frameworks Emerge for IoT


ITW 2017: Visualization is the Key to Understanding Connectivity

By Frank J. Ohlhorst

Chicago, ITW 2017: Ben Edmond, CEO and founder of Connected2Fiber, posed an interesting question to those looking to connect locations to data centers and to other locations. Edmond asked, “What if you had a crystal ball that could show you all fiber routes?”

For many, the answer to that question is obvious; huge amounts of time could be saved, connectivity would accelerate, prices would drop, and CIOs would jump for joy. While that may be an oversimplification, knowing how, where, when, and why to connect proves critical in today’s world. As businesses move to the cloud, diversify data centers, and build branch offices, affordable and reliable connectivity becomes the crux of success.

What’s more, the flow of data is increasing exponentially, and elements such as latency and bandwidth are becoming ever more important. Add to that QoS concerns, and it becomes evident that connectivity has evolved into more than joining point A to point B. In essence, connectivity has become the backbone of a business’s nerve center.

Edmond has his own answer to his crystal-ball query, one that has resulted in the development of a SaaS platform that does more than just map out fiber: it also gives context to what that fiber is all about. In other words, the platform wraps all sorts of metadata around fiber maps, allowing businesses to make more intelligent decisions about connectivity.

Edmond said, “It’s more than just visualizing the physical locations for connectivity; it’s really all about offering a platform that drives efficiency.” That efficiency comes in many forms: Connected2Fiber has processed 220 million addresses in the last 18 months, creating a data set that is unrivaled by any other. That data is quickly becoming key for connectivity providers, cable companies, rural ILECs, and many others to introduce efficiency into their interconnects and to gain a better understanding of what is happening over their links, including how they are utilized.

Edmond added, “Another interesting use case is one of the commercial real estate market, where those managing large properties are now gaining a better perspective of the options tenants have when it comes to connectivity.”

An interesting point, one must concede, especially as businesses such as retail and services have come to rely more and more on technology that demands reliable connectivity back to cloud services, IoT devices, VoIP systems, and countless other digital use cases. That said, today’s REITs have to explore the connectivity question to remain competitive. After all, no business is going to move into an office building if it cannot connect to its cloud-based services.

With Connected2Fiber, Edmond has proved that information is a good thing to have, especially when attempting to link to services in the world of technology.


Domain Names Are Fading From User View

By Karl Auerbach

The internet has changed and evolved ever since its ancestors first came to life in the late 1960s. Some technology fades away and is forgotten; other aspects continue but are overlaid, like geological sediments, so that they are no longer visible but are still present under the surface.

The Domain Name System — both the DNS technology and the deployed naming hierarchy we all use — is among those aspects of the internet that, although they feel solid and immutable, are slowly changing underneath our feet.

Act I: In Which DNS Fades to Translucent Grey

Internet Domain Names had a good twenty-year run from the early days of the World Wide Web (1995) through 2015.

Some people made a lot of money through domain name speculation. Others made money by wallpapering Google Ad Sense advertisements over vacuous websites. And a busload of attorneys made a good living chasing down shysters trying to make a buck off of the trademarks of others.

And through its perceived control of domain name policy, ICANN grew into an ever-bloating, money-absorbing bureaucracy worthy of Jonathan Swift.

But things are changing. The days of domain names as the center of internet policy and internet governance are ending. Domain name speculation will slowly become a quaint shadow of its former self.

What is driving these changes?

It is not that the Domain Name System (DNS) is becoming less important as a technical way of mapping structured names into various forms of records, most often records containing IP addresses.

Nor is the Domain Name System used less than heretofore.

Nor are the knights of intellectual property becoming any less enthusiastic about challenging every domain name that they feel does not pay adequate homage to the trademarks they are protecting.

And national governments continue to believe that domain names are the holy grail of levers they can use to impose their views of right and proper behavior onto the internet.

All of that remains. And it will remain.

What is happening to DNS is more subtle: Domain names are slowly becoming invisible.

For many years internet users could not avoid domain names. DNS names were highly visible. And domain names were everywhere. DNS names were part of e-mail addresses, DNS names were prominent parts of World Wide Web URLs, and DNS names that were based on words formed a rough, but useful, taxonomy of web content.

But the sea-level of internet technology is slowly rising. We now live in a world of web search engines. We now have personalized lists of “contacts”. We now use a profusion of “apps”. And we now spend much of our online lives inside walled gardens and social networks (such as Facebook, Twitter, or various games.)

Even in places where users formerly uttered or typed email-addresses (containing domain names) or web addresses, we now enter keystrokes or words that are used by user interface code to search for the thing we want and make suggestions.

For example, when I send an e-mail, I usually don’t need to type more than two or three characters of the name of the desired recipient; for every keystroke the software goes to my contact list, does a search, and shows me the possible outcomes. Similarly, on web browsers the old “address bar” has become a place for the user to send search targets to a web search company. In both of these examples the user no longer really deals with domain names (even though in both of these examples there are domain names — sometimes visible, sometimes hidden — underneath the search results.)
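
That keystroke-driven contact lookup is essentially an incremental prefix search over a local list. A minimal Python sketch, with made-up names and addresses:

    # Minimal sketch of keystroke-driven contact lookup: the user types a few
    # characters, the client searches the local contact list, and the e-mail
    # addresses (domain names included) stay hidden behind display names.
    contacts = {
        "Alice Example": "alice@example.com",    # made-up entries
        "Albert Sample": "albert@sample.org",
        "Bob Demo":      "bob@demo.net",
    }

    def suggest(prefix):
        """Return (name, address) pairs whose display name starts with prefix."""
        p = prefix.lower()
        return [(name, addr) for name, addr in contacts.items()
                if name.lower().startswith(p)]

    print(suggest("al"))   # both Alice and Albert match after two keystrokes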

In the world of Apps, games, and walled gardens there may not even be a way for a user to utter a domain name.

And if a user does mention a domain name it is frequently in the form of a shortened URL that has no resemblance to the actual domain name of the target resource.

You can confirm this by asking yourself: “When was last time I used a domain name while using Facebook or Twitter, or when playing my favorite game?” Few of us have ever used a domain name when giving an order to an Amazon Echo (“Alexa”) or a Google Home (“OK Google”).

Act II: DNS Remains, But Quietly Hovers In The Background

DNS is not being abandoned; the domain name system is as robust, powerful, and important today as it ever was.

However, DNS is being veiled. So that rather than being a central figure, visible to all, it does its job behind the scenes where few but internet operators and repair techs see it.

In days past, you or I may have gone to a web browser and entered a URL that looked like, http://upstairs-thermostat.myhouse.tld/. But today I use an Internet of Things device and say “Alexa, set the upstairs thermostat to 68 degrees.”

Same activity, same request, same devices, but the domain name has gone away and been replaced with a more convenient handle.

I use the word “handle” quite intentionally. One of the aspects of the post-DNS internet is that names are becoming contextual. These new names often exist within the context of a particular person (as in a personal contact list) or a particular walled garden (such as Twitter).

Contextual names let us escape the rules and disputes — and costs — that came from the “globally unique identifier” view of the domain name system. You and I can each use the name “upstairs thermostat”; the context prevents collisions; the context differentiates between your “upstairs thermostat” and mine.
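
A toy Python sketch of such context-scoped handles (the contexts and addresses are invented for illustration):

    # Toy sketch: the same handle resolves differently in different contexts,
    # so no global uniqueness, and no global registry, is required.
    contexts = {
        "your-house": {"upstairs thermostat": "10.0.0.12"},
        "my-house":   {"upstairs thermostat": "192.168.1.40"},
    }

    def resolve(context, handle):
        """Resolve a human-friendly handle within a given context."""
        return contexts[context][handle]

    print(resolve("your-house", "upstairs thermostat"))   # 10.0.0.12
    print(resolve("my-house", "upstairs thermostat"))     # 192.168.1.40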

These new names will often be used on software that internally uses domain names to tie things together. There is no doubt that Twitter, for example, has lots of internal domain names. But those domain names have become merely internal gears and wheels; they are as invisible as the pistons in the motor of a gasoline-powered automobile.

The DNS system will remain as a means of using structured names — words connected by dots — to obtain various forms of records that can contain things as varied as IP addresses, geographic locations, e-mail exchange server lists, VoIP PBX locations, etc. But it will be software rather than humans that originates those structured names and uses the lookup results. That software may, but frequently will not, make those underlying structured DNS names, or the lookup results, visible to the human user.
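
When software does consult that machinery, the lookups it issues behind the scenes look something like the following sketch, which uses the third-party dnspython package (versions before 2.0 call the same function query() rather than resolve()):

    # Sketch of the DNS lookups application software performs behind the
    # scenes, using the third-party dnspython package.
    import dns.resolver

    # Address records: the classic name-to-IP-address mapping.
    for rr in dns.resolver.resolve("example.com", "A"):
        print("A ", rr.address)

    # Mail exchanger records: the "e-mail exchange server lists" noted above.
    for rr in dns.resolver.resolve("example.com", "MX"):
        print("MX", rr.preference, rr.exchange)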

The fading of domain names brings benefits.

Some troublesome things will begin to end.

Domain names will no longer be perceived as being particularly valuable ways to express semantically meaningful labels.

  • This will remove much of the energy that powered the DNS trademark wars that we have seen over the past twenty years. (But don’t expect the trademark protection industry to give up their relentless effort to own even private, local uses of some names — that company in Atlanta will probably want to try to prevent people in their own homes from using the word “coke” to refer to any brown carbonated sugar drink other than their own.)
  • And it will also tend to de-energize marginal internet activities such as typosquatting in order to pick up advertising impressions or click dollars from people who accidentally mis-typed a domain name into a browser.
  • And it will obviate the need for most of the functions of ICANN.

Opportunities will arise for application-specific or community-specific naming systems:

  • New names can be more descriptive of classes of possible targets rather than being tightly bound: You could, for example, say “ATM” and not be locked into ATMs operated by wellsfargo.com.
  • New names do not need to fit into the confined strictures of domain names.
  • New names need to be less like “names” and more like “descriptions” — more on that below.

Act III: There’s Still Gold In Internet Naming

The loss of domain names as baubles for speculation does not mean that entrepreneurs and Procrustean government bureaucrats must fold their tents and skulk away into the night.

Even after DNS becomes merely an internal organ of the internet, there will be plenty of opportunities for fun and profit.

Companies that are presently operating as domain name registries and registrars are well poised to capitalize on the new systems: they already have much of the customer- and user-facing “front office” infrastructure that will be required to service whatever naming systems may arise.

There are two broad areas in which internet naming will probably evolve: entity naming and describing.

Entity Naming

The need to attach names to specific things will be with us forever. And there will always be a need to turn names into some sort of concrete handle to those things. This will be, as it always has been, tied to the problems of figuring out where that thing is (i.e. its address) and how to get there (i.e. the route.)

One of the prime values of DNS as it exists today is that almost everybody voluntarily chooses to use a single base root. So we have a global shared system that assures that all names attached to that root are unique.

That uniqueness is important, but it is not always necessary — sometimes people want a solid distributed name-to-record lookup system that is not dependent on a global root outside of their control. Sometimes people just want a private name space for some private purpose. DNS technology, as opposed to “the” domain name system, provides a useful tool for these purposes.

The name model of DNS is extremely useful, but it is simplistic: It is a hierarchy, represented by names separated by dots, that leads to sets of records that can contain various types of data. That simplicity has allowed DNS to be robust and reliable. But that same simplicity creates limits.
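
That model in miniature, as a toy Python illustration rather than real DNS code:

    # Toy illustration of the DNS data model: dot-separated labels form a
    # hierarchy whose leaves hold sets of typed records. Not real DNS code.
    zone = {
        "tld": {
            "myhouse": {
                "upstairs-thermostat": {"A": ["10.0.0.12"]},
            },
        },
    }

    def lookup(name, rrtype):
        """Walk the hierarchy right to left, e.g. 'upstairs-thermostat.myhouse.tld'."""
        node = zone
        for label in reversed(name.split(".")):
            node = node[label]
        return node[rrtype]

    print(lookup("upstairs-thermostat.myhouse.tld", "A"))   # ['10.0.0.12']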

The world is evolving so that that simple model of names-to-records will become increasingly inadequate. I’ve written a couple of papers on this topic:

  • On Entity Associations In A Cloud Network – The argument made here is that as we move towards cloud based resources the simple mapping of DNS names to a relatively fixed set of answers is not sufficient to accommodate the motion, partitioning, and coalescing of cloud based computing and data resources.
  • Thoughts On Internet Naming Systems – This presentation addresses certain presumed characteristics of domain names that are not necessarily true in practice and are likely to become even less true in the future. For example, many of us tend to presume that DNS names will generate the same answers no matter who requests a DNS lookup. And all of us are increasingly aware that DNS names are not permanent: the underlying records, and sometimes even the names themselves, change or even disappear.

And even though name-to-record lookup machinery such as DNS will remain valuable, it must evolve so that it can provide greater security and consistency.

The larger area of future change lies in the area described by the first of the papers above — in the realm of lookups based on descriptions and attributions.

Attribute and Description Based Systems

Whether in real life or on the internet, often you want something that is a member of a class rather than a specific member of that class. You often just want “a Pepsi” rather than a specific bottle of that drink; you usually don’t care which bottle for your needs, the various bottles are equivalent and interchangeable. A word for this is “fungible”.

As is described in my paper On Entity Associations In A Cloud Network the internet is evolving so that there may be many resources that would satisfy any one of our (or our application’s) needs. DNS is often not the best solution for this kind of resource search. Attribute and description based systems would be better, particularly if they had some leeway to find things that are “near” or “similar to” the description or attributes.

We are familiar with this kind of search. For example, web search engines, such as Google, try to show us web search results that locate the best or nearest solutions, not necessarily the perfect solution. And many apps on mobile devices aspire to discover resources based on their distance to your current location.

We can anticipate that use of this kind of thing will increase.

Descriptions and attributes can be self-published by devices and services as they are deployed (or as cloud entities split or coalesce) or they can be published by those who manage such devices or services. This publication could be in the form of simple ad hoc text, as is done for much that is on the web, or be formalized into machine-readable data structures in JSON or XML.
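
A sketch of what that might look like: resources self-publish small JSON descriptions and a client asks for "an ATM near me" by attributes rather than by name. The attribute schema here is invented purely for illustration:

    # Sketch: resources self-publish JSON descriptions; a client matches on
    # attributes ("an ATM near me") instead of a globally unique name.
    # The attribute schema is invented purely for illustration.
    import json, math

    published = [
        json.dumps({"kind": "atm", "operator": "anybank",   "lat": 37.79, "lon": -122.40}),
        json.dumps({"kind": "atm", "operator": "otherbank", "lat": 37.33, "lon": -121.89}),
        json.dumps({"kind": "thermostat", "room": "upstairs"}),
    ]

    def find_nearest(kind, lat, lon):
        """Return the closest published resource of the requested kind."""
        docs = [json.loads(d) for d in published]
        docs = [d for d in docs if d.get("kind") == kind]
        return min(docs, key=lambda d: math.hypot(d["lat"] - lat, d["lon"] - lon))

    print(find_nearest("atm", 37.77, -122.42))   # the nearer of the two ATMs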

There is lots of room for innovation in this realm; and possibly lots of room to glue-on revenue producing machinery, much as Google did when it attached advertising to web searching.

Epilogue: The Internet Twenty Years Hence (2037)

Relatively few of us remember the internet as it was twenty years ago when the World Wide Web was just getting started. What will it look like twenty years in the future?

We can be sure that, whatever it looks like to users, there will be a lot of ancient machinery, such as DNS, lurking inside.

It is likely that human users will increasingly interact with computer and networks resources much as they interact with other humans — in ad hoc and informal ways. Humans are notoriously vague and ambiguous; that will not change in the future. This means that our computerized systems will have to become more human in the ways that they resolve that ambiguity into concrete results and actions. This, in turn, means that computerized systems will have to become more aware of context and use fewer “names” and more “descriptions” when trying to satisfy human requests.

The introduction of context into network naming will mean more opportunities for damage to human privacy. The tension between convenience and privacy will increase.

As the network world becomes more contextual, it will become harder to diagnose and isolate problems and failures.

Footnote: What Do We Do With An ICANN That Has Lost Most Of Its Purpose?

The vast bulk of ICANN’s machinery and staff is present to support the domain name selling industry. As this paper indicates, we can anticipate that that industry will shrink and consolidate. And fights over domain names will fade as domain names lose their semantic weight or become hidden artifacts rarely seen by anyone except internet technicians.

The ICANN traveling circus of international meetings will become as interesting as a meeting about the future of Lotus 1-2-3.

ICANN’s income stream will shrink; ICANN will no longer be able to support its grandiose office suites, staff, and hyperbolic procedures.

ICANN will have to retreat back to what it should have been in the first place — a technical coordinator, a source of operational service levels for DNS roots and TLD servers, and secretariat for protocol parameters such as DNSSEC keys and IP protocol numbers.

Written by Karl Auerbach, Chief Technical Officer at InterWorking Labs


Investors Bullish on GPU-Based Database Startup

By George Leopold

MapD Technologies, the big data analytics platform startup developing a parallel SQL database that runs on GPUs, has more than doubled its venture-funding total with the close of its latest investment round led by New Enterprise Associates (NEA).

San Francisco-based MapD, which leverages GPU technology to speed its SQL query engine, said Wednesday (March 29) its latest funding round garnered $25 million from NEA and existing investors Nvidia (NASDAQ: NVDA), Vanedge Capital and Verizon Ventures. The Series B round brings MapD’s total venture funding to just over $37 million.

The analytics startup said it would use the funding to accelerate analytics platform development as it seeks to make inroads in the enterprise big data market. The company uses Nvidia’s GPUs to run enterprise applications that include machine learning and numerical computations. An upgraded platform will expand data analytics capabilities.

The startup’s platform combines its GPU-based SQL query engine with data visualization tools designed to allow analysts and data scientists to crunch multi-billion-row data sets. “GPU-powered analytics is going to radically change the data analytics market,” predicted Greg Papadopoulos, a venture partner at NEA.

Since announcing a Series A funding round in March 2016, MapD said it has unveiled new query engine features, including the second version of its Core database and Immerse visual analytics platforms. Meanwhile, Amazon Web Services, Google, Microsoft Azure and IBM SoftLayer have all launched GPU cloud instances that have increased enterprise access to the MapD platform.

Amazon (NASDAQ: AMZN) Web Services announced last fall it would offer public cloud services based on the Tesla K80 GPUs from MapD investor Nvidia.

Founded in 2013, MapD Technologies originated from research at the MIT Computer Science and Artificial Intelligence Laboratory. Other seed investors include In-Q-Tel, the CIA’s venture capital arm.

The startup places heavy emphasis on leveraging fast hardware to maximize speed. “The first thing is we try to cache the hot data across multiple GPUs,” MapD CEO Todd Mostak stressed in an interview last fall. “We’re a column store. We’re compressing the data, so you can have many, many billions of rows in that GPU memory.”
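
The column-store-plus-compression idea Mostak describes can be pictured with a small, purely conceptual Python sketch; this illustrates the general technique, not MapD's actual formats:

    # Conceptual sketch of a column store with dictionary encoding: values are
    # held per column and low-cardinality columns are replaced by small integer
    # codes, which is what lets billions of rows fit in GPU memory. This is
    # the general technique, not MapD's actual format.
    rows = [
        ("2017-03-01", "NYC", 12.0),
        ("2017-03-01", "SFO", 15.5),
        ("2017-03-02", "NYC", 11.2),
    ]

    # Row store -> column store: one array per column.
    dates, cities, values = (list(col) for col in zip(*rows))

    # Dictionary-encode the low-cardinality city column.
    city_dict = {city: code for code, city in enumerate(sorted(set(cities)))}
    encoded_cities = [city_dict[c] for c in cities]

    print(city_dict)        # {'NYC': 0, 'SFO': 1}
    print(encoded_cities)   # [0, 1, 0]: compact codes, cheap to cache on GPUs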

MapD investor and GPU leader Nvidia has been pitching its latest GPU technology as a way to speed SQL workloads. In one example, MapD’s platform fuses its GPU-based database with a collection of visualization tools to enable users to work with huge geospatial data sets.

Another area ripe for GPU-backed databases is Internet of Things deployments that continue to generate huge troves of data. Mostak has argued that current CPU-based approaches won’t scale as new requirements emerge for merging streamed data and historical analysis extending out to 90 days.

Mostak predicted recently that infrastructure-heavy industries such as telecommunications would be among the early adopters of GPU-based analytics. That prediction is supported by early investments from Verizon Ventures.

Recent items:

Why America’s Spy Agencies Are Investing in MapD

GPUs Seen Lifting SQL Analytic Constraints


Time Series Database SiriDB is now Open Source

By IoT – Internet of Things

Transceptor Technology is proud to announce SiriDB to the world. This TSDB was created to analyze and aggregate time series data from any source, from IoT to financial transactions to any other metric data stream. SiriDB is a fully open-sourced time series database written in native C that has been in private development for two years. Optimized to grow with your insert and query needs, SiriDB gives you control over endless time series data. Time series data occurs wherever the same measurements are recorded on a regular basis; common examples are temperature, rainfall, CPU usage, stock prices, and even sunspots.

What is a time series database?

As its name suggests, it involves time. A time series is a set of data points collected, usually at regular intervals. These data points can be used to analyze what happened in the past, to help forecast the future, or to perform other analyses. Such analyses can help you find patterns, trends, cycles, anomalies, variability, rates of change, and much more. The information provides an organization with valuable insights and the ability to anticipate events before they happen.
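
For a concrete feel, here is a small, self-contained Python sketch of two of those analyses, a moving average to expose a trend and a crude anomaly flag, over made-up temperature readings (nothing here is SiriDB-specific):

    # Small sketch of trend and anomaly analysis on a time series.
    # The readings are made up; nothing here is SiriDB-specific.
    points = [("2017-01-01T00:00", 20.1), ("2017-01-01T01:00", 20.3),
              ("2017-01-01T02:00", 20.2), ("2017-01-01T03:00", 35.0),  # spike
              ("2017-01-01T04:00", 20.4)]

    values = [v for _, v in points]

    def moving_average(series, window=3):
        """Smooth the series to make the underlying trend visible."""
        return [sum(series[i:i + window]) / window
                for i in range(len(series) - window + 1)]

    mean = sum(values) / len(values)
    anomalies = [(ts, v) for ts, v in points if abs(v - mean) > 5]

    print(moving_average(values))   # smoothed view of the series
    print(anomalies)                # [('2017-01-01T03:00', 35.0)]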


  • Robust – SiriDB’s clustering mechanism provides the possibility to update and maintain the database while remaining online.
  • Scalable – SiriDB is scalable by using a unique pool mechanism that allows pools to be added on the fly when needed. When a pool is added, data is automatically divided evenly over all available pools, providing optimal usage of all available computing resources (a conceptual sketch of this kind of even distribution follows this list).
  • Fast – SiriDB uses a unique algorithm to store its time series data without using bulky indexes. This algorithm allows the custom query language to distribute queries over all pools making data retrieval incredibly fast.
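
One simple way to picture "dividing data evenly over pools" is to hash each series name to a pool. The sketch below illustrates that general idea only; it is not SiriDB's actual algorithm:

    # Conceptual sketch of spreading series across pools by hashing the series
    # name. Illustrates the general idea only; not SiriDB's actual algorithm.
    import zlib

    def pool_for(series_name, n_pools):
        """Deterministically map a series name to one of n_pools pools."""
        return zlib.crc32(series_name.encode()) % n_pools

    series = ["cpu.host1", "cpu.host2", "rainfall.station7", "stock.acme"]
    for n_pools in (2, 3):   # adding a pool changes where some series live
        print(n_pools, {s: pool_for(s, n_pools) for s in series})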

SiriDB, with its multi-node clustering function, is available as open source under the MIT license. It gives organizations the ability to analyze data quickly. Whether they operate in the financial, technical, educational, or care-providing sector does not matter: data is everywhere, and that is a valuable thing.

Try it!

Get started with SiriDB at siridb.net


Levyx Expands Global Footprint with Major Center for IoT and Other Big Data Application Adoption

By IoT – Internet of Things

Levyx, whose high-performance processing software dramatically reduces infrastructure costs associated with Big Data applications, announced that it is partnering with Nissho Electronics Corporation, a leading provider of information communications equipment, IT infrastructure, and other cutting-edge solutions in Japan, to offer its customers Levyx’s software solutions. To address the evolving data processing needs of Nissho’s enterprise customers, the agreement covers the full range of Levyx’s software product line, which is targeted at applications that require interactive (i.e. real-time) high-speed processing of large-scale datasets in industries such as cybersecurity (fraud or threat detection), online transaction processing (OLTP), real-time analysis of sensor data (Internet of Things or IoT), high-speed trading (HST), low-latency messaging, and machine learning. Levyx’s target sectors are Financial Services, Government & Aerospace, Biotech & Healthcare, eCommerce, Social Networking, Digital Advertising, and Cloud Infrastructure (including Telcos).

“The level of activity and overall adoption rate of some of the latest Big Data technologies in Japan makes us very excited to work with Levyx,” said Toshiro Sakai, Senior General Manager, IT Platform Business Unit, of Nissho Electronics. “The opportunities for Helium™ and Levyx-Spark Connector span the full spectrum of how Japanese enterprises are currently using, and will expand, their monetization and better use of Big Data. We see Levyx as an important partner and facilitator in how many applications get deployed because their solutions address high-performance needs that are both cost-effective and scalable.”

Levyx’s software fundamentally disrupts the economics of Big Data applications, bringing the benefits of high-speed data processing to the masses (organizations of all sizes). Levyx’s Helium solution offers a high-performance key value storage engine that enables input/output (I/O) intensive legacy and Big Data applications to operate faster, simpler, and cheaper.

  • Faster (by over 10X) than other in-memory key value stores because of its multi-core, flash-optimized, query pushdown, and patented indexing design.
  • Simpler than other architectures, which make trade-offs between performance and storage tiering complexity — all data is persisted by Levyx at memory speeds.
  • Cheaper because Levyx substitutes random access memory (RAM) with less costly Flash storage (typically 10X cheaper per GB), yet achieves equivalent or greater performance using drastically fewer distributed commodity server nodes.

Levyx’s software supports a broad range of programming integration options and is ideal for boosting the analytics and operations of Internet and machine-to-machine applications. In particular, Helium is an excellent building block for highly-dense/highly-efficient systems for Big Data applications because it bridges the gap between conventional Big Data software solutions and the latest hardware innovations. Levyx’s Helium can be deployed as a pluggable engine to “supercharge” database platforms and other applications that support/embed popular Key Value Store application program interfaces (APIs) from RocksDB, LevelDB, and Memcached.
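
To show the key-value API family such engines plug into, here is a hedged sketch of generic LevelDB usage through the open-source plyvel bindings; this illustrates the interface style only and is not Levyx's Helium engine:

    # Sketch of the LevelDB-style key-value interface that pluggable engines
    # target, using the open-source plyvel bindings. Generic LevelDB usage for
    # illustration only; this is not Levyx's Helium engine.
    import plyvel

    db = plyvel.DB("/tmp/kv-demo", create_if_missing=True)

    db.put(b"sensor:42:temp", b"21.7")     # write a key-value pair
    print(db.get(b"sensor:42:temp"))       # b'21.7'

    # Range scan over a key prefix, a typical access pattern for sensor data.
    for key, value in db.iterator(prefix=b"sensor:42:"):
        print(key, value)

    db.close()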

In addition, the company offers a software solution called Levyx-Spark Connector to allow Helium to run Apache Spark deployments far more efficiently by offering in-memory speed resilient distributed datasets (RDDs) and data frames that are also persistent. With additional query pushdown features, Levyx-Spark Connector greatly accelerates operations, such as large sorts and joins; streamlines multi-job/multi-tenant workflows; and reduces the overall number of Spark nodes needed to execute those jobs.
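
For context, the sorts and joins in question are ordinary Spark operations like the following PySpark sketch; this is plain Apache Spark with made-up data, not the Levyx-Spark Connector itself:

    # Generic PySpark example of the large sorts and joins described above.
    # Plain Apache Spark with made-up data; not the Levyx-Spark Connector.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sort-join-sketch").getOrCreate()

    orders = spark.createDataFrame(
        [(1, "alice", 120.0), (2, "bob", 80.0), (3, "alice", 45.5)],
        ["order_id", "user", "amount"])
    users = spark.createDataFrame(
        [("alice", "JP"), ("bob", "US")],
        ["user", "country"])

    # The shuffle behind this join-then-sort is where a faster storage engine
    # claims to reduce node counts and runtimes.
    result = orders.join(users, on="user").sort("amount", ascending=False)
    result.show()

    spark.stop()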

“As a direct result of partnering with Nissho, Levyx has built a significant roster of engagements in Japan, including leading OEMs, Telcos and IT players, and we are jointly developing plans to build customer awareness and market momentum for Helium and Levyx-Spark Connector,” said Reza Sadri, Levyx’s CEO and Co-Founder. “Nissho is well-versed in our product line and will play a vital role in helping proliferate our technologies in the region. Their relationships, channels, and customer support are good complements to our Big Data solutions. Global expansion is important to us. It brings us closer to the epicenters of where Big Data solutions are gaining traction most quickly – and Japan is one of those places.”

More information is available at www.levyx.com.
