DataHub overview

This section outlines the structure of the document and gives a high-level introduction to Cumulocity IoT DataHub (CDH) and its concepts.

Documentation overview

The following sections will walk you through all the functionalities of Cumulocity IoT DataHub in detail.

For your convenience, here is an overview of the contents of this document:

Getting started: Log into DataHub and get an overview of the UI features
Setting up DataHub: Set up DataHub and its components
Working with DataHub: Manage offloading pipelines and query the offloaded results
Operating DataHub: Run administrative tasks
Running DataHub on the Edge: Run the Edge edition of DataHub
Release notes: Get news about the latest DataHub releases

Cumulocity IoT DataHub at a glance

The Cumulocity IoT platform allows you to manage and monitor a variety of devices. The data emitted by these devices is stored in the Operational Store of Cumulocity IoT, with older data potentially being removed (based on data retention settings). To run ad-hoc queries against recent device data, Cumulocity IoT offers a REST API.
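
For illustration, such an ad-hoc query could be issued against the measurements endpoint of the Cumulocity IoT REST API. The sketch below uses Python; the tenant URL, credentials, and query parameters are placeholders, and the exact endpoints and options are documented in the Cumulocity IoT REST API reference.

```python
# Minimal sketch of an ad-hoc REST query against recent measurements.
# Tenant URL and credentials are placeholders.
import requests

BASE_URL = "https://<tenant>.cumulocity.com"
AUTH = ("<tenant>/<user>", "<password>")      # basic authentication

response = requests.get(
    f"{BASE_URL}/measurement/measurements",
    params={
        "dateFrom": "2024-01-01T00:00:00Z",   # recent data still held in the Operational Store
        "dateTo": "2024-01-02T00:00:00Z",
        "pageSize": 100,
    },
    auth=AUTH,
)
response.raise_for_status()
for measurement in response.json().get("measurements", []):
    print(measurement["id"], measurement["time"])
```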

In addition to this simple ad-hoc querying, various use cases require more sophisticated analytical querying over the device data, potentially covering long periods of time. Cumulocity IoT DataHub is the tool designed for this purpose.

With Cumulocity IoT DataHub, you can connect existing tools and applications to Cumulocity IoT and run analytical queries against the device data.

The main features of Cumulocity IoT DataHub are offloading data from the Operational Store of Cumulocity IoT into a data lake and querying the offloaded data with SQL.

The following diagram illustrates the high-level concepts.

DataHub high level concept

The central component of Cumulocity IoT DataHub is Dremio, a distributed SQL engine that is used both for offloading data into the data lake and for querying the data lake contents. It offers an SQL API which can be accessed via JDBC, ODBC, and REST. Dremio enables the creation of Extract-Transform-Load (ETL) pipelines that extract data from the Operational Store of Cumulocity IoT, transform it into a relational format, and store it in the data lake.

When a user submits an SQL query, the query runs against data in the data lake. Thus, the Operational Store of Cumulocity IoT is not accessed during query processing; the Operational Store is only accessed by the regular ETL process to extract data.
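
As an example, the sketch below submits a query via ODBC. It assumes that the Dremio ODBC driver is installed and that a DSN named "Dremio" has been configured; the DSN, the credentials, and the table name "<space>"."measurements" are placeholders for your actual setup.

```python
# Minimal sketch: querying offloaded data through Dremio's ODBC interface.
# DSN, credentials, and table name are placeholders.
import pyodbc

connection = pyodbc.connect("DSN=Dremio;UID=<user>;PWD=<password>", autocommit=True)
cursor = connection.cursor()

# The query runs against the data lake contents, not the Operational Store.
cursor.execute('SELECT COUNT(*) FROM "<space>"."measurements"')
print(cursor.fetchone()[0])

connection.close()
```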

The following overview summarizes the main terms used throughout this documentation.

Cumulocity IoT DataHub: Cumulocity IoT application for offloading data from the Operational Store of Cumulocity IoT to a data lake and querying the data lake contents
DataHub: Scheduler component for triggering periodic offloading and UI component for defining, managing, and monitoring offloading pipelines
Cumulocity IoT Operational Store: Internal datastore of Cumulocity IoT where all data (alarms, events, inventory, measurements, …) is stored
Dremio: Internal SQL engine for extracting data from the Cumulocity IoT Operational Store and writing to and reading from the data lake
Data lake: Storage container for offloaded data, either on the basis of ADLS Gen2/Azure Storage (Azure), S3 (Amazon), or HDFS.

Info: Google Cloud Storage (GCS) is currently not supported.

Design of an offloading pipeline

Offloading refers to moving data from the Operational Store of Cumulocity IoT to a data lake in order to keep the data available beyond the retention limits of the Operational Store and to enable analytical queries over long periods of time.

The starting point is one of the base Cumulocity IoT collections, such as the measurements collection, that is to be offloaded into the data lake. Once an offloading pipeline for this collection has been configured and started, the following actions take place.

Info: DataHub only supports offloading for the base Cumulocity IoT collections, which are alarms, events, inventory, and measurements. Offloading other collections is currently not supported.

When an offloading job runs, the contents of the collection are offloaded. The document-based entities of the Operational Store of Cumulocity IoT are transformed into a relational format by flattening the entries and mapping them to relational rows.

Info: The mapping automatically extracts a “standard” set of attributes from each entity, such as “time”, “source”, “id”, and “type”, and transforms them into columns of the data lake table. Furthermore, it automatically transforms the contents of measurement fragments into columns of the table. Here, the fragment name becomes part of the column name: the fragment’s value is stored in a column suffixed with “.value” (resulting in <fragment name>.value as the column name), and the unit is stored in a column suffixed with “.unit”. Non-standard fields can also be processed to a limited extent, as described in Configuring offloading jobs.
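
To make the naming scheme concrete, the following is a conceptual sketch of such a flattening step. It is an illustration only, not DataHub’s implementation, and the sample document is simplified; real Cumulocity IoT measurements may nest their fragments further.

```python
# Conceptual sketch of flattening a document-based entity into one relational row.
sample = {
    "time": "2024-01-01T12:00:00Z",
    "source": {"id": "4711"},
    "type": "c8y_TemperatureMeasurement",
    "c8y_Temperature": {"value": 21.5, "unit": "C"},   # hypothetical fragment
}

STANDARD_ATTRIBUTES = {"time", "source", "id", "type"}

def flatten(document):
    """Map a document to a dictionary of column name -> value."""
    row = {}
    for key, value in document.items():
        if key in STANDARD_ATTRIBUTES:
            row[key] = value["id"] if key == "source" else value
        elif isinstance(value, dict) and "value" in value:
            row[f"{key}.value"] = value["value"]    # <fragment name>.value
            row[f"{key}.unit"] = value.get("unit")  # <fragment name>.unit
    return row

print(flatten(sample))
# {'time': '2024-01-01T12:00:00Z', 'source': '4711', 'type': 'c8y_TemperatureMeasurement',
#  'c8y_Temperature.value': 21.5, 'c8y_Temperature.unit': 'C'}
```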

As a result of these extraction and transformation steps, the flattened data is stored in Parquet files in the data lake. Apache Parquet is a column-based storage format which enables compression and efficient data fetching. The Parquet files are organized in a folder structure based on a temporal hierarchy, because analytical queries commonly have a temporal scope, for example, computing the average oil pressure over the last month. To ensure a compact layout of the Parquet files, DataHub also regularly runs a compaction algorithm over these files in the background. Because the data is stored in a time-based hierarchical manner in the data lake, DataHub can efficiently prune partitions. In addition, queries can explicitly leverage this structure to increase query performance.
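
As an illustration of the example above, the “average oil pressure of last month” could be expressed as a SQL query whose time-range predicate lets the engine skip partitions outside the requested period. The table and column names below are placeholders, not DataHub’s actual naming.

```python
# Hypothetical query; it can be submitted through any of the SQL interfaces
# (JDBC/ODBC/REST), for example the ODBC connection shown earlier.
AVG_OIL_PRESSURE_SQL = """
SELECT AVG("c8y_OilPressure.value") AS avg_oil_pressure   -- placeholder column name
FROM   "<space>"."measurements"                           -- placeholder target table
WHERE  "time" >= TIMESTAMP '2024-01-01 00:00:00'          -- restricts the scan to the
  AND  "time" <  TIMESTAMP '2024-02-01 00:00:00'          -- matching time partitions
"""
```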

Important: You must not modify the data lake contents, as this will corrupt your offloading pipelines; neither data consistency nor completeness can be guaranteed anymore.

DataHub’s scheduler runs the offloading pipeline periodically. The UI displays the execution schedule next to each configuration. Within each execution, newly arrived data is extracted from the Cumulocity IoT collection, then transformed and stored in the same way as described above. These incremental offloading tasks are designed to ensure loss-free and duplicate-free offloading from the collection. For example, if one offloading execution fails, the next execution automatically picks up the increments the failed one should have processed.
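
The following is a conceptual sketch only, not DataHub’s implementation: incremental offloading can be made loss-free and duplicate-free by persisting a watermark that is advanced only after an increment has been written successfully, so that a failed execution is simply covered by the next one. All names and data structures here are hypothetical.

```python
# Conceptual sketch of watermark-based incremental offloading with failure recovery.
documents = [                      # stands in for a Cumulocity IoT collection
    {"id": 1, "time": "2024-01-01T00:00:00Z"},
    {"id": 2, "time": "2024-01-01T01:00:00Z"},
    {"id": 3, "time": "2024-01-01T02:00:00Z"},
]
offloaded = []                     # stands in for the Parquet files in the data lake
watermark = 0                      # persisted position of the last successful offload

def run_offloading(fail=False):
    """One scheduled execution: extract new documents, write them, advance the watermark."""
    global watermark
    increment = [d for d in documents if d["id"] > watermark]   # newly arrived data
    if fail:
        return                          # nothing written, watermark untouched
    offloaded.extend(increment)         # write the increment to the data lake
    if increment:
        watermark = increment[-1]["id"] # advance only after a successful write

run_offloading(fail=True)    # a failed execution loses nothing
run_offloading()             # the next execution picks up the missed increment
assert [d["id"] for d in offloaded] == [1, 2, 3]
```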

For each of the offloading pipelines, target tables are created in Dremio that point to the corresponding data folders in the data lake. When you run queries against the offloaded data, Dremio uses these target tables.
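
For illustration, the sketch below submits such a query through Dremio’s SQL REST API and polls for the result. It assumes direct access to a standard Dremio instance; in a DataHub deployment the base URL, port, authentication, and the target table name will differ and are placeholders here.

```python
# Minimal sketch of running a SQL query via Dremio's REST API.
import time
import requests

DREMIO_URL = "http://<dremio-host>:9047"   # placeholder endpoint

# Authenticate and obtain a token.
login = requests.post(f"{DREMIO_URL}/apiv2/login",
                      json={"userName": "<user>", "password": "<password>"})
login.raise_for_status()
headers = {"Authorization": f"_dremio{login.json()['token']}"}

# Submit the SQL statement against the target table of an offloading pipeline.
job = requests.post(f"{DREMIO_URL}/api/v3/sql", headers=headers,
                    json={"sql": 'SELECT COUNT(*) FROM "<space>"."measurements"'})
job.raise_for_status()
job_id = job.json()["id"]

# Poll until the job has finished, then fetch the results.
while True:
    state = requests.get(f"{DREMIO_URL}/api/v3/job/{job_id}", headers=headers).json()["jobState"]
    if state in ("COMPLETED", "FAILED", "CANCELED"):
        break
    time.sleep(1)

results = requests.get(f"{DREMIO_URL}/api/v3/job/{job_id}/results", headers=headers)
print(results.json().get("rows"))
```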