Release 10.18.0
Cumulocity IoT DataHub Release 10.18 includes the following improvements, limitations, and known issues:
New version of internal query engine
Cumulocity IoT DataHub now uses version 24 of Dremio as query engine. The new version comes with various features and improvements, such as a revamped user interface and simplified interaction with Power BI. Due to the major changes and improvements in the new Dremio version, performance characteristics for certain queries might differ from the previous version.
Enforcement of single data types
Columns with multiple data types, so-called mixed types, can no longer occur. During the offloading process, Cumulocity IoT DataHub ensures that the offloaded data has exactly one type per column. If incoming data would lead to multiple types, either the associated offloading pipeline is stopped or the resulting schema is automatically evolved. The enforcement of a single type and the associated extensions of the offloading process might affect the performance of an offloading run compared to previous versions.
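The following minimal Python sketch illustrates the underlying idea: two documents in which the same attribute carries different JSON types would produce a mixed-type column, and enforcing a single type per column means casting (or diverting) the conflicting values. The function name `enforce_column_type` and the casting policy are assumptions for illustration only and do not reflect DataHub's internal offloading logic.

```python
# Illustrative sketch only: how a mixed-type column can arise and how a single
# type per column could be enforced by casting. Not DataHub's actual logic.
from typing import Any

# Two incoming documents where "temperature" carries different JSON types.
incoming_docs = [
    {"source": "dev-1", "temperature": 21.5},    # number
    {"source": "dev-2", "temperature": "21.5"},  # string -> would create a mixed-type column
]

def enforce_column_type(docs: list[dict[str, Any]], column: str, target_type: type):
    """Cast every value of `column` to `target_type`; collect documents that cannot be cast."""
    cleaned, rejected = [], []
    for doc in docs:
        try:
            cleaned.append({**doc, column: target_type(doc[column])})
        except (TypeError, ValueError):
            # A real pipeline would stop here or evolve the schema instead.
            rejected.append(doc)
    return cleaned, rejected

cleaned, rejected = enforce_column_type(incoming_docs, "temperature", float)
print(cleaned)   # both documents now carry "temperature" as a float
print(rejected)  # documents whose value could not be cast to the target type
```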
Improved handling of additional columns
The UI now simplifies duplicating an additional column and changing its data type.
Limitations
| Description |
| --- |
| If two attribute names in a document differ only by case, only one of the values is used. |
| If the collection to be offloaded has JSON attributes consisting of more than 32,000 characters, its data cannot be offloaded. One specific case where this limitation applies is Cumulocity IoT's application builder, which stores its assets in the inventory collection. |
| If the collection to be offloaded has more than 800 JSON attributes, its data cannot be offloaded. This limitation also includes nested JSON content, which is expanded into columns during offloading (see the sketch after this table). Therefore, measurement documents with more than 800 series/series value fragments are not supported. |
| In previous releases, the childDevices and childAssets fields were part of the default offloading columns for the inventory collection. They are now excluded from the offloading process, because their potentially high number of list items leads to problems: when that number exceeds a corresponding Dremio limit, the offloaded data is no longer readable. New inventory offloading configurations exclude these two fields. For existing inventory configurations, they are automatically added as additional result columns to preserve the set of columns handled by the offloading configuration. |
| When using Microsoft Azure as data lake with IP white-listing activated, Cumulocity IoT DataHub cannot access the data lake. |
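Several of these limitations stem from the way nested JSON content is expanded into one column per leaf attribute during offloading. The following minimal Python sketch models that flattening idea under simplified assumptions (dot-separated column names, hypothetical series names `T` and `t`); it is not DataHub's actual implementation, but it shows why documents with many series fragments approach the 800-column limit and where attribute names differing only by case would collide.

```python
# Illustrative sketch only: expands nested JSON into flat column names.
# Simplified model of the flattening idea, not DataHub's offloading code.
from typing import Any

def flatten(doc: dict[str, Any], prefix: str = "") -> dict[str, Any]:
    """Expand nested objects into dot-separated column names."""
    columns: dict[str, Any] = {}
    for key, value in doc.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            columns.update(flatten(value, name))
        else:
            columns[name] = value
    return columns

# Hypothetical measurement document with two series fragments.
measurement = {
    "c8y_TemperatureMeasurement": {
        "T": {"value": 21.5, "unit": "C"},
        "t": {"value": 70.7, "unit": "F"},  # differs from "T" only by case
    }
}

columns = flatten(measurement)
print(len(columns), "columns")  # every series fragment contributes additional columns
for name in columns:
    print(name)  # e.g. c8y_TemperatureMeasurement.T.value
# Per the limitation above, DataHub would treat names differing only by case
# (…T.value vs …t.value) as the same column and keep only one of the values;
# this sketch keeps both to show where the collision would arise.
```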
Known issues
| Edition | Description |
| --- | --- |
| Cloud | Data lake configuration validation does not detect wrong bucket names (AWS S3) or wrong account names (Azure Storage). When saving the settings with an invalid bucket/account name, DataHub fails to quickly detect the problem and instead runs a time-consuming check, which shows up as an ongoing save request in the UI. Eventually, the request fails in the UI with a timeout and the save request in the backend fails as well. In such a case, carefully check the bucket/account name and try saving again. |
| Edge | There are no retention policies in place that prevent the data lake contents from exceeding the hard disk limits. |
| Edge | TLS is not supported for ODBC and JDBC. |
Important announcements
Important announcements related to Cumulocity IoT DataHub can be found in the section Important announcements.