Overview

The Machine Learning application enables you to manage your models by providing options for uploading, downloading, and activating or deactivating them. The application supports models represented in the PMML and ONNX formats. It also gives you insight into your models by capturing runtime performance and presenting it via meaningful KPIs.

The Machine Learning application also allows you to create model groups, manage custom resources which your models might need, create inference pipelines by combining custom pre-processing and post-processing code with relevant models, and schedule batch jobs for processing measurements from devices or device groups against an available model or model group.

The following sections will walk you through all functionalities of the Machine Learning application in detail.

Home screen

In Cumulocity IoT, you access the Machine Learning application through the app switcher.

Clicking Machine Learning in the app switcher will open the Machine Learning application showing the Home screen of the application.

Home screen

The Home screen provides:

Managing models

The Models page allows you to perform model management operations on machine learning models represented in PMML and ONNX format.

Model management functionality includes:

Click Models in the navigator to open the Models page.

Models manager

Uploading models

To upload a new model, first click on the tab (PMML or ONNX) corresponding to the model format you want to upload.

Then click Add model, navigate to the desired file and finally click Open.

Once your model is successfully uploaded, you will see a corresponding confirmation message. The new model will be added to the models list.

Info
For uploading PMML models, use the Apply PMML Cleanser toggle in the top menu bar to enable or disable the PMML cleanser. By default, the toggle is enabled.
If the Apply PMML Cleanser toggle is enabled during a PMML upload, comprehensive semantic checks and corrections are performed on the provided PMML file. Disabling it improves upload time, but this is not recommended. If the PMML file is large, such as a Random Forest model, we recommend compressing the file with ZIP or GZIP before uploading. This drastically reduces the upload time.
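
As a sketch, a large PMML file can be compressed with GZIP before uploading using Python's standard library (the file name below is a hypothetical example):

```python
import gzip
import shutil

def gzip_pmml(src_path: str, dest_path: str) -> None:
    """Compress a PMML file with GZIP to reduce upload time."""
    with open(src_path, "rb") as src, gzip.open(dest_path, "wb") as dest:
        shutil.copyfileobj(src, dest)

# Hypothetical usage with an example file name:
# gzip_pmml("random_forest.pmml", "random_forest.pmml.gz")
```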

Downloading models

A model listed under the PMML tab can be downloaded in various formats for future use.

For each format, a clickable icon is provided on the model card.

| Icon | Download format |
| ---- | --------------- |
| Download icon 1 | Downloads the PMML source as a PMML file without annotations |
| Download icon 2 | Downloads the PMML source as a PMML file with annotations |
| Download icon 3 | Downloads the model's serialized version as a binary file |

For a model listed under the ONNX tab, the download icon on the model card can be used to download the ONNX file.

Activating or deactivating models

A model that has not been used for a long time can be deactivated so that it no longer occupies memory on the system.

Click the Active/Inactive toggle on a model's card to activate or deactivate the model.

Deleting models

To delete a model, click the delete icon on its card and confirm the deletion.

Once a model is deleted, it will be removed permanently from your list of models.

Viewing model properties and KPIs

A PMML model has many important properties, such as model inputs and outputs, as well as meaningful KPIs, such as memory snapshots, which give you insight into the runtime performance of the model.

Click the details icon Details at the top right of a card to view the properties and KPIs of a PMML model.

Besides the name, description and status of the model, the Model Details window shows the inputs and outputs of the model and some useful charts created using the KPIs. These charts currently include the Memory Metrics and the Prediction Metrics.

Model details

Info

1. By default, the Inputs and Outputs panels are collapsed. Click the labels to expand them.

2. Memory Metrics provides information about the memory footprint of the model on the server and related attributes like the used, free and total memory of the application. The same information is represented as a vertical bar chart.

3. Prediction Metrics provides a scoring result summary for the models. The Prediction Metrics of a classification model displays the predicted categories and their respective counts as a pie chart. The Prediction Metrics of a regression model displays the Five Point Summary of the predicted values, i.e. the Minimum, FirstQuartile, Median, ThirdQuartile and Maximum values, as a box plot. Initially, the Prediction Metrics of a model is empty; it is displayed only after scoring has been applied to the model. The Prediction Metrics of a model is reset when the model is deleted or deactivated. The displayed Prediction Metrics is always the cumulative result across all past scorings of the model. Currently, the Prediction Metrics feature is supported only for classification and regression models.
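
For illustration, the Five Point Summary shown for regression models can be reproduced from a list of predicted values with Python's standard statistics module (the sample values below are made up):

```python
import statistics

def five_point_summary(values):
    """Return (Minimum, FirstQuartile, Median, ThirdQuartile, Maximum)."""
    # quantiles(n=4) yields the three quartile cut points.
    q1, median, q3 = statistics.quantiles(values, n=4)
    return min(values), q1, median, q3, max(values)

# Hypothetical predicted values from a regression model:
print(five_point_summary([2.0, 4.0, 4.0, 5.0, 7.0, 9.0]))
```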

An ONNX model has many important properties, such as model inputs and outputs, along with the ONNX version, opset version, model version, producer name and producer version. Click the info icon Info to view the properties of an ONNX model. The ONNX model properties are represented in JSON format.

Model details

Managing model groups

The Model groups page allows you to perform group management operations for PMML models. PMML models can be grouped together as long as they have the same model signature, i.e. the models are homogeneous in terms of model inputs and outputs.

A model group is a logical collection of multiple models aimed at a specific use case. Although each model group can contain multiple models, each model can be part of only one group. A model group can also contain multiple versions or iterations of the same model.

Model group management functionality includes:

Click Model groups in the navigator to open the Model groups page.

Model groups

Adding model groups

To add a new model group, perform the following steps:

  1. Click Add Group in the Model groups page.
  2. In the Assign Model Groups wizard, enter the name of the group you want to create. The Available list shows the models which are not part of any other group. Select the models which belong to a particular use case and assign them to the group by moving them under the Selected list. Add group
    Click Next to proceed.
  3. Select a primary model from the set of models assigned to this group. The primary model is used as the default model during data processing. Click Complete to add the new group.

Once your model group is created successfully, you will see a corresponding confirmation message. The new model group will be added to the list of model groups.

Updating model groups

To edit the models assigned to a group or to update its primary model, perform the following steps:

  1. Click the edit icon on the card of the group.
  2. In the Edit Group wizard, update the models by selecting only those models which should be part of the group. Update group
    Click Next to proceed.
  3. Select a primary model from the list of models displayed. Click Save to update the group with your changes.

Once your model group is updated successfully, you will see a corresponding confirmation message.

Deleting model groups

To delete a model group, click the delete icon on its card and confirm the deletion.

Once a model group is deleted, it will be removed permanently from your list of model groups.

Viewing model group properties

To view the properties of a model group, click the info icon Info on its card.

Model group properties Properties include the names of all the models which are part of this group and also the name of the group’s primary model.

Managing resources

On the Resources page you manage resources, i.e. the custom functions and lookup tables for PMML models, or the Python pre-processing/post-processing scripts used to create ONNX pipelines.

Resource management functionality includes:

Click Resources in the navigator to open the Resources page.

Resources

Uploading resources

To upload a new resource, first click the tab (PMML or ONNX) depending on whether the resource is for a PMML model or an ONNX pipeline. Then click Add resource, navigate to the desired resource file and click Open.

Once your resource is successfully uploaded, you will see a corresponding confirmation message. The new resource will be added to the resources list.

Downloading resources

To download the source file of a resource, click the download icon in its card.

Typically, the source is either a JAR file or an Excel sheet for a PMML resource, and a Python file for an ONNX resource.

Deleting resources

To delete a resource, click the delete icon on its card and confirm the deletion.

Once a resource is deleted, it will be removed permanently from your resources list.

Managing pipelines

The Pipelines page allows you to manage your ONNX pipelines.

A pipeline represents an end-to-end workflow that encapsulates any pre-processing or post-processing steps to be performed in addition to invoking a machine learning model. A pipeline must include a machine learning model, whereas pre-processing and post-processing steps are optional. If an ONNX model requires pre-processing or post-processing steps, the corresponding resources must be uploaded via the Resources page. These pre-processing/post-processing resources are Python scripts which must follow the conventions defined below. A pipeline's end-to-end workflow can be depicted as the following sequence:

Input data > Pre-processing > ONNX model > Post-processing > Output data

When a pre-processing step is part of a pipeline, the input data is first processed by the pre-processing script. The pre-processed data is then passed to the model, and finally the model's outputs are processed by the post-processing script. The output generated by the post-processing script is returned as the final output.

Info
  1. Each pre-processing/post-processing script must contain a Python function named process that takes exactly one argument. The processing logic should be contained in this function. Any inputs to the script can be accessed through the argument of the process function.
  2. The process function of a pre-processing script must return a dictionary whose keys are the model's input field names and whose values contain the corresponding pre-processed data.
  3. If there is a post-processing script, the outputs of the ONNX model are passed as the input to its process function, which may or may not return an output, depending on the use case.
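
Following these conventions, a minimal pre-processing script could look as follows. The input field name input_tensor and the scaling logic are illustrative assumptions, not fixed by the product:

```python
# pre_processing.py -- a hypothetical pre-processing script for an ONNX pipeline.
# Per the conventions above, the script exposes a single function named
# "process" that takes exactly one argument.

def process(data):
    # Scale raw values into the [0, 1] range expected by the (hypothetical) model.
    scaled = [value / 255.0 for value in data]
    # Return a dictionary keyed by the model's input field names.
    return {"input_tensor": scaled}
```

A post-processing script follows the same convention: its process function receives the model outputs as its single argument and may return the final result.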

Pipeline management functionality includes:

Click Pipelines in the navigator to open the Pipelines page.

Pipelines

Adding pipelines

To add a new pipeline, perform the following steps:

  1. Click Add Pipeline in the Pipelines page.
  2. In the Add Pipeline wizard, enter the name of the pipeline you want to create followed by choosing your ONNX model.
    Add pipeline
  3. Select the pre-processing and post-processing resources if you have any or leave them empty.
    Click Apply to create the new pipeline.

Once your pipeline is created successfully, you will see a corresponding confirmation message. The new pipeline will be added to the list of pipelines.

Deleting pipelines

To delete a pipeline, click the delete icon on its card and confirm the deletion.

Once a pipeline is deleted, it will be removed permanently from your list of pipelines.

Viewing pipeline properties

To view the properties of a pipeline, click the info icon Info on its card.

Pipeline properties Properties include the name of the pipeline with the name of the ONNX model and pre-processing/post-processing resources which are part of this pipeline.

Processing data

The Predictions menu allows you to generate meaningful predictions by scoring the data from your devices against your predictive models.

Clicking Predictions in the navigator allows you to choose from two different modes of processing: Batch processing and Scheduled processing.

Predictions

Batch processing

Batch processing allows you to process data records against a model, model group or pipeline. Batch processing is applicable for both PMML and ONNX models.

To process data against PMML models/groups, choose the PMML tab. Similarly, to process data against ONNX models/pipelines, choose the ONNX tab.

| Model/group/pipeline | Supported input data file types | Supported compression for input files |
| -------------------- | ------------------------------- | ------------------------------------- |
| PMML model | CSV, JSON, JPEG, PNG | ZIP (for CSV and JSON files) |
| PMML model group | CSV only | ZIP (for CSV files) |
| ONNX model | JSON only | - |
| ONNX pipeline | Any | - |

Running the batch process

For PMML models, batch processing can be used to verify the accuracy of your predictive models by applying them against test data obtained from the model training environment. The goal is to ensure that the model development environment and the model deployment environment produce the same results. We call this score matching. To run the batch process on a PMML model or group, perform the following steps:

  1. Click Start in the PMML tab to initiate the processing.

  2. In the Batch Processing wizard, first choose whether the processing should be applied to a model or a model group. Then select a model/group from the dropdown list. The dropdown list shows all models or groups which you have added using the Models page or the Model groups page respectively. Use the Enable score matching toggle to enable or disable score matching. Use the Apply across all models toggle to choose whether to process the data across all models in the group or only through the group's primary model. Batch process 1 Click Next to proceed.

  3. Upload the file containing your input data. Drag and drop a file or select it by browsing.
    Batch process 2 On uploading a valid file, you will see an uploading message. After the processing has been completed, you will see a corresponding notification.

Info
The size of the uploaded file must not exceed 500 MB.
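
As an illustrative sketch, a score-matching input file for a hypothetical classification model with inputs petal_length and petal_width and an expected output class could be prepared like this. The column names must match your model's input (and output) field names; the names and values below are assumptions:

```python
import csv

# Hypothetical test records exported from the model training environment.
# The expected output column ("class") enables score matching.
records = [
    {"petal_length": 1.4, "petal_width": 0.2, "class": "setosa"},
    {"petal_length": 4.7, "petal_width": 1.4, "class": "versicolor"},
]

with open("score_matching_input.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["petal_length", "petal_width", "class"])
    writer.writeheader()
    writer.writerows(records)
```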

The steps for running the batch process on ONNX models/pipelines are similar to those for PMML models. However, there is no option to enable score matching. Also, model groups are not yet supported for ONNX models.

Viewing and downloading the results

To view the results, click Show results on the Batch processing completed notification.

For PMML models/groups, the Preview page shows a maximum of 500 records in a paginated manner, displaying 10 records per page.

Show Preview PMML

In the top right of the Preview, you find several buttons to perform the following actions:

| Button | Action |
| ------ | ------ |
| Download | Download the entire set of processed results. |
| Filter | Enable or disable filters. |
| Configure | Configure the columns to be shown in the preview table. |

Ideally, to measure the accuracy of the model against your data, you should specify the desired outputs as part of your data file. If score matching was enabled, the processed results include a separate column called Match which indicates whether the computed and the expected outputs matched.

Click the cogwheel icon File and select Hide matching rows to hide all rows where the Match column is true, i.e. to display only the records where the computed and expected outputs differ.

Click the file icon File in front of a row to download a full execution trace showing exactly what happened when that record was applied against the model. In this way, you can investigate why the outputs did not match.

For ONNX models, the Results page shows the entire set of processed records in JSON format. However, for ONNX pipelines, the Results page may not show any content if the post-processing script associated with the pipeline does not return any data. There are no filtering or configuration options for ONNX models/pipelines; the only available option is to download the processed results.

Show Results ONNX

Scheduled processing

Scheduled processing allows you to schedule batch jobs for processing measurements from devices or device groups against an available model or model group.

The job scheduler can be used to trigger one-time or periodic jobs on data captured from devices. The scheduler provides a mapping tool that allows you to map device data to model inputs. Periodic execution of batch jobs can be useful when aggregate information on a model's predictions is required for a desired time period.

Info
Currently, scheduled processing is only applicable for PMML models and model groups. However, time series models must not be used for processing data in a scheduled manner.

Scheduling a job

To schedule a new job, perform the following steps:

  1. Click Create job in the Scheduled processing page.
  2. In the Job config wizard, enter the name and description of the job you want to create. Select a target device or device group from the dropdown list. The list shows a maximum of 2000 devices or groups, but you can search for the device you are interested in. Once done, select a target model or model group to be used for processing the data captured from your selected device or device group. The dropdown list shows all models and groups which you have already added. Use the Apply across all models toggle if you want the processing to happen on all models of a model group. When this option is disabled, processing happens only on the primary model of the group.
    Scheduled process 1
    Click Next to proceed.
  3. Each device can have various measurements which are persisted in Cumulocity IoT. In the Mapping section, map the device measurements to the corresponding model inputs.
    Scheduled process 2
    Click Next to proceed.
  4. Set the schedule of the job by selecting the frequency for the job followed by when it should run. You also need to specify the data range to be used for processing when the job is executed. Scheduled process 3
    Click Finish to schedule the job that you just configured.
Info
  1. For a periodic frequency, a CRON expression is generated and used by the scheduler.
  2. The data range selected for the schedule must not exceed 24 hours.
  3. For a one-time job, you need to select the date when the job should run. You also need to specify the data range to be used for processing when the job is executed.
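
For illustration, the kind of CRON expression generated for simple periodic frequencies can be sketched as follows. The standard five-field form (minute, hour, day of month, month, day of week) is used here; the exact expression format used internally by the scheduler is an assumption:

```python
def build_cron(frequency: str) -> str:
    """Return a five-field CRON expression for a simple periodic frequency."""
    expressions = {
        "hourly": "0 * * * *",   # at minute 0 of every hour
        "daily": "0 0 * * *",    # at 00:00 every day
        "weekly": "0 0 * * 0",   # at 00:00 every Sunday
    }
    if frequency not in expressions:
        raise ValueError(f"unsupported frequency: {frequency}")
    return expressions[frequency]

print(build_cron("daily"))
```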

After the job is scheduled, you will see a corresponding notification.

Note that if many jobs are scheduled, the underlying tenant database may over time become over-populated with execution data from these jobs. It is therefore recommended to set up a retention rule to clean up old data.
To do so, create a retention rule for events containing ZementisExecution in the type field. This rule does not remove the jobs themselves, only the data from their executions. For details on adding retention rules, see To add a retention rule.
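
As a hedged sketch, such a retention rule could also be created via the Cumulocity IoT REST API. The /retention/retentions endpoint and the payload fields below follow the platform's documented retention rule API, but treat them as assumptions and verify them against your tenant's API reference; the tenant URL, credentials and maximumAge value are placeholders:

```python
import json
import urllib.request

def build_retention_rule(maximum_age_days: int = 30) -> dict:
    """Build a rule that removes EVENT data of type ZementisExecution
    older than the given number of days (30 is an example choice)."""
    return {
        "dataType": "EVENT",
        "fragmentType": "*",
        "type": "ZementisExecution",
        "source": "*",
        "maximumAge": maximum_age_days,
    }

def create_retention_rule(base_url: str, auth_header: str) -> None:
    # POST the rule to the retention rule endpoint; base_url and
    # auth_header are placeholders for your tenant URL and credentials.
    req = urllib.request.Request(
        f"{base_url}/retention/retentions",
        data=json.dumps(build_retention_rule()).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": auth_header,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
```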

Viewing the scheduled jobs

Scheduled Jobs

By design, the Scheduled processing page shows a list of all the scheduled jobs in a paginated manner, displaying 10 jobs per page.

Click any link in the NAME column to view the configuration of that specific job. Click the delete icon of any job to remove the job.

Viewing the execution results of jobs

To view the execution results of a job, click the history icon associated with that job in the My Jobs section of the Scheduled processing page.

Job History

By design, the Execution results page previews all executions of the job in a paginated manner, displaying 10 executions per page.

For executions with status Warning or Failure, hover over the status to see the detailed reason behind the status. Click Back to see all scheduled jobs.

Viewing the inferences of a job execution

To view the inferences generated by an execution of a job, click the details icon associated with that execution in the Execution results page.

Execution Inferences Continuous Execution Inferences Categorical

The Inferences window shows two different types of charts: a line chart plotting the continuous outputs of the model and a pie chart plotting the model's categorical outputs.

The inferences are shown in a paginated manner, displaying 2000 inferences per page. For executions involving device groups and model groups, you can also switch between the different devices and models which were part of that execution.