Create and bring your own AI/ML model (BYOM)
A wide variety of open-source libraries (such as TensorFlow®, PyTorch, Keras, and scikit-learn) and commercial third-party tools (such as Microsoft Azure Machine Learning Studio, Amazon SageMaker, IBM Watson, MATLAB, and Google Cloud) is available for developing AI/ML models. Cumulocity therefore offers you the flexibility to let your data science team continue working in their own optimized technology stack while still leveraging their results in the field.
Some examples of model creation with these tools that may inspire you:
- Prediction of Remaining Useful Life with TensorFlow
- Anomaly Detection with AWS SageMaker
- Image classification with Azure ML Studio
- Remaining Useful Life Estimation with MATLAB
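Whichever tool you choose, the training workflow is broadly the same: load historical data, fit a model, and evaluate it on held-out data. As a minimal illustration (not tied to any of the linked examples), here is a sketch using scikit-learn, one of the open-source libraries mentioned above, with a bundled sample dataset standing in for your historical device data:

```python
# Minimal training sketch with scikit-learn: load data, fit a model,
# and evaluate it on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Sample dataset as a stand-in for your historical data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a classifier on the training split.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on data the model has not seen during training.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.2f}")
```

In practice you would replace the sample dataset with historical measurements retrieved from your platform and pick a model type suited to your use case.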
If you do not have the in-house tooling or expertise for BYOM, or you are looking for a very specific AI/ML use case such as predictive maintenance, there are more out-of-the-box solutions that we can recommend. For example, we have partnered with Boon Logic, who provide ML-based anomaly detection capabilities with their Amber product. This product can be embedded as a custom microservice within Cumulocity (that is, scenario B) and integrated using Streaming Analytics. To facilitate the integration further, a plugin has been created, consisting of an integration microservice that manages the communication between Cumulocity and Boon Logic Amber, plus a set of front-end widgets to configure and visualize the output of the anomaly detection. More information can be found at https://github.com/Cumulocity-IoT/Cumulocity-Amber-Boon-Logic/.
This section highlighted how you can access historical data and train an AI/ML model for your specific use case. The next section details how you can bring your trained AI/ML model into action on new incoming data, also known as model inferencing or model scoring. When deploying a model outside of the training environment, it is important to consider the portability of your model to a different platform. To overcome potential issues, a community of partners has created the Open Neural Network Exchange (ONNX) standard for representing ML models, which allows models from many frameworks (including the ones mentioned earlier) to be exported or converted into the standard ONNX format. Once your model is in the ONNX format, it can run on a variety of platforms and devices.