Installing Edge on Kubernetes

This documentation describes how to install Cumulocity IoT Edge using the Edge operator and how to access it. Use the c8yedge.yaml file, which includes the Edge CR and the secrets necessary to deploy Cumulocity IoT Edge.

Prerequisites

Item
Details
Hardware
- CPU: 6 cores
- RAM: 10 GB
- CPU architecture: x86-64

Info: These are the minimum system requirements for deploying Edge. If a custom microservice requires additional resources, you must configure the system accordingly in addition to the minimum requirements. For example, if a custom microservice requires 2 CPU cores and 4 GB of RAM, then the Kubernetes node must have 8 CPU cores (6 cores for standard workloads + 2 cores for your microservice) and 14 GB of RAM (10 GB for standard workloads + 4 GB for your microservice).

Important: MongoDB requires a CPU that supports AVX instructions. Ensure that the CPU of the Kubernetes node supports AVX instructions: run lscpu and check whether avx appears in the Flags field.
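As an illustration of what that check looks for, the CPU flags reported by lscpu (or in /proc/cpuinfo) must include avx. The helper below, has_avx, is a hypothetical sketch for inspecting such a flags string:

```shell
# Hypothetical helper: report whether a CPU flags string contains "avx".
# On the Kubernetes node itself you would feed it the real flags, e.g.:
#   FLAGS=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)
has_avx() {
  case " $1 " in
    *" avx "*) echo "AVX supported" ;;
    *)         echo "AVX not supported - MongoDB will not start" ;;
  esac
}

has_avx "fpu vme de sse sse2 avx avx2 aes"   # prints "AVX supported"
```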

Kubernetes
Version 1.25.x has been tested (with potential compatibility for subsequent versions) and is supported across the following platforms:

- Lightweight Kubernetes (K3s). To enable the proper functioning of the Edge operator on K3s, you must install K3s with specific configuration options. For more information, see Special instructions for K3s.

- Kubernetes (K8s)

- Amazon Elastic Kubernetes Service (EKS)

- Microsoft Azure Kubernetes Service (AKS)

Info: Edge on Kubernetes has undergone testing on the Kubernetes platforms mentioned above, using the Containerd, CRI-O, and Docker container runtimes.

Important: Edge on Kubernetes is tested and supported on single-node Kubernetes clusters.

Helm version 3.x
Refer to Installing Helm for the installation instructions.

Disk space
Three static Persistent Volumes (PVs) or a StorageClass configured with dynamic provisioning to bind:

- 75 GB for the Persistent Volume Claim (PVC) made for MongoDB (configurable through the Custom Resource).

- 10 GB for the Persistent Volume Claim (PVC) made for the Private Registry to host custom microservices.

- 5 GB for the Persistent Volume Claim (PVC) made for application logs.


For more information about configuring the storage, see Configuring storage.
Edge license file
To request the license file for Edge, contact the logistics team for your region:

- North and South America: LogisSrvus@softwareagusa.com

- All Other Regions: LogisticsServiceCenterGER@softwareag.com


In the email, include:

- The company name under which the license was bought

- The domain name (for example, myown.iot.com) where Edge will be reachable


For more information, see Domain name validation for Edge license key generation.
The Edge operator registry credentials
You will receive the Edge operator registry credentials along with the Edge license.
TLS/SSL key and certificates
Optional. TLS/SSL private key and domain certificates in PEM format.
Generate a TLS/SSL key pair and a Certificate Signing Request (CSR) according to your company policies, and submit the CSR to your internal or external Certificate Authority (CA). When creating the CSR, in addition to providing the Common Name (CN), Organization (O), and other required details, specify the Subject Alternative Name (SAN) to request a multi-domain certificate. Ensure that the SAN includes the domain names for the "edge" tenant and the Management tenant. If you plan to install Cumulocity IoT DataHub, include its domain name as well. For instance, if your Edge domain is myown.iot.com, make sure myown.iot.com, management.myown.iot.com, and datahub.myown.iot.com are listed in the SAN field.
Additionally, verify that the TLS/SSL certificate includes the complete certificate chain in the correct order.
Connect Edge to the cloud
Optional. To connect and manage one or more Edge deployments from your Cumulocity IoT cloud tenant, you need an active Cumulocity IoT Standard tenant with a subscription plan that includes the advanced-software-mgmt microservice.
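The CSR with the SAN entries described above can be produced with plain openssl; the sketch below assumes OpenSSL 1.1.1 or later (for -addext) and uses placeholder values for the domain and organization:

```shell
# Sketch: generate a key and a CSR whose SAN covers the "edge" tenant,
# the Management tenant, and DataHub. Values below are placeholders.
DOMAIN=myown.iot.com
openssl req -new -newkey rsa:2048 -nodes \
  -keyout edge.key -out edge.csr \
  -subj "/CN=${DOMAIN}/O=IoT Company" \
  -addext "subjectAltName=DNS:${DOMAIN},DNS:management.${DOMAIN},DNS:datahub.${DOMAIN}"

# Inspect the CSR to confirm the SAN entries before submitting it to the CA
openssl req -in edge.csr -noout -text | grep -A1 "Subject Alternative Name"
```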

Special instructions for K3s

To enable the proper functioning of the Edge operator on K3s, you must install K3s with the following configuration options.

Run the command below to install Kubernetes version 1.25.13:

USER_NAME=$(whoami)
USER_HOME=$(eval echo ~${USER_NAME})
sudo sh -c '
    touch /etc/sysctl.d/90-kubelet.conf  && \
    sed -i "/^vm\.panic_on_oom=/d; /^vm\.overcommit_memory=/d; /^kernel\.panic=/d; /^kernel\.panic_on_oops=/d" /etc/sysctl.d/90-kubelet.conf && \
    printf "vm.panic_on_oom=0\nvm.overcommit_memory=1\nkernel.panic=10\nkernel.panic_on_oops=1\n" | tee -a /etc/sysctl.d/90-kubelet.conf && \

    sysctl -p /etc/sysctl.d/90-kubelet.conf && \

    curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.25.13+k3s1 sh -s - \
        --write-kubeconfig-mode 644 \
        --disable=traefik \
        --protect-kernel-defaults true \
        --kube-apiserver-arg=admission-control=ValidatingAdmissionWebhook,MutatingAdmissionWebhook && \
    
    mkdir -p '"$USER_HOME"'/.kube && \
    cp /etc/rancher/k3s/k3s.yaml '"$USER_HOME"'/.kube/config && \
    chown '"$USER_NAME:"' '"$USER_HOME"'/.kube/config && \
    chmod 600 '"$USER_HOME"'/.kube/config && \

    printf "\e[32mSuccessfully installed k3s!\e[0m\n" && \
    
    k3s crictl pull rancher/klipper-lb:v0.4.4 && \
    k3s crictl pull rancher/mirrored-metrics-server:v0.6.3 && \
    k3s crictl pull rancher/local-path-provisioner:v0.0.24
'

For configuration options, see K3s configuration options.

  • Added --disable=traefik in the install command to disable Traefik, avoiding port conflicts between Traefik and the cumulocity-core service: both are LoadBalancer-type services that expose port 443.
  • Added --kube-apiserver-arg=admission-control=ValidatingAdmissionWebhook,MutatingAdmissionWebhook to enable admission controllers. The flag is set to enable the ValidatingAdmissionWebhook and MutatingAdmissionWebhook admission controllers, as Edge requires them. See https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/.
  • Added --protect-kernel-defaults true to protect the default kernel settings on the host system. It prevents modifications to critical kernel parameters by container workloads running in Kubernetes. For more information, see https://docs.k3s.io/security/hardening-guide#host-level-requirements.
Info
To install a later version of Kubernetes, update the environment variable INSTALL_K3S_VERSION.

Configuring proxy

When Cumulocity IoT Edge is deployed behind a proxy, it must be configured to communicate with external endpoints over the internet through the proxy server. To configure Edge to use a proxy, create or update a ConfigMap named custom-environment-variables in the c8yedge namespace (or the one you deployed Edge into) with the required proxy settings. The keys http_proxy, https_proxy, and socks_proxy must be set to the URLs of the HTTP, HTTPS, and SOCKS proxies, respectively. The key no_proxy must be set to a comma-separated list of domain suffixes, IP addresses, or CIDR ranges for which Edge bypasses the proxy server.

Here is an example of a ConfigMap with proxy settings:

##
## An optional ConfigMap to configure the Edge operator with
##    - Proxy details when accessing external endpoints through a Proxy
##    - TLS/SSL certificates to trust
##
## http_proxy, https_proxy and optionally socks_proxy must be configured with the relevant URLs.
## no_proxy must be configured with a comma-separated list of addresses or domains for which the proxy should be bypassed.
##

apiVersion: v1
kind: ConfigMap
metadata:
  ## The name is fixed and cannot be changed.
  name: custom-environment-variables
  ## Namespace name into which you installed the Edge operator.
  namespace: c8yedge
data:
  http_proxy: <HTTP Proxy URL>
  https_proxy: <HTTPS Proxy URL>
  socks_proxy: <SOCKS Proxy URL>

  ## A comma-separated list of addresses or domains for which the proxy will be bypassed.
  ## This must be configured with the specified entries, Edge domain name, Kubernetes Pod CIDR (Cluster Pod IP Address Range), 
  ## Kubernetes Service CIDR (Cluster Service IP Address Range) and any other domains, hosts or IPs 
  ## you want to bypass the proxy when accessed.
  no_proxy: 127.0.0.1,::1,localhost,.svc,.cluster.local,cumulocity,<edge domain name, for example, myown.iot.com>,<kubernetes cluster IP range, for example, 10.43.0.0/16>

  ## TLS/SSL certificates in PEM format that the Edge operator can trust, in addition to those included in the default system trust store.
  ## You can provide multiple TLS/SSL certificates for trust by combining them into a single string.
  ca.crt: <CA-CERTIFICATES TO TRUST>

With the appropriate proxy settings in place, Edge can communicate with external endpoints through the proxy server, allowing it to function in environments where proxy usage is mandated.

The fields are described below:

http_proxy (optional, string)
Specifies the URL of the HTTP proxy to be used for network connections.

https_proxy (optional, string)
Specifies the URL of the HTTPS proxy to be used for secure network connections.

socks_proxy (optional, string)
Specifies the URL of a SOCKS proxy.

no_proxy (optional, string)
Specifies a comma-separated list of addresses or domains for which the proxy is bypassed. Configure it with the specified entries, the Edge domain name, the Kubernetes Pod CIDR (cluster Pod IP address range), the Kubernetes Service CIDR (cluster Service IP address range), and any other domains, hosts, or IPs that should bypass the proxy.

ca.crt (optional, string)
TLS/SSL certificates in PEM format that the Edge operator can trust, in addition to those included in the default system trust store. You can provide multiple TLS/SSL certificates for trust by combining them into a single string.
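To make the no_proxy semantics concrete, the sketch below models the common curl-style matching rule (exact host or domain-suffix match). This is an illustration only, not Edge's actual implementation, and it does not handle CIDR entries, which require real IP arithmetic:

```shell
# bypass_proxy HOST NO_PROXY_LIST -> prints "yes" if HOST matches an entry
# (exactly or as a domain suffix), otherwise "no".
bypass_proxy() {
  host=$1
  IFS=','
  for entry in $2; do
    case "$host" in
      "$entry"|*"$entry") echo yes; return ;;
    esac
  done
  echo no
}

bypass_proxy "api.cluster.local" "127.0.0.1,localhost,.svc,.cluster.local"   # yes
bypass_proxy "example.com"       "127.0.0.1,localhost,.svc,.cluster.local"   # no
```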

Configuring storage

Kubernetes makes physical storage devices available to your cluster in the form of two API resources, PersistentVolume and PersistentVolumeClaim.

A Persistent Volume (PV) is a storage resource in Kubernetes that is provisioned and managed independently from the Pods that use it. It provides a way to store data in a durable and persistent manner, even if the Pod that uses it is deleted or restarted.

PVs are typically used to store data that must be preserved across Pod restarts or rescheduling, such as databases or file systems. They can be backed by various storage technologies, such as local disks, network-attached storage (NAS), or cloud-based storage services.

To use a PV in Kubernetes, you must define a PersistentVolume object that describes the characteristics of the storage, such as capacity, access modes, and the storage-provider-specific details. Once the PV is created, you can create a PersistentVolumeClaim object that requests a specific amount of storage with specific access requirements. The Persistent Volume Claim (PVC) binds to a matching PV, and the Pod can then use the PVC to mount the storage and access the data.

By using PVs and PVCs, you can decouple the storage management from the application deployment, making it easier to manage and scale your applications in Kubernetes.

PVs represent cluster resources, while PVCs serve as requests for these resources and also serve as validation checks for the resource they request. Provisioning PVs can be done in two ways: statically or dynamically.

  • Static provisioning: In this method, a cluster administrator manually creates PVs, specifying details about the actual storage available for cluster users. These PVs are registered in the Kubernetes API and are ready for consumption.

  • Dynamic provisioning: When none of the statically created PVs match a PVC's requirements, the cluster can automatically provision storage on demand, tailored for the PVC. Dynamic provisioning relies on StorageClasses: the PVC must request a StorageClass, and the administrator must have created and configured that class. Claims that request an empty string ("") for the class effectively disable dynamic provisioning for themselves. If no StorageClass is specified in a claim, the claim falls back to the default StorageClass, if one is configured in the cluster. To enable a default StorageClass, the cluster administrator must activate the DefaultStorageClass admission controller on the API server, for instance by including DefaultStorageClass in the comma-delimited, ordered list of values for the --enable-admission-plugins flag of the API server component. For more details on API server command-line flags, refer to the kube-apiserver documentation.

Persistent Volume Claims made by the Edge operator

The Edge operator requests three PVCs, as outlined in the table below. Each of these PVCs utilizes the StorageClass if specified within the spec.storageClassName field of the Edge CR.

  • In case you omit the spec.storageClassName, the Edge operator requests PVCs without a StorageClass, thereby instructing Kubernetes to utilize the default StorageClass configured in the cluster.

  • If you explicitly specify an empty StorageClass as "", the Edge operator requests PVCs with an empty StorageClass, thereby instructing Kubernetes to carry out static provisioning.

  • Finally, if you specify the name of an existing StorageClass for which dynamic provisioning is enabled, the Operator requests PVCs with that class name, thereby instructing Kubernetes to utilize dynamic provisioning according to the specified class.

Persistent volume sizes and claims:

75 GB - mongod-data-edge-db-rs0-0
Claimed by the MongoDB server to retain application data. The default size is 75 GB, but this value can be adjusted using the spec.mongodb.resources.requests.storage field in the Edge CR file.

10 GB - microservices-registry-data
Claimed by the private Docker registry to store microservice images.

5 GB - edge-logs
Claimed by the Edge logging component to store the application and system logs.

To guarantee the retention of physical storage even after the PVC is deleted (for example, when Edge is deleted) and to enable future storage expansion if needed, it’s crucial to configure the StorageClass and/or the PVs with the following settings:

  1. Reclaim Policy: Ensure that the reclaim policy is set to Retain. This setting preserves the storage even after the PVC deletion.
  2. Volume Expansion: Set the volume expansion option to true. This setting enables the storage to be expanded when necessary.
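For reference, a StorageClass carrying both recommended settings might look like the sketch below (assuming the K3s local-path provisioner; substitute the provisioner and class name for your storage backend):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: edge-storage          # hypothetical name; reference it in spec.storageClassName
provisioner: rancher.io/local-path
reclaimPolicy: Retain         # preserves the storage after PVC deletion
allowVolumeExpansion: true    # enables later storage expansion
volumeBindingMode: WaitForFirstConsumer
```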

If these recommended settings are not configured in the StorageClass, the Edge CR status reports the following warnings:

  • persistent volume reclaim policy of StorageClass [storage-class] is currently set to [Delete] instead of the recommended value [Retain]

  • allow volume to expand setting of the StorageClass [storage-class] is currently set to [false] instead of the recommended value [true]

These warnings serve as reminders to adjust these settings for optimal storage management.

Kubernetes provides a variety of persistent volume types; two in particular let Pod containers access either a Network File System (NFS) or the cluster node's local filesystem (often an NFS drive mapped to a local folder). This configuration is especially common in on-premises deployments.

Static provisioning of PVs

Info
You can skip this section if your Kubernetes cluster is already configured for dynamic provisioning of PVs.

This section outlines the steps for configuring the Kubernetes cluster to enable Edge to utilize NFS as a source for the PVs. For additional storage options, refer to the Kubernetes documentation.

  • Storage provisioning by connecting directly to the NFS server via PV configuration

    • Download the c8yedge-pv-nfs.yaml file.

    • Create and export the folders required for the three PVs defined in the c8yedge-pv-nfs.yaml file. Ensure that the user running the Kubernetes server has read/write access to these folders.

    • Run the command below:

      kubectl apply -f c8yedge-pv-nfs.yaml
      
  • Storage provisioning by mapping NFS drive to a local folder into the cluster node

    • Download the c8yedge-pv-local-path.yaml file.

    • Create the folders in the local file system, or mount NFS folders, required for the three PVs defined in the c8yedge-pv-local-path.yaml file. Ensure that the user running the Kubernetes server has read/write access to these folders.

    • Run the command below:

      kubectl apply -f c8yedge-pv-local-path.yaml
      
Info
Since you manually created the PVs, you must specify an empty StorageClass as "" in the spec.storageClassName field of the Edge CR for Kubernetes to carry out static provisioning, thereby binding PVC claims made by the Edge operator.
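For orientation, one of the three statically provisioned PVs could look like the sketch below (a hostPath-backed volume with hypothetical name and path; the downloadable YAML files above remain the authoritative definitions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: edge-mongodb-pv       # hypothetical name
spec:
  capacity:
    storage: 75Gi             # matches the MongoDB PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""        # empty class, so the claim binds via static provisioning
  hostPath:
    path: /data/edge/mongodb  # hypothetical local folder
```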

Installing the Edge operator

To begin, create a new single-node Kubernetes cluster with the Kubernetes distribution of your choice, and configure kubectl to use that cluster. See Prerequisites for the supported Kubernetes distributions and versions.

A script to install the Edge operator is available at c8yedge-operator-install.sh.

To install the Edge operator, download and run the script as in the sample command below. When prompted, enter the version (-v option, for example, 1018.0.1) you want to install, the registry hostname (-r option), and the registry credentials you received along with the license. Use the -h option to display the usage details.

Info
If you are installing Edge from a local/private registry, provide the hostname (-r option) as <private-registry-hostname>:<private-registry-port> and the respective credentials when prompted.
curl -sfL https://cumulocity.com/docs/files/edge-k8s/c8yedge-operator-install.sh -O && bash ./c8yedge-operator-install.sh -v "1018.0.1" -r registry.c8y.io

Provide the Edge operator registry credentials in the prompt:

Enter username to access Edge operator registry:  
Enter password to access Edge operator registry:
Info
To request the Edge registry credentials, contact the Software AG logistics team for your region (see the email addresses under Prerequisites).

By default, the Edge operator is deployed within the c8yedge namespace. If you wish to install the Edge operator and Edge in a different namespace, you can specify it using the -n option in the installation script.

Run the following command to follow the logs for the Edge operator pod:

kubectl logs -f -n c8yedge deployment/c8yedge-operator-controller-manager manager
Info
Substitute the namespace name c8yedge in the command above with the namespace name where you have installed the Edge operator.

Installing the Edge operator (offline)

Frequently, portions of a data center have no access to the Internet, even via proxy servers. You can still install Edge in such an environment, but you must make the required software (Helm charts and Docker images) available to the disconnected environment through an Open Container Initiative (OCI) compliant private registry.

To enable this, you need an OCI compliant registry in the network, accessible to the Kubernetes cluster in which you intend to install Edge. You also need a workstation with full internet access to pull the required software from the Cumulocity registry and push it into the private registry available in the restricted network.

Installing a private registry

Any OCI compliant registry can be used as a private registry; however, the Edge installation is tested with Harbor and Nexus Repository OSS.

Refer to Harbor Installation and Configuration for installing Harbor and Nexus Installation and Upgrades for installing Nexus.

After installing and configuring a private registry, ensure that all the machines (the workstation and the Kubernetes cluster nodes) which need access to the private registry can resolve its domain or host and trust the private registry's certificate (if it is configured with a self-signed certificate).

Update /etc/hosts to resolve the domain

Run the commands below on every machine (the workstation and the Kubernetes cluster nodes) that needs access to the private registry, so that each machine can resolve the registry's domain or host via the /etc/hosts file:

PRIVATE_REGISTRY_HOSTNAME="<PRIVATE-REGISTRY-HOSTNAME>"  	# Change it with your private registry's domain or hostname
PRIVATE_REGISTRY_IP_ADDRESS="<PRIVATE-REGISTRY-IP-ADDRESS>" # Change it with your private registry's IP Address 

# Update /etc/hosts to resolve the private registry domain
echo "${PRIVATE_REGISTRY_IP_ADDRESS} ${PRIVATE_REGISTRY_HOSTNAME}" | sudo tee -a /etc/hosts

Update CoreDNS configuration

Run the commands below to modify the CoreDNS configuration of the Kubernetes cluster to enable resolution of the private registry’s domain or host:

PRIVATE_REGISTRY_HOSTNAME="<PRIVATE-REGISTRY-HOSTNAME>"  	# Change it with your private registry's domain or hostname
PRIVATE_REGISTRY_IP_ADDRESS="<PRIVATE-REGISTRY-IP-ADDRESS>" # Change it with your private registry's IP Address 

# Retrieve the existing NodeHosts value
EXISTING_NODEHOSTS=$(kubectl get configmap coredns -n kube-system -o jsonpath='{.data.NodeHosts}')
EXISTING_NODEHOSTS=$(echo -n "${EXISTING_NODEHOSTS}" | sed ':a;N;$!ba;s/\n/\\n/g')

# Append the new domain and IP address to the existing NodeHosts value
UPDATED_NODEHOSTS=$(echo "${EXISTING_NODEHOSTS}\\n${PRIVATE_REGISTRY_IP_ADDRESS} ${PRIVATE_REGISTRY_HOSTNAME}")

# Patch the CoreDNS ConfigMap with the updated NodeHosts value
kubectl patch configmap coredns -n kube-system --type merge -p "{\"data\":{\"NodeHosts\":\"${UPDATED_NODEHOSTS}\"}}"
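The sed expression used when retrieving NodeHosts is dense: it joins all lines of the value and replaces each real newline with the two-character sequence \n, so the value survives being embedded in the JSON patch. A quick demonstration (GNU sed, as on typical Linux cluster nodes):

```shell
# Two input lines become one line with a literal "\n" between them
printf 'a 1\nb 2\n' | sed ':a;N;$!ba;s/\n/\\n/g'   # prints: a 1\nb 2
```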

Trust the private registry’s certificate

Run the commands below on every machine (the workstation and the Kubernetes cluster nodes) that needs access to the private registry, to trust the private registry's certificate (if it is configured with a self-signed certificate):

sudo sh -c '
PRIVATE_REGISTRY_HOST="<PRIVATE-REGISTRY-HOSTNAME>:<PRIVATE-REGISTRY-PORT>"  # Change it to your private registry domain or hostname:port or ip-address:port

# -servername expects the hostname without the port; extract only the leaf
# certificate in PEM format so the trust store receives clean input
PRIVATE_REGISTRY_CA_CERT=$(echo quit | openssl s_client -showcerts -servername "${PRIVATE_REGISTRY_HOST%%:*}" -connect "${PRIVATE_REGISTRY_HOST}" 2>/dev/null | openssl x509 -outform PEM) && \
if command -v "update-ca-certificates" > /dev/null 2>&1; then
	mkdir -p /usr/local/share/ca-certificates
	echo "${PRIVATE_REGISTRY_CA_CERT}" > /usr/local/share/ca-certificates/private-registry-ca.crt
	update-ca-certificates
elif command -v "update-ca-trust" > /dev/null 2>&1; then
	mkdir -p /etc/pki/tls/certs
	echo "${PRIVATE_REGISTRY_CA_CERT}" > /etc/pki/tls/certs/private-registry-ca.crt
	update-ca-trust extract
fi
'
Important
Restart the container runtime and the Kubernetes cluster after running the above commands for the changes to take effect. For example, restart k3s with sudo systemctl restart k3s (or sudo service k3s restart) and Docker with sudo systemctl restart docker (or sudo service docker restart).

Download and publish required software to the private registry

This section outlines the steps to download the required software from the Cumulocity registry and publish them to the private registry.

For this you need a workstation with full internet access to download the required software from the remote registry and push them into the private registry. Make sure this workstation meets the following prerequisites.

Item
Details
Workstation
A workstation that has full internet access to pull the required software from the remote registry and push it into the private registry.
Python 3
Install Python 3, which is required to run the registry sync script. Refer to Python Setup and Usage for installation instructions.
Docker CLI
Install the docker-ce and docker-ce-cli packages. Refer to Installing Docker for installation instructions.
Helm version 3.x
Refer to Installing Helm for the installation instructions.
ORAS CLI version 1.0.0
The OCI Registry As Storage (ORAS) CLI is used to publish non-container artifacts to the private registry. Refer to Installing ORAS CLI for installation instructions.

Install registry sync script

To install the registry synchronization script, run the command below:

pip install --force-reinstall https://cumulocity.com/docs/files/edge-k8s/c8yedge_registry_sync-1018.0.1-py3-none-any.whl

Run registry sync script

To download the required software from the Cumulocity registry and publish them to the private registry, run the command below:

Info

If your private registry is a Harbor registry, pass the extra option --target-registry-type=HARBOR to instruct the script to create the required projects before publishing the required software.

Use -h or --help option to display the usage details.

EDGE_REGISTRY_USER="<EDGE-REGISTRY-USER>"     	# Edge registry credentials can be obtained from the Software AG logistics team for your region
EDGE_REGISTRY_PASSWORD="<EDGE-REGISTRY-PASS>" 	# Edge registry credentials can be obtained from the Software AG logistics team for your region

PRIVATE_REGISTRY_HOST="<PRIVATE-REGISTRY-HOSTNAME>:<PRIVATE-REGISTRY-PORT>"  # Change it with your private registry domain or hostname:port or ip-address:port
PRIVATE_REGISTRY_USERNAME="<PRIVATE-REGISTRY-USER>"                          # Change it with the credentials to access your private registry
PRIVATE_REGISTRY_PASSWORD="<PRIVATE-REGISTRY-PASSWORD>"                      # Change it with the credentials to access your private registry

c8yedge_registry_sync sync -v 1018.0.1 -sr registry.c8y.io -sru "${EDGE_REGISTRY_USER}" -srp "${EDGE_REGISTRY_PASSWORD}" -tr "${PRIVATE_REGISTRY_HOST}" -tru "${PRIVATE_REGISTRY_USERNAME}" -trp "${PRIVATE_REGISTRY_PASSWORD}" --dryrun False
Info
To request the Edge registry credentials, contact the Software AG logistics team for your region (see the email addresses under Prerequisites).

Update custom-environment-variables ConfigMap

Run the commands below to create or update the custom-environment-variables ConfigMap with the key "ca.crt", so that the Edge operator trusts the private registry's certificate (if it is configured with a self-signed certificate):

EDGE_NAMESPACE=c8yedge                    									 # Change namespace name if you want to deploy Edge operator and Edge in a different namespace

PRIVATE_REGISTRY_HOST="<PRIVATE-REGISTRY-HOSTNAME>:<PRIVATE-REGISTRY-PORT>"  # Change it with your private registry domain or hostname:port or ip-address:port

# -servername expects the hostname without the port; extract only the leaf
# certificate in PEM format
PRIVATE_REGISTRY_CA_CERT=$(echo quit | openssl s_client -showcerts -servername "${PRIVATE_REGISTRY_HOST%%:*}" -connect "${PRIVATE_REGISTRY_HOST}" 2>/dev/null | openssl x509 -outform PEM)
mkdir -p /tmp
echo "${PRIVATE_REGISTRY_CA_CERT}" > /tmp/private-registry-ca.crt

# Create/Update custom-environment-variables ConfigMap with key "ca.crt" for the edge operator to trust
kubectl create namespace "${EDGE_NAMESPACE}" --dry-run=client -o yaml | kubectl apply -f -
kubectl create configmap custom-environment-variables -n "${EDGE_NAMESPACE}" --from-file=ca.crt="/tmp/private-registry-ca.crt" --dry-run=client -o yaml | kubectl apply -f -

Installing the Edge operator

Continue with installing the Edge operator by following the instructions in Installing the Edge operator, passing the private registry's host (-r option) as <private-registry-hostname>:<private-registry-port> and the respective registry credentials when prompted.

Installing Edge

Before you start the installation, ensure that you have fulfilled the prerequisites and configured the storage as described in Configuring storage.

Download and edit the Edge CR (c8yedge.yaml), then apply it to your Kubernetes cluster by running the command below:

kubectl apply -f c8yedge.yaml

For more information about the structure and configuration options available in the Edge CR, see Edge Custom Resource.

Verifying the Edge installation

To monitor the installation progress, run the command below:

kubectl describe edge c8yedge -n c8yedge

This command allows you to view the details about the installation of c8yedge in the c8yedge namespace.

Info
Substitute the Edge name and namespace name, which is currently c8yedge in the command, with the specific Edge name and namespace name you have specified in your Edge CR.

You can also follow the events raised for the Edge CR by running the command below:

kubectl get events -n c8yedge --field-selector involvedObject.name=c8yedge --watch

The Events section in the output of the describe edge command shows the installation progress. The Status section displays the generation of the Edge CR being installed and its current state. Once the installation succeeds, the Status section also displays the generation of the deployed CR, the Edge version, the last deployed time/age, any validation warnings, and some help commands for downloading the diagnostic logs and extracting the root CA of the Edge operator generated TLS/SSL certificates.

A sample status output:

Name:         c8yedge
Namespace:    c8yedge
Kind:         CumulocityIoTEdge

Metadata:
  Creation Timestamp:  2023-08-11T00:00:01Z
  Generation:          1

Spec:
  Version:             1018.0.1
  License Key:         ***************
  Company:             IoT Company
  Domain:              myown.iot.com
  Email:               myown@iot.com
  ....
  ....

Status:
  Deployed Generation:  1
  Last Deployed Time:  2023-08-11T00:15:00Z
  State:               Ready
  Version:             1018.0.1-XXXX

  Help Commands:
    Download Logs:   
FILE_NAME="edge-diagnostic-archive-$(date +%Y%m%d%H%M%S).tar.gz" && \
kubectl exec -n edge-sample-logging logging-fluentd-0 -c fluentd -- tar -czvf /var/log/$FILE_NAME /var/log/edge && \
kubectl cp edge-sample-logging/logging-fluentd-0:/var/log/$FILE_NAME -c fluentd ./$FILE_NAME && \
kubectl exec -n edge-sample-logging logging-fluentd-0 -c fluentd -- rm /var/log/$FILE_NAME

A sample set of installation events:

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Normal   Validating        15m    cumulocityiotedge  validating
  Normal   ValidationPassed  15m    cumulocityiotedge  validation passed
  Normal   Installing        15m    cumulocityiotedge  installing
…………
…………
  Normal   Installing        12m    cumulocityiotedge  finished installing mongo server
…………
…………
  Normal   Installing        8m     cumulocityiotedge  finished installing core
…………
…………
  Normal   Installing        5m     cumulocityiotedge  finished installing and updating microservices
…………
…………
  Normal   Installing        2m     cumulocityiotedge  finished installing thin-edge
…………
  Normal   Ready             1m     cumulocityiotedge  Cumulocity IoT Edge installation is complete, and it's now running version 1018.0.1-XXXX

Before you continue, wait for the Edge CR status to reach the Ready state.

Accessing Edge

Before you can access Edge, you must first get the external IP address. The Edge operator creates a load balancer service named cumulocity-core, which receives an external IP. Clients outside the cluster access Edge through this external IP.

Assigning an external IP

To get the external IP to access Edge, run the command below:

kubectl get service cumulocity-core -n c8yedge
Info
Substitute the namespace name c8yedge in the command above with the specific namespace name you have specified in your Edge CR.

Sample output of the kubectl get service command:

NAME              TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
cumulocity-core   LoadBalancer   X.X.X.X      X.X.X.X       443:31342/TCP,1883:32751/TCP,8883:32270/TCP

Sometimes the external IP displays as <pending> or <none>. The IP assignment process depends on the Kubernetes hosting environment: an external load balancer in the hosting environment handles the IP allocation and any other configuration necessary to route external traffic to the Kubernetes service. Most on-premises Kubernetes clusters do not have external load balancers that can dynamically allocate IPs. The most common solution is to manually assign an external IP to the service in the service's YAML configuration. Use the following command to manually assign an external IP to the cumulocity-core service (replace <EXTERNAL-IP> with the IP address you want to assign).

kubectl patch service cumulocity-core -n c8yedge -p '{"spec":{"type": "LoadBalancer", "externalIPs":["<EXTERNAL-IP>"]}}'
Info
Substitute the namespace name c8yedge in the command above with the specific namespace name you have specified in your Edge CR.
Info

When manually assigning the external IP, note the following from the Kubernetes API documentation:

“These IPs are not managed by Kubernetes. The user is responsible for ensuring that traffic arrives at a node with this IP.”

You can access Edge using a domain name in a web browser.

Accessing Edge using the domain name

Access Edge using the domain name configured during the installation. There are two ways to make Edge reachable through the domain name:

  • Add an entry of the domain name and IP address mapping in the DNS servers.
    For example, if your domain name is myown.iot.com, add an entry for both myown.iot.com and management.myown.iot.com.
  • Alternatively, add an alias to access Edge through the domain name provided during installation. This must be done on each client host from which Edge is accessed.

The first option is preferable because it makes Edge accessible to all clients over the LAN.

Adding the alias

On Linux machines, add the following entry to /etc/hosts:

<IP address> <domain_name>
<IP address> management.<domain_name>

Use the external IP address fetched by running the command kubectl get service in the previous section.

On Windows machines, add the same entry to C:\Windows\System32\drivers\etc\hosts.
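
For example, assuming the external IP is 192.168.0.10 and the domain name is myown.iot.com (both placeholders), the entries would look like this:

```
192.168.0.10 myown.iot.com
192.168.0.10 management.myown.iot.com
```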

Ping the domain names to verify that the entries resolve:

ping <domain_name>
ping management.<domain_name>

If the ping is successful, the DNS resolution is working properly.
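
Optionally, you can also verify HTTPS reachability. The sketch below uses curl; the domain name is a placeholder, and -k skips certificate verification (needed if the certificate is self-signed):

```shell
# Print only the HTTP status code; a 200 (or a 3xx redirect) indicates
# that the Edge web server is reachable over HTTPS.
curl -k -s -o /dev/null -w '%{http_code}\n' https://myown.iot.com/
```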

To access Edge

To access Edge, enter one of the following URLs in the browser:

  • For the “edge” tenant, use the URL https://<domain_name>.
  • For the Management tenant, use the URL https://management.<domain_name>.

This brings up the login screen shown below. Enter the default credentials (username “admin” and password “admin-pass”) to log in to both the “edge” tenant and the Management tenant.

Login prompt

On the first login, the dialog window below prompts you to change the password. The email address used to change the password is the one you specified in the Cumulocity IoT Edge CR (or myown@iot.com if you followed the Quickstart installation steps). Alternatively, run the following command to retrieve the email address:

kubectl get edge c8yedge -n c8yedge -o jsonpath='{.spec.email}' && echo

Info
Substitute the Edge name and namespace name, which is currently c8yedge in the command, with the specific Edge name and namespace name you have specified in your Edge CR.

Reset password

Important
After a successful deployment, it is crucial to access both the Management tenant and the “edge” tenant and change their respective admin credentials.

If you are logging in for the first time, you will see a cookie banner at the bottom of the login screen:

Cookie Banner

Info
The cookie banner is turned on by default. This feature can be configured. For more information, see Branding.
  • Click Agree and Proceed to accept the default cookie settings (required and functional cookies enabled).
  • Click Reject all to reject all of the default cookie settings.
  • Click Preferences to select your individual cookie preferences:
    • Required - Required to enable core site functionality. They perform a task or operation without which a site’s functionality would not be possible. Required cookies cannot be disabled.
    • Functional - Used to track site usage and to process personal data to measure and improve usability and performance. Functional cookies must be actively enabled.
  • Click See also our Privacy Notice to open the Software AG privacy statement, which details the Software AG privacy policy.
Info
If you have enabled functional cookies, you can opt out of the product experience tracking later via the User settings dialog, see User options and settings.

Select the Remember me checkbox if you want the browser to remember your credentials, so that you do not have to enter them again the next time you open the application. This is especially convenient if you frequently switch between Cumulocity IoT applications, as Edge requires you to authenticate each time you start an application. You can make the browser “forget” your credentials by explicitly logging out.

Finally, click Login to enter Edge. Unless configured differently, you are initially taken to the Cockpit application. For further information about the Cumulocity IoT standard applications, see Available applications.

Cockpit home screen

To explicitly log out, click the User button at the right of the top bar, then select Logout from the context menu.

Info
The maximum number of failed logins (due to invalid credentials), after which a user is locked, can be configured by the Management tenant on platform level. Contact your Operations team for further support. The default value is 100.

How to reset or change your password

To reset your password, you must first configure the “reset password” template and email server settings in Edge. For information about configuring the email server, see Configuring the email server.

For information about changing the password, see To change your password.

How to access pages using URLs

For information about accessing pages using the URLs, see URL.

Accessing logs

The Edge operator deploys and configures a Fluent Bit daemonset on the Kubernetes node to collect the container and application logs from the node file system. Fluent Bit queries the Kubernetes API, enriches the logs with metadata about the pods (in the Edge namespace), and transfers both the logs and metadata to Fluentd. Fluentd receives, filters, and persists the logs in the persistent volume claim configured for logging.
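
To confirm that the logging pipeline is running, you can list the workloads in the Edge namespace. The filter below is a sketch and assumes the Fluent Bit and Fluentd workload names contain “fluent”; substitute your Edge namespace:

```shell
# List daemonsets and pods in the Edge namespace and keep the logging ones
kubectl get daemonsets,pods -n c8yedge | grep -i fluent
```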

To download the diagnostic log archive, run the command below. It generates a file named c8yedge-logs-{current date}.tar.gz in the current directory.

kubectl get edge c8yedge -n c8yedge --output jsonpath='{.status.helpCommands.downloadLogs}' | sh
Info
Substitute the Edge name and namespace name c8yedge in the command above with the specific Edge name and namespace name you have specified in your Edge CR.

Download the log archives remotely from your cloud tenant. For more information, see Downloading diagnostics remotely.