This tutorial shows how to deploy a MigratoryData cluster using Azure Kubernetes Service (AKS).

Prerequisites

Before deploying MigratoryData on AKS, ensure that you have a Microsoft Azure account and have installed the following tools: the Azure CLI (az) and the Kubernetes command-line tool (kubectl).
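
To confirm that both tools are installed and available on your PATH, you can print their versions:

az --version
kubectl version --client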

Shell variables

To avoid hardcoding the names of the AKS cluster and the resource group, let’s define two environment variables as follows:

export RESOURCE_GROUP=rg-migratorydata
export AKS_CLUSTER=aks-migratorydata

Create an AKS cluster

Log in to Azure:

az login

Create a new resource group:

az group create --name $RESOURCE_GROUP --location eastus

Create an AKS cluster with at least three and at most five nodes, with cluster autoscaling enabled:

az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_CLUSTER \
  --node-count 3 \
  --vm-set-type VirtualMachineScaleSets \
  --generate-ssh-keys \
  --load-balancer-sku standard \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 5

Connect to the AKS cluster:

az aks get-credentials \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_CLUSTER

Check if the nodes of the AKS cluster are up:

kubectl get nodes
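
If the nodes are still provisioning, you can watch until all three reach the Ready state:

kubectl get nodes -w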

Create namespace

Create a namespace migratory for all the resources created for this environment by copying the following to a file migratory-namespace.yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: migratory

Then, execute the command:

kubectl apply -f migratory-namespace.yaml
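
Alternatively, the namespace can be created directly, without a manifest file:

kubectl create namespace migratory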

Create load balancer

Create a LoadBalancer service to balance the traffic from clients across the members of the MigratoryData cluster using the following YAML:

#
# Service used by the MigratoryData cluster to communicate with the clients
#
apiVersion: v1
kind: Service
metadata:
  namespace: migratory
  name: migratorydata-cs
  labels:
    app: migratorydata
spec:
  type: LoadBalancer
  ports:
    - name: client-port
      port: 80
      protocol: TCP
      targetPort: 8800
  selector:
    app: migratorydata

Copy this YAML to a file nlb-service.yaml and run:

kubectl apply -f nlb-service.yaml
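
Provisioning the Azure load balancer may take a minute or two; you can watch the service until an external IP is assigned:

kubectl get svc migratorydata-cs -n migratory -w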

Deploy MigratoryData

We will use the following Kubernetes manifest to build a cluster of three MigratoryData servers:

#
# Headless service used for inter-cluster communication
#
apiVersion: v1
kind: Service
metadata:
  name: migratorydata-hs
  namespace: migratory
  labels:
    app: migratorydata
spec:
  clusterIP: None
  ports:
    - name: inter-cluster1
      port: 8801
      protocol: TCP
      targetPort: 8801
    - name: inter-cluster2
      port: 8802
      protocol: TCP
      targetPort: 8802
    - name: inter-cluster3
      port: 8803
      protocol: TCP
      targetPort: 8803
    - name: inter-cluster4
      port: 8804
      protocol: TCP
      targetPort: 8804
  publishNotReadyAddresses: true
  selector:
    app: migratorydata
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  namespace: migratory
  name: migratorydata-pdb
spec:
  minAvailable: 3 # The value must be equal to or higher than the number of seed members 🅐
  selector:
    matchLabels:
      app: migratorydata
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: migratorydata
  namespace: migratory
  labels:
    app: migratorydata
spec:
  selector:
    matchLabels:
      app: migratorydata
  serviceName: migratorydata-hs
  replicas: 3 # The desired number of cluster members 🅑
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: migratorydata
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: "app"
                      operator: In
                      values:
                        - migratorydata
                topologyKey: "kubernetes.io/hostname"
      containers:
        - name: migratorydata
          imagePullPolicy: Always
          image: migratorydata/server:latest
          env:
            - name: MIGRATORYDATA_EXTRA_OPTS
              value: "-DMemory=128MB \
                -DClusterDeliveryMode=Guaranteed \
                -DLogLevel=INFO \
                -DX.ConnectionOffload=true \
                -DClusterSeedMemberCount=3" # Define the number of seed members 🅒
            - name: MIGRATORYDATA_JAVA_GC_LOG_OPTS
              value: "-XX:+PrintCommandLineFlags -XX:+PrintGC -XX:+PrintGCDetails -XX:+DisableExplicitGC -Dsun.rmi.dgc.client.gcInterval=0x7ffffffffffffff0 -Dsun.rmi.dgc.server.gcInterval=0x7ffffffffffffff0 -verbose:gc"
          command:
            - bash
            - "-c"
            - |
              set -x

              HOST=`hostname -s`
              DOMAIN=`hostname -d`

              CLUSTER_PORT=8801
              MAX_REPLICAS=5 # Define the maximum number of cluster members 🅓

              if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
                  NAME=${BASH_REMATCH[1]}
              fi

              CLUSTER_MEMBER_LISTEN=$HOST.$DOMAIN:$CLUSTER_PORT
              echo $CLUSTER_MEMBER_LISTEN
              MIGRATORYDATA_EXTRA_OPTS="$MIGRATORYDATA_EXTRA_OPTS -DClusterMemberListen=$CLUSTER_MEMBER_LISTEN"

              CLUSTER_MEMBERS=""
              for (( i=1; i < $MAX_REPLICAS; i++ ))
              do
                  CLUSTER_MEMBERS="$CLUSTER_MEMBERS$NAME-$((i-1)).$DOMAIN:$CLUSTER_PORT,"
              done
              CLUSTER_MEMBERS="$CLUSTER_MEMBERS$NAME-$((MAX_REPLICAS-1)).$DOMAIN:$CLUSTER_PORT"
              echo $CLUSTER_MEMBERS
              MIGRATORYDATA_EXTRA_OPTS="$MIGRATORYDATA_EXTRA_OPTS -DClusterMembers=$CLUSTER_MEMBERS"

              echo $MIGRATORYDATA_EXTRA_OPTS
              export MIGRATORYDATA_EXTRA_OPTS

              ./start-migratorydata.sh              
          resources:
            requests:
              memory: "256Mi"
              cpu: "0.5"
          ports:
            - name: client-port
              containerPort: 8800
            - name: inter-cluster1
              containerPort: 8801
            - name: inter-cluster2
              containerPort: 8802
            - name: inter-cluster3
              containerPort: 8803
            - name: inter-cluster4
              containerPort: 8804
          readinessProbe:
            tcpSocket:
              port: 8800
            initialDelaySeconds: 60
            failureThreshold: 5
            periodSeconds: 5
          livenessProbe:
            tcpSocket:
              port: 8800
            initialDelaySeconds: 10
            failureThreshold: 5
            periodSeconds: 5

This manifest contains a Headless Service, a PodDisruptionBudget, and a StatefulSet. The Headless Service is used for inter-cluster communication, providing the DNS records corresponding to the instances of the MigratoryData cluster.

This manifest uses the MIGRATORYDATA_EXTRA_OPTS environment variable, which can define specific parameters or adjust the default value of any parameter listed in the Configuration Guide. Here, it modifies the default values of parameters such as Memory and ClusterDeliveryMode. Additionally, it specifies the ClusterMemberListen parameter, setting the port to 8801 for inter-cluster communication, and defines the ClusterMembers parameter to establish an ordered list of cluster members.
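
As an aside, the same environment variable works outside Kubernetes too. For instance, here is a sketch that overrides two of the parameters above (the values shown are illustrative) when running the same image locally with Docker:

docker run -p 8800:8800 \
  -e MIGRATORYDATA_EXTRA_OPTS="-DMemory=256MB -DLogLevel=DEBUG" \
  migratorydata/server:latest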

For client connections, we’ve kept the default value of the Listen parameter, which is 8800. Furthermore, the LoadBalancer service maps this port to port 80. Consequently, clients will establish connections with the MigratoryData cluster on port 80.

The MIGRATORYDATA_JAVA_GC_LOG_OPTS environment variable is also used to provide a few Java options, mostly for demonstration purposes.

To deploy the MigratoryData cluster, copy this manifest to a file migratorydata-cluster.yaml, and run the command:

kubectl apply -f migratorydata-cluster.yaml

Namespace switch

Because the deployment concerns the namespace migratory, switch to this namespace as follows:

kubectl config set-context --current --namespace=migratory

To return to the default namespace, run:

kubectl config set-context --current --namespace=default

Verify installation

Check the running pods to ensure the migratorydata pods are running:

kubectl get pods

The output of this command should include something similar to the following:

NAME              READY   STATUS    RESTARTS   AGE
migratorydata-0   1/1     Running   0          2m52s
migratorydata-1   1/1     Running   0          2m40s
migratorydata-2   1/1     Running   0          2m25s

You can check the logs of each cluster member by running a command as follows:

kubectl logs migratorydata-0
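
To follow the log output of a member in real time, add the -f flag:

kubectl logs -f migratorydata-0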

Test installation

Now, you can check that the two services created above are up and running:

kubectl get svc

You should see an output similar to the following:

NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                               AGE
migratorydata-cs   LoadBalancer   10.0.176.242   YourExternalIP   80:31868/TCP                          45s
migratorydata-hs   ClusterIP      None           <none>           8801/TCP,8802/TCP,8803/TCP,8804/TCP   16s

You should now be able to connect to the address assigned by AKS to the load balancer service, shown under the EXTERNAL-IP column. In this case, the external IP address is YourExternalIP and the port is 80. Open the corresponding URL http://YourExternalIP in your browser. You should see a welcome page featuring, under the Debug Console menu, a demo application for publishing real-time messages to and consuming real-time messages from the MigratoryData cluster.
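
You can also fetch the external address from the command line and probe the welcome page; this sketch assumes the load balancer publishes an IP address rather than a hostname:

EXTERNAL_IP=$(kubectl get svc migratorydata-cs \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I http://$EXTERNAL_IP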

Scaling

It’s recommended to read the Clustering section before moving forward here.

In the manifest above, it’s worth noting that we’ve set the maximum number of cluster members to 5 using MAX_REPLICAS 🅓. However, only the initial 3 members are created, as indicated by the replicas field 🅑, meeting the minimum criteria set by the minAvailable field 🅐. Additionally, the ClusterSeedMemberCount parameter 🅒 has been configured to 3, ensuring that the requirement for the minAvailable field 🅐 to be equal to or higher than the number of seed members is fulfilled.

Therefore, according to this Kubernetes manifest, two additional cluster members can be added, either manually or through autoscaling, according to the load of the system.

Manual scaling up

In the example above, you can scale the cluster up from three members to as many as five by modifying the value of the replicas field 🅑. For example, if the load of your system increases substantially, and supposing your nodes have enough resources available, you can add two new members to the cluster as follows:

kubectl scale statefulsets migratorydata --replicas=5
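
You can watch the new members being created, one at a time, with:

kubectl get pods -w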

Note that you cannot assign to the replicas field a value that is either higher than the maximum number of members defined by the shell variable MAX_REPLICAS 🅓 or smaller than the minimum number of cluster members defined by the minAvailable field 🅐.

Manual scaling down

If the load of your system decreases, then you might remove one member from the cluster by modifying the replicas field as follows:

kubectl scale statefulsets migratorydata --replicas=4

As with scaling up, the value of the replicas field must stay between the minimum defined by the minAvailable field 🅐 and the maximum defined by the shell variable MAX_REPLICAS 🅓.

Autoscaling

Manual scaling is practical if the load of your system changes gradually. Otherwise, you can use the autoscaling feature of Kubernetes.

Kubernetes can monitor the load of your system, typically expressed in CPU usage, and scale your MigratoryData cluster up and down by automatically modifying the replicas field.

In the example above, to automatically add new members (up to a maximum of 5) when the CPU usage of the existing members rises above 50%, and to remove members (down to a minimum of 3) when their CPU usage falls below 50%, use the following command:

kubectl autoscale statefulset migratorydata \
  --cpu-percent=50 --min=3 --max=5

Alternatively, you can use a YAML manifest as follows:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: migratory
  name: migratorydata-autoscale # you can use any name here
spec:
  maxReplicas: 5
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: migratorydata 
  targetCPUUtilizationPercentage: 50

Save it to a file migratorydata-autoscale.yaml, and execute:

kubectl apply -f migratorydata-autoscale.yaml

Now, you can display information about the autoscaler object above using the following command:

kubectl get hpa

and display the CPU usage of the cluster members (kubectl top requires the Kubernetes Metrics Server, which AKS deploys by default) with:

kubectl top pods

While testing cluster autoscaling, it is important to understand that the Kubernetes autoscaler periodically retrieves CPU usage information from the cluster members. As a result, the autoscaling process may not appear instantaneous, but this delay aligns with the normal behavior of Kubernetes.
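
To follow the autoscaler’s decisions as they are made, you can watch the HorizontalPodAutoscaler resource and inspect its events:

kubectl get hpa -w
kubectl describe hpa migratorydata-autoscale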

Node Failure Testing

MigratoryData clustering tolerates a number of cluster members being down or failing, as detailed in the Clustering section.

In order to test an AKS node failure, use:

kubectl drain <node-name> --force \
  --delete-emptydir-data \
  --ignore-daemonsets

Then, to make the node schedulable again, use:

kubectl uncordon <node-name>
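
You can then verify that the node is schedulable again and see how the pods were redistributed across nodes:

kubectl get nodes
kubectl get pods -o wide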

Uninstall

Delete the Kubernetes resources created for this deployment with:

kubectl delete -f migratory-namespace.yaml

Go back to default namespace:

kubectl config set-context --current --namespace=default

Finally, when you no longer need the AKS cluster, delete it by removing its resource group:

az group delete --name $RESOURCE_GROUP --yes --no-wait
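
Because of the --no-wait flag, this command returns immediately while deletion continues in the background; you can check whether the resource group still exists with:

az group exists --name $RESOURCE_GROUP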

Build realtime apps

Use any of MigratoryData’s client APIs to develop real-time applications that communicate with this MigratoryData cluster.