To deploy the MigratoryData cluster, copy this manifest to a file named migratorydata-cluster.yaml, update the variables
$EVENTHUBS_NAMESPACE and $EVENTHUBS_TOPIC in the file, and run the command:
kubectl apply -f migratorydata-cluster.yaml
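If the two variables are defined in your shell environment, you can substitute them and apply the manifest in one step. This is a minimal sketch, assuming the envsubst utility (part of GNU gettext) is installed; my-namespace and my-topic are placeholders for your own values:
export EVENTHUBS_NAMESPACE=my-namespace   # replace with your Event Hubs namespace
export EVENTHUBS_TOPIC=my-topic           # replace with your event hub (topic) name
envsubst < migratorydata-cluster.yaml | kubectl apply -f -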
Namespace switch
Because the deployment uses the namespace migratory, switch to this namespace as follows:
kubectl config set-context --current --namespace=migratory
To return to the default namespace, run:
kubectl config set-context --current --namespace=default
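You can verify which namespace is currently active with:
kubectl config view --minify --output 'jsonpath={..namespace}'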
Verify the deployment
List the pods to verify that the migratorydata pods are running:
kubectl get pods
The output of this command should include something similar to the following:
NAME                             READY   STATUS    RESTARTS   AGE
migratorydata-57848575bd-4tnbz   1/1     Running   0          4m32s
migratorydata-57848575bd-gjmld   1/1     Running   0          4m32s
migratorydata-57848575bd-tcbtf   1/1     Running   0          4m32s
You can check the logs of each cluster member with a command such as:
kubectl logs migratorydata-57848575bd-4tnbz
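Alternatively, you can fetch the logs of all cluster members at once by label selector. This assumes the deployment labels its pods with app=migratorydata; check the labels defined in your manifest, as this label is an assumption here:
kubectl logs -l app=migratorydata --prefix
The --prefix option marks each log line with the pod it came from.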
Test installation
Now, you can check that the service defined in the manifest above is up and running:
kubectl get svc
You should see an output similar to the following:
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
migratorydata-cs   LoadBalancer   10.0.39.44   YourExternalIP   80:32210/TCP   17s
You should now be able to connect to the address that AKS assigned to the load balancer service, shown under the
EXTERNAL-IP column. In this example the external IP address is YourExternalIP and the port is 80. Open the
corresponding URL http://YourExternalIP in your browser. You should see a welcome page that features, under the
Debug Console menu, a demo application for publishing real-time messages to the MigratoryData cluster and consuming them from it.
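You can also verify the endpoint from the command line. This is a minimal sketch that reads the external IP directly from the service and issues an HTTP HEAD request, assuming the welcome page is served over plain HTTP on port 80 as in the output above:
EXTERNAL_IP=$(kubectl get svc migratorydata-cs \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I http://$EXTERNAL_IP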
Scaling
The stateless nature of the MigratoryData cluster when deployed in conjunction with Azure Event Hubs, where each cluster member is independent of the others, greatly simplifies horizontal scaling on AKS.
Manual scaling up
For example, if the load of your system increases substantially, and supposing your nodes have enough resources
available, you can add two new members to the cluster by modifying the replicas field as follows:
kubectl scale deployment migratorydata --replicas=5
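You can confirm that the new members have been created with:
kubectl get deployment migratorydata
which should report 5/5 pods ready once the scale-up completes.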
Manual scaling down
If the load of your system decreases significantly, then you might remove three members from the cluster by modifying
the replicas field as follows:
kubectl scale deployment migratorydata --replicas=2
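To watch the pods being added or terminated while you scale, run:
kubectl get pods -w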
Autoscaling
Manual scaling is practical if the load of your system changes gradually. Otherwise, you can use the autoscaling feature of Kubernetes.
Kubernetes can monitor the load of your system, typically expressed as CPU usage, and scale your MigratoryData cluster
up and down by automatically modifying the replicas field.
Continuing the example above, the following command adds one or more new members, up to a maximum of 5, when the CPU
usage of the existing members rises above 50%, and removes one or more of the existing members, down to a minimum of 3,
when their CPU usage falls below 50%:
kubectl autoscale deployment migratorydata \
--cpu-percent=50 --min=3 --max=5
Alternatively, you can use a YAML manifest as follows:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: migratory
  name: migratorydata-autoscale # you can use any name here
spec:
  maxReplicas: 5
  minReplicas: 3
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: migratorydata
  targetCPUUtilizationPercentage: 50
Save it to a file named, for example, migratorydata-autoscale.yaml, then apply it as follows:
kubectl apply -f migratorydata-autoscale.yaml
Now, you can display information about the autoscaler object above using the following command:
kubectl get hpa
and display CPU usage of cluster members with:
kubectl top pods
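The output of kubectl get hpa should look similar to the following (the values shown here are illustrative):
NAME                      REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
migratorydata-autoscale   Deployment/migratorydata   20%/50%   3         5         3          2m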
While testing cluster autoscaling, keep in mind that the Kubernetes autoscaler retrieves CPU usage information from the cluster members only periodically. As a result, autoscaling may not appear instantaneous; this delay is the normal behavior of Kubernetes.
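To observe this behavior, you can leave the autoscaler status refreshing in a terminal while you generate load:
kubectl get hpa -w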
Node Failure Testing
MigratoryData clustering tolerates a number of cluster members being down or failing, as detailed in the Clustering section.
To simulate an AKS node failure, use:
kubectl drain <node-name> --force --delete-emptydir-data \
--ignore-daemonsets
(On kubectl versions older than 1.20, use the deprecated --delete-local-data flag instead of --delete-emptydir-data.)
Then, to return the node to service by marking it schedulable again, use:
kubectl uncordon <node-name>
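To list the node names, and to confirm that a drained node is marked SchedulingDisabled before you uncordon it, run:
kubectl get nodes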
Uninstall
Delete the Kubernetes resources created for this deployment with:
kubectl delete -f migratory-namespace.yaml
Switch back to the default namespace:
kubectl config set-context --current --namespace=default
Finally, when you no longer need the AKS cluster, delete it:
az group delete --name $RESOURCE_GROUP --yes --no-wait
Build realtime apps
First, please read the documentation of the Kafka native add-on to understand the automatic mapping between MigratoryData subjects and Kafka topics.
Use MigratoryData's client APIs to build real-time applications that communicate with your MigratoryData cluster via Azure Event Hubs.
You can also use the APIs or tools of Azure Event Hubs to publish real-time messages, which are then delivered to MigratoryData's clients, and to consume real-time messages from Azure Event Hubs that originate from MigratoryData's clients.
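For example, because Azure Event Hubs exposes a Kafka-compatible endpoint on port 9093, you can publish messages with the standard Kafka console producer. This is a sketch, assuming a client.properties file configured with your Event Hubs connection string for SASL_SSL/PLAIN authentication, as described in the Azure documentation:
# Publish messages to the event hub; MigratoryData clients subscribed to the
# corresponding subject receive them in real time via the Kafka add-on mapping.
bin/kafka-console-producer.sh \
  --bootstrap-server $EVENTHUBS_NAMESPACE.servicebus.windows.net:9093 \
  --producer.config client.properties \
  --topic $EVENTHUBS_TOPIC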