This tutorial shows how to deploy a MigratoryData cluster on AWS using Terraform.


Ensure that you have an AWS account and have installed the following tools:

  • AWS CLI
  • Terraform
  • Git

Login to AWS

Run the following command and follow the on-screen instructions to configure your AWS credentials:

aws configure
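If you prefer a non-interactive setup, the same values can be written with aws configure set; the access key values below are placeholders, not real credentials:

```shell
# Non-interactive alternative to the interactive prompts above.
# Replace the placeholder values with your own credentials.
aws configure set aws_access_key_id AKIAEXAMPLE
aws configure set aws_secret_access_key wJalrEXAMPLEKEY
aws configure set region us-east-1
```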

Configure the deployment

Clone the MigratoryData repository containing the Terraform configuration files:

git clone
cd terraform-aws-migratorydata/deploy

If necessary, update the configuration files in the deploy/configs directory; see the Configuration guide for more details. If you have developed custom extensions, add them to the deploy/extensions directory.

Update the terraform.tfvars file to match your configuration. The following variables are required:

  • region - The AWS region where the resources will be deployed.
  • availability_zone - The availability zone where the resources will be deployed.
  • namespace - The namespace for the resources.
  • address_space - The address space for the virtual network.
  • num_instances - The number of nodes to start the MigratoryData cluster.
  • max_num_instances - The maximum number of MigratoryData nodes to which the deployment can scale when necessary.
  • instance_type - The type of the virtual machines to be deployed.
  • ssh_private_key - The path to the private key used to access the virtual machines.
  • migratorydata_download_url - The download URL for the MigratoryData package.

region = "us-east-1"
availability_zone = "us-east-1a"

namespace = "migratorydata"
address_space = ""

num_instances = 3
max_num_instances = 5

instance_type = "t2.large"
ssh_private_key = "~/.ssh/id_rsa"

migratorydata_download_url = ""

SSH keys

For Terraform to install all the necessary files on the VM instances, you must provide the private key used to access them.

You can generate a new SSH key pair using the ssh-keygen command on your local machine, and then pass the path of the private key to the Terraform deployment via the ssh_private_key variable. Here’s how you can do it:

ssh-keygen -t rsa -b 4096 -C ""

This generates a key pair in the ~/.ssh directory: a public key, ~/.ssh/id_rsa.pub, and a private key, ~/.ssh/id_rsa.

Update the terraform.tfvars file with the path to the private key.
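If you would rather not reuse ~/.ssh/id_rsa, you can generate a dedicated key pair for this deployment; the file path and comment below are illustrative, not required names:

```shell
# Generate a dedicated RSA key pair for this deployment.
# -N '' sets an empty passphrase; -f sets an illustrative output path.
rm -f /tmp/migratorydata_key /tmp/migratorydata_key.pub
ssh-keygen -t rsa -b 4096 -N '' -f /tmp/migratorydata_key -C 'migratorydata-deploy' -q
# Both the private and the public key should now exist.
ls /tmp/migratorydata_key /tmp/migratorydata_key.pub
```

Then point ssh_private_key in terraform.tfvars at whichever private key path you chose.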

Deploy MigratoryData

Initialize Terraform:

terraform init

Check the deployment plan:

terraform plan

Apply the deployment plan:

terraform apply

Verify deployment

You can access the MigratoryData cluster using the DNS name of the Network Load Balancer (NLB). You can find it in the AWS console, or under migratorydata_cluster_address in the output of the following:

terraform output 
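For scripting, a single output value can be read without quotes using the -raw flag (available in Terraform 0.14 and later); for example, to capture the cluster address from the output above:

```shell
# Read one output value in plain form, suitable for use in scripts.
CLUSTER_ADDRESS=$(terraform output -raw migratorydata_cluster_address)
echo "$CLUSTER_ADDRESS"
```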

You can also SSH into the virtual machines using their public IP addresses, listed under the cluster-nodes-public-ips output:

ssh -i ssh_private_key admin@machine_public_ip


To scale the deployment, update the num_instances variable in the terraform.tfvars file and run the following commands:

terraform plan
terraform apply
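The scaling change itself is a one-line edit to terraform.tfvars. The sketch below demonstrates the edit on a throwaway demo file rather than your real configuration (GNU sed syntax); presumably num_instances should stay at or below max_num_instances:

```shell
# Demo of the scaling edit on a throwaway file: num_instances 3 -> 4.
printf 'num_instances = 3\nmax_num_instances = 5\n' > /tmp/demo.tfvars
sed -i 's/^num_instances = .*/num_instances = 4/' /tmp/demo.tfvars
cat /tmp/demo.tfvars
```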


To destroy the deployment, run the following command:

terraform destroy

Build realtime apps

Use any of MigratoryData’s client APIs to develop real-time applications that communicate with this MigratoryData cluster.