Set up a Nomad cluster on Azure
This tutorial guides you through deploying a Nomad cluster on Azure with access control lists (ACLs) enabled. Consider reviewing the cluster setup overview first, as it covers the contents of the code repository used in this tutorial.
Prerequisites
For this tutorial, you need:
- Packer 1.9.4 or later
- Terraform 1.2.0 or later
- Nomad 1.7.7 or later
- An Azure account configured for use with Terraform (a sample az CLI setup follows this list)
- az CLI 2.60.0 or later
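If your Azure account is not yet configured for Terraform, one common approach is to log in with the az CLI and export the credentials as environment variables for the azurerm provider. The service principal name below is only an example; Packer's Azure builder can also reuse your az CLI session, depending on how the image template is configured.
$ az login
$ export ARM_SUBSCRIPTION_ID=$(az account show --query id --output tsv)
$ export ARM_TENANT_ID=$(az account show --query tenantId --output tsv)
$ az ad sp create-for-rbac --name nomad-cluster-tutorial --role Contributor --scopes /subscriptions/$ARM_SUBSCRIPTION_ID
$ export ARM_CLIENT_ID=<appId from the output above>
$ export ARM_CLIENT_SECRET=<password from the output above>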
Clone the code repository
The cluster setup code repository contains configuration files for creating a Nomad cluster on Azure. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
$ git clone https://github.com/hashicorp-education/learn-nomad-cluster-setup
Navigate to the cloned repository folder.
$ cd learn-nomad-cluster-setup
Navigate to the azure folder.
$ cd azure
Create the Nomad cluster
There are two main steps to creating the cluster: building a virtual machine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require that you configure variables before you run commands. The variables.hcl.example file contains the configuration you need for this tutorial.
Update the variables file for Packer
Rename variables.hcl.example to variables.hcl and open it in your text editor.
$ mv variables.hcl.example variables.hcl
Update the location variable with your preferred Azure datacenter location. In this example, the location is eastus. The remaining variables are for Terraform, and you update them after building the VM image.
azure/variables.hcl
# Packer variables (all are required)
location = "eastus"
## ...
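If you are unsure which location string to use, the az CLI can list the regions available to your subscription. Use a value from the Name column for the location variable.
$ az account list-locations --output table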
Create an Azure resource group
Packer needs an existing resource group to build the VM image. Initialize Terraform to download the required plugins and set up the workspace.
$ terraform init
Initializing the backend...
# ...
Initializing provider plugins...
# ...
Terraform has been successfully initialized!
# ...
Next, run Terraform in target mode so that it only deploys the resource group for Packer to use. Enter yes to confirm the run.
$ terraform apply -target='azurerm_resource_group.hashistack' -var-file=variables.hcl
## ...
Terraform will perform the following actions:

  # azurerm_resource_group.hashistack will be created
  + resource "azurerm_resource_group" "hashistack" {
      + id       = (known after apply)
      + location = "eastus"
      + name     = "hashistack"
    }
Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Warning: Resource targeting is in effect
│
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_resource_group.hashistack: Creating...
azurerm_resource_group.hashistack: Creation complete after 1s [id=/subscriptions/dc876065-5a56-4f4e-b8c8-eff71e1071e1/resourceGroups/hashistack]
╷
│ Warning: Applied changes may be incomplete
│
## ...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
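Optionally, confirm that the resource group now exists before building the image. The name hashistack matches the resource group created by the targeted apply above.
$ az group show --name hashistack --output table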
Build the VM image
Now that there is an existing resource group, Packer is ready to build the VM image. First, initialize Packer to download the required plugins.
$ packer init image.pkr.hcl
Then, build the image and provide the variables file with the -var-file flag.
$ packer build -var-file=variables.hcl image.pkr.hcl
# ...
Build 'azure-arm.hashistack' finished after 7 minutes 46 seconds.
==> Wait completed after 7 minutes 46 seconds
==> Builds finished. The artifacts of successful builds are:
--> azure-arm.hashistack: Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: hashistack
ManagedImageName: hashistack.20221202190723
ManagedImageId: /subscriptions/dc876065-5a56-4f4e-b8c8-eff71e1071e1/resourceGroups/hashistack/providers/Microsoft.Compute/images/hashistack.20221202190723
ManagedImageLocation: eastus
Packer outputs the specific VM image name once it finishes building the image. In this example, the value is hashistack.20221202190723.
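If you lose track of the image name, you can also list the managed images in the resource group with the az CLI; the Name column contains the value to use in the next step.
$ az image list --resource-group hashistack --output table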
Update the variables file for Terraform
Open variables.hcl in your text editor and update the image_name variable with the value output from the Packer build contained in ManagedImageName. In this example, the value is hashistack.20221202190723.
azure/variables.hcl
# ...
image_name = "hashistack.20221202190723"
# These variables will default to the values shown
# and do not need to be updated unless you want to
# change them
# allowlist_ip = "0.0.0.0/0"
# name = "nomad"
# server_instance_type = "Standard_B1s"
# server_count = "3"
# client_instance_type = "Standard_B1s"
# client_count = "3"
The remaining variables in variables.hcl are optional; an example of overriding them follows this list.
- allowlist_ip is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports 8500 and 4646 as well as SSH on port 22. The default value of 0.0.0.0/0 allows traffic from everywhere.
- name is a prefix for naming the Azure resources.
- server_instance_type and client_instance_type are the virtual machine instance types for the cluster server and client nodes, respectively.
- server_count and client_count are the number of nodes to create for the servers and clients, respectively.
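As an example of overriding the optional variables, the snippet below restricts UI and SSH access to a single workstation and reduces the client count. The CIDR value is a placeholder; substitute your own public IP address.
azure/variables.hcl
# ...
# Example overrides (placeholder values, not required by this tutorial)
allowlist_ip = "203.0.113.7/32"
name         = "nomad"
client_count = "2"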
Deploy the Nomad cluster
Run the Terraform deployment and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. The Consul and Nomad web interfaces are available upon completion.
$ terraform apply -var-file=variables.hcl
# ...
Plan: 34 to add, 0 to change, 0 to destroy.
# ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
# ...
Apply complete! Resources: 34 added, 0 changed, 0 destroyed.
Outputs:
IP_Addresses = <<EOT
Client public IPs: 52.91.50.99, 18.212.78.29, 3.93.189.88
Server public IPs: 107.21.138.240, 54.224.82.187, 3.87.112.200
The Consul UI can be accessed at http://107.21.138.240:8500/ui
with the bootstrap token: dbd4d67b-4629-975c-e9a8-ff1a38ed1520
EOT
consul_bootstrap_token_secret = "dbd4d67b-4629-975c-e9a8-ff1a38ed1520"
lb_address_consul_nomad = "http://107.21.138.240"
Verify that the services are in a healthy state. Navigate to the Consul UI in your web browser using the URL from the Terraform output.
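If you prefer the command line over the browser and have the consul binary installed locally, you can run the same check by pointing the Consul CLI at the load balancer address and bootstrap token from the Terraform output. The values below are the example values from this tutorial's output.
$ export CONSUL_HTTP_ADDR=http://107.21.138.240:8500
$ export CONSUL_HTTP_TOKEN=dbd4d67b-4629-975c-e9a8-ff1a38ed1520
$ consul members
All server and client nodes should report a status of alive.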