Terraform Azure DevOps YAML Pipeline 0_o Part I

If you happen to be using terraform to manage your Azure infrastructure, and you are looking to start using the newer (at the time of writing) Azure DevOps YAML pipelines to create continuous integration/continuous deployment pipelines, then you are in luck!

This post assumes some knowledge of terraform and Azure DevOps.  The same ideas can be extended to your favorite build system.

Terraform is a powerful Infrastructure as Code (IaC) automation tool from Hashicorp.  It provides a state system that allows you to version your infrastructure and work on it collaboratively across a team or teams. Terraform is also a pleasure to work with compared to ARM templates, in my opinion.  The main thing I personally like about it is the confidence it provides when actually creating and modifying your infrastructure.

YAML build pipelines are a declarative way to define your build definitions.  What makes YAML builds handy for use in DevOps is that they live in the code base, next to the code, rather than entirely in Azure DevOps.  This means the build definition is shared _code_ that can now be versioned right alongside the rest of your code.

Setting it up


  • Azure DevOps repo
  • Azure DevOps service connection to Azure, or a service principal in Azure
  • Azure storage account for remote terraform state
  • YAML build pipelines

The basic idea is as follows:

  • Create a YAML pipeline to execute terraform plan on a trigger (branch commit, timer, etc.)
    • Your build trigger can either be defined in the YAML or on the build definition itself.
    • Add our plan output to the build summary so that we can review it
  • Upon a successful plan, store the .tfplan as an artifact (more on this later)
  • Create a release via the build
  • Execute the .tfplan we stored as an artifact during the build

With automation of infrastructure comes a certain amount of risk. In the early stages of automating an infrastructure pipeline, to exercise caution, I find it handy to turn on approvals for the release.  This way, you can get the release created from the artifact(s), validate that all is well, then run your approvals and apply your infrastructure changes.

To start, we will need an Azure DevOps service connection to your target Azure subscription, or a service principal with the proper rights to your target Azure subscription.  Have your Azure DevOps admin create one for you if you do not already have one.
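If you are creating the service principal yourself, a hedged sketch with the az CLI (the name below is a placeholder, and you will need permission to create service principals in your tenant):

```shell
$ az ad sp create-for-rbac --name "terraform-pipeline" \
    --role Contributor \
    --scopes "/subscriptions/<subscription id>"
```

The output includes the appId (client id), password (client secret), and tenant you would use when configuring the connection.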

In an Azure DevOps repo, create a file called azure-pipelines.yml. Add the following YAML snippet, modifying the azureSubscription to identify the service connection and the name to whatever you like. We then call a script called plan.sh:

name: terraform-yaml-pipeline example build

trigger:
- master
- develop

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- task: AzureCLI@1
  inputs:
    azureSubscription: 'kevin-cloud'
    scriptPath: plan.sh
    addSpnToEnvironment: true
  env:
    ARM_STORAGE_ACCOUNT_NAME: "kevindemostorage"
    ARM_STORAGE_CONTAINER: "terraform-state"
    ARM_STORAGE_KEY: "dev.tfstate"

Pay special attention to the addSpnToEnvironment line in the AzureCLI task.

Here we are using the ADO AzureCLI task to execute plan.sh, a script we will create to house terraform commands and set variables:

#!/usr/bin/env bash
export ARM_SUBSCRIPTION_ID=$(az account show --query="id" -o tsv)
export ARM_CLIENT_ID="${servicePrincipalId}"
export ARM_CLIENT_SECRET="${servicePrincipalKey}"
export ARM_TENANT_ID=$(az account show --query="tenantId" -o tsv)

export ARM_ACCESS_KEY=$(az storage account keys list -n ${ARM_STORAGE_ACCOUNT_NAME} --query="[0].value" -o tsv)

terraform init -backend-config="storage_account_name=$ARM_STORAGE_ACCOUNT_NAME" \
    -backend-config="container_name=$ARM_STORAGE_CONTAINER" \
    -backend-config="key=$ARM_STORAGE_KEY"
terraform plan -out=demo.tfplan
addSpnToEnvironment: true

Setting addSpnToEnvironment to true makes the service connection's credentials available to the script as the servicePrincipalId and servicePrincipalKey environment variables used above. If you want to use a service principal other than the service connection's, this is where you would wire it up. The four environment variables required for terraform (ARM_SUBSCRIPTION_ID, ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID) are documented here.
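If you do bring your own service principal, a minimal sketch of that wiring (every value below is a placeholder, not a real credential):

```shell
# Hypothetical: export the four ARM_* variables that terraform's azurerm
# provider reads, instead of deriving them from the service connection.
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_SECRET="not-a-real-secret"
export ARM_TENANT_ID="22222222-2222-2222-2222-222222222222"
```

In a real pipeline you would populate these from secret pipeline variables rather than hard-coding them.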

For us to collaborate with others, terraform needs to have a shared state.  In this example we use a storage account. You can create this storage account however you want (preferably using terraform 😉), but to keep this post shorter, let's just assume we have a storage account. We set these values as env vars on the Azure CLI task, and use them in the script for terraform init and terraform plan.
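If you do want to manage the state storage account itself with terraform, a hedged sketch (names, region, and SKU are assumptions; this bootstrap configuration would use local state, since the remote backend it creates does not exist yet):

```hcl
# Hypothetical bootstrap config for the remote-state storage account.
resource "azurerm_resource_group" "rg_state" {
  name     = "rg-terraform-state"
  location = "eastus2"
}

resource "azurerm_storage_account" "state" {
  name                     = "kevindemostorage"
  resource_group_name      = "${azurerm_resource_group.rg_state.name}"
  location                 = "${azurerm_resource_group.rg_state.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "state" {
  name                  = "terraform-state"
  resource_group_name   = "${azurerm_resource_group.rg_state.name}"
  storage_account_name  = "${azurerm_storage_account.state.name}"
  container_access_type = "private"
}
```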

An important bit to note is here:

export ARM_ACCESS_KEY=$(az storage account keys list -n ${ARM_STORAGE_ACCOUNT_NAME} --query="[0].value" -o tsv)

We set ARM_ACCESS_KEY using the az cli. This is required for the terraform azure remote state backend which we will define later. By leveraging the already authenticated cli (from the AzCli task) to set the env variable, we have one less environment variable/password for us to worry about passing around and needing set. This is also nice if the storage account keys get rotated every so often…your build won’t break!

In another file, main.tf, we will configure our remote state backend, specify that we will be using the azurerm provider, and create a simple resource: a basic Azure resource group.

terraform {
  backend "azurerm" {}
}

provider "azurerm" {
  version = "~>1.21"
}

resource "azurerm_resource_group" "rg_demo" {
  name     = "rg-kevin-vsts"
  location = "eastus2" # any Azure region will do
}

We specify azurerm for our backend, and we specify azurerm as a provider so that we can use azurerm_resource_group to create an Azure resource.

Commit these files and push them to your Azure DevOps repo.

yaml pipeline

After committing these two files, we notice that AzDO has created a build for us. In my case, it created the build name with the convention of “<reponame> CI” (space between reponame and CI). We can also edit the YAML from here. If you have an existing build that you are wiring up to a YAML pipeline, go ahead and set the build up now to use the YAML file in your repo.

If all went well, we can go to the build summary and should see something like this:

build summary

If we click on the AzureCLI summary, we will see something like this:

terraform plan output

Now that we have our demo.tfplan, we want to publish this artifact so that we can use it in a release. We will learn how to do the release half of this pipeline in part II where we tie it together!
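As a preview, the publish step could be as simple as adding a PublishBuildArtifacts task after the AzureCLI task (the artifact name here is an assumption):

```yaml
# Hypothetical: publish the plan file produced by plan.sh as a build artifact.
- task: PublishBuildArtifacts@1
  inputs:
    pathtoPublish: 'demo.tfplan'
    artifactName: 'terraform-plan'
```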

Setting Up Hashicorp Vault in Azure Container Instance

Secret management can be extremely difficult. There are many things involved with managing secrets: expiration, revocation, access control, key rotation, etc. These things are difficult and error prone if done by hand. Thankfully, there are some tools out there to assist us in our desire to remain secure from the first line of code! One of those tools, Hashicorp Vault, provides a tremendously flexible, verifiably secure, centrally managed secret management engine that can be adapted to just about any use case. In this article, I will show you how to set up a simple Hashicorp Vault using Docker and Azure Container Instance (ACI) and access it using a service connection from Azure DevOps.

Pre-reqs for this walkthrough:

  • An Azure subscription and the Azure CLI
  • An Azure storage account with a file share (for Vault's persistent data)
  • The Vault CLI
We have many different secrets, for many different resources, that we need to maintain securely for an organization and its applications. These secrets could be certificates, usernames and passwords to databases, access credentials to a cloud, or even ssh keys.

Where should we put these secrets? Hashicorp Vault

Hashicorp Vault (Vault from here on out) is an extremely powerful secret management platform. One of Vault’s best features is that we can use it to dynamically create secrets based on policies using secret engines. To Vault, not all secrets are stored as encrypted usernames/passwords: they can be created on demand from a variety of services and devices that require credentials. For example, Vault can talk directly to PostgreSQL to dynamically create a new username and password _on demand_ with a pre-determined expiration. Another secret engine, KV, can be used to store the more traditional key/value based secrets. This is the secret engine we will use in this article.

Much like the secret engines allow us pluggable secret sources, Vault provides auth methods, which are configurable authentication mechanisms for Vault itself. Examples of auth methods that we can use are username/password, token, or even Azure Active Directory credentials. We can configure both how we authenticate to Vault and which secret engines Vault supplies. This makes for a highly configurable and flexible secret store.

For this example we will just deploy a basic Vault to Azure Container Instance (ACI) using a container image from Docker Hub, with a storage account file share for persistence so that we don’t lose our Vault configuration and data. (This is not a highly available setup, but it will be enough for our development and testing.)

Update the RG (resource group), SA_NAME (storage account name), and FILE_SHARE (file share name) in the code below using values from your storage account and file share (you will need to create a file share if one doesn’t exist). Also, make sure to update the DNS_LABEL and LOCATION; the DNS label will be part of the URL to our new Vault instance once the ACI is created.

#!/usr/bin/env bash

RG="<resource group name>"
VAULT_IMAGE="dankydoo/vault-test" # preconfigured vault image for ACI
SA_NAME="<storage account name>"
FILE_SHARE="<file share name>"
DNS_LABEL="<dns label>"
LOCATION="<azure region, e.g. eastus2>"

SA_KEY=$(az storage account keys list -g ${RG} -n ${SA_NAME} --query "[0].value" -o tsv)

az container create \
--name "vault-test" \
--resource-group $RG \
--image $VAULT_IMAGE \
--dns-name-label $DNS_LABEL \
--ports 8200 \
--location $LOCATION \
--azure-file-volume-account-name $SA_NAME \
--azure-file-volume-account-key $SA_KEY \
--azure-file-volume-share-name $FILE_SHARE \
--azure-file-volume-mount-path "/opt/vault/data"

After a few minutes, you will have a live Vault instance accessible at:

http://<DNS_LABEL>.<LOCATION>.azurecontainer.io:8200

In the case of the example, Vault is available at: http://vault-azdo.eastus2.azurecontainer.io:8200/.

Vault Start Page

The first time that Vault starts, it is sealed. When the Vault is sealed, nothing can be accessed until it is unsealed. The data within Vault is encrypted with a master key that can be reconstituted from a number of key shares using an algorithm called Shamir’s Secret Sharing. The basic gist is that when we initialize Vault (which we do only once, at creation time), we specify how many key shares we want, and how many of those key shares it should take to unseal. The idea is that no one person should ever really hold all of the keys to the kingdom! An example: we initialize with 5 keys, and we specify that it takes 2 keys to unseal the Vault. We can distribute these 5 keys to 5 trusted individuals, and any two of them can coordinate to unseal the Vault.

Vault Initialization

We will go ahead and enter 5 for key shares, and 2 for the key threshold. Vault will now present us with the root token (admin password) and the 5 key shares as requested. Record these now!! Once we leave this screen, they can never be seen again. Treat them very securely, like you would root certificates! We could have also used the vault cli to do this.
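For reference, the CLI equivalent of that initialization form, run against an uninitialized Vault, looks like:

```shell
$ vault operator init -key-shares=5 -key-threshold=2
```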

Your root token will look something like: s.FIIteDEOVQDJeTJ8tifmyWER and the key shares will be of the format: zTTsqyXXb3KskiXXSSQATxxHN/aZYkiPEjN87vk+zgtz

Now that we have generated our keys, we need to actually unseal the Vault. We will use the Vault CLI which can be downloaded from here. Select the download that is appropriate for your system then place the vault binary in your path. A nice feature of vault is that the CLI, server, and agent are all contained in the same executable. It makes managing vault a bit easier.

I cannot stress enough to keep the root token (super user/admin) and key shares protected. In fact, it’s best to enable username/password authentication and delete the root token as soon as possible so that it isn’t shared around or lost.

Now that we have vault in the path, we need to point it at our newly created Vault running in Azure Container Instance. Remember from earlier that the example vault deployed to http://vault-azdo.eastus2.azurecontainer.io:8200. To have the CLI use this Vault instance, we set the VAULT_ADDR environment variable equal to this address.

export VAULT_ADDR=http://vault-azdo.eastus2.azurecontainer.io:8200

We can test this by typing: vault status

Key                Value
---                -----
Seal Type          shamir
Initialized        true
Sealed             true
Total Shares       5
Threshold          2
Unseal Progress    0/2
Unseal Nonce       n/a
Version            1.1.1
HA Enabled         false

We notice that our vault is initialized, but it’s still sealed. We need to use the keys we created earlier to unseal it and be on our way! We do this by typing vault operator unseal:

$ vault operator unseal
Unseal Key (will be hidden):
$ vault login
Token (will be hidden):

We go ahead and enter any two of our key shares, and the vault unseals. Now that we have unsealed our vault, we need to log into the Vault. Here, we enter the root token.

Now that we have an unsealed vault that we are logged into, it’s time to apply another important Vault concept: policies. Policies allow us to declaratively describe who has access to what secrets. We can store our policies in HCL files and version them just like code. Then we have a source of truth to be able to see who has access to what in our vault.

Initially, all we have with a fresh vault is a root token with all permissions, no policies, and no secret engines. Let’s create our first policy for admins:

path "*" {
        policy = "sudo"
}
We save this into a file called admins.hcl and create the policy:

vault policy write admins admins.hcl
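In practice, most policies are far narrower than this admin catch-all. A hypothetical read-only policy for a single app's secrets might look like:

```hcl
# Hypothetical: read-only access to one kv path.
path "secrets/myapp" {
        capabilities = ["read", "list"]
}
```

which we would save as, say, myapp-ro.hcl and load with vault policy write myapp-ro myapp-ro.hcl.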

We need to next enable the standard key/value secret engine. We do this by running:

vault secrets enable -path=/secrets kv

Let’s go ahead and put our first secret into the vault. In the previous step we mounted the kv secret engine onto the /secrets path. We will write our secrets to this path:

$ vault kv put secrets/myapp apikey=xyzabc anothersecret=blahblah
Success! Data written to: secrets/myapp

Next we need to set up username/password authentication to our Vault and then disable the root token. We do this by enabling our first auth method: userpass. Run the following commands to enable the userpass auth method and add an admin user:

$ vault auth enable userpass 
Success! Enabled userpass auth method at: userpass/
$ vault write auth/userpass/users/<username> \
    password=<password> \
    policies=admins
Success! Data written to: auth/userpass/users/<username>

In Vault, everything is a path. Whether it’s an auth method, or a secret engine, it is accessed by path.

We can see in the above example, we gave our new user the “admins” policy. This links our new user to the admin policy we created in the earlier steps. We can go ahead and create a few admin users for our main keyholders. For this example, we will stick with the single user. We test our new user to make sure it’s setup correctly:

vault login -method=userpass username=<username> password=<password>

Now that we have a username and password, let’s revoke that root token so that it’s taken care of!

$ vault token revoke <root token>
Success! Revoked token (if it existed)

If everything has gone as planned, we will be logged in as our user, and we should be able to read the secret we added earlier:

$ vault kv list /secrets
$ vault kv get /secrets/myapp
======== Data ========
Key              Value
---              -----
anothersecret    blahblah
apikey           xyzabc

We now have a fully configured, hands-off vault instance that we can learn from and utilize in our applications!


In this article, we:

  • Created a publicly accessible Vault instance using the Azure Container Instance PaaS
  • Initialized the Vault
  • Unsealed the Vault
  • Enabled the userpass auth method and mounted the kv secret engine
  • Added a secret
  • Created a vault policy
  • Created a vault user associated with that policy
  • Logged in as this new user and retrieved our secrets!

With this setup, you’ll be able to get a great feel for Vault and its capabilities. In a future article, I will show you how to authenticate to this Vault instance using an Azure Managed Service Identity (MSI) from your build pipeline.