Automate the deployment of .NET Core 2 Docker containers to Kubernetes with Azure Container Service and Azure Container Registry using VSTS

The previous blogpost covered all steps to create a Docker Image from a .NET Core 2 WebAPI application on your local machine. After that, the Docker Image was pushed to Azure Container Registry (ACR). The deployment to Kubernetes pulled this Docker Image from ACR and ran a number of instances. All steps were executed manually. Let’s automate this using VSTS.

In this blogpost the Continuous Delivery of the .NET Core 2 WebAPI application is automated in VSTS. The next blogpost will show how to automate the Continuous Delivery of the Azure Resources. Also expect a blogpost on how to deal with application configuration that is environment specific for the running container.

Prerequisites

The solution should be available in Git source control in VSTS.

Two CI/CD pipelines

The infrastructure we need in Azure to deploy the Docker images to, Kubernetes, is needed for all Docker images we deploy. That is why it gets a separate CI/CD pipeline in VSTS. The same goes for Azure Container Registry: this is infrastructure that doesn’t belong to a single application.

The second pipeline is for the MyWebApi application, which results in a single Docker Image. If this application needed infrastructure like a CosmosDB database, the deployment of that infrastructure would be included in this build. But for now the application is simple and doesn’t need any other Azure resources.

Connection to Kubernetes from VSTS

VSTS needs to deploy to Kubernetes. To make this possible we are going to configure a Service Endpoint:

Navigate to the settings of the project by clicking the gear icon:
navigate to settings
Select the Services tab.
release add kubernetes connection
Click “Add Service Endpoint” and choose Kubernetes.

kubernetes authentication vsts

Connection name: any name you like
Server URL: You can find the URL in the Azure Portal on the Overview tab of the Azure Container Service. Copy the Master FQDN. Prefix this with https://
Kubeconfig: Open a command prompt and run the same command you use for connecting, with an extra argument --file:
az acs kubernetes get-credentials --resource-group=k8sdemo-containers --name=pn-k8sdemo-cs-dev-we --ssh-key-file "C:\Users\Pascal Naber\Documents\myacs\opensshprivatekey" --file ~/kubeconfig

Open the kubeconfig file and copy its content into the Kubeconfig textbox.

Click OK and VSTS is configured to connect to your Kubernetes cluster.

Create a build for the .NET Core 2 application

Add a new Build definition.

Create build core

Choose the ASP.NET Core template. This will give you a good start.
Give the build a meaningful name.
Remove the Restore task, because .NET Core 2 already restores during the build.

Create build process
Click on Process and select the Hosted Linux Preview Agent Queue.

Create build core core task

Because we use .NET Core 2, we need to add the “.NET Core Tool Installer” task as the first task. Set the version to 2.0.0.
Because this demo doesn’t contain unit tests, I’ve disabled the Test task for now.

Select the Publish task. By default the output is configured as follows: --output $(build.artifactstagingdirectory). Our Dockerfile expects the sources in “obj/Docker/publish” by default. We can do two things now: change the source path by passing a parameter while creating the container Image, or change the output of the Publish task to the path where the Dockerfile expects it. I choose the latter.

Create build core publish

So change the output to: .\obj\Docker\publish (note the capital D: Linux is case sensitive!).
Select the 2.* preview version of the task, because the other versions add the name of the project to the output path. Then uncheck “Add project name to publish path” for the same reason: we just want our output published to the path we configure. We don’t want to publish web projects, so uncheck that option, and we don’t need a zip of the published projects, so uncheck that too.

To build the container image I’m going to use the Docker Compose file. A big advantage of this choice is that if you add more services to your solution, the build already supports them and does not need to be changed.

We need to change the docker-compose.yml file so that it contains the full name of the Image we need.

docker compose file
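
For reference, the docker-compose.yml then looks roughly like this (a sketch: the registry and image names are the ones used in this series, and the build context assumes the default Visual Studio project layout):

version: '3'
services:
  mywebapi:
    image: myacr.azurecr.io/myservice/mywebapi
    build:
      context: ./MyWebApi
      dockerfile: Dockerfile
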
Add the Docker Compose task to the build definition.
Create build create image

Configure the Action: select “Build service images”. To identify a version of the Image, I like to use the BuildId as the version of the Image. To do this, the “Additional Image Tags” textbox should contain: $(Build.BuildId). Also apply the latest tag to the Image by checking “Include Latest Tag”.

The image should also be pushed to Azure Container Registry (ACR). To do this add another Docker Compose task to the build.

Create build push image a

Select the subscription where the Azure Container Registry is located. After this you can select the correct Azure Container Registry.

Create build push image b

This time choose the Action “Push service images”. Add “$(Build.BuildId)” to the Additional Image Tags and check Include Latest Tag.

Now we have to prepare the Deployment of the Image to Kubernetes with the Deploy.yml file. Remember from the previous blogpost that we have configured a fixed version for the Image to deploy:

Deployyml manual

Because the Image that Kubernetes has to deploy is tagged with the BuildId, we need to change this fixed number into a placeholder for the dynamic BuildId. Change the file to:
Deployyml build
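
The relevant part of Deploy.yml then contains a token instead of a fixed tag, along these lines (#{...}# is the default token pattern of the Replace Tokens task that we add next):

    spec:
      containers:
      - name: mywebapi
        image: myacr.azurecr.io/myservice/mywebapi:#{ContainerVersion}#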

Now we need to replace this placeholder with the BuildId. To do this I’m using the Replace Tokens task by Guillaume Rouchon, which can be found in the marketplace. Add this task to the build definition.

create build replace token

Configure the Root directory: choose the directory where the Deploy.yml file is located. Then configure the Target files: type the name of the file, “Deploy.yml”.

create build replace token variables

Click on the Variables tab and add a variable with the name of the placeholder. In this case “ContainerVersion”. The value should be: “$(Build.BuildId)”.

Finally we have to create a build artifact of the Deploy.yml file. This way the Release that we are going to create can use this file.

create build copy files

Configure the Source Folder, the Contents and the Target Folder. The Target Folder should be configured as: “$(build.artifactstagingdirectory)”.

Queue the build.
The Deploy.yml file should be available as an Artifact, and the Azure Container Registry should contain the Image with both a version tag and a latest tag.
azure container registry after build
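
If you prefer the command line over the portal for this check, the Azure CLI can list the tags of the repository as well (assuming the registry and repository names used in this series):
az acr repository show-tags --name myacr --repository myservice/mywebapi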

You can configure the trigger of the CI build so it runs whenever the source code is pushed.

Create a release for the deployment to Kubernetes

Make sure you have read the prerequisite in this blogpost: Connection to Kubernetes from VSTS.

The first step is to add a new release.

release template.png

This time choose the “Empty” template.
Give the Environment a meaningful name.
Add the CI Build as an Artifact.

release add task

Add the “Deploy to Kubernetes” task and select it.

release config task a

Select the Kubernetes Service Connection which you have added as explained in the prerequisite.
Select the Azure subscription that contains the Azure Container Registry.
Select the Azure Container Registry.
The task is able to update the secret that contains the connection to Azure Container Registry. This is convenient, so add the name of the secret to the Secret name field.

release config task b
Select the “apply” Command.
Check “Use Configuration files”.
Select the Deploy.yml file as the Configuration File.

Give the release a meaningful name and queue a Release.

You are now able to enable the Continuous Deployment trigger. This ensures a new release is triggered as soon as the CI Build is available.

If you have finished this, you have a complete CI/CD pipeline with .NET Core 2 and Docker containers, which are deployed to Kubernetes using Azure Container Registry. As soon as the code is changed in Git, the build fires and creates a new Container Image. The build triggers the release, and the Container Image is deployed to Kubernetes with a rolling update. So there is no downtime! How cool is that?


Run .NET Core 2 Docker images in Kubernetes using Azure Container Service and Azure Container Registry

This blogpost shows you the bare minimal steps to run .NET Core 2 Docker images in Kubernetes. Kubernetes is hosted in Azure with Azure Container Service and we are using Azure Container Registry as our private Docker Hub.

In this blogpost all steps will be executed manually. Next time we automate the whole process with VSTS.

Prerequisites

The next items need to be installed (all of them are used in this post):

  • Azure CLI 2.0
  • Docker
  • kubectl
  • Visual Studio 2017

Besides this, Azure Container Registry and Azure Container Service need to be provisioned in Azure. For now create them manually in the Azure Portal.

After this make sure the ServiceAccount of Kubernetes can access Azure Container Registry.

Create an Azure Container Registry

To create an Azure Container Registry (ACR) navigate to the Azure Portal and add a new resource:
acr create
Search for Azure Container Registry and click on it.

acr create 2
Choose an available Registry name and enable the Admin user (the Admin user can also be enabled later).

Create a Kubernetes cluster with Azure Container Services

Make sure you have SSH keys. See my blogpost for the easiest way to do this on Windows.
Create a Service Principal and make sure you have the ApplicationId and password.

acs 1
Search for Azure Container Service

acs 2
Give the cluster a name and use an existing (empty) resource group or create a new one.

acs 3
Choose Kubernetes as the Orchestrator.
Enter the pem public SSH key or the single line public SSH key.
Enter the ApplicationId of the Service Principal.
Enter the secret of the Service Principal.

acs 4
Choose the number of agents that you like.

Prepare the Kubernetes cluster to have access to Azure Container Registry

When the cluster is successfully provisioned, you are going to change the configuration of the Service Account. Because Azure Container Registry (ACR) is a private Docker registry, the Kubernetes cluster cannot access it by default.

If you haven’t logged in to Azure yet, log in with az login.
If the cluster is in a different subscription than your default, change the subscription with: az account set -s ""

Login to your Kubernetes cluster with:
az acs kubernetes get-credentials --resource-group=pascalnaberacs --name=myacscluster --ssh-key-file "C:\blogpost\opensshprivatekey"
Enter the passphrase of the SSH key.

Add a secret to Kubernetes; this secret contains the credentials to connect to ACR:
kubectl create secret docker-registry acrconnection --docker-server=https://myacr.azurecr.io --docker-username=myacr --docker-password=r/DK=ijNIvTArT1yU1OlXxHiLMXA9UDY --docker-email=pnaber@xpirit.com

The name of the secret is: acrconnection
The credentials that you have to pass can be found on the admin tab of Azure Container Registry in the Azure Portal
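
If you prefer the command line, you can also retrieve these credentials with the Azure CLI (assuming the registry name myacr used in this post):
az acr credential show --name myacr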

Get the current configuration of the ServiceAccount
kubectl get serviceaccounts default -o yaml > ./serviceaccount.yml

Add this to the end of the serviceaccount.yml file:

imagePullSecrets:
- name: acrconnection

So the complete file looks like:

apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-09-12T08:01:14Z
  name: default
  namespace: default
  resourceVersion: "151"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: 8ca65600-9790-11e7-83bb-000d3a24e4fe
secrets:
- name: default-token-j5jzn
imagePullSecrets:
- name: acrconnection

Replace the current configuration of the ServiceAccount with this new one:
kubectl replace serviceaccount default -f ./serviceaccount.yml

Now when we deploy images located in our Azure Container Registry, the images can be pulled by Kubernetes.

Steps to run your .NET Core 2 WebAPI project as Docker Container in Kubernetes

Now we have all prerequisites in place. Let’s do the actual work to deploy a Docker Container to Kubernetes. The following picture shows the steps that need to be done. Every step is explained below.
k8s manual deployment

1. Create a .NET Core 2 WebAPI project

You can use the command prompt or Visual Studio Code; I’m using Visual Studio 2017:

Create Project
Create a new project: choose an ASP.NET Core Web Application.

2 Choose API Type

In the next step of the wizard, choose a Web API project. Note that ASP.NET Core 2.0 is selected, Enable Docker Support is checked and a Linux Docker Image will be created.

2. Publish the .NET Core 2 WebAPI project

We are not going to change the behavior of the code in the Web API controller. For now we go with the default implementation.

There are multiple ways to create a Docker Image.
One way is to use the Docker Compose file. To use this, build the docker-compose project in Release configuration. After building you will have a Docker image on your machine with the name of your project. Check this in the command prompt with: docker images
Using Docker Compose is the easiest way when you have multiple projects in your solution that have to become a Docker Image.
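
If you prefer the command line over building the docker-compose project in Visual Studio, running docker-compose from the solution folder that contains the compose files gives roughly the same result:
docker-compose build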

Another way is to use the Dockerfile directly. The Dockerfile references the obj/Docker/publish directory by default, so we are going to publish the project to this directory.
Right-click the project and choose Publish. Choose Folder publish:
Publish

Select the same path as the Dockerfile uses. So: obj/Docker/publish
The files are published to this directory.

An alternative to publish the ASP.NET Core 2 WebAPI project is to use the command prompt:
dotnet publish ./MyWebApi.csproj -c Release -o ./obj/Docker/publish

3. Create a Docker image of the .NET Core 2 WebAPI project

This step is already done if you have used Docker compose in the previous step.

Open a command prompt in the directory of the WebApi project.
Execute (where mywebapi is the name of your project) docker build -t mywebapi .
create image

You can see the image on your machine with the following command:
docker images

If you want to run the image on your local machine, run:
docker run -it -p 8080:80 mywebapi
Where 8080 is the local port and maps to port 80 of the Docker container.
You can access the WebApi project in the browser via:
http://localhost:8080/api/values

If you like, you can in the meantime open a new command prompt and see the Docker container running with: docker ps

4. Push the Docker image to the Azure Container Registry

It’s not possible to deploy a Docker container directly to Kubernetes. Kubernetes needs to get the image from a Docker repository. For example, Docker Hub is a public repository. Azure offers a private Docker repository with Azure Container Registry (ACR).

To successfully push the Docker image of the project to ACR, you need to prepare the tags of the image. The tag needs to contain the name of the ACR. So in this sample, where the name of the ACR is myacr, the name of the image will be: myacr.azurecr.io/myservice/mywebapi
where myservice is only a name to group Container Images that belong together.

For this step, multiple ways are possible as well:
You can add a tag to an existing image:
docker tag mywebapi myacr.azurecr.io/myservice/mywebapi
Tag the image with a version as well:
docker tag mywebapi myacr.azurecr.io/myservice/mywebapi:1

An alternative to adding tags afterwards is to apply multiple tags while building the image:
docker build -t mywebapi -t myacr.azurecr.io/myservice/mywebapi -t myacr.azurecr.io/myservice/mywebapi:1 .

Note: it’s not required to apply a version tag to the Image, but it’s a best practice so you know which version you are using, because the next time you upload the same image, that one will become latest.

Take a look at the results by executing docker images.
docker images
You can see 3 images with the same IMAGE ID. Two of them have a latest tag and one has tag 1.

Now that the images are prepared with the correct tags, you can push them to ACR.

a. Get the password of the admin user in the Azure Portal:
acr admin
If you forgot to enable the admin account during the creation of ACR, you can do it now.

b. Login to ACR from the command prompt:
docker login myacr.azurecr.io -u myacr -p r/DK=ijNIvTArT1yU1OlXxHiLMXA9UDY

c. Push the image to ACR
docker push myacr.azurecr.io/myservice/mywebapi
and
docker push myacr.azurecr.io/myservice/mywebapi:1
Push image
Note that pushing the second version of the Image is drastically faster, because of the layers that are already known in ACR.

d. Take a look in the Azure Portal, tab Repositories:
acr repository

extra:

If you like, you can now also get the image from ACR on your local machine and run it.
Because you already have this image, delete it from your local machine first.
(the parameter is the first part of the IMAGE ID)
docker rmi fb5

If you get an error, that’s because a container based on the image has run.
Check this with docker ps -a
And remove the container with: (the parameter is the first part of the CONTAINER ID)
docker rm 67a 

Check with docker images that the images are not available on your local machine anymore.

Get the image from ACR (you should be logged in already):
docker pull myacr.azurecr.io/myservice/mywebapi:1

Run the image.
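
For example, with the tag pulled above and the same port mapping as before:
docker run -it -p 8080:80 myacr.azurecr.io/myservice/mywebapi:1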

5. Execute a Deployment file to instruct Kubernetes to host the Docker Container

Create a Deployment file which declaratively tells Kubernetes what to do. In this case we want Kubernetes to run 3 containers of the mywebapi image.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebapi-deployment
spec:
  replicas: 3  
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1 
  template:
    metadata:
      labels:
        app: mywebapi
    spec:
      containers:
      - name: mywebapi
        image: myacr.azurecr.io/myservice/mywebapi:1        
        ports:
        - containerPort: 80
        imagePullPolicy: Always   

Change the file according to your needs and save it as Deploy.yml

Login to your Kubernetes cluster as described in the “Prepare the Kubernetes cluster to have access to Azure Container Registry” section earlier in this blogpost.

Execute the deployment with (--save-config makes it possible to see the history of deployments): kubectl create -f Deploy.yml --save-config

The “create” argument only has to be passed the first time you deploy. Subsequent deployments should be executed with: kubectl apply -f Deploy.yml

Let’s see if the deployment succeeded.
This can be done on the command line and also in the UI.
a. With the command line
See the deployments:
kubectl get deployments
See the pods that are available now: (you will see 3 pods)
kubectl get pods

b. With the UI
Start the UI proxy:
kubectl proxy
Open the UI in the web browser:
http://127.0.0.1:8001/ui

Navigate to the Deployments, the pods and you can also see the secret you created.

The pods should be running, but you can’t access the WebApi right now. To create a public endpoint, do the following (this only has to be done once):

Create a new deployment file, this time for a Service. The file looks like this:

apiVersion: v1
kind: Service
metadata:
  name: myapiservice
spec:
  ports:
    - port: 80
  selector:
    app: mywebapi
  type: LoadBalancer

 

Make sure that the selector app in the Service is the same as the template metadata labels app name in Deploy.yml.
Save the file as DeployService.yml
Create the Deployment:
kubectl create -f DeployService.yml

The creation of the Service is fast, but to get a public IP-Address takes a while (3 minutes or so), because Kubernetes is communicating with the load balancer in Azure to get a public IP address.

Check the results in the UI or command line:
kubectl get services
service
The EXTERNAL-IP is <pending> for now.
services 2
The EXTERNAL-IP is available and now you can access your WebApi:
In my situation the URI is:
http://13.80.144.10/api/values

extra:
Now change the implementation of your WebApi project to return a different value.
Create a Docker Image with a new version tag and push the Docker Image to ACR.
Update the Deploy.yml file with the new version tag and execute it.
See the power of Kubernetes when the rolling update takes place.
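
You can follow the rolling update from the command line while it happens, using the deployment name from Deploy.yml:
kubectl rollout status deployment/mywebapi-deployment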

Next time we are going to automate all these steps with VSTS.

SSH keys on Windows for Kubernetes with Azure Container Service (ACS)

I’m on Windows and want to connect to my Kubernetes Linux cluster provisioned with Azure Container Service. I want to connect from the command prompt so I can use kubectl, and I also want to connect with PuTTY. Besides this, I want to know which keys to use when provisioning the ACS Kubernetes cluster in the Azure Portal, and which key to use when provisioning with ARM templates.

It is possible to follow the Microsoft documentation and generate SSH keys to connect to your cluster with openssl.exe, and use PuTTYgen to connect with PuTTY. But I have struggled with the SSH keys a couple of times now, and after some investigation together with Loek Duys, the following method works best for me:

Prerequisites

  1. You should already have installed all necessary tools. See the documentation to read how to do this.
  2. Download PuTTY
  3. Download PuTTYgen

Use PuTTYgen to generate *all* necessary SSH keys

  1. Open PuTTYgen. Click Generate. Move your mouse over the blank area and wait till the key is generated. (all default values, so the type to generate is RSA)
  2. Fill in a passphrase
  3. Click on Save public key. Name it: pempublickey
  4. Click on Save private key. Name it: puttyprivatekey.ppk
  5. Select the key in the textbox and save it in a new file. Name it: singlelinepublickey
  6. Click on Conversions in the menu. Then Export OpenSSH key. Name it: opensshrsaprivatekey

You have all keys that are needed.

Connecting with PuTTY:

  1. Fill in the Host Name: <youruser>@<Master FQDN>
    For example: azureuser@mycluster.westeurope.cloudapp.azure.com
  2. Fill in the port: 22
  3. Navigate to Connection -> SSH -> Auth and select the puttyprivatekey.ppk

  4. Tip: save the settings by typing a name in the Saved Sessions textbox and clicking Save. Next time you start PuTTY you only have to load the settings.

Use the Command Prompt: az acs kubernetes get-credentials

  1. Use the opensshrsaprivatekey as the parameter for the az acs kubernetes get-credentials command:
    az acs kubernetes get-credentials --resource-group=myresourcegroup --name=myclustername --ssh-key-file "C:\sshkeys\opensshrsaprivatekey"
    You will be asked for the passphrase.
  2. If you don’t want to pass --ssh-key-file every time, put the opensshrsaprivatekey file in the .ssh directory in your user profile folder:
    C:\Users\Pascal Naber\.ssh\opensshrsaprivatekey
    Now you can connect just with:
    az acs kubernetes get-credentials --resource-group=myresourcegroup --name=myclustername

Provision the ACS cluster with Kubernetes in the Azure Portal

  1. Paste the content of one of the public SSH keys. Use pempublickey or singlelinepublickey

Provision the ACS cluster with an ARM Template

  1. Use the single line public key file: singlelinepublickey

Reset the SSH key

If you made a mistake while provisioning the cluster or you want to reset the SSH key, you don’t have to delete the cluster and provision it again. It’s possible to reset the SSH key.

  1. In the Azure Portal navigate to the cluster and select the master virtual machine.
  2. In the menu click on Reset password
  3. You have the option here to Reset the SSH public key. You can use either of the public keys: the pempublickey or the singlelinepublickey.

Fixing The subscription is not registered to use namespace ‘Microsoft.xxx’

When you automatically deploy to a brand new subscription, for example using VSTS, not all resource providers are registered.

For example when you try to deploy Azure EventHub you will get the following exception:

The subscription is not registered to use namespace ‘Microsoft.EventHub’

After adding resources by hand in the Azure Portal, the corresponding resource providers are registered, which is why you will not always see this exception.
Instead of adding resources by hand the first time, an alternative is to register the resource providers with PowerShell.

You can register the single resource provider that you need with the following line:

Register-AzureRmResourceProvider -ProviderNamespace "Microsoft.EventHub"

If you like, you can also register all resource providers at once.
This can be done with the following script:

Select-AzureRmSubscription -SubscriptionName ""

$providerNamespaces = @(Get-AzureRmResourceProvider -ListAvailable) | ? {$_.RegistrationState -eq "NotRegistered" } | select ProviderNamespace

$providerNamespaces | foreach {
  write-host $_.ProviderNamespace
  Register-AzureRmResourceProvider -ProviderNamespace $_.ProviderNamespace
}
write-host "Finished!"

(Make sure you are using the latest Azure PowerShell version. Otherwise you will be asked to confirm every registration, which you can prevent by adding -Force.)

Enterprise Azure ARM Templates

Lately I’ve given many workshops to all kinds of customers of my employer Xpirit about the automatic deployment of Azure resources, mainly with VSTS.

I noticed customers would like to have ready-to-use ARM Templates.

resource-group
Of course there are the valuable Azure Quickstart templates (or via the portal), which I use a lot. Sometimes these templates offer a complete solution. Mainly I use these templates to have, as the name says, a quick start for creating ARM templates fast.

Another way to get the ARM Template you wish is to download it after creating a resource through the Azure Portal. These generated ARM Templates are nice as a reference, but are too generic. To use these templates in your automatic deployment pipeline, you will have some work to do.

Besides this, I have created a lot of ARM Templates lately, for demos and for the Dutch Azure Meetup. I would like to reuse the templates more easily myself and offer these templates to the community.

For these reasons I decided to start collecting the ARM Templates I have used to create resources in Azure. You can find the ARM Templates here. The templates are meant to be used right away.

For now ARM templates can be used per Resource type. You can use two types of ARM templates for the same resource.

One standard version where the user of the template decides what the name of the created resource will be.

Most enterprises I’ve visited don’t want to think about the way resources are named in Azure, but want a consistent way of naming. When it’s possible to pass a name to a resource, they will end up with all kinds of naming conventions. For this reason the ARM templates are also offered with a naming convention. This version of the Enterprise ARM templates also passes metadata to the tags of the resources.

Infrastructure as Code and VSTS

Written by Peter Groenewegen and Pascal Naber for the Xpirit Magazine

Your team is in the process of developing a new application feature, and the infrastructure has to be adapted. The first step is to change a file in your source control system that describes your infrastructure. When the changed definition file is saved in your source control system it triggers a new build and release. Your new infrastructure is deployed to your test environment, and the whole process to get the new infrastructure deployed took minutes while you only changed a definition file and you did not touch the infrastructure itself.

Does this sound like a dream? It is called Infrastructure as Code. In this article we will explain what Infrastructure as Code (IaC) is, the problems it solves and how to apply it with Visual Studio Team Services (VSTS).

Infrastructure in former times
We have radically changed the way our infrastructure is treated. Before the change to IaC it looked like this: our Operations team was responsible for the infrastructure of the application. That team was very busy because of all its responsibilities, so we had to request changes to the infrastructure well ahead of time.

The infrastructure for the DTAP environment was partially created by hand and partly by using seven PowerShell scripts. The order in which the scripts are executed is important and there is only one IT-Pro with the required knowledge. Those PowerShell scripts are distributed over multiple people and are partly saved on local machines. The other part of the scripts is stored on a network share so every IT-Pro can access it. In the course of time many different versions of the PowerShell scripts were created, depending on the person who wanted to execute them and the project they were executed for.

directories2

Figure 1: A typical network share

The configuration of the environment is also done by hand.

This process creates the following problems:

  • Changes take too long before being applied.
  • The creation of the environment takes a long time and is of high risk, not only because manual steps can be easily forgotten. The order of the PowerShell scripts is important, but only a single person knows about this order.
  • What’s more, the scripts are executed at a particular point in time and they are updated regularly. However, it is unclear whether the environment will be the same when created again.
  • Some scripts are on the work machine of the IT-Pro, sometimes because it’s the person’s expertise area, and sometimes because the scripts are not production code. In either case, nobody else has access to it.
  • Some scripts are shared, but many versions of the same script are created over time. It’s not clear what has changed, why it was changed and who changed it.
  • It’s also not clear what the latest version of the script is. (See figure 1)
  • The PowerShell scripts contain a lot of code. The code does not only create resources, but also checks whether resources already exist and updates them if required.
  • The whole process of deploying infrastructure is pretty much trial and error.

As you can see, the creation of infrastructure is an error-prone and risky operation that needs to change in order to deliver high quality, reproducible infrastructure.

Definition of Infrastructure as Code
Infrastructure as Code is the process of managing and provisioning computing infrastructure and its configuration through machine-processable definition files. It treats the infrastructure as a software system, applying software engineering practices to it.

Infrastructure as Code characteristics
Our infrastructure deployment example has the following infrastructure provisioning characteristics, which will be explained in the following paragraphs:

  • Declarative
  • Single source of truth
  • Increase repeatability and testability
  • Decrease provisioning time
  • Rely less on availability of persons to perform tasks
  • Use proven software development practices for deploying infrastructure
  • Idempotent provisioning and configuration

Declarative

imperative-vs-declarative

Figure 2: Schematic visualization of Imperative vs Declarative

A practice in Infrastructure as Code is to write your definitions in a declarative way versus an imperative way. You define the state of the infrastructure you want to have and let the system do the work on getting there. In the Azure Cloud, the way to use declarative code definition files is ARM templates. Besides the native tooling you can use a third-party tool like Terraform to deploy declarative files to Classic Azure and to AzureRM. PowerShell scripts use an imperative way: in PowerShell you specify how you want to reach your goals.

Single source of truth
The infrastructure declaration files are placed in a source control repository. This is the single source of truth. All team members can see and work on the files and start their own version of the infrastructure. They can test it, and then commit changes to source control. All changes are under version control and can be linked to work items. The source control repository gives insight into what is changed and by whom.
The link to the work item can tell you why it was changed. It’s also clear what the latest version of the file is. Team members can easily work together on the same file.

Increase repeatability and testability
When a change to source control is pushed, this initiates a build that can test the change and after that publish an artifact. That will trigger a release which deploys your infrastructure. Infrastructure as Code makes your process repeatable and testable. After deploying your infrastructure, you can run standard tests to see if the deployment is correct. Changes can be deployed and tested in a DTAP pipeline.
This makes your process of deploying infrastructure reliable, and when you redeploy, you will get the same environment time after time.

Decrease provisioning time
Everything is automated to create the infrastructure. This results in short provisioning times. In many cases a deployment to a cloud environment has a lead time of 5 to 10 minutes, compared to a deployment time of days, weeks or even months.
This is accomplished by skipping manual tasks and waiting time in combination with high-quality, proven templates. The automation creates an environment that should not be touched by hand. It handles your servers like cattle instead of pets*. In case of problems there is no need to log on to the infrastructure to see what is going wrong and to try to find and fix the problem. Just delete the environment and redeploy the infrastructure to get the original working version.

Rely less on availability of persons to perform tasks
In our team, everybody can change and deploy the infrastructure. This removes the dependency on a separate operations team. By having a shared responsibility, the whole team cares and is able to optimize the infrastructure for the application.

This will result in more efficient usage of the infrastructure deployed by the team. Operations is now spending more time on developing software than on configuring infrastructure by hand. Operations is moving more to DevOps.

Pets vs Cattle
This is a widely used metaphor for how IT operations should handle servers in the cloud.
Servers are like pets: you name them, and when they get sick, you nurse them back to health.
Servers are like cattle: you number them, and when they get sick, you get another one.

Use proven software development practices for deploying infrastructure
When applying Infrastructure as Code you can use proven software development practices for deploying infrastructure. Handling your infrastructure the same way you handle your code helps you to streamline the whole process. You can build and test your infrastructure on each change. Using source control as a team is a must. The sources it contains should always be in a state in which they can be executed. This results in the need for tests, such as unit tests.

Idempotent provisioning and configuration
Idempotent provisioning and configuration enable you to rerun your releases at any time. ARM Templates are idempotent: every time they are executed, the result will be exactly the same. The configuration is set to what you have configured in your definitions. Because the definitions are declarative, you do not have to think about the steps to get there; the system will figure this out for you.

Creating an Infrastructure as Code pipeline with VSTS
There are many tools you can use to create an Infrastructure as Code pipeline. In this sample we will show you how to create a pipeline which deploys an ARM template with a Visual Studio Team Service (VSTS) build and release pipeline. The ARM Template will be placed in a Git repository in VSTS. When you change the template, a build is triggered, and the build will publish the ARM template as an artifact. Subsequently, the release will deploy or apply the template to an Azure Resource group.

pipeline

Figure 3: VSTS source control, build and release

Prerequisite
To start building Infrastructure as Code with VSTS you need a VSTS account. If you don’t have a VSTS account, you can create one at https://www.visualstudio.com. This is free for up to 5 users. Within the VSTS Account you create, you then create a new project with a Git repository. The next step is to get some infrastructure definition pushed to the repository.

ARM template
ARM templates are a declarative way of describing your infrastructure. They are JSON files that can contain four sections: parameters, variables, resources and outputs. To get started with ARM templates you can read the Resource Manager Template Walkthrough.
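
A minimal skeleton showing these sections (every ARM template also carries the $schema and contentVersion envelope):

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}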

It is possible to create ARM templates yourself by choosing the project type Cloud -> Azure Resource Group in Visual Studio. The community has already created a lot of templates that you can reuse or take as a good starting point. The community ARM templates can be found in the Azure Quickstart Templates. ARM templates are supported on Azure and also on-premises with Microsoft Azure Stack.

In our example we want to deploy a Web App with a SQL Server database. The files for this configuration are called 201-web-app-sql-database. Download the ARM template and parameter files and push them to the Git source control repository in your VSTS project.

VSTS Build
Now you are ready to create the build. Navigate to the build tab in VSTS and add a new build. Use your Git repository as the source. Make sure you have Continuous Integration turned on. This will start the build when code is pushed into the Git repository. As a minimum, the build has to publish your files to an artifact called drop. To do this, add a Copy Publish Artifact step to your build and configure it like this:

code

Figure 4: ARM template in Git

buildpipeline

Figure 5: Copy Publish Artifact configuration

VSTS Release
The next step is to use VSTS Release for deploying your infrastructure to Azure. To do so, you navigate to release and add a new Release. Rename the first environment to Development and add the task Azure Resource Group Deployment to the Development environment. This task can deploy your ARM template to an Azure Resource group. To configure the task, you need to add an ARM Service Endpoint to VSTS. You can read how to do this in the following blogpost: http://xpir.it/mg3-iac4. Now you can fill in the remaining information, i.e. the name of the ARM template and the name of the parameters file (fig. 6):

releasepipeline1

Figure 6: Azure Resource Group deployment configuration

releasepipeline2

Figure 7: Clone an environment in Release

DTAP
At this point you only have a Development environment. Now you are going to add a Test, Acceptance and Production environment. The first step is to create the other environments in VSTS release manager. Add environments by clicking the Add environment button or by cloning the development environment.


Each environment needs separate parameters, so you need to create a parameter json file per DTAP environment. Each environment gets its own azuredeploy.{environment}.parameters.json file, where {environment} stands for development, test, acceptance or production.
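
Such a parameters file is small. For example, azuredeploy.test.parameters.json could look like the sketch below, where skuName is a hypothetical parameter name; use the parameters your template actually defines:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "skuName": {
      "value": "S1"
    }
  }
}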

releasepipeline3

Figure 8: Configure each environment to a different parameters file

The deployment can be changed to meet your wishes, for example by deploying to a separate Resource group in Azure per DTAP environment. Now you have the first version of an Infrastructure as Code deployment pipeline. The pipeline can be extended in multiple ways. The build can be extended with tests to make sure the infrastructure is configured as it is supposed to be. The release can be extended by adding approvers, which makes sure that an environment will only be deployed after approval by one or more persons.

Conclusion
Infrastructure as Code will help you to create a robust and reliable infrastructure in a minimum of time. Each time you deploy, the infrastructure will be exactly the same. You can easily change the resources you are using by changing code and not by changing infrastructure.

When you apply Infrastructure as Code, everything should be automated, which will save a lot of time, manual configuration and errors. All configurations are the same, and there are no more surprises when you release your application to production. All changes in the infrastructure are accessible in source control.

Source control gives great insight into why and what is changed and by whom. A DevOps team that applies Infrastructure as Code is self-contained in running its application. The team is responsible for all aspects of the environment they are using. All team members have the same power and responsibilities in keeping everything up and running, and everybody is able to quickly fix, test and deploy changes.

peterenpascal

This article was published in the Xpirit Magazine #3, get your soft or hard copy here.

VSTS Task to create a SAS Token

The Create SAS Token task creates a SAS Token which can be used to access a private Azure Storage Container. The task also gets the StorageUri. Both variables can be used in subsequent tasks, like the Azure Resource Group Deployment task. This is the first task of the Infrastructure as Code series.

The Task can be found in the marketplace and added to your VSTS account. The code is open source and can be found on GitHub.

Prerequisites for the sample

In this sample I’m executing an ARM template which uses linked ARM Templates. These linked ARM Templates are stored in a private Azure Storage Container. I will be using the Azure Resource Group Deployment task to deploy the parent ARM Template.

The Azure Storage Container looks like this:

AzureStorageContainer

The StorageAccount.json ARM Template looks like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": [
        "Standard_LRS",
        "Standard_GRS",
        "Standard_ZRS",
        "Premium_LRS"
      ],
      "metadata": {
        "description": "Storage Account type"
      }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[parameters('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "Storage",
      "properties": {
      }
    }
  ]
}

The ARM template which links to the StorageAccount.json looks like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "_artifactsLocation": {
      "type": "string",
      "metadata": {
        "description": "Change this value to your repo name if deploying from a fork"
      },
      "defaultValue": ""
    },
    "_artifactsLocationSasToken": {
      "type": "securestring",
      "metadata": {
        "description": "Auto-generated token to access _artifactsLocation",
        "artifactsLocationSasToken": ""
      },
      "defaultValue": ""
    }
  },
  "variables": {
  },
  "resources": [
    { 
      "apiVersion": "2015-01-01",
      "name": "storage",
      "type": "Microsoft.Resources/deployments",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "[concat(parameters('_artifactsLocation'),'/StorageAccount.json',parameters('_artifactsLocationSasToken'))]",          
          "contentVersion": "1.0.0.0"
        },
        "parameters": {
          "storageAccountName": {
            "value": "[parameters('storageAccountName')]"
          }
        }
      }
    }
  ],
  "outputs": {
  }
}

In this sample the ARM Template above is stored in Git. A build is responsible for creating an artifact of this ARM Template, so it can be used in the Release. The release is explained in the next paragraph.

Steps to use and configure the task

  1. Install the task in your VSTS account by navigating to the marketplace and clicking Install. Select the VSTS account where the task will be deployed to.
  2. Add the task to your release by clicking Add a task in your release, selecting the Utility category and clicking the Add button on the Create SAS Token task.
    SelectSasTokenTask
  3. Configure the task. When the task is added, the configuration will look like this:
    EmptySasTokenTask
    All yellow fields are required.
    – Select an AzureRM subscription. If you don’t know how to configure this, read this blogpost. (I’m using a Service Principal with “reader” rights only on the ResourceGroup which contains the StorageAccount.)
    – Select the Storage Account you want to create a SAS Token for.
    – Enter the name of the Storage Container.
    – A variable for the SAS Token is also required. By default a variable is configured with the name storageToken.
    The configuration of the Create SAS Token Task looks like this:
    SasTokenConfiguration
    Note that I’m using a Service Principal with read-only access to the ResourceGroup which contains the StorageAccount with a private container.
  4. Configure subsequent tasks which need the SAS Token. In the following sample the Azure Resource Group Deployment task is used.
    – Configure the task and fill the Override Template Parameters field with:

    -_artifactsLocation $(storageUri) -_artifactsLocationSasToken (ConvertTo-SecureString '$(storageToken)' -AsPlainText -Force)

    The output variables of the Create SAS Token task are used here.
    The configuration of the Azure Resource Group Deployment task looks like this:
    DeployAzureResourceGroupTask
    Note that I’m using a different Service Principal. This Service Principal is specific for this project (and environment).

  5. Run the release