Thursday 23 December 2021

Using templates in an Azure YAML pipeline

I was recently asked by a colleague how to use templates within a YAML pipeline.  They wanted to template part of the deployment, because they have the option to deploy to different Azure App Services for testing.

To do this we created a simple dummy pipeline:


stages:
  - stage: Build
    jobs:
      - job: Compilation
        steps:
          - script: echo Build build build!
            displayName: 'Compile code'

  - stage: Test01
    jobs:
      - job: DeployToTest01
        steps:
          - script: echo Steps to deploy to Test01

      - job: RunTestsOnTests01
        steps:
          - script: echo Tests on Tests01
            displayName: 'Run tests'

This shows the initial stage which would be used to build the code and another stage that would be for deploying to the App Service.

Everything in the Test01 stage needs to be duplicated for the other test environments, but ideally we didn't want to bloat the pipeline with a lot of duplication.  Also, as Test01 performs a deployment, it should really use a deployment job rather than a plain job, so that could be updated as well.

We created a new YAML file in the repo called azure-environment.yaml:


parameters:
  - name: env
    type: string

stages:
  - stage: Azure${{ parameters.env }}
    jobs:
      - deployment: DeployTo${{ parameters.env }}Dev
        environment: ${{ parameters.env }}
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo Deploy to ${{ parameters.env }} Dev

      - job: RunTestsOn${{ parameters.env }}Dev
        dependsOn: DeployTo${{ parameters.env }}Dev
        steps:
          - script: echo Tests on ${{ parameters.env }}Dev
            displayName: 'Run tests'

The first few lines declare an expected parameter called env, which is then used to create the stage, deployment and job names.

This can then be used by updating the main YAML pipeline:


stages:
  - stage: Build
    jobs:
      - job: Compilation
        steps:
          - script: echo Build build build!
            displayName: 'Compile code'

  - template: azure-environment.yaml
    parameters:
      env: Test01

  - template: azure-environment.yaml
    parameters:
      env: Test02

This means that the template block can easily be added on a branch if, for a period of time, the code needs to be deployed to another test environment.
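If the list of test environments grows, Azure Pipelines template expressions can also generate the stages in a loop rather than repeating the template block. This is just a sketch (not part of our original pipeline), assuming the same azure-environment.yaml template:

```yaml
parameters:
  - name: environments
    type: object
    default:
      - Test01
      - Test02

stages:
  - ${{ each env in parameters.environments }}:
      - template: azure-environment.yaml
        parameters:
          env: ${{ env }}
```

The each expression is evaluated at compile time, so it expands to exactly the same stages as listing the template twice by hand.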

As the deployment uses an environment, these need to be configured in Azure DevOps.  An environment doesn't have to contain anything, but it does provide the functionality for approvals, which could be useful for higher-end deployments (such as PreProd and Production).

To create an environment, simply select Environments (under Pipelines in Azure DevOps):

Then follow the steps and create an empty one with the names (in our case Test01 and Test02).
If you are interested in having someone approve the deployment then use the 'Approvals and Checks' to add a group.

Saturday 27 November 2021

Mining Monero coin using Docker and K8s!

Following on from the previous blog post where we installed and configured Docker Desktop and enabled Kubernetes (K8s), I thought I'd play around with mining a digital currency.  There are so many digital currencies around that I went for one that doesn't require a massive computer: Monero, which can even be mined using a Raspberry Pi.

Now, mining Monero isn't going to make you rich unless you've got a room full of high-powered computers, but it can help you understand a bit about how Docker and Kubernetes work and make a penny or two in the process.

Creating the Dockerfile

To do this we are going to create our own Docker container which I've based on the Alpine image of linux.  The main reason for this is that it is small, lightweight and perfect for what we need.
Create a new directory somewhere on your machine and create a new file called dockerfile, with no extension.
Open the file in your favourite text editor and add the following line:

FROM alpine:latest

This tells Docker that when we build our container we want to use the latest version of the Alpine image from Docker Hub; it's under 3 MB - not bad for an operating system!

For this container, instead of downloading a pre-built release, and to make it a bit more of a challenge and show what can be achieved in a Dockerfile, we're going to clone the git repository and build the code ourselves.

Next we need to update the package manager's index and install some of the tools that we need, so add the following to the file:

RUN apk update
RUN apk add git build-base cmake libuv-dev libressl-dev hwloc-dev 

Great - at this point our image will have Alpine Linux along with some build tools installed.  Now let's clone the git repo of the miner.

RUN git clone https://github.com/xmrig/xmrig.git

When we build the image this will use git, which was installed earlier, to connect to GitHub and download all of the files.
To build the code I followed the instructions in the xmrig documentation, prefixing each command with RUN.

# the clone above puts the source in /xmrig
WORKDIR /xmrig
RUN mkdir build
WORKDIR /xmrig/build
RUN cmake ..
RUN make -j$(nproc)

Now we're almost done - at this point, building the image will download Alpine Linux, install some tools, clone the code from the GitHub repo and then build it.
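As an aside, the build tools (git, cmake, the compilers) are only needed while compiling; a Docker multi-stage build could copy just the finished binary into a clean Alpine image so the final container stays small. A rough sketch, assuming the repo is cloned to /xmrig and the binary ends up at /xmrig/build/xmrig (runtime package names may vary by Alpine version):

```dockerfile
# Build stage: everything needed to compile xmrig
FROM alpine:latest AS build
RUN apk update && apk add git build-base cmake libuv-dev libressl-dev hwloc-dev
RUN git clone https://github.com/xmrig/xmrig.git
WORKDIR /xmrig/build
RUN cmake .. && make -j$(nproc)

# Runtime stage: just the binary and its runtime libraries
FROM alpine:latest
RUN apk add --no-cache libuv hwloc
COPY --from=build /xmrig/build/xmrig /usr/local/bin/xmrig
CMD xmrig -o <Pool URL> -u <Wallet ID>
```

The single-stage Dockerfile we build in this post works fine; this just trims the image if you plan to keep it around.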

Monero Wallet

To be able to mine any digital currency you need to have somewhere to store it; this is called a digital wallet.
If you don't have a Monero wallet, browse to their website and download the GUI Wallet.  Install it (you may have to add approval rules for your anti-virus).
Once installed, run the program and follow the steps: select Simple Mode and Create a New Wallet.  This will ask you for a name and a location, and it will also generate a mnemonic seed.  It is very important that this is stored somewhere safe, as without it you won't be able to use your wallet!  Keeping a printed copy as well as storing it in a password safe is a good idea.  Finally, create a secure password and store that in your password safe too.
Once you've done this, Monero will need to synchronise, which will take a few minutes (don't panic).
Now you can click on Account and finally click the icon to copy the Primary Account address; this is the ID of your Monero wallet.

Mining Pools

To increase your chances of earning money from a digital currency, people group together into a pool; the pool then gives you a percentage of the revenue generated depending on how much your computer helped.
I've used a pool called Monero Ocean, but there are other options available, which can be found by using the xmrig configuration wizard.

The final step is to add the line to the Dockerfile to start the miner:
CMD ./xmrig -o <Pool URL> -u <Wallet ID>
Replacing <Pool URL> with the address of your chosen mining pool and <Wallet ID> with the one copied from the Monero GUI Wallet.

Your complete Dockerfile should look like this:

FROM alpine:latest
RUN apk update
RUN apk add git build-base cmake libuv-dev libressl-dev hwloc-dev

RUN git clone https://github.com/xmrig/xmrig.git

WORKDIR /xmrig
RUN mkdir build
WORKDIR /xmrig/build
RUN cmake ..
RUN make -j$(nproc)

CMD ./xmrig -o <Pool URL> -u <Wallet ID>

Build the Docker Image

To build the image, simply open a command prompt and change directory to where you created the dockerfile.

docker build -t monero .

This will build the image and tag it with the name monero.  You can of course change this to anything you like.

Once it is built you can view the image by typing

docker images

It should look like this:

Now we can start the container by running:

docker run -it monero

The switch -it will start the container interactively, allowing us to see the output.  It should look like this:

Wahoo - we've built a container from scratch that will download the code from GitHub, build it and then run it to generate Monero.


To take this to the next stage let's deploy this to our Docker Desktop Kubernetes installation.

To deploy this to K8s we need a YAML file; this file describes to K8s what it needs to deploy for our container.
Create a new file (I called mine monero.yaml) in the same location as the dockerfile and add the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: monero
  namespace: monero
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monero
  template:
    metadata:
      labels:
        app: monero
    spec:
      containers:
        - name: monero
          image: monero:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 4096Mi
              cpu: "1"
            requests:
              memory: 4096Mi
              cpu: "0.5"

Some things to point out:

  replicas: 2

This is the number of instances we want in our cluster.  If you are running this locally you may be limited by the amount of memory your computer has.

          image: monero:latest
This is the name of the image we built locally; if you called it something different it will need to be updated here.
          imagePullPolicy: IfNotPresent
Because our image is local and not publicly available, this tells Kubernetes to use the locally built image rather than trying to pull it from a registry.

At the bottom of the file are the resources.  These don't need to be specified, but I've found that the miner needs around 4GB of RAM; if it doesn't have enough, the container will be killed and restarted.
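For reference, the two blocks in that resources section mean different things. This is the standard Kubernetes layout, using the same values as our deployment:

```yaml
resources:
  limits:          # hard cap - the container is throttled (CPU) or killed (memory) beyond this
    memory: 4096Mi
    cpu: "1"
  requests:        # what the scheduler reserves on a node before placing the pod
    memory: 4096Mi
    cpu: "0.5"
```

Setting the memory request equal to the limit is a deliberate choice here: it stops the scheduler from packing pods onto a node that can't actually sustain them.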

Now we're almost there.  Before we can deploy this we need to create the namespace for our deployment; as specified in the file, this is called monero.

Open a PowerShell window and type:

kubectl config current-context

This should state:

docker-desktop

If it doesn't, list the contexts:
kubectl config get-contexts

Then select your cluster:
kubectl config use-context docker-desktop

To create the namespace, which we called monero type:
kubectl create namespace monero

Now to deploy our image to our cluster creating 2 replicas type:
kubectl apply -f monero.yaml

Which should respond with:
deployment.apps/monero configured

To view the status you can describe the pod with this command:
kubectl describe pod --namespace monero

If you've installed the dashboard (see my previous post) you should be able to see what is happening with a GUI.  Run the command:
kubectl proxy

Browse to the dashboard and enter the token to login, which can be obtained using this command:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

Make sure you select the monero namespace and you can see the pods that are running and mining!

Viewing your earnings!

This isn't going to make you rich but you can view the amount of money you have earned on the Monero Ocean website.
Simply paste in your wallet ID and it will provide a rundown of how much you have contributed and earned.  Don't expect to earn more than a few pence per day!

Monday 26 July 2021

Using Kubernetes on Docker for Windows

Kubernetes is the industry-standard tool for hosting containers, with Azure and AWS both offering their own platforms for it.  But what if you want to test it locally (and you're on Windows)?  Then Docker for Windows has got you covered...

Install Docker for Windows

Whilst Docker isn't the only option for managing containers, it is probably the most common.  It can be installed from the official Docker website.  I recommend going through the steps and setting it up to use the Windows Subsystem for Linux (WSL2); I imagine it will work fine using a Hyper-V image, but WSL2 will be quicker and it is the way that I configured my machine.

Once you've got Docker setup and working you'll be able to run some Docker commands.

To check everything is setup correctly type

docker version

Into a PowerShell window and you should see something like this:

Client:
 Cloud integration: 1.0.17
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.16.4
 Git commit:        f0df350
 Built:             Wed Jun  2 12:00:56 2021
 OS/Arch:           windows/amd64
 Context:           desktop-linux
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Now that Docker is set up, you will need to enable Kubernetes.

Click on the 'settings cog', then Kubernetes, then finally click 'Enable Kubernetes'

Click Save and Restart.

A message will appear stating that an internet connection is required and that it may take some time.

Soon you may notice a new icon at the bottom of the Docker window:

At this point we've got Docker and Kubernetes installed.  To confirm this, run the command:

kubectl config view

In the information returned you should see

- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop

This is because when Kubernetes is installed it creates this context for you.

Now to be sure we are using the correct context type:

kubectl config use-context docker-desktop

It should respond with:

Switched to context "docker-desktop".

Now we can list the namespaces and list the pods:

kubectl get namespace

Which shows:

NAME              STATUS   AGE
default           Active   27s
kube-node-lease   Active   28s
kube-public       Active   28s
kube-system       Active   29s

Then to see the pods:

kubectl get pods

No resources found in default namespace.

Okay, so it is empty and there is nothing running.  So let's install a dashboard!

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/<version>/aio/deploy/recommended.yaml

(Replace <version> with the current release listed on the Kubernetes Dashboard GitHub page.)

You'll see output listing each of the dashboard resources (namespace, service account, service, deployment and so on) being created.

Now we need to get a token before we can log into the dashboard (it is possible to enable a skip-login option, but for security we'll create a token).  This is documented in the Kubernetes Dashboard GitHub pages, but the process is:

Open up your favourite text editor and create two files, ClusterRoleBinding.yaml and ServiceAccount.yaml:


apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard


apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

With those two files created they need to be applied to the cluster.  To do that, run (from the directory where you saved the files):

kubectl apply -f .\ClusterRoleBinding.yaml
kubectl apply -f .\ServiceAccount.yaml
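Incidentally, the two files could also be combined into a single manifest separated by ---, so one kubectl apply does both. A sketch of the combined file (same content as the two above):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

Keeping them separate works just as well; this is purely a convenience.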

Now to get the token that you need run:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

It will return a long string; this is the token:
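That long kubectl command is doing two things: finding the secret that belongs to the admin-user service account, then base64-decoding its token field (Kubernetes stores secret data base64-encoded). The decode step on its own looks like this (the encoded string here is just a made-up example, not a real token):

```shell
# Secrets come back base64-encoded; base64 -d performs the same decode
# that the go-template's base64decode function does
echo "dG9rZW4tZ29lcy1oZXJl" | base64 -d
# -> token-goes-here
```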


Then finally, to login to the dashboard we need to run:

kubectl proxy 

Then browse to the dashboard URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Paste the token and click Sign In:

Changing the namespace (the dropdown box next to the Kubernetes logo) to kubernetes-dashboard will display the pods that are running the dashboard:

The final step that you may want to do is to add the Metrics Server; this will allow you to see memory and CPU usage for the pods.

To do this we need to install it:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

It will give the output:

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

Before the metrics server will work we need to make a slight change.  Out of the box it will only talk to the nodes over verified HTTPS connections; as we are running locally with self-signed certificates we need to add the flag --kubelet-insecure-tls (for more information look at their GitHub page).

kubectl patch deployment metrics-server -n kube-system --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'

Now to view the graphs log into the dashboard again:

kubectl proxy

Note: You may need to wait a few minutes for the CPU usage and Memory Usage graphs to appear and populate.

Now you've got Kubernetes all setup and working locally.

If for any reason you want to revert the system back to the starting state and go from the beginning, click on the Docker icon, the Settings cog and Kubernetes (this is the same place where Kubernetes was enabled), then click the 'Reset Kubernetes Cluster' option.  This will remove all the pods and namespaces and put you back to the beginning.


Wednesday 20 January 2021

Docker container time drift using WSL2

I recently came across an issue where my Ubuntu Docker containers were failing to restore packages; this was due to them having a different time than my Windows 10 laptop.

After Googling, the suggested solution was to reboot my laptop, but as I'd just turned it on and got everything set up this wasn't something that I wanted to do.

The command that most people suggested was one run inside the container to re-synchronise the time with the host, but it returned an error when I tried it.

Eventually I found a GitHub issue which implied it was a bug with the Windows Subsystem for Linux.

Thankfully, re-synchronising the time was quite simple; just run this command from a PowerShell window:

wsl --shutdown

Docker Desktop will quickly inform you that it isn't working and suggest you start it.

Once it is started again everything was back in sync and I could restore packages again!

Tuesday 7 April 2020

NuGet Restore failing in Azure with Error parsing solution file

I recently came across a problem where builds were failing in Azure DevOps when performing a NuGet restore for the solution.

The error details were:

2020-04-07T08:05:03.8535680Z [command]C:\hostedtoolcache\windows\NuGet\4.1.0\x64\nuget.exe restore d:\a\1\s\MyProject\MyProject.sln -Verbosity Detailed -NonInteractive -ConfigFile d:\a\1\Nuget\tempNuGet_41515.config
2020-04-07T08:05:05.3883943Z NuGet Version:
2020-04-07T08:05:05.3886378Z MSBuild auto-detection: using msbuild version '' from 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\bin'. Use option -MSBuildVersion to force nuget to use a specific version of MSBuild.
2020-04-07T08:05:05.4539665Z System.AggregateException: One or more errors occurred. ---> NuGet.CommandLine.CommandLineException: Error parsing solution file at d:\a\1\s\MyProject\MyProject.sln: Exception has been thrown by the target of an invocation.
2020-04-07T08:05:05.4540531Z at NuGet.CommandLine.MsBuildUtility.GetAllProjectFileNamesWithMsBuild(String solutionFile, String msbuildPath)
2020-04-07T08:05:05.4541882Z at NuGet.CommandLine.RestoreCommand.ProcessSolutionFile(String solutionFileFullPath, PackageRestoreInputs restoreInputs)
2020-04-07T08:05:05.4542419Z at NuGet.CommandLine.RestoreCommand.d__37.MoveNext()
2020-04-07T08:05:05.4542827Z --- End of stack trace from previous location where exception was thrown ---
2020-04-07T08:05:05.4543213Z at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
2020-04-07T08:05:05.4543673Z at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
2020-04-07T08:05:05.4544134Z at NuGet.CommandLine.RestoreCommand.d__30.MoveNext()
2020-04-07T08:05:05.4544520Z --- End of inner exception stack trace ---
2020-04-07T08:05:05.4545738Z at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
2020-04-07T08:05:05.4546231Z at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
2020-04-07T08:05:05.4546606Z at NuGet.CommandLine.Command.Execute()
2020-04-07T08:05:05.4546965Z at NuGet.CommandLine.Program.MainCore(String workingDirectory, String[] args)

I then ran a build that had run successfully before (against the same commit) and it failed with the same error, pointing me in the direction of the Azure hosted agent being the issue.
I was then able to confirm that the Azure agent had been updated to version 20200331.1 (this can be found in the Initialize Job step of the build).
After checking the GitHub repo for the build agent it confirmed that Visual Studio 2019 had been updated on that version of the agent.

After some research I realised that the version of NuGet.exe it was using was quite old and that NuGet should ideally match the version of Visual Studio (and more importantly MSBuild) you are using:
  • 4.1 of NuGet.exe matches Visual Studio 2017 Update 1 (15.1)
  • 4.7 of NuGet.exe matches Visual Studio 2017 Update 7 (15.7)
  • 5.0 of NuGet.exe matches Visual Studio 2019 (16.0)
  • 5.4 of NuGet.exe matches Visual Studio 2019 (16.4)
So in my case, running NuGet.exe version 4.1 to restore a Visual Studio 2019 project isn't a good idea.

To resolve this issue, add a new task to your Build pipeline (NuGet Tool Installer) and set it to install a newer version of NuGet.  For a YAML pipeline add:
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '5.x'
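If you'd rather pin an exact version than float on 5.x, the task accepts that too - 5.4.0 here is just an example value:

```yaml
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '5.4.0'
    checkLatest: false
```

Pinning trades automatic updates for repeatable builds; either approach fixes the version mismatch above.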

Or for the GUI type:

This will then ensure that you are using the correct version of NuGet, which should stop that error at least!

Hope that helps!

Friday 13 September 2019

Using the Pi-Hole with Windows

If you haven't heard of the Pi-Hole, it is a great tool.  It is a DNS server (actually it is more than just that) which can run on a Raspberry Pi and simply blocks adverts while you're browsing the web.  While some ad-blockers are browser add-ons, this takes a different approach: it stops the adverts from being loaded before they ever reach the browser.
Effectively, you set up the Pi-Hole on a Raspberry Pi, then update the DNS settings on your router so that it uses the Pi-Hole; then no device on your network will see an advert, as every time one is attempted to be loaded the Pi-Hole handles the request.  It's great.

But what about your laptop?  It's meant to be taken with you, so you'll see adverts when you're elsewhere.

Thankfully the Pi-Hole also offer Docker images, meaning all that you require is Docker For Windows to be installed on your laptop.

So what do you need to do?
  1. Install Docker For Windows.  I'm not going to detail all of the steps but the Pi-Hole image requires a Linux container (which is handy given the size of Windows containers).  Downloading Docker For Windows requires you to create an account (or login) to Docker.
  2. Ensure that Windows containers are not the default; to set this up we need to embrace Linux.
  3. Download the Pi-Hole image, to do this open PowerShell and run:
    docker pull pihole/pihole
  4. This will take a couple of minutes (not long) to download the Linux container with the Pi-Hole installed.
  5. Create the following directories on your machine:
    • C:\pihole\pihole\
    • C:\pihole\dnsmasq.d\
    These are locations that the Pi-Hole image will use to store files that persist (for when upgrading the container to a newer version of Pi-Hole)
  6. Run the following command to start the Pi-Hole image:
    docker run -d --name pihole -p 53:53/tcp -p 53:53/udp -p 80:80 -p 443:443 -v "c:/pihole/pihole/:/etc/pihole/" -v "c:/pihole/dnsmasq.d/:/etc/dnsmasq.d/" -e WEBPASSWORD=vRz0n36IWF --restart=unless-stopped pihole/pihole:latest
  7. I strongly suggest that you use a strong password, as the web interface to the Pi-Hole will require this to login.  You can now browse to localhost in a browser and you should see a page showing that the Pi-Hole is running, although no requests are currently going to it (so it won't actually be blocking any adverts).
  8. Docker may ask you for an account to share files on your C drive (or wherever you placed them).
  9. Finally, you need to update the DNS setting for your connection so that adverts are blocked.  To do this:
  10. In File Explorer right click on Network and select Properties
  11. Click on your connection
  12. Select Properties in the dialog
  13. Then select TCP/IPv4 and then properties
  14. Then set the DNS server to be 127.0.0.1 (as the Pi-Hole container is running on your laptop).
  15. Click Ok to dismiss the dialog boxes and you're done.
  16. To see the interface for PiHole type localhost into a browser.  Click on Login and enter the password (in my example vRz0n36IWF but please change it!).  
  17. Adverts are now being blocked!
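If you'd rather not remember that long docker run command from step 6, the same container can be described in a docker-compose.yml. A hypothetical equivalent (change the password!):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80"
      - "443:443"
    volumes:
      - "c:/pihole/pihole/:/etc/pihole/"
      - "c:/pihole/dnsmasq.d/:/etc/dnsmasq.d/"
    environment:
      WEBPASSWORD: vRz0n36IWF
    restart: unless-stopped
```

Run it with docker-compose up -d from the folder containing the file.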

Monday 10 December 2018

Running Jenkins from a Docker image in Azure

I am (relatively) new to Docker and I want to know some more about Jenkins so I thought I'd use Docker to run the latest version of Jenkins.  This is a warts and all step through my progress.

I've heard that Docker containers can run in Azure without a virtual machine but I wanted to understand how it all works so I decided to create a Windows 10 virtual machine and install Docker on that.

I created the Azure VM (using the UI), using the Windows 10 N (x64) image.

Once the machine had been created I then installed the Desktop edition of Docker, which can be found here:

Note:  You need to be logged in to be able to download.

Once it had downloaded (550MB) I ran the installation:

I went with the default options and clicked Ok to let it unpack the files:

After the installation had completed it wanted me to log out; I went for a reboot:

Once the machine had rebooted I logged in and after Docker had started I was presented with this message:

As Docker uses the Containers and Hyper-V Windows features, I'm only too happy for this to be set up automatically for me.

Once this had completed and the VM had rebooted, Docker prompted me to login with my Docker account.

Ok, so at this point Docker is installed and the VM has all the components to run Docker containers.

Next step was to get Jenkins running!

As I wanted the data to persist I created a folder structure on the VM; C:\Gruss\Docker\Jenkins.

I opened an Administrative PowerShell window (not sure if I needed to run it with Administrator privileges or not) and ran the following command:

docker run -p 8080:8080 -p 50000:50000 -v C:\Gruss\Docker\Jenkins:/var/jenkins_home jenkins/jenkins:lts

I am no Docker expert but to breakdown the command:
  • -p 8080:8080
    • This maps port 8080 on the VM to port 8080 within the container
  • -p 50000:50000
    • This maps the port in the same way above
  • -v C:\Gruss\Docker\Jenkins:/var/jenkins_home
    • This creates a volume mount so that information in the container can be persisted after reboots etc.  In this case I'm storing the data in C:\Gruss\Docker\Jenkins
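The same docker run options map directly onto a docker-compose.yml, which is easier to keep in source control. A hypothetical equivalent of the command above:

```yaml
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"    # web UI
      - "50000:50000"  # agent connections
    volumes:
      - "C:/Gruss/Docker/Jenkins:/var/jenkins_home"
```

The walkthrough below sticks with docker run, but either works.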

As the Docker image was not local to my Azure VM Docker proceeded to download the container for me:

Docker informed me that to store the configuration in the 'C:\Gruss\Docker' folder it needed permission to do this:

Having clicked 'Share it', an account was needed:

At this point Docker spat out an error message:

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_lederberg (deaba2deeea0486c92ba8a1a32740295f03859b1b5829d39e39eff0b24613ebf): Error starting userland proxy: Bind for 0.0.0.0:50000: unexpected error Permission denied.

This is stating that it could not map port 50000 on the local machine, possibly because it was in use.  I ran netstat to list all of the ports that were in use:
netstat -a -n -o
Nothing was using port 50000, something strange was going on.

I was able to start the container by removing the ‘-p 50000:50000’ mapping, but I’ve assumed it needs to map this port in order to work correctly.

Coming back the following morning (after shutting down the VM) all was resolved, so perhaps a reboot was all it needed?
Okay, so now I’ve run the command and my Docker container is running! (wahoo!!!)

Open a browser on the VM and go to:  http://localhost:8080

However, as I had started the container before, I didn't have the administrator password needed to unlock Jenkins.

The setup page states that it is available by browsing to /var/jenkins_home/secrets/initialAdminPassword

The admin password is only shown in the output the first time the container starts, so I now needed to browse the container's local file system to get the password.

To do this I opened a new PowerShell window.
The docker ps command lists the running containers along with a generated ‘name’ for each, which in my case was dreamy_bhabha.
With that I can exec a command in the container:

docker exec dreamy_bhabha cat /var/jenkins_home/secrets/initialAdminPassword

I've since found that I could have browsed to the C:\Gruss\Docker\Jenkins\secrets folder but where is the fun in that?

Typing that password in allows Jenkins to start: 

I went with the option to install the suggested plugins and off it went:

 Once they were all installed I was prompted to create the first admin user:

After creating the user Jenkins seemed to crash for me, as I was presented with a blank page.
Trying in an incognito window showed the login screen but after logging in I got the blank page.
To resolve this I stopped the container:

docker stop dreamy_bhabha

Then restarted it:

docker run -p 8080:8080 -p 50000:50000 -v C:\Gruss\Docker\Jenkins:/var/jenkins_home jenkins/jenkins:lts

Note:  This will give me a new name for the container.

Opening a browser allowed me to login and see that Jenkins is now working:

Next step will be creating a pipeline in Jenkins!