Monday, 26 July 2021

Using Kubernetes on Docker for Windows

Kubernetes is the industry-standard tool for orchestrating containers, and both Azure and AWS offer managed platforms for it.  But what if you want to test it locally and you're on Windows?  Then Docker for Windows has got you covered...


Install Docker for Windows

Whilst Docker isn't the only option for managing containers, it is probably the most common, and it can be installed from the official Docker website.  I recommend going through the steps and setting it up to use the Windows Subsystem for Linux (WSL2).  I imagine it will work fine using a Hyper-V image, but WSL2 will be quicker and it is the way I configured my machine.


Once you've got Docker set up and working you'll be able to run some Docker commands.

To check everything is set up correctly, type:

docker version

Into a PowerShell window and you should see something like this:

Client:
 Cloud integration: 1.0.17
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.16.4
 Git commit:        f0df350
 Built:             Wed Jun  2 12:00:56 2021
 OS/Arch:           windows/amd64
 Context:           desktop-linux
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0


Now that Docker is set up, you will need to enable Kubernetes.


Click on the 'settings cog', then Kubernetes, then finally click 'Enable Kubernetes'


Click Save and Restart.

A message will appear stating that an internet connection is required and that it may take some time.

Soon you may notice a new icon at the bottom of the Docker window:


At this point we've got Docker and Kubernetes installed.  To confirm this, run the command:

kubectl config view

In the information returned you should see

- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop


This is because when Kubernetes is installed it creates this context for you.
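You can also see every context kubectl knows about, and which one is active, in one go (on a fresh install Docker Desktop's cluster will usually be the only entry):

```shell
# List every configured context; the '*' in the CURRENT column marks the
# context that kubectl commands will run against.
kubectl config get-contexts
```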

Now, to be sure we are using the correct context, type:

kubectl config use-context docker-desktop

It should respond with:

Switched to context "docker-desktop".

Now we can list the namespaces and the pods:

kubectl get namespace


Which shows:

NAME              STATUS   AGE
default           Active   27s
kube-node-lease   Active   28s
kube-public       Active   28s
kube-system       Active   29s

Then to see the pods:

kubectl get pods


No resources found in default namespace.
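The default namespace is empty, but the cluster's own components are running in the other namespaces; to see everything at once you can run:

```shell
# List pods across every namespace; Kubernetes' own system pods
# (DNS, API server, scheduler, etc.) live in kube-system.
kubectl get pods --all-namespaces
```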


Okay, so it is empty and there is nothing running.  So let's install a dashboard!

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

You'll see the output of this command as:

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Now we need to get a token before we can log into the dashboard (it is possible to enable a skip-login option, but for security we'll create a token).  This is documented in the Kubernetes Dashboard GitHub pages, but the process is:

Open up your favourite text editor and create two files:

ClusterRoleBinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

ServiceAccount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

With those two files created, they need to be applied to the cluster.  To do that, run (from the directory where you saved the files):

kubectl apply -f .\ClusterRoleBinding.yaml
kubectl apply -f .\ServiceAccount.yaml
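Alternatively, the two manifests can live in a single file separated by a --- document marker, so only one apply is needed (the filename here is just an example):

```yaml
# admin-user.yaml - both documents in one file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```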

Now, to get the token that you need, run:

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

It will return a long string; this is the token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IllPLTlwRmtaOUJwanhUczNtM0J0a2M5REl2eGlweGI0bzdQRzZJcG5VT3MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmV....
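Note: on newer Kubernetes versions (1.24 and later) a secret is no longer created automatically for a ServiceAccount, so the command above returns nothing.  On those versions you can ask the API server to mint a short-lived token directly:

```shell
# Request a short-lived token for the admin-user ServiceAccount
# (the 'create token' subcommand is available from Kubernetes 1.24 onwards).
kubectl -n kubernetes-dashboard create token admin-user
```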

Then finally, to log in to the dashboard we need to run:

kubectl proxy 

Then browse to the dashboard URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Paste the token and click Sign In:



Changing the namespace (the dropdown box next to the Kubernetes logo) to kubernetes-dashboard will display the pods that are running the dashboard:


The final step that you may want to do is to add the Metrics Server; this will allow you to see memory and CPU usage for the pods.

To do this we need to install it:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

It will give the output:

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created 

Before we can use this we need to make a slight change.  Out of the box the metrics server will only talk to the kubelet over verified HTTPS, and as we are running locally with self-signed certificates we need to add the flag --kubelet-insecure-tls; for more information look at the metrics-server GitHub page.


kubectl patch deployment metrics-server -n kube-system --type 'json' -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
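Once the patched metrics-server pod has restarted (this can take a minute or two), you can also query the same numbers from the command line:

```shell
# Show current CPU and memory usage as reported by the metrics server.
kubectl top nodes
kubectl top pods --all-namespaces
```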

Now, to view the graphs, log into the dashboard again:

kubectl proxy


Note: You may need to wait a few minutes for the CPU usage and Memory Usage graphs to appear and populate.


Now you've got Kubernetes all setup and working locally.

If for any reason you want to revert the system back to its starting state, click on the Docker icon, then the Settings cog and Kubernetes (this is the same place where Kubernetes was enabled), then click the 'Reset Kubernetes Cluster' option.  This will remove all the pods and namespaces and put you back where you started.



Enjoy!


Wednesday, 20 January 2021

Docker container time drift using WSL2

I recently came across an issue where my Ubuntu Docker containers were failing to restore packages, and this turned out to be because they had a different time from my Windows 10 laptop.

After Googling, the suggested solution was to reboot my laptop, but as I'd just turned it on and got everything set up this wasn't something I wanted to do.

Most people suggested running a command in the container to re-synchronise the time with the host, but the command returned an error when I tried it.
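For reference, the commonly suggested command is a variant of hwclock, which copies the hardware clock into the system clock; note that it needs a privileged container (the exact command I tried isn't recorded here, this is just the usual shape of the suggestion):

```shell
# Commonly suggested fix: sync the system clock from the hardware clock.
# Requires --privileged; under WSL2 this is where the error appears.
docker run --rm --privileged ubuntu hwclock -s
```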

Eventually I found a GitHub issue which implied it was a bug in the Windows Subsystem for Linux.

Thankfully, re-synchronising the time was quite simple: just run this command from a PowerShell window:

wsl --shutdown

Docker Desktop will quickly inform you that it isn't working and suggest you start it.

Once it had started again, everything was back in sync and I could restore packages!