
Docker, Kubernetes and Helm

Docker

Docker in general

Docker is used to create, deploy and run applications (such as the Identity server and WebMVC) as containers.

Docker Images

A Docker image is like a read-only snapshot of an application and its dependencies, and images are used to create these containers. Each service in V9 will have its own image (Account.API, WebMVC, etc).

Okay, so now you have your containers that were created from the images. It gets a little more complicated when you are working with bigger applications that have 10 to 10,000 containers that should run, and where some containers need three copies running simultaneously... that is where an orchestrator comes in...

Kubernetes

Kubernetes is the most popular orchestrator (originally developed by Google). Kubernetes (k8s) is in charge of orchestrating all of your containers. For example, if you indicate that the WebMVC should have 3 containers running, Kubernetes makes sure that there are always 3 running. If one of them fails for some reason, a new one is started. Kubernetes also makes sure that you don't have to worry about the container IP addresses when communicating with one of them. Because there are 3 containers, there are also 3 different IP addresses. And because one of these containers can fail and a new one has to be started, the new container will have a new IP address. So you can see how things can quickly get insane... however, Kubernetes also handles the networking. It makes sure that you can communicate with ONE IP or DNS name and reach one of the 3 containers (preferably not one that is busy failing) via some scheme such as round-robin load balancing.
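To make this concrete, here is a minimal sketch of a Kubernetes Deployment that asks for 3 WebMVC containers. The resource name and image tag are made up for illustration:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 copies of WebMVC alive,
# replacing any pod that fails. Names and the image tag are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webmvc
spec:
  replicas: 3                # always keep 3 pods running
  selector:
    matchLabels:
      app: webmvc
  template:
    metadata:
      labels:
        app: webmvc
    spec:
      containers:
        - name: webmvc
          image: signifyhr/webmvc:9.0   # hypothetical image name
          ports:
            - containerPort: 80
```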

Pods

So Kubernetes works with resources, and everything is just seen as a resource. The smallest resource of an application is called a Pod. The pod can be seen as the computer that the application runs on, and the pod then runs a container. A pod can also run more than one container at a time. This practice is usually used for logging or proxying, where you run a logger application alongside your main application in the same pod (on the same "computer"). This logger application is called a sidecar. Usually we only run one container per pod, so if you want 3 instances of your application, Kubernetes will create three pods.
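As a hedged sketch, a pod with a main container plus a logging sidecar could look like this (the names are illustrative; fluent-bit is just one common log shipper):

```yaml
# Hypothetical Pod running a main container plus a logging sidecar.
# Both containers share the pod's network namespace, so they can
# talk to each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: webmvc-with-logger
spec:
  containers:
    - name: webmvc                 # the main application
      image: signifyhr/webmvc:9.0
    - name: log-shipper            # the sidecar: ships the app's logs elsewhere
      image: fluent/fluent-bit:2.2
```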

Containers within a pod can communicate with one another through localhost (they are on the same "computer"). Pods, however, can only communicate with one another through their cluster IPs (the IPs they have inside the cluster). So the entire Kubernetes cluster can be seen as one giant private network (like a company with several computers in it). However, as we said earlier, the IPs can get insane since they can change at any time and we don't always know what the new IPs will be. That's why Kubernetes provides DNS inside the cluster (usually through the CoreDNS add-on). Pods can thus communicate with one another through Services.
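A minimal sketch of such a Service, assuming the WebMVC pods are labelled app: webmvc as in the Deployment above: other pods can now call http://webmvc by name, and Kubernetes spreads the requests over whichever of the 3 pods are healthy.

```yaml
# Hypothetical Service: one stable name/IP in front of the 3 WebMVC pods.
# Other pods reach it via the cluster DNS (e.g. CoreDNS) as "webmvc".
apiVersion: v1
kind: Service
metadata:
  name: webmvc
spec:
  selector:
    app: webmvc         # matches the pods created by the Deployment above
  ports:
    - port: 80
      targetPort: 80
```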

PS: The WebMVC can essentially only serve one person at a time. It happens so quickly that it seems to serve everyone at once, but as usage grows you will realise this is not the case. So by running 3 containers, the load is split between them (load balancing).

Helm

Kubernetes uses configuration files (YAML files) to hold all sorts of settings that a container needs to function in a specific deployment. For example, the WebMVC app needs to know where the IdentityServer is located. You can place all these kinds of settings in the associated YAML file, but now suppose something changes, such as deploying on another premise where the IdentityServer URL is different and has to be a public URL (the user should also be able to reach it, not just the services inside the cluster). You now have to change the URL in each YAML file of all 500 services... sounds tedious, right?
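To make the pain concrete, here is a hypothetical excerpt from one service's container spec with the URL hardcoded; the environment variable name IdentityUrl is made up:

```yaml
# Hypothetical excerpt from a Deployment like the one above: the
# IdentityServer URL is hardcoded. Deploying to a premise with a
# different URL means editing every file that contains a line like this.
containers:
  - name: webmvc
    image: signifyhr/webmvc:9.0
    env:
      - name: IdentityUrl          # made-up setting name
        value: "https://www.identity.signifyhr.co.za"
```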

This is where Helm comes in... it basically allows you to use the YAML files as templates. So instead of hardcoding the URL as 'https://www.identity.signifyhr.co.za', you can write it as '{{ .Values.IdentityUrl }}'. You have essentially changed the value into a variable. All 500 YAML files can now reference this one variable, and you set its value in one location for the specific release.
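As a sketch, here is the same excerpt as a Helm template, together with the values file that sets the variable once per release (IdentityUrl remains a made-up name):

```yaml
# Hypothetical Helm template excerpt (e.g. templates/webmvc-deployment.yaml):
# the hardcoded URL is replaced by a template variable.
env:
  - name: IdentityUrl
    value: {{ .Values.IdentityUrl | quote }}
```

```yaml
# values.yaml: set the value once; every template that references
# .Values.IdentityUrl picks it up for this release.
IdentityUrl: "https://www.identity.signifyhr.co.za"
```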

Chart 

 

A Chart is Helm's packaging format: a bundle of the templated YAML files together with a values file holding the settings for a release. So now you need to deploy the thing, for example at ARM. Someone can simply fetch the right Chart for the version they want, tweak a few settings, and then just "install" the Chart (in Helm terminology you install the chart) on ARM's Kubernetes cluster.
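As a rough sketch, the metadata file at the root of such a chart could look like this (the chart name and version are made up):

```yaml
# Hypothetical Chart.yaml at the root of the chart directory, which
# also holds values.yaml and a templates/ folder with the YAML files.
apiVersion: v2
name: signifyhr
version: 9.0.0          # the chart version a premise picks
description: Templated Kubernetes manifests for the V9 services.
```

Installing it on a cluster would then be something like `helm install v9 ./signifyhr --set IdentityUrl=https://identity.arm.example` (all names here are illustrative), where --set overrides the default value for that specific premise.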