Default Kubernetes cluster templates in Eumetsat Elasticity Cloud
In this article we shall list Kubernetes cluster templates available on Eumetsat Elasticity and explain the differences among them.
What We Are Going To Cover
List available templates on your cloud
Explain the difference between calico and cilium network drivers
How to choose a proper template
Overview and benefits of localstorage templates
Example of creating a cluster using a localstorage template with HMD and HMAD flavors
Prerequisites
No. 1 Account
You need a Eumetsat Elasticity hosting account with access to the Horizon interface: https://horizon.cloudferro.com/auth/login/?next=/.
No. 2 Private and public keys
To create a cluster, you will need an available SSH key pair. If you do not have one already, follow this article to create it in the OpenStack dashboard: How to create key pair in OpenStack Dashboard on Eumetsat Elasticity.
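If you prefer the command line to the Horizon dashboard, a key pair can also be created with the OpenStack CLI. The snippet below is only a sketch: it assumes the CLI is installed, your OpenStack RC file has been sourced, and the key file path and the name mykeypair are placeholder examples.

```bash
# Generate a local SSH key pair if you do not have one yet
# (the file path is an example -- any location will do)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# Register the public key in OpenStack under an example name "mykeypair"
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykeypair

# Confirm that the key pair is now available to the project
openstack keypair list
```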
No. 3 Documentation for standard templates
Documentation for Kubernetes drivers is here.
Documentation for localstorage templates:
k8s-1.25.16-localstorage-v1.0.1
k8s-1.27.11-localstorage-v1.0.1
No. 4 How to create Kubernetes clusters
The general procedure is explained in How to Create a Kubernetes Cluster Using Eumetsat Elasticity OpenStack Magnum.
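As a quick reference, a cluster can also be created from the command line with the Magnum (coe) commands of the OpenStack CLI. The sketch below is illustrative only: the cluster name, template, key pair and node counts are example values, and the template must exist on the cloud you are using.

```bash
# Create a small example cluster from a standard template
# (all names and counts below are placeholders -- adjust to your project)
openstack coe cluster create mycluster \
  --cluster-template k8s-1.25.16-v1.1.1 \
  --keypair mykeypair \
  --master-count 1 \
  --node-count 2

# Watch the status until it reaches CREATE_COMPLETE
openstack coe cluster list

# Generate a kubeconfig file for kubectl access to the new cluster
openstack coe cluster config mycluster --dir ~/mycluster-config
```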
Templates available on your cloud
The exact number of available default Kubernetes cluster templates depends on the cloud you choose to work with.
- WAW3-1
These are the default Kubernetes cluster templates on the WAW3-1 cloud; you can also list them from the command line, as shown below.
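If the OpenStack CLI is configured for your project, the current list of templates can be obtained directly; the command below assumes the CLI is installed and your RC file has been sourced.

```bash
# List all Kubernetes cluster templates visible to your project
openstack coe cluster template list
```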
How to choose a proper template
Standard templates
Standard templates are general-purpose and you can use them for any type of Kubernetes cluster. Each will produce a working Kubernetes cluster on Eumetsat Elasticity OpenStack Magnum hosting. The default network driver is calico, so a template that does not specify a driver in its name, k8s-1.25.16-v1.1.1, is identical to the template that does specify calico. Both are placed in the left column of the following table:
| calico | cilium |
|---|---|
| k8s-1.25.16-v1.1.1 | k8s-1.25.16-cilium-v1.1.1 |
| k8s-1.25.16-calico-v1.1.1 | |
If the application is not particularly demanding, a standard template should be sufficient.
You can also dig deeper and choose the template according to the network plugin used.
Network plugins for Kubernetes clusters
Kubernetes cluster templates on the Eumetsat Elasticity cloud use either the calico or the cilium plugin for controlling network traffic. Both are CNI compliant. Calico is the default plugin, meaning that if the template name does not specify a plugin, the calico driver is used. If the template name specifies cilium, then, of course, the cilium driver is used.
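You can check which plugin a template uses without creating a cluster, because the driver is recorded in the template itself. In the sketch below, the template name is only an example.

```bash
# The network_driver field reports calico or cilium for a given template
openstack coe cluster template show k8s-1.25.16-cilium-v1.1.1 -c network_driver
```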
Calico (the default)
Calico uses the BGP protocol to route network packets towards the IP addresses of the pods. Calico can be faster than its competitors, but its most remarkable feature is support for network policies. With those, you can define which pods can send and receive traffic and also manage the security of the network.
Calico can apply policies to multiple types of endpoints such as pods, virtual machines and host interfaces. It also supports cryptographic identity. Calico policies can be used on their own or together with the Kubernetes network policies.
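To illustrate what such a policy looks like in practice, the sketch below applies a plain Kubernetes NetworkPolicy that Calico will enforce: only pods labelled role: frontend may reach pods labelled app: backend on TCP port 8080. The namespace, labels and port are hypothetical examples.

```bash
# Apply an example Kubernetes NetworkPolicy enforced by Calico
# (namespace, labels and port are placeholders for illustration)
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF
```

With no policy present, all pods in the namespace accept traffic from anywhere; once this policy is applied, Calico drops any connection to the backend pods that does not come from a frontend pod.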
Cilium
Cilium draws its power from a technology called eBPF. The Linux kernel exposes programmable hooks into its network stack; eBPF uses those hooks to reprogram the kernel's runtime behaviour without any loss of speed or safety. There is also no need to recompile the Linux kernel for it to become aware of events in Kubernetes clusters. In essence, eBPF enables Linux to watch over Kubernetes and react appropriately.
With Cilium, the relationships amongst various cluster parts are as follows:
- pods in the cluster (as well as the Cilium driver itself) use eBPF instead of using the Linux kernel directly,
- kubelet uses the Cilium driver through its CNI compliance, and
- the Cilium driver implements network policies, services and load balancing, flow and policy logging, as well as computing various metrics.
Using Cilium especially makes sense if you require fine-grained security controls or need to reduce latency in large Kubernetes clusters.
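To confirm that a cluster created from a cilium template is actually running the Cilium agent, you can inspect the kube-system namespace. The commands below are a sketch: the label k8s-app=cilium and the DaemonSet name cilium are the conventional ones used by Cilium, but they may differ between deployments.

```bash
# List the Cilium agent pods (one per node, deployed as a DaemonSet)
kubectl -n kube-system get pods -l k8s-app=cilium

# Ask one of the agents for a brief health summary
kubectl -n kube-system exec ds/cilium -- cilium status --brief
```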