We are going to install Kubernetes on bare metal servers (i.e. dedicated servers) running Debian-based distributions.
This tutorial was tested with Kubernetes 1.24.
We will use "kubeadm", the official Kubernetes cluster initialiser.
In this tutorial, our Kubernetes cluster has 2 bare metal servers: one server hosting a control plane and one server hosting a worker.
Prepare the bare metal servers
These instructions are run on both servers.
Note about network interfaces
By default, Kubernetes components look up the default gateway and retrieve the details of the network interface associated with it. They use that network interface to configure their IP addresses. It is possible to change this behaviour and use a different network interface.
Kubernetes doc about network interfaces:
"If you have more than one network adapter, and your Kubernetes components are not reachable on the default route, we recommend you add IP route(s) so Kubernetes cluster addresses go via the appropriate adapter."
If the servers have multiple network interfaces, check that the default gateway is associated with the network interface that you expect, by running:
ip route
If the displayed default gateway is not linked to the network interface you expected, add a default route through the expected interface, as follows:
ip route add default via [gateway IP address] dev [network interface's name]
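For example, assuming the gateway's IP address is 192.168.0.1 and the expected network interface is eno2 (hypothetical values, replace them with your own):
ip route add default via 192.168.0.1 dev eno2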
Set a domain name for the control-plane's cluster endpoint
In this installation, we start with a single control plane. In the future, however, we may want to add more control planes to improve the resiliency of our Kubernetes cluster: if the server hosting one control plane fails, the servers hosting the other control planes keep the cluster available. To prepare for that eventuality, we create a domain name (e.g. "cluster-endpoint") pointing to the IP address of our only control plane. If we later add more control planes, that domain name can be re-pointed to the external load balancer placed in front of the control plane instances, so that traffic to the control planes is load balanced.
Create a domain name "cluster-endpoint":
cat <<EOF | sudo tee -a /etc/hosts
[Control plane's IP address] cluster-endpoint
EOF
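This appends a line mapping the control plane's IP address to the name "cluster-endpoint". For example, assuming the control plane's IP address is 203.0.113.10 (hypothetical value), /etc/hosts would end with:
203.0.113.10 cluster-endpoint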
Test:
ping cluster-endpoint
Kubernetes pre-installation checks
Disable Swap
Run this command:
sudo swapoff -a
Then, in the file /etc/fstab, comment out the swap row(s) by adding a # at the beginning of the line:
sudo vi /etc/fstab
For example:
# UUID=c2f8e4ca-d822-4ed9-ae1b-123346d9c3a0 none swap sw 0 0
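To verify that swap is disabled, you can run the following commands: "swapon --show" should print nothing, and "free -h" should report 0B of swap.
swapon --show
free -h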
Letting iptables see bridged traffic
Kubernetes requires the Linux node's iptables to correctly see bridged traffic.
Let's make sure the module "br_netfilter" is always available on reboot:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
Enable that module without having to reboot:
sudo modprobe br_netfilter
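You can verify that the module is now loaded:
lsmod | grep br_netfilter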
Let's make sure the required system configs are always available on reboot:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
Enable those system configs without having to reboot:
sudo sysctl --system
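To confirm that the settings are applied, print the current values; both should be equal to 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables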
Set-up Docker certs and repo
Install the required packages:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
Add Docker's official GPG key
For Ubuntu:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
For Debian:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
Add Docker apt repository
For Ubuntu:
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
For Debian:
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Install ContainerD
ContainerD is a container runtime. Docker itself is built on top of ContainerD, so we no longer need to install the Docker engine. This is in line with the Kubernetes community's decision to move away from the Docker runtime.
Let's make sure the Linux modules required by ContainerD are always available on reboot:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Enable those modules without having to reboot:
sudo modprobe overlay
sudo modprobe br_netfilter
Let's make sure the required system configs are always available on reboot:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Enable those system configs without having to reboot:
sudo sysctl --system
Install packages:
sudo apt-get update
sudo apt-get install -y containerd.io
Set default containerD configuration:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
Configure ContainerD to use the systemd cgroup driver by setting the option "SystemdCgroup" to true in config.toml:
sudo vi /etc/containerd/config.toml
In the file above, add "SystemdCgroup = true" in the following location:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
Restart ContainerD:
sudo systemctl restart containerd
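To check that ContainerD restarted correctly and that the cgroup driver option was kept, you can run:
sudo systemctl status containerd
grep SystemdCgroup /etc/containerd/config.toml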
Install Kubernetes components
Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
Add Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install kubeadm, kubelet and kubectl on all machines:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
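You can confirm the installed versions with:
kubeadm version
kubelet --version
kubectl version --client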
The kubelet is now restarting every few seconds, as it waits in a crash loop for kubeadm to tell it what to do.
Check connectivity to the k8s.gcr.io container image registry by running:
sudo kubeadm config images pull
Initialise the Control-plane and create a cluster
These instructions are run on the server which will host the Control-plane.
Creating a cluster with kubeadm
Generate a token and save it:
kubeadm token generate
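The generated token has the form [a-z0-9]{6}.[a-z0-9]{16}, for example (illustrative value only):
abcdef.0123456789abcdef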
The token value is going to be used inside “init-conf.yaml” below.
Create a config file with the default init configurations:
kubeadm config print init-defaults > init-conf.yaml
Open it:
vi init-conf.yaml
Edit it to match the following configuration, replacing the values in brackets with your own:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: [the generated token]
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: [server's IP] # the IP address of the control plane server
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: [server's hostname]
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kubernetesVersion: [latest version]
controlPlaneEndpoint: cluster-endpoint
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
All available fields for the kinds "InitConfiguration" and "ClusterConfiguration" used in the YAML above are documented here: InitConfiguration, ClusterConfiguration
Optional: if you would like to pass runtime arguments to "kubelet", you could add the field "kubeletExtraArgs" under the field "nodeRegistration" in the YAML above. For example, to specify the IP address that the kubelet should advertise for this node:
nodeRegistration:
  ...
  kubeletExtraArgs:
    node-ip: [server's IP, same as advertiseAddress]
  ...
More details about the field "nodeRegistration" are available here: NodeRegistrationOptions
Initialise Kubernetes Control-plane:
sudo kubeadm init --config=init-conf.yaml
The command above is going to initialise a new Kubernetes cluster with a control plane. Once it has completed, it will display the credentials needed to join additional nodes to the cluster. Please save the values of "token" and "discovery-token-ca-cert-hash".
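If you lose these values, they can be regenerated later from the control plane (note that the token in init-conf.yaml has a ttl of 24 hours):
kubeadm token create --print-join-command
This prints a complete "kubeadm join" command containing a fresh token and the discovery-token-ca-cert-hash.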
On the server, copy the Kubernetes configuration file into your user's home folder in order to access the cluster with the "kubectl" command:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
(optional) For root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bashrc
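At this point, kubectl should be able to reach the cluster. The control plane node will typically report a "NotReady" status until a pod network add-on is installed (next step):
kubectl get nodes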
Installing a pod network add-on
Download Calico's config YAML and set the value for "podSubnet":
wget https://docs.projectcalico.org/manifests/calico.yaml
Open the downloaded file:
vi calico.yaml
Find the variable "CALICO_IPV4POOL_CIDR", uncomment it and set its value to 10.244.0.0/16, as below:
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
Install Calico:
kubectl apply -f calico.yaml
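You can watch the Calico pods start in the kube-system namespace; once they are running, the node should move to the "Ready" status:
kubectl get pods -n kube-system -w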
Check the cluster
From that point, we have a Kubernetes cluster running with one Control-plane node. Let's check the cluster is healthy. Run the following commands on the control plane server.
From the control plane, list all nodes:
kubectl get nodes -o wide
From the control plane, list all pods:
kubectl get pods --all-namespaces -o wide
From the control plane, view the services running in Kubernetes:
kubectl cluster-info
Once the Kubernetes cluster is ready, we can add more worker nodes by following the instructions in the next section.
Join the cluster as a worker node
These instructions are run on the server which will host the worker.
A worker node is where the containers of your custom services will run. By default, those containers do not run on control plane nodes.
Create a config file with the default join configurations:
kubeadm config print join-defaults > join-conf.yaml
Open it:
vi join-conf.yaml
Edit it to match the following configuration, replacing the values in brackets with your own:
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: cluster-endpoint:6443
    token: [token]
    caCertHashes:
    - [discovery-token-ca-cert-hash]
  timeout: 5m0s
  tlsBootstrapToken: [token]
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: [worker server's hostname]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
All available fields for the kind "JoinConfiguration" used in the YAML above are documented here: JoinConfiguration
Optional: as for the control-plane, if you would like to pass runtime arguments to the worker's "kubelet", you could add the field "kubeletExtraArgs" under the field "nodeRegistration" in the YAML above. For example, to specify the IP address that the kubelet should advertise for this node:
nodeRegistration:
  ...
  kubeletExtraArgs:
    node-ip: [worker server's IP]
  ...
Join the cluster:
sudo kubeadm join --config=join-conf.yaml
From the control plane, check that the worker node is ready:
kubectl get nodes -o wide
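Optional: the ROLES column for a worker node is empty by default; you can set a role label for readability (a purely cosmetic step, using a hypothetical label value):
kubectl label node [worker server's hostname] node-role.kubernetes.io/worker=worker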
If we need to add additional worker nodes, repeat the instructions above on different bare metal server(s).
Join the cluster as an additional Control-plane node
These instructions are run on the server which will host the additional Control-plane.
The instructions to add a Control-plane to the cluster are similar to the ones for adding a worker, with a few differences.
We are going to use the same configuration as when adding a worker node, with an additional "controlPlane" field, as shown below:
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
controlPlane:
  localAPIEndpoint:
    advertiseAddress: [server's IP] # the IP address of the additional control plane server
discovery:
  bootstrapToken:
    apiServerEndpoint: cluster-endpoint:6443
    token: [token]
    caCertHashes:
    - [discovery-token-ca-cert-hash]
  timeout: 5m0s
  tlsBootstrapToken: [token]
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: [server's hostname] # the hostname of the additional control plane server
  kubeletExtraArgs:
    node-ip: [server's IP]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
Join the cluster:
sudo kubeadm join --config=join-conf.yaml
From the main control plane, check that the new control plane node is ready:
kubectl get nodes -o wide
Next steps
Keep securing
Follow the next step in the Guide to create a secure cluster of bare-metal servers and install Kubernetes.
Controlling the cluster from a local computer
You can control a Kubernetes cluster from your local computer as if you were located on the main control-plane server.
From the local computer, copy admin.conf in your home folder:
scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
Now, we can run "kubectl" commands from the local computer:
kubectl --kubeconfig ./admin.conf get nodes
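To avoid passing --kubeconfig on every command, you can point the KUBECONFIG environment variable at the copied file (assuming admin.conf is in the current directory):
export KUBECONFIG=$PWD/admin.conf
kubectl get nodes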
Note: the admin.conf file gives the user superuser privileges over the cluster. This file should be used sparingly.
Additional readings
List of alternative Kubernetes installers (see section "Partners and Distributions")
Official Kubernetes installation instructions
Kubernetes official doc: Install "kubeadm"
Kubernetes official doc: Creating a cluster with "kubeadm"
Kubernetes official doc: Install ContainerD
Docker official doc: Set-up Docker certs and repo for Ubuntu
Docker official doc: Set-up Docker certs and repo for Debian