Here is the error I get when I run Nginx on k0s:
❯ kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
default       k0s-nginx-576c6b7b6-2kn87         0/1     Pending   0          8m53s
kube-system   coredns-85c69f454c-zwl4t          1/1     Running   0          9d
kube-system   konnectivity-agent-whswd          1/1     Running   0          9d
kube-system   kube-proxy-t998n                  1/1     Running   0          9d
kube-system   kube-router-pb2fd                 1/1     Running   0          9d
kube-system   metrics-server-5cd4986bbc-sjpz7   1/1     Running   0          9d
❯ kubectl describe pod k0s-nginx-576c6b7b6-2kn87
Name:             k0s-nginx-576c6b7b6-2kn87
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=nginx
                  pod-template-hash=576c6b7b6
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/k0s-nginx-576c6b7b6
Containers:
  nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t58ld (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-t58ld:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  9m10s  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  3m52s  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
When I played with k0s on a bare-metal server, I realized that the k0s control plane automatically applies the node-role.kubernetes.io/master:NoExecute taint to the controller node, so regular workloads never get scheduled there. Therefore I have to disable it.
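You can see the taint for yourself, and kubectl's trailing-dash syntax removes it outright (the node name below is a placeholder, as elsewhere in this post):

# List the taints k0s put on the controller node
kubectl describe node "xxx.xxx.com" | grep -i taints

# Remove the NoExecute taint directly (the trailing "-" deletes a taint)
kubectl taint nodes "xxx.xxx.com" node-role.kubernetes.io/master:NoExecute-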
Here is the simple Nginx YAML I used for testing:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k0s-nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
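As an alternative to modifying the node at all, the Deployment itself could tolerate the taint. This is only a sketch of an addition (it is not part of the manifest above); it would go under spec.template.spec:

# Sketch: let the pod run despite the k0s master taint
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoExecute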
Solution 1 – Taint and drain the master node
Manually taint your master node so that no new pods are scheduled on it:
kubectl taint nodes "xxx.xxx.com" node-role.kubernetes.io/master:NoSchedule
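To confirm the taint landed, you can list the node's taints with a jsonpath query (same placeholder node name):

kubectl get node "xxx.xxx.com" -o jsonpath='{range .spec.taints[*]}{.key}:{.effect}{"\n"}{end}'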
After you apply the taint, drain the master node so the pods scheduled on it shift to the worker node.
kubectl drain "xxx.xxx.com" --ignore-daemonsets --delete-emptydir-data
node/xxx.xxx.com already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/konnectivity-agent-84mfn, kube-system/kube-proxy-fzztz, kube-system/kube-router-8mczs
evicting pod kube-system/metrics-server-5cd4986bbc-sv4bj
evicting pod kube-system/coredns-85c69f454c-zk9x5
evicting pod default/k0s-nginx-576c6b7b6-wmms5
pod/k0s-nginx-576c6b7b6-wmms5 evicted
pod/metrics-server-5cd4986bbc-sv4bj evicted
pod/coredns-85c69f454c-zk9x5 evicted
node/xxx.xxx.com drained
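Note that draining leaves the node cordoned (unschedulable). If you later want the master to accept pods again, uncordon it:

kubectl uncordon "xxx.xxx.com"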
Solution 2 – Reset the control plane
# Stop k0s
k0s stop
# Reset k0s (wipes the cluster state on this node; k0s recommends a reboot afterwards)
k0s reset
# Reinstall the controller with an embedded worker and without the default taints
k0s install controller --enable-worker --no-taints -c k0s.yaml
# Start the service again
k0s start
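Once the node registers again, the taints should be gone and the Pending pod should finally be scheduled:

# Should print: Taints: <none>
kubectl describe node "xxx.xxx.com" | grep -i taints
kubectl get pods -A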
If you don't have a k0s.yaml, you have to export the default config first:
# Install kubectl and the k0s binary
apt-get install -y kubectl
curl -sSLf https://get.k0s.sh | sudo sh
# Export the default configuration, then stop and reset before reinstalling
k0s start
k0s default-config > k0s.yaml
k0s stop && k0s reset
# Print the control plane credentials (the admin kubeconfig)
cat /var/lib/k0s/pki/admin.conf
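To talk to the cluster with that credential directly, point kubectl at the file:

export KUBECONFIG=/var/lib/k0s/pki/admin.conf
kubectl get nodes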