Kubernetes Error: unbound immediate PersistentVolumeClaims

Error

Events:
  Type     Reason             Age                        From                Message
  ----     ------             ----                       ----                -------
  Warning  FailedScheduling   50m (x270 over 3d20h)      default-scheduler   0/8 nodes are available: 8 pod has unbound immediate PersistentVolumeClaims.
  Normal   NotTriggerScaleUp  3m59s (x33151 over 3d21h)  cluster-autoscaler  pod didn't trigger scale-up: 1 max node group size reached
  Warning  FailedScheduling   113s (x5361 over 3d21h)    default-scheduler   0/7 nodes are available: 7 pod has unbound immediate PersistentVolumeClaims.

Solution

The events you have shared show that the scheduler was unable to place the pod on any of the available nodes. The message of the FailedScheduling event, "0/8 nodes are available: 8 pod has unbound immediate PersistentVolumeClaims", means that all 8 nodes (later 7) rejected the pod for the same reason: the pod references PersistentVolumeClaims (PVCs) that have not yet been bound to PersistentVolumes.
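
To confirm which claims are still unbound, you can list the PVCs in the affected namespace and inspect any that are stuck in Pending; the namespace and claim name below are placeholders:
kubectl get pvc -n <namespace>
kubectl describe pvc <pvc_name> -n <namespace>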

The NotTriggerScaleUp event indicates that the cluster-autoscaler did not scale up the cluster even though there were unschedulable pods, because the maximum node group size has already been reached.
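
If your cluster-autoscaler writes its status ConfigMap (it does so by default under the name cluster-autoscaler-status in kube-system, though the name is configurable), you can inspect it to see the registered node groups and the sizes the autoscaler believes they have reached:
kubectl -n kube-system describe configmap cluster-autoscaler-status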

There are a few possible reasons why there are not enough available nodes:

  • The cluster may not have enough nodes. You can check the number of nodes in the cluster by running the following command:
kubectl get nodes

  • The nodes may not have enough CPU or memory to run the pods. You can check each node's allocatable resources and the requests already placed on it by running the following command:
kubectl describe nodes

  • The nodes may be tainted and the pods may not have the tolerations required to run on those nodes. You can check the taints on each node by running the following command:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'

  • The pods may have nodeSelectors that the nodes do not match. You can check the nodeSelectors for each pod by running the following command (a per-pod check is sketched after this list):
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeSelector}{"\n"}{end}'
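
As a sketch of the per-pod check mentioned above, you can print a single pod's nodeSelector and tolerations together and compare them with the node taints; <pod_name> and <namespace> are placeholders:
kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.spec.nodeSelector}{"\n"}{.spec.tolerations}{"\n"}'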

To troubleshoot the issue, first check that the PVCs referenced by the pods exist and have been bound; an unbound claim will keep the pod Pending regardless of node capacity. You can then check that the nodes have enough resources to run the pods, that the pods have the tolerations required by any node taints, and that the nodes match the pods' nodeSelectors.

If you are still having trouble scheduling the pods, you can also try increasing the maximum node group size for the cluster-autoscaler.
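
How you raise that limit depends on how the autoscaler is deployed. On managed platforms you would increase the node pool's maximum through the provider's console or CLI; if the autoscaler runs as an in-cluster Deployment with explicit --nodes flags, a sketch (assuming the common kube-system/cluster-autoscaler deployment name and a made-up node group) looks like this:
kubectl -n kube-system edit deployment cluster-autoscaler
# in the container args, raise the maximum, e.g. --nodes=1:10:<node_group_name>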

Here are some additional things you can do to troubleshoot the issue:

  • Look at the status and events of the pods that are failing to be scheduled (a pod that has never been scheduled has no container logs yet). This may give you more information about the reason for the failure.
  • Use the kubectl describe pod <pod_name> command to get more information about a specific pod.
  • Use the kubectl get events command to see a list of all events in the cluster, including events related to scheduling failures; an example filter is shown below.
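
For example, you can narrow the event list to scheduling failures across all namespaces with a field selector:
kubectl get events --all-namespaces --field-selector reason=FailedScheduling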