Certified Kubernetes Administrator Study Guide – Scheduling – Understand how resource limits can affect pod scheduling

  • Within the spec for a pod you can specify how much CPU and RAM each container should be allocated, known as its limit
  • The pod does not have a limit value of its own; its effective limit is the sum of the limits of all containers within the pod.
  • CPU resources are measured in CPU units; one CPU in Kubernetes is equivalent to:
    • 1 AWS vCPU
    • 1 GCP Core
    • 1 Azure vCore
    • 1 IBM vCPU
    • 1 hyperthread on a bare-metal Intel processor with Hyperthreading
  • An amount of CPU is an absolute quantity, not a share of the machine: 0.1 CPU units means the same on a single-core, dual-core or 48-core machine.
  • Memory resources are measured in bytes, and can be specified with the suffixes E, P, T, G, M or k for exabytes, petabytes and so on down to kilobytes
  • They can also be specified using the power-of-two equivalents Ei, Pi, Ti, Gi, Mi and Ki
  • Requests and limits are added to the pod spec using the resources keyword within each container's definition e.g.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
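As an illustration of the memory suffixes described above, the following values all specify roughly the same quantity of memory (123Mi is exactly 128974848 bytes, while 129M is the nearest round SI value):

```yaml
# Roughly equivalent memory quantities in different notations
memory: "128974848"   # plain bytes
memory: "129M"        # SI suffix (megabytes)
memory: "123Mi"       # power-of-two suffix (mebibytes)
```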
  • When a pod with a CPU limit is scheduled, the limit value is converted to its millicore value and multiplied by 100. The result is the total amount of CPU time, in microseconds, that the container can use every 100ms.
  • If a container uses more than its memory limit it might be terminated. If the pod's restart policy allows it, the kubelet will restart the container
  • If a container attempts to use more than its CPU limit it will not be terminated; its CPU time is throttled instead
  • If a container uses more ephemeral storage than its limit, the Pod will be evicted.
  • If the pod's total ephemeral storage usage exceeds the sum of the container limits in its spec (e.g. the limits sum to 5GB but usage across the pod is 6GB), the Pod will be evicted.
  • The pod status can be used to see the resource consumption of a pod
  • The scheduler has to find a node where the pod's resource requirements can be satisfied. If no such node can be found, the pod will remain in an unscheduled state until one becomes available.
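Ephemeral storage limits use the same resources block as CPU and memory. A minimal sketch of a container fragment that would trigger the eviction behaviour above if it wrote more than 2Gi:

```yaml
# Container-level fragment; the pod is evicted if usage exceeds the limit
resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "2Gi"
```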
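The millicore-to-quota conversion described above can be sketched as simple arithmetic; the 500m figure below is just an illustrative value, not taken from the example pod:

```shell
# A 500m CPU limit converted to its CFS quota (illustrative value)
limit_millicores=500
quota_us=$((limit_millicores * 100))  # microseconds of CPU time per 100ms period
echo "${quota_us}us of CPU time every 100ms"
```

So a 500m limit allows the container 50000us (50ms) of CPU time in every 100ms scheduling period.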

If the scheduling failure relates to resource usage, messages like PodExceedsFreeCPU or PodExceedsFreeMemory will be shown when using

kubectl describe pod <pod name> | grep -A 3 Events 


1. Check the capacity of the nodes using

kubectl describe nodes <node name>

Subtract the CPU/Memory figures under Allocated resources from the CPU/Memory Allocatable figures to give the amount of free resources a new pod can consume (capacity includes resources reserved for system daemons, which is why the Allocatable figure is lower)
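A sketch of that subtraction, using made-up figures in place of real kubectl describe nodes output:

```shell
# Hypothetical values copied from `kubectl describe nodes <node name>`
allocatable_cpu_m=1930   # "Allocatable" cpu, in millicores
allocated_cpu_m=1265     # "Allocated resources" cpu requests, in millicores
available_cpu_m=$((allocatable_cpu_m - allocated_cpu_m))
echo "A new pod can request up to ${available_cpu_m}m CPU on this node"
```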

Check the pod is not sized larger than any node in the cluster, e.g. requesting more CPU than a single node can provide

2. Terminate any unneeded pods to release resources

3. Add more nodes to the cluster to provide additional resources

4. If a pod is scheduled and then terminated, check whether the termination was due to resource constraints using

kubectl describe pod <pod name>  

5. Check the restart count figure to see how many times the container has been terminated and restarted by the kubelet

6. To fetch the status of previously terminated containers use

kubectl get pod -o go-template='{{range .status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' <pod name>

The command will return a LastState value with a reason. If the reason is OOMKilled, the container was terminated because it ran out of memory.
