GKE cluster is not scaling down even after applying the optimize-utilization profile

I created a standard GKE cluster with one worker node and enabled cluster autoscaling using:

gcloud container clusters update cluster-latest --enable-autoscaling --node-pool=default-pool --min-nodes=1 --max-nodes=2 --location=us-central1-c

I also updated the cluster to use the optimize-utilization autoscaling profile:

gcloud container clusters update cluster-latest --autoscaling-profile optimize-utilization --location=us-central1-c
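For reference, the active profile can be confirmed with a describe command along these lines (assuming the profile is reported under autoscaling.autoscalingProfile in the describe output):

gcloud container clusters describe cluster-latest --location=us-central1-c --format="value(autoscaling.autoscalingProfile)"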

Then I deployed my Angular application with 1 replica. After I scaled the replicas to 7, a new worker node was created to handle the additional pods.
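The scaling was done with a kubectl command along these lines (the deployment name angular-app is illustrative):

kubectl scale deployment angular-app --replicas=7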

[Screenshot: 7 replicas and 2 nodes]

When I scaled the replicas back down to 1, I noticed that the autoscaler did not delete the extra worker node, even though one node is enough to handle a single replica.

[Screenshot: 1 pod, 2 nodes]

My question is: why is it not scaling down, and what is the solution to make it scale down?

I am attaching screenshots of the autoscaler logs:

[Screenshots: log-1, log-2]

I have been searching for a solution for the last few days; please help.

1 REPLY

The no.scale.down.node.pod.kube.system.unmovable event in your logs means that:

"Pod is blocking scale down because it's a non-DaemonSet, non-mirrored, Pod without a PodDisruptionBudget in the kube-system namespace."

In your case, scale-down is blocked because there is no PodDisruptionBudget for workloads running in the kube-system namespace, specifically for the kube-dns-autoscaler pod.
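You can confirm that nothing covers it by listing the existing PodDisruptionBudgets in that namespace:

kubectl get pdb --namespace=kube-system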

You need to set a PodDisruptionBudget to allow the cluster autoscaler to move Pods in the kube-system namespace.
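For example, a minimal sketch for the kube-dns-autoscaler pod (this assumes its pods carry the default GKE label k8s-app=kube-dns-autoscaler; verify with kubectl get pods -n kube-system --show-labels first):

kubectl create poddisruptionbudget kube-dns-autoscaler-pdb --namespace=kube-system --selector=k8s-app=kube-dns-autoscaler --max-unavailable=1

With this in place, the autoscaler is allowed to evict that pod and reschedule it on the remaining node, so the underutilized node can be removed.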

Check this documentation for reference on setting a PodDisruptionBudget.
