Comments (4)
Hi @fkamaliada, you have RemovePodsViolatingTopologySpreadConstraint enabled under both Balance and Deschedule, but it is only a Balance plugin. Try removing it from the Deschedule section of your config to see if that fixes it.
from descheduler.
@fkamaliada it looks like you are trying to balance based on LowNodeUtilization? If so, see this line in the logs:
I0530 13:34:02.119990 1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"
This means there aren't any nodes that fall under the thresholds
for all 3 resource types (cpu, memory, pods). LowNodeUtilization will only evict pods from over-utilized nodes if there is a matching under-utilized node for the new pods to be scheduled onto. You can try adjusting your threshold settings to get the balance you want. Please see the LowNodeUtilization docs for more details about how this works.
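To illustrate the threshold semantics described above, here is a rough sketch of a v1alpha2 DeschedulerPolicy (the numbers are placeholders, not recommendations). A node counts as underutilized only when its usage is below the `thresholds` values for every resource, and as overutilized when it exceeds any `targetThresholds` value:

```yaml
apiVersion: "descheduler/v1alpha2"
kind: "DeschedulerPolicy"
profiles:
  - name: default
    pluginConfig:
      - name: "LowNodeUtilization"
        args:
          # underutilized: usage below ALL of these percentages
          thresholds:
            cpu: 20
            memory: 20
            pods: 20
          # overutilized: usage above ANY of these percentages
          targetThresholds:
            cpu: 50
            memory: 50
            pods: 50
    plugins:
      balance:
        enabled:
          - "LowNodeUtilization"
```

The gap between `thresholds` and `targetThresholds` controls how aggressive eviction is; note that the percentages are computed from pod resource requests against node allocatable capacity, not from live usage.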
Thanks again @damemi
You're right. My mistake was that I thought the pods number was an absolute count, but it is actually a percentage (current pods / the node's maximum pod capacity). Likewise, the CPU and memory figures are percentages of requested (reserved) resources, not of actual usage. So I was seeing very little real CPU usage (5.5%) while the descheduler reported about 60% CPU for a node, because that was the requested CPU. For example, a node whose pods request 1560m of roughly 1930m allocatable CPU shows as about 80%, even if live usage is far lower.
Now, I'll have to find the optimal values.
Thank you very much!
Thank you @damemi for your helpful reply. It seems that was indeed the problem for this specific issue. I also removed the RemovePodsViolatingNodeTaints plugin from the balance section, and the error is gone. My config is now:
plugins:
  balance:
    enabled:
      - RemoveDuplicates
      - RemovePodsViolatingTopologySpreadConstraint
      - LowNodeUtilization
  deschedule:
    enabled:
      - RemovePodsHavingTooManyRestarts
      - RemovePodsViolatingNodeTaints
      - RemovePodsViolatingInterPodAntiAffinity
Now the logs no longer show anything like that error:
I0530 13:34:02.118912 1 pod_antiaffinity.go:93] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.118994 1 profile.go:321] "Total number of pods evicted" extension point="Deschedule" evictedPods=0
I0530 13:34:02.119013 1 removeduplicates.go:107] "Processing node" node="ip-192-168-18-35.eu-west-1.compute.internal"
I0530 13:34:02.119150 1 removeduplicates.go:107] "Processing node" node="ip-192-168-72-233.eu-west-1.compute.internal"
I0530 13:34:02.119239 1 removeduplicates.go:107] "Processing node" node="ip-192-168-93-80.eu-west-1.compute.internal"
I0530 13:34:02.119317 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119335 1 topologyspreadconstraint.go:122] Processing namespaces for topology spread constraints
I0530 13:34:02.119525 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119658 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119700 1 topologyspreadconstraint.go:221] "Skipping topology constraint because it is already balanced" constraint={"MaxSkew":1,"TopologyKey":"topology.kubernetes.io/zone","Selector":[{}],"NodeAffinityPolicy":"Honor","NodeTaintsPolicy":"Ignore","PodNodeAffinity":{},"PodTolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]}
I0530 13:34:02.119753 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.119857 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-18-35.eu-west-1.compute.internal" usage={"cpu":"1560m","memory":"3086Mi","pods":"38"} usagePercentage={"cpu":80.83,"memory":43.61,"pods":34.55}
I0530 13:34:02.119920 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-72-233.eu-west-1.compute.internal" usage={"cpu":"1700m","memory":"2982Mi","pods":"30"} usagePercentage={"cpu":88.08,"memory":42.14,"pods":27.27}
I0530 13:34:02.119933 1 nodeutilization.go:207] "Node is overutilized" node="ip-192-168-93-80.eu-west-1.compute.internal" usage={"cpu":"1090m","memory":"1312Mi","pods":"15"} usagePercentage={"cpu":56.48,"memory":18.54,"pods":13.64}
I0530 13:34:02.119947 1 lownodeutilization.go:135] "Criteria for a node under utilization" CPU=10 Mem=20 Pods=10
I0530 13:34:02.119962 1 lownodeutilization.go:136] "Number of underutilized nodes" totalNumber=0
I0530 13:34:02.119972 1 lownodeutilization.go:149] "Criteria for a node above target utilization" CPU=12 Mem=60 Pods=15
I0530 13:34:02.119981 1 lownodeutilization.go:150] "Number of overutilized nodes" totalNumber=3
I0530 13:34:02.119990 1 lownodeutilization.go:153] "No node is underutilized, nothing to do here, you might tune your thresholds further"
I0530 13:34:02.120003 1 profile.go:349] "Total number of pods evicted" extension point="Balance" evictedPods=0
I0530 13:34:02.120016 1 descheduler.go:170] "Number of evicted pods" totalEvicted=0
I0530 13:34:02.120232 1 reflector.go:302] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120377 1 reflector.go:302] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120498 1 reflector.go:302] Stopping reflector *v1.PriorityClass (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120591 1 reflector.go:302] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160
I0530 13:34:02.120854 1 secure_serving.go:258] Stopped listening on [::]:10258
I0530 13:34:02.120907 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
But in the end it seems to do nothing about the balancing; the node behavior is still the same as before. If anyone has an idea of what is preventing the balancing, please let me know. I'll keep trying anyway.
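Reading the log above: the under-utilization criteria are CPU=10, Mem=20, Pods=10, but even the least-loaded node sits at about 56% CPU, so zero nodes qualify as underutilized and LowNodeUtilization has nowhere to move pods to. A pluginConfig sketch with threshold values picked purely to fit the logged percentages (not general recommendations) would give it one target node:

```yaml
- name: "LowNodeUtilization"
  args:
    thresholds:        # ip-192-168-93-80 (56.48 / 18.54 / 13.64) is below all three
      cpu: 60
      memory: 30
      pods: 20
    targetThresholds:  # the other two nodes exceed cpu (80.83, 88.08) -> eviction sources
      cpu: 75
      memory: 50
      pods: 30
```

Even with values like these, pods evicted from the two high-CPU nodes would still need to fit (and pass scheduling predicates) on the underutilized node for any rebalancing to actually happen.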