#1847  Kubernetes Executor requests and limits
Closed
andrzej opened 2 weeks ago

We had some issues with memory consumed by our build jobs, which forced me to verify the memory limits applied to pods/containers created by OneDev. If I understand correctly, a single job consists of multiple containers within a single pod: each job step is a separate container, and the log-gathering sidecar is another container. After reviewing the container definitions, it looks like the resources (requests for memory and CPU) set in the executor definition are applied only to the sidecar container, which is only responsible for gathering logs. The other containers have a resources: {} entry, meaning no requests are applied to them.

Is that by design, meaning we should think about other ways to limit the resources used by a job? Or should those limits be applied by OneDev to all containers (all steps)? I also wonder whether applying requests and limits to all steps would cause containers to fail to start because the sum of the memory and CPU requests exceeds what the cluster has available (even though in this case the containers are expected to run one after the other).

Actually, limits are applied to each container (except the sidecar), but requests are applied only to the sidecar. Is that by design? Without requests assigned properly, a job/container requiring a large amount of memory or CPU could be scheduled onto a node without enough free memory. Am I correct?
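For reference, here is a hedged sketch of the pod shape I am describing (container names, images, and values are illustrative, not OneDev's actual manifest):

```yaml
# Illustrative only: names, images, and values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: onedev-build-job
spec:
  containers:
  - name: step-0                   # job step: limit set, no request
    image: alpine:3
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
  - name: step-1                   # another step, same pattern
    image: maven:3
    resources:
      limits:
        cpu: "2"
        memory: 4Gi
  - name: sidecar                  # log-gathering sidecar: carries the executor's requests
    image: example/onedev-sidecar  # hypothetical image
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
```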

Also, putting the requests only on the sidecar would reserve that CPU/memory for the sidecar rather than for the other containers... if so, what would be the point of setting requests on the executor at all, if they are applied only to the sidecar?

Maybe I'm missing something, but I would like to understand how these settings are applied so I can properly configure and manage OneDev on my cluster.

andrzej changed title 2 weeks ago: 'Kubernetes Executor limits' → 'Kubernetes Executor requests and limits'
andrzej commented 2 weeks ago

This is related to #1848

Robin Shen commented 2 weeks ago

This is by design. I think setting the request on the sidecar container is appropriate, as the k8s scheduler cares about the total resource requests of all containers in the pod.

If we set the resource request on every container, the pod would actually claim to need N * resource_requests (where N is the number of containers in the pod), which breaks the assumption that the request describes the whole job.
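To make the arithmetic concrete (hypothetical numbers): with an executor request of 4Gi memory and a job of three steps plus the sidecar, the pod-level request the scheduler sums up would be:

```
request on sidecar only:     4Gi              (matches the executor setting)
request on every container:  4 * 4Gi = 16Gi   (over-claims by a factor of 4)
```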

Please correct me if I am wrong on this.

Robin Shen commented 2 weeks ago

Also, all step containers have to be added to the pod at the start of the job due to a pod limitation (containers cannot be added to a pod after it is created). Only the container associated with the currently running step actually does work; the other step containers simply sleep with minimal resource consumption. This is another reason I put the resource requests on the sidecar, to represent the whole job's resource requirements.
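A minimal sketch of that pattern, assuming a marker-file handshake over a shared volume (hypothetical; OneDev's actual entrypoint and signaling mechanism may differ):

```yaml
# Hypothetical "sleep until it's your turn" step container; not OneDev's real entrypoint.
containers:
- name: step-1
  image: alpine:3
  # Idle cheaply until the previous step drops a marker file, then run this step's command.
  command: ["sh", "-c", "until [ -f /job-state/run-step-1 ]; do sleep 1; done; sh /job-state/step-1.sh"]
  volumeMounts:
  - name: job-state              # shared volume used to hand off between steps
    mountPath: /job-state
```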

andrzej commented 2 weeks ago

Thank you. This explanation matches how I expected those limits to be applied to make them work with Kubernetes, but I wanted to confirm it.

andrzej changed state to 'Closed' 2 weeks ago: Open → Closed
Type: Question
Priority: Normal
Reference: onedev/server#1847