Limiting overall memory usage and/or number of concurrent jobs (OD-1056)
Released
Jerome St-Louis opened 1 year ago

Despite setting "Server Job Executor Memory Quota" to 4000 MB and "CPU Intensive Task Concurrency" to 2, OneDev ends up using more than 6 GB of memory and brings the system down. I believe this is caused by the Docker job executors running g++.

Is there a better way to ensure that all job executors combined cannot use more memory than is available, and/or to limit the number of builds that can run simultaneously? Ideally, no more than 2 jobs would run at the same time, with subsequent ones queued until the others finish.

Thank you!

EDIT: Could this be because the memory requirements declared in each buildspec are lower than the actual usage? (Is that what the Server Job Executor Memory Quota calculation relies on?)

Robin Shen commented 1 year ago

You will need to set the memory requirement of your memory-consuming jobs to reflect peak memory usage, so that OneDev can queue some of them when resources run short.
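
For reference, this maps to the cpuRequirement/memoryRequirement fields on each job in .onedev-buildspec.yml. A minimal sketch, assuming a recent buildspec version (the exact step syntax varies across versions, and the image, commands, and values here are illustrative assumptions):

    # Sketch: declare peak usage per job so OneDev can queue jobs when the
    # executor's quota would otherwise be exceeded. Values are assumptions.
    version: 23
    jobs:
    - name: ci-build
      steps:
      - !CommandStep
        name: compile
        image: gcc:12
        interpreter: !DefaultInterpreter
          commands:
            - g++ -O2 -o app main.cpp
      # With "Server Job Executor Memory Quota" = 4000, at most two jobs
      # declaring memoryRequirement: 2000 run concurrently; others queue.
      cpuRequirement: 1000
      memoryRequirement: 2000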

Jerome St-Louis commented 1 year ago

Thank you @robin. This is the memoryRequirement in the .onedev-buildspec.yaml, right? One of my concerns is that a developer may not know ahead of time how much memory a build will end up using, and/or might forget to set it on a particular project or branch, easily bringing the whole system down (a scenario requiring the reset button and leading to #1055). Perhaps setting Docker resource constraints is one way to address this?

I was wondering if it would make sense to set a global minimum memoryRequirement that OneDev could assume; e.g., setting it to 2000 with a 4000 MB total quota would force a maximum of 2 builds running at the same time, regardless of what developers put in the .onedev-buildspec.yaml. The idea is that the admin is responsible for the stability of the system and is therefore more trustworthy than the developers for defining resource requirements.

This could be used together with the Docker constraints. Without it, however, even if I set my Docker constraints to 2 GB, OneDev will probably still run more than 2 Docker executors concurrently, and I will still run into the same issues (if the memoryRequirement values in the .onedev-buildspec.yaml are not accurate).
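
(For context, the Docker-side constraints mentioned above would be the standard docker run resource flags, which hard-cap a single container at the kernel level regardless of what the buildspec declares; the image, command, and limits below are illustrative:)

    # Cap one build container at 2 GB of RAM and 2 CPUs; if it exceeds the
    # memory limit it is OOM-killed instead of exhausting the host.
    docker run --rm --memory=2g --memory-swap=2g --cpus=2 \
      gcc:12 g++ -O2 -o app main.cpp

As the comment above notes, this limits each container individually but says nothing about how many containers run at once.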

Robin Shen commented 1 year ago

Jobs that consume considerable resources should specify meaningful CPU/memory requirements. For jobs you do not trust or do not have control over, use a dedicated job executor running on designated agents so they do not affect your important jobs.

Robin Shen commented 1 year ago

Thinking more about this: specifying CPU/memory limits and the number of concurrent jobs at the executor level seems like a more natural approach. Turning this into an improvement request.

Robin Shen changed fields 1 year ago
Type: Support Request → Improvement
Jerome St-Louis commented 1 year ago

Awesome, thank you :) Wishing you and yours all the best, and wishing OneDev all the success it deserves in 2023.

OneDev changed state to 'Closed' 1 year ago
State: Open → Closed
OneDev commented 1 year ago

State changed as code fixing the issue is committed

OneDev changed state to 'Released' 1 year ago
State: Closed → Released
OneDev commented 1 year ago

State changed as build #3255 is successful

Type: Improvement
Priority: Normal
Assignee: (none)
Issue Votes: 1
Watchers: 4
Reference: OD-1056