-
You will need to set the memory requirement of your memory-consuming jobs to reflect peak memory usage, so that OneDev can queue some of them when resources are in short supply.
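For reference, a rough sketch of where that lives in .onedev-buildspec.yaml; the surrounding schema varies between OneDev versions and the step layout here is only illustrative, but the idea is to declare each job's peak usage:

    version: 23  # buildspec schema version; use whatever your server generates
    jobs:
    - name: compile
      steps:
      - !CommandStep
        name: build
        image: gcc:12
        commands:
        - g++ -O2 -o app main.cpp
      # Declare the job's peak usage (illustrative values) so OneDev's
      # quota-based scheduling can queue jobs instead of over-committing.
      cpuRequirement: 500
      memoryRequirement: 3000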
-
Thank you @robin . This is the memoryRequirement in the .onedev-buildspec.yaml, right? One of my concerns is that a developer may not know ahead of time how much memory the build will end up using, and/or might forget to set it on a particular project or branch, easily bringing the whole system down (the scenario that requires the reset button and leads to #1055). Perhaps setting the docker resource constraints is one way to address it?
I was wondering if it would make sense to set a global minimum memoryRequirement that OneDev could assume: e.g., setting it to 2000 with a 4000 total memory quota would force a maximum of 2 builds running at the same time, regardless of what developers put in the .onedev-buildspec.yaml. The idea is that the admin is responsible for the stability of the system and is therefore more trustworthy than the developers when it comes to defining resource requirements.
This could be used together with the docker constraints. Without this, however, if I set my docker constraints to 2 GB, OneDev will probably still run more than 2 docker executors concurrently and I will still run into the same issues (if the memoryRequirement values in the .onedev-buildspec.yaml are not accurate).
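For context, a bare Docker memory/cpu constraint of the kind referred to above looks like this in a compose file. This is a generic Docker sketch (the service name and image are made up), not an OneDev setting, and whether the docker executor applies comparable limits to the build containers it launches depends on the OneDev version:

    services:
      builder:          # hypothetical service running build workloads
        image: gcc:12
        mem_limit: 2g   # hard memory cap enforced via cgroups
        cpus: 2         # at most two CPUs' worth of time

Note that such a cap only bounds each container individually; it does not stop many capped containers from being started at once, which is why the declared memoryRequirement (or an executor-level limit) still matters.
-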
Jobs that consume considerable resources should specify meaningful cpu/memory requirements. For jobs you do not trust or do not have control over, use a dedicated job executor running on designated agents so that they do not affect your important jobs.
-
Thinking more about this, specifying cpu/memory limits and the number of concurrent jobs at the executor level seems like a more natural approach. Turning this into an improvement request.
-
Type changed from 'Support Request' to 'Improvement'
-
Awesome, thank you :) Wishing you and yours all the best, and all the success OneDev deserves in 2023.
-
OneDev changed state from 'Open' to 'Closed' 3 years ago
-
OneDev changed state from 'Closed' to 'Released' 3 years ago
-
State changed as build #3255 is successful
Type: Improvement
Priority: Normal
Assignee:
-
Despite setting "Server Job Executor Memory Quota" to 4000 MB and "CPU Intensive Task Concurrency" to 2, OneDev ends up using more than 6 GB of memory and brings the system down. I believe this is caused by the docker job executors running g++.
Is there a better way to ensure that the job executors cannot end up using more memory than is available, and/or to limit the number of builds that can run simultaneously? Ideally, I would not want more than 2 jobs to run at the same time, with subsequent ones queued until the others finish.
Thank you!
EDIT: Could this be caused by the memory requirements in each buildspec being set lower than the actual requirements? (Is that what is relied upon to calculate usage against the Server Job Executor Memory Quota?)
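One way to find out what a compile job actually peaks at, so the declared memoryRequirement can be made accurate, is to wrap the build command with GNU time, which reports the maximum resident set size. The buildspec layout below is only illustrative:

    jobs:
    - name: measure-peak-memory
      steps:
      - !CommandStep
        name: build under GNU time
        image: gcc:12
        commands:
        # GNU time is often not preinstalled in Debian-based images
        - apt-get update && apt-get install -y time
        # -v prints "Maximum resident set size (kbytes)"; add headroom
        # on top of that figure when setting memoryRequirement.
        - /usr/bin/time -v g++ -O2 -o app main.cpp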