-
This seems too complicated. I am thinking of adding an environment variable so that users can pass in the OneDev installation directory on the host directly. Will that work for your case?
If that environment variable is not set, OneDev continues with the current logic to find the host installation directory, which works for most cases.
-
That would only work if you assume that the data directory of the docker daemon is the same on all hosts in a docker swarm cluster. However, that might not be the case. Different linux distributions might install docker into different locations, or a node might be intentionally configured differently by changing dockerd's configuration parameter `data-root`, see: https://docs.docker.com/engine/reference/commandline/dockerd/

Can you briefly describe how you use the OneDev host installation directory when using job executors, so I can better understand the situation?
-
When running a CI job using the server docker executor, OneDev creates a temp directory under its installation directory, for instance `/opt/onedev/temp/jobuuid`, clones the repository into this temp directory, and then mounts it into the job container as `/onedev-build` to serve as the job workspace. Since OneDev running in a container uses the option `-v /var/run/docker.sock:/var/run/docker.sock` (the DooD approach, to avoid DinD headaches), the volume mount `-v /opt/onedev/temp/jobuuid:/onedev-build` passed when creating the job container is actually handled by the docker daemon on the host running the OneDev container, and the mount will fail, as the path `/opt/onedev/temp/jobuuid` does not exist on the host. To solve the issue, OneDev has to find out the host path mounted as `/opt/onedev` in the OneDev container; from that host path, OneDev deduces the actual host path corresponding to `/opt/onedev/temp/jobuuid`, and then uses that path as the mount source when creating the job container.
-
Turns out that the command below can be used to determine the container id of OneDev:

```
docker ps -f volume=/opt/onedev
```

With the container id, it is easy to determine the host installation directory simply by inspecting the container, even if the volume is not a bind one. `docker inspect` will always output the source directory of the mount in the Mounts section, like below:

```
"Type": "volume",
"Name": "onedev",
"Source": "/var/lib/docker/volumes/onedev/_data",
"Destination": "/opt/onedev",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
```

Can you please verify if this works at your side?
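The path translation described in this thread can be sketched in a few lines: given the Mounts section from `docker inspect`, map an in-container path to its host-side source. This is an illustrative sketch, not OneDev's actual code; the sample JSON mirrors the inspect output shown above.

```python
import json

# Sample Mounts section as printed by `docker inspect` (matching the output above).
mounts_json = """
[{"Type": "volume", "Name": "onedev",
  "Source": "/var/lib/docker/volumes/onedev/_data",
  "Destination": "/opt/onedev",
  "Driver": "local", "Mode": "z", "RW": true, "Propagation": ""}]
"""

def to_host_path(mounts, container_path):
    """Map a path inside the container to the corresponding host path."""
    for m in mounts:
        dest = m["Destination"]
        if container_path == dest or container_path.startswith(dest + "/"):
            # Replace the in-container mount point with the host-side source.
            return m["Source"] + container_path[len(dest):]
    raise LookupError("no mount covers " + container_path)

mounts = json.loads(mounts_json)
print(to_host_path(mounts, "/opt/onedev/temp/jobuuid"))
# -> /var/lib/docker/volumes/onedev/_data/temp/jobuuid
```

This works for both bind mounts and named volumes, since `docker inspect` reports a `Source` host path either way.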
-
OneDev changed state to 'Closed' 4 years ago
Previous Value: Open
Current Value: Closed
-
OneDev changed state to 'Released' 4 years ago
Previous Value: Closed
Current Value: Released
-
State changed as build #2237 is successful
-
Previous Value: Job Executor: Server Docker executor failed to create
Current Value: Job Executor: server docker executor failed to run jobs on docker swarm
-
Sorry for the late reply. Yes the above command works in a docker service task.
An alternative solution could be to remove the dependency on the host path completely by using an onedev-build volume.
For example, you could execute the following commands inside the onedev container to populate a docker volume with the build data and then run the build as usual:
Preparation:

```
docker volume create onedev-build-20220101120000
docker create --name onedev-build-volume-helper -v onedev-build-20220101120000:/onedev-build busybox
docker cp /opt/onedev/temp/jobuuid onedev-build-volume-helper:/onedev-build
docker rm onedev-build-volume-helper
```

After these commands you have a volume with the required data to execute the build. That volume could then be mounted into the real target image that the user has defined for the build, just like you already do it.
Run the docker job:

```
docker run -v onedev-build-20220101120000:/onedev-build ......
```

That way you don't have to know the host path of onedev at all.
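The sequence above is mechanical enough to generate programmatically. A minimal sketch follows; the volume naming scheme, helper container name, and paths simply follow the example commands above, and nothing here is OneDev's real implementation (the image `alpine` in the demo call is a placeholder):

```python
def volume_build_commands(timestamp, job_temp_dir, job_image):
    """Return the docker CLI commands for the volume-based workspace approach."""
    volume = f"onedev-build-{timestamp}"
    helper = "onedev-build-volume-helper"
    return [
        # 1. Create an empty named volume for the job workspace.
        f"docker volume create {volume}",
        # 2. Attach it to a throwaway busybox container so we can copy into it.
        f"docker create --name {helper} -v {volume}:/onedev-build busybox",
        # 3. Copy the prepared build data (cloned repository etc.) into the volume.
        f"docker cp {job_temp_dir} {helper}:/onedev-build",
        # 4. The helper has served its purpose.
        f"docker rm {helper}",
        # 5. Run the actual job with the populated volume as its workspace.
        f"docker run -v {volume}:/onedev-build {job_image}",
    ]

for cmd in volume_build_commands("20220101120000", "/opt/onedev/temp/jobuuid", "alpine"):
    print(cmd)
```

A real executor would of course invoke these via the docker socket rather than shelling out to printed strings, and would remove the volume after the job.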
-
Hmm, on the other hand, using the volume strategy also means occupying additional space on the host, and maybe that host only has a small local disk and uses a NAS/SAN for storage. So using `docker ps -f volume=/opt/onedev` is the easier solution.
-
Thanks for the idea. Just as you've mentioned, placing the source directory of `/onedev-build` under OneDev's data directory makes it easier to control disk allocation. There are also two other reasons:
- Using a bind mount saves an extra copy of cloned git repositories
- Cache directory mounting: under the directory identified by the cache key, OneDev has to determine which cache is not occupied, and then mount the free cache subdirectory into the container.

All of these are much easier to handle with a bind mount when the host directory of OneDev is known.
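The cache bookkeeping in the second bullet can be illustrated with a simple scheme. The `.occupied` marker convention below is hypothetical (OneDev's real bookkeeping may differ); the point is that the chosen subdirectory must then be translated to a host path so it can be bind-mounted into the job container:

```python
import os
import tempfile

def pick_free_cache(cache_key_dir):
    """Pick the first cache subdirectory not claimed by another running job.

    Hypothetical sketch: a subdirectory containing an '.occupied' marker is
    considered in use. If every slot is busy, allocate a new numbered one.
    """
    for name in sorted(os.listdir(cache_key_dir)):
        sub = os.path.join(cache_key_dir, name)
        if os.path.isdir(sub) and not os.path.exists(os.path.join(sub, ".occupied")):
            return sub
    # No free slot: allocate a new numbered subdirectory.
    sub = os.path.join(cache_key_dir, str(len(os.listdir(cache_key_dir))))
    os.makedirs(sub)
    return sub

# Demo with a throwaway directory: slot "0" is busy, so "1" is picked.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "0"))
open(os.path.join(root, "0", ".occupied"), "w").close()
os.makedirs(os.path.join(root, "1"))
print(os.path.basename(pick_free_cache(root)))  # -> 1
```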
| Type | Bug |
| Priority | Normal |
| Assignee | |
| Affected Versions | Not Found |
We have a `docker-compose.yml` file which defines all our server side development services. That compose file is then used with `docker stack deploy -c docker-compose.yml dev-environment` so that all services are deployed to docker swarm.

Once we run the above via `docker stack deploy` we get a functional onedev instance in docker swarm. However, configuring docker as job executor fails with an exception.

Looking briefly into the code, I think the reason is that you are trying to inspect the container onedev is running in. To do so you either use the hostname (which is `onedev` in our case) or you try `onedev` as a fallback. Given that we deployed onedev to docker swarm as a service, there is no such container.

Calling `docker ps` within the onedev container shows the real container name and id, so you would need to call `docker container inspect dev-environment_onedev.1.gk2giempre4u9xq1q5oegyydc` or `docker container inspect 252d42474124`. But given that we have defined a custom hostname, and given that you do not know the docker stack name (`dev-environment`), neither command can be generated with the available information.

As a workaround we could comment out the hostname in the docker-compose file, so that the container hostname would be its container id again. However, we would like to have a more stable hostname.

So I think it would be useful to have environment variables in the onedev docker image to give onedev more information:

- `ONEDEV_SERVICE_NAME`: You could execute `docker service inspect <ONEDEV_SERVICE_NAME>`, find the mounts, and check whether it is a bind mount (done) or a volume mount. If it is a volume mount, you need to follow up with `docker volume inspect <VOLUME_NAME>` to find out the current mountpoint of that volume on the host.
- `ONEDEV_VOLUME_NAME` and `ONEDEV_HOST_BIND_DIR`: If the volume name is provided, you again call `docker volume inspect <ONEDEV_VOLUME_NAME>`. If a bind mount is used, you simply use the provided path directly.

Maybe there are other solutions, but the current logic in code is a little too simple to be fully compatible with docker swarm.
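The fallback chain proposed above can be sketched as a small resolver. Both environment variable names are the ones suggested in this comment, not an existing OneDev feature, and the `inspect_volume` callback stands in for `docker volume inspect <name>` so the sketch stays self-contained:

```python
def resolve_host_install_dir(env, inspect_volume):
    """Resolve OneDev's host installation directory from the proposed
    environment variables (hypothetical names from this issue comment).

    `inspect_volume` stands in for `docker volume inspect <name>` and must
    return the volume's host-side mountpoint.
    """
    bind_dir = env.get("ONEDEV_HOST_BIND_DIR")
    if bind_dir:
        # Bind mount: the host path is given directly.
        return bind_dir
    volume = env.get("ONEDEV_VOLUME_NAME")
    if volume:
        # Volume mount: ask the docker daemon where the volume lives on the host.
        return inspect_volume(volume)
    # Neither set: fall back to the current container-inspection logic.
    return None

# Example with a stubbed docker daemon:
fake_inspect = lambda name: f"/var/lib/docker/volumes/{name}/_data"
print(resolve_host_install_dir({"ONEDEV_VOLUME_NAME": "onedev"}, fake_inspect))
# -> /var/lib/docker/volumes/onedev/_data
```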