-
Uploading the cache for a failed build can easily corrupt the cache, so this is strongly discouraged.
-
| Previous Value | Current Value |
| Open | Closed |
-
@robin I strongly disagree. There are completely legitimate reasons you'd want to do this, e.g. prerequisite steps in a build.
To further explain my example: say you're developing a build step at the end of your Docker executor build and need to run the build multiple times. One of the earliest steps is downloading package updates for the container, and one way to speed up builds is to cache these downloaded package files. This works fine if the build succeeds, but the cache is not touched if the build fails. In either case the cache contents would be identical; it does not matter whether the build fails or succeeds.
Caching downloaded files for failed builds also has the potential to reduce network traffic, which matters if you have limited bandwidth or are charged based on usage. Having to download 500 MB for every failed build is something I can personally live with, but there are surely people for whom this would be unsustainable, making OneDev's build infrastructure unusable for them. I don't agree with discriminating against these people because they are less fortunate.
Less network traffic also benefits the upstream by reducing overall network load. You might also encounter throttling or blocking if you have to run the same build step frequently, downloading the same files over and over.
Yes, it is possible the cache could pick up bad data from a failed build, but handling that should be the responsibility of the build step's author. It is hardly impossible to verify the contents of the cache before using it, and if the cache turns out to be bad, it's just a cache: delete it and rebuild it.
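To make the verification point concrete, here is a minimal sketch of a pre-build cache check. The `CACHE_DIR` location and the `.sha256` sidecar-file convention are my own assumptions, not anything OneDev provides; it would be up to the build step to write the checksums when populating the cache:

```shell
# Sketch: validate cached files against sidecar .sha256 checksums before use.
# CACHE_DIR and the sidecar convention are assumptions, not OneDev features.
CACHE_DIR="${CACHE_DIR:-./cache}"

verify_cache() {
    ( cd "$CACHE_DIR" 2>/dev/null || exit 0   # nothing cached yet: nothing to do
      for sum in *.sha256; do
          [ -e "$sum" ] || continue           # glob matched nothing
          file="${sum%.sha256}"
          if ! sha256sum -c "$sum" >/dev/null 2>&1; then
              # Bad cache entry: delete it so it is simply re-downloaded.
              echo "bad cache entry: $file, deleting" >&2
              rm -f -- "$file" "$sum"
          fi
      done )
}

verify_cache   # safe no-op if the cache directory does not exist yet
```

Run something like this before the cached files are consumed: anything corrupted by an earlier failed build is dropped and re-fetched, while everything else is reused.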
The only workarounds I can think of are:
- Make changes to the container
- Get the name of the currently running container (you should be able to craft it from the build info)
- Run `docker commit` to save the updated image
- Manually `docker exec` your own custom container using the cached image
- If the build fails, or if there are further changes necessary for the image, either re-`commit` the changes, or untag/delete the image to rebuild it on the next build
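Sketched out, that first workaround might look like the following. Everything here is an assumption on my part: the docker commands are only echoed (a dry run), and OneDev does not document a stable container naming scheme, so the name below is a guess:

```shell
# Dry-run sketch of the commit-based workaround; all names are placeholders.
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute

PROJECT=myproject
BUILD_NUMBER=123
CONTAINER="onedev-build-${PROJECT}-${BUILD_NUMBER}"   # assumed naming pattern
CACHE_IMAGE="localhost/${PROJECT}-cache:latest"

# 1. Save the running container (downloaded packages included) as an image:
run docker commit "$CONTAINER" "$CACHE_IMAGE"
# 2. On later builds, run your own container from the cached image:
run docker run --rm "$CACHE_IMAGE" apk upgrade
# 3. If the image goes stale or bad, drop it so the next build rebuilds it:
run docker rmi "$CACHE_IMAGE"
```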
Or:
- Tag a base image and use that as the image to run in OneDev
- Make changes to the container
- `docker commit` to the same tag
- The next steps should use this image
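The tag-based variant, again as a dry-run sketch with placeholder names of my own invention:

```shell
# Dry-run sketch of the tag-based workaround; image/container names are
# placeholders, and the docker commands are only echoed.
run() { echo "+ $*"; }   # replace the echo with "$@" to actually execute

BASE_IMAGE="alpine:3.19"                     # any base image
BUILD_TAG="localhost/my-build-env:latest"    # the image your OneDev job uses
CONTAINER_ID=abc123                          # hypothetical running container

# One-time setup: point the mutable tag at the base image.
run docker tag "$BASE_IMAGE" "$BUILD_TAG"
# Inside the build, after making changes (package updates etc.), commit the
# container back onto the same tag so the next build starts from it.
run docker commit "$CONTAINER_ID" "$BUILD_TAG"
```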
Or:
- Force all builds to succeed and ignore errors
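The last workaround is trivial but ugly: mask the exit status of the real build command so the step always "succeeds" and the cache is uploaded. A sketch, with `false` standing in for a failing build:

```shell
# Sketch: always return 0 so the cache upload runs, but still record (and log)
# whether the real build worked. "false" stands in for the actual build.
build_step() {
    if false; then            # <- the real build command goes here
        BUILD_STATUS=ok
    else
        BUILD_STATUS=failed
    fi
    echo "build result: $BUILD_STATUS"   # visible in the log, not the exit code
    return 0                  # the step always "succeeds"
}

build_step
```

The obvious downside is that the job's reported status no longer reflects reality.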
For the Docker `commit` solutions, disk space is wasted whenever a single build step handles multiple builds in one, for example building all packages in a package repo: you would need to `commit` an image for every package in your project. And of course, it relies on access to the Docker socket from the build environment, which may not be wanted, or even possible, in some situations.

This all seems needlessly ugly when the simple answer is uploading the cache for failed builds. If you still do not see the benefit, do you have any less complicated/ugly workarounds than the ones I listed? Because I can't think of any.
-
This is still an issue.
-
| Previous Value | Current Value |
| Closed | Open |
-
Maybe it would help to see an example?
This is my buildspec job: https://git.sev.monster/sev/aports/~files/master/.onedev-buildspec.yml?position=buildspec-jobs/build-packages
This job handles builds for all packages in my repo, which is not at all an unusual use case. Like many others, I would love to fail the overall build job if any of the packages it handles fail to build, but doing so would mean:
- No Alpine package caching: indexes and packages that would have been cached if the build had not failed must be redownloaded.
- No source tarball caching: if a build fails while updating a package, the tarball will need to be redownloaded.
  - Yes, sometimes tarballs are bad and need to be redownloaded, but such files can be automatically deleted when detected... And if only one tarball out of five is bad, that's still an extra four that must be redownloaded if the entire cache is discarded because the build failed.
- No caching of successfully built packages, which my build tool requires in order to recognize that a package has already been built successfully.
  - Yes, this could be worked around, but it would require either a fork of that build tool, or the creation of an entirely new build infrastructure that checks the package server for an already-uploaded package.
For these reasons, I am forced to ignore package build failures in OneDev. The benefits of using the cache outweigh my desire for accurate reporting.
| Type | Improvement |
| Priority | Normal |
| Assignee | |
| Labels | No labels |
It seems that this feature existed at some point (OD-486), but was lost after refactoring (OD-1836?). Either that, or I can't figure out how to enable it.
An example of how this would be useful is when testing builds: long-running tasks like package updates must be rerun every time a build fails, which extends the time needed to create and test a new build.