#1834 OneDev CI/CD questions
Closed
Sagar Bud opened 3 weeks ago

Hello There,

I have a few queries regarding the OneDev CI/CD pipeline.

  1. We have installed and configured OneDev in a Linux VM and set up the docker executor and shell executor, but while configuring the Kubernetes executor we get the error shown in the attached screenshot.
  2. We want to set up some production jobs to run manually. How can we define a manual job? For example, we have two stages in our CI/CD pipeline, build and deploy. The build stage should trigger on any code change, but the deploy stage should only run when I start it manually.
  3. We have set variables and secrets in Job Secrets and Job Properties for a specific repository. How can we set them globally for all repositories?
  4. How can we use default variables in OneDev like CI_COMMIT_REF_NAME and CI_PIPELINE_ID (those are GitLab default variables)?

im.png

Sagar Bud changed fields 3 weeks ago
Priority: Normal → Major
jbauer commented 3 weeks ago
  1. We have installed and configured OneDev in a Linux VM and set up the docker executor and shell executor, but while configuring the Kubernetes executor we get the error shown in the attached screenshot.

The node mentioned in your screenshot cannot resolve the DNS name you have configured as the OneDev server URL. The domain is not available in public DNS, so you somehow have to provide your IP.

  2. We want to set up some production jobs to run manually. How can we define a manual job? For example, we have two stages in our CI/CD pipeline, build and deploy. The build stage should trigger on any code change, but the deploy stage should only run when I start it manually.

OneDev only executes jobs automatically if they have a trigger assigned; otherwise you have to start them manually. So you could have a trigger on your build job that runs whenever code is committed, and a deploy job that depends on the build job and possibly uses artifacts published by the build job as input. OneDev has a job step to publish/retrieve artifacts. A rough buildspec sketch follows the links below.

https://docs.onedev.io/tutorials/cicd/understanding-pipeline
https://docs.onedev.io/tutorials/cicd/pass-job-artifacts
https://docs.onedev.io/tutorials/cicd/build-promotion
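
For illustration, a minimal .onedev-buildspec.yml along these lines (a sketch only: schema details vary by OneDev version, so it is safest to author the spec with the visual buildspec editor and inspect the YAML it generates; job names, branch, and images here are placeholders):

  version: 32
  jobs:
  - name: build
    steps:
    - !CommandStep
      name: compile
      image: node:18
      commands: |
        npm ci
        npm run build
    - !PublishArtifactStep
      name: publish dist
      artifacts: dist/**
    triggers:
    - !BranchUpdateTrigger
      branches: main
  - name: deploy            # no trigger, so it only runs when started manually
    jobDependencies:
    - jobName: build
      requireSuccessful: true
      artifacts: "**"
    steps:
    - !CommandStep
      name: deploy
      image: alpine
      commands: |
        ./deploy.sh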

  3. We have set variables and secrets in Job Secrets and Job Properties for a specific repository. How can we set them globally for all repositories?

You can create a project hierarchy in OneDev and then define job secrets higher up in the hierarchy, for example company -> repo1, with the secrets defined in the company project.

  4. How can we use default variables in OneDev like CI_COMMIT_REF_NAME and CI_PIPELINE_ID (those are GitLab default variables)?

OneDev has a number of variables that can be used: https://docs.onedev.io/appendix/job-variables
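
For illustration, and hedging on the exact names (the appendix above is the authoritative list): GitLab's CI_COMMIT_REF_NAME roughly maps to @branch@ and CI_PIPELINE_ID to @build_number@. A command step could then use them like this, with registry.example.com as a placeholder:

  # tag the image with the current branch and build number (hypothetical registry)
  docker build -t registry.example.com/@project_name@:@branch@-@build_number@ .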

Sagar Bud commented 3 weeks ago

Hello @jbauer

Thank you for the update.

OK. We will point the node name to the public IP address. Is there any other way to configure the Kubernetes executor? Also, can we set an IP restriction on access to the OneDev URL?

I have a few more questions.

  1. We created some projects and then deleted them, but those projects are stuck in pending deletion. I have attached a screen capture for your reference.
  2. How can we configure docker layer caching and node module caching to be stored centrally, e.g. in a GCS bucket?

pro.png

Robin Shen commented 3 weeks ago

OK. We will point the node name to the public IP address. Is there any other way to configure the Kubernetes executor? Also, can we set an IP restriction on access to the OneDev URL?

As long as the server URL specified in the OneDev system setting can be accessed by pods in your k8s cluster, it will be fine. To set an IP restriction, you will need to use other tools outside OneDev.
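
A quick way to verify that reachability, assuming kubectl access to the cluster (onedev.example.com stands in for your server URL):

  # can a pod resolve the OneDev server's DNS name?
  kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup onedev.example.com
  # can a pod reach the server over HTTP? (prints the status code)
  kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- -sS -o /dev/null -w "%{http_code}\n" http://onedev.example.com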

We created some projects and then deleted them, but those projects are stuck in pending deletion. I have attached a screen capture for your reference.

Please refresh the page to see if it gets deleted. If not, please check the server log to see if there are any errors.

How can we configure docker layer caching and node module caching to be stored centrally, e.g. in a GCS bucket?

I am not familiar with docker layer caching; will the buildx cache option work here? It allows caching built image layers in a registry. As for a central cache of node modules, OneDev's npm registry does not support that yet. You may however use a per-project node modules cache via the set up cache step.
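
For reference, a minimal sketch of registry-backed buildx caching (image and cache names are placeholders):

  docker buildx build \
    --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
    --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
    -t registry.example.com/myapp:latest \
    --push .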

Sagar Bud commented 3 weeks ago

Hello @robin

Thank you for your quick response.

We have set up OneDev in a Linux VM in our local infra, and we're trying to configure the Kubernetes executor on a GCP cluster. We have set up a VPN tunnel between the local infra and the GCP cluster. Is there any way to resolve a private IP address instead of the public IP address while registering the Kubernetes executor?

We will check the server log for project deletion and update you.

We haven't used the buildx cache option yet. In GitLab we can use docker-dind functionality to store docker layer caches for the Kubernetes executor. You can find more details here: https://docs.gitlab.com/ee/ci/docker/docker_layer_caching.html Can we use it with OneDev?

With the Kubernetes executor, how does a per-project node modules cache work, given that executor pods are created on different Kubernetes nodes each time? You can refer to this URL: https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching

We need both caching mechanisms because we're using multiple shell executors and Kubernetes executors.

Robin Shen commented 3 weeks ago

We have set up OneDev in a Linux VM in our local infra, and we're trying to configure the Kubernetes executor on a GCP cluster. We have set up a VPN tunnel between the local infra and the GCP cluster. Is there any way to resolve a private IP address instead of the public IP address while registering the Kubernetes executor?

Using a private IP for the OneDev server does not matter, but your VPN tunnel needs to be configured so that containers running inside the GCP cluster can connect to that private IP address. This is required because job pods running in the k8s cluster need to talk to the OneDev server.

We haven't used the buildx cache option yet. In GitLab we can use docker-dind functionality to store docker layer caches for the Kubernetes executor. You can find more details here: https://docs.gitlab.com/ee/ci/docker/docker_layer_caching.html Can we use it with OneDev?

If you are building images inside k8s, Kaniko allows using cached layers from a registry. Some reference:

https://zanechua.com/blog/speed-up-kaniko-builds/

The Kaniko step in OneDev lets you specify additional options, including cache options, in its more settings section.

With the Kubernetes executor, how does a per-project node modules cache work, given that executor pods are created on different Kubernetes nodes each time? You can refer to this URL: https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching

For directory caches, OneDev always stores the cache on the server, so it does not matter whether existing or new nodes are used. If the network connection between the server and the k8s cluster is slow, you may consider running the OneDev server directly in your k8s cluster via helm.

Sagar Bud commented 3 weeks ago

Hello @robin

We have checked the Kaniko step for a caching option but did not find one. I have attached a screen capture for your reference. Can you please guide me on how we can use the docker caching mechanism?

se.png

Robin Shen commented 3 weeks ago

You may specify various cache options such as --cache, --cache-repo etc. in the more options field. They will be passed literally to the Kaniko command.
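
For example, something along these lines could go into the more options field (--cache, --cache-repo and --cache-ttl are standard Kaniko flags; the cache repository path is a placeholder modelled on the registry used earlier in this thread):

  --cache=true --cache-repo=europe-west1-docker.pkg.dev/test-team/web/cache --cache-ttl=168h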

Sagar Bud commented 3 weeks ago

Hello @robin

Thank you for the update.

I will try it and update you. Meanwhile, I am facing an issue while pushing the image to GCP Artifact Registry. How can I authenticate it in OneDev CI/CD?

re.png

Robin Shen commented 3 weeks ago

Registry logins can be specified in the executor (in the administration menu) running your job.

Sagar Bud commented 3 weeks ago

Hello @robin

Thank you for your quick response.

Yes, I have checked it, but how can we configure GCP Artifact Registry? Artifact Registry does not provide a username and password; it provides a service account and its JSON key. How can we configure that? Docker Hub provides a username and password and works fine.

Do you have any idea how we can configure GCP Artifact Registry?

Robin Shen commented 3 weeks ago

Does it have a manual explaining how to push from the terminal via the docker CLI? The docker CLI expects a username and password to log in.

Sagar Bud commented 3 weeks ago

Hello @robin

We're using the commands below to push an image to GCP Artifact Registry using the docker CLI.

gcloud auth activate-service-account --project test-team --key-file=dev_key.json

gcloud auth configure-docker europe-west1-docker.pkg.dev

docker push europe-west1-docker.pkg.dev/test-team/web/asc-web:develop-89

But how can we provide these details in the executor to authenticate with Artifact Registry?

Robin Shen commented 3 weeks ago
Sagar Bud commented 3 weeks ago

Hello @robin

Thank you for the update.

We have referred to the URL below and generated a token to authenticate with Artifact Registry, but the token is temporary. Is there any way to generate the token each time and pass it to the executor while running the pipeline? https://cloud.google.com/artifact-registry/docs/docker/authentication#standalone-helper

Moreover, can we integrate any other CI/CD tool with OneDev?

Robin Shen commented 3 weeks ago

It is possible to update the registry password from time to time. To do it:

  1. Create a build spec job that calls the OneDev RESTful API to update the job executor's registry password (a sketch follows below).
  2. Schedule the job to run periodically via job triggers.
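
A minimal sketch of what such a job's command step might run, assuming the oauth2accesstoken method from the GCP docs and an image with gcloud, curl and jq available; the field name registryLogins and the executor name k8s-executor are guesses, so inspect the JSON the API actually returns before relying on them:

  API=http://onedev.example.com/~api/settings/job-executors   # placeholder server
  TOKEN=$(gcloud auth print-access-token)
  # fetch the current executor list
  curl -s -u <api-user>:<api-password> "$API" > executors.json
  # swap the fresh token into the target executor's registry login
  # (with an access token the user name must be oauth2accesstoken per the GCP docs)
  jq --arg t "$TOKEN" \
    '(.[] | select(.name == "k8s-executor") | .registryLogins[0].password) = $t' \
    executors.json > updated.json
  # post it back; note that inside a OneDev command step a literal @ must be written as @@
  curl -s -u <api-user>:<api-password> -X POST -H "Content-Type: application/json" \
    -d @updated.json "$API"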

Currently OneDev does not integrate with other CI/CD tools.

Sagar Bud commented 2 weeks ago

We have tried what you mentioned above, but it is not working. Is there any way we can authenticate GCP Artifact Registry with the Kubernetes executor? Do you have any sample step that only does this authentication part?

Robin Shen commented 2 weeks ago

Unfortunately there is no such option. Why is updating the password periodically not working?

Sagar Bud commented 2 weeks ago

When we run the command to get credentials, i.e. echo "https://us-central1-docker.pkg.dev" | docker-credential-gcr get, we get output like:

{"ServerURL":"https://us-central1-docker.pkg.dev","Username":"_dcgcr_0_0_0_token","Secret":"ya29.a0Ad52N380lNks7TVgoxzMjsG0CXfQRxSKFfY9SwClOEqVC0W-RySao4_6Z7rUDC-LdKxsaIwAWMSNSotVVhJBeayNTWGz_CcwkjpKmLY0_V5S2aawhVZxfrkV_8NulsY3p06TWJTLAQGl5ZVYogFgV1nPy-XsO_RZzOJI7AaCgYKARYSARMSFQHGX2Mi39QXBt4meQypCN-roo__A0173"}

Now, how do we pass this username and password to this API call through the pipeline: curl -u <user>:<password> -X POST -d@request-body.json -H "Content-Type: application/json" https://code.onedev.io/~api/settings/job-executors

We also need to pass the file via -d@request-body.json, and the secret (password) in this file has to be updated every time through the pipeline.

Robin Shen commented 2 weeks ago

You may use jq (https://jqlang.github.io/jq/) to do the job; it allows you to extract desired fields from one JSON file and use them to replace fields in another JSON file.

This logic can be put in a command step running a docker image with curl and jq installed. The request-body.json can be retrieved beforehand via the RESTful API.
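
A sketch of that command step, building on the docker-credential-gcr output shown earlier (the registryLogins path is an assumption; adjust it to match the actual structure of your request-body.json):

  # pull the short-lived secret out of the credential helper's JSON output
  SECRET=$(echo "https://us-central1-docker.pkg.dev" | docker-credential-gcr get | jq -r .Secret)
  # splice it into the request body fetched beforehand via the restful api
  jq --arg s "$SECRET" '.registryLogins[0].password = $s' request-body.json > request-body.new.json
  # then POST request-body.new.json back with the earlier curl call (written as -d@@ inside a command step)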

Robin Shen commented 2 weeks ago

You may also split the logic across multiple command steps; they will share the same job workspace.
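
For example (a sketch; the file name is arbitrary):

  # first command step: write the token into the shared job workspace
  gcloud auth print-access-token > token.txt
  # a later command step in the same job can read it back
  TOKEN=$(cat token.txt)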

Sagar Bud commented 2 weeks ago

Hello, while using the curl API call to update the executor with this call: curl -u vipul:test123 -X POST -d@request-body.json -H "Content-Type: application/json" http://10.220.0.18/~api/settings/job-executors, we get an error while running the job, as below:

Screenshot from 2024-04-16 12-15-48_3.png

And also, when using the curl command you mentioned above to update the executor from the pipeline, we get the error below:

Screenshot from 2024-04-16 14-53-50.png

Robin Shen commented 2 weeks ago

It looks like the job match field of the executor is configured incorrectly. What is its value?

Also, in the step command, please use @@ for any literal @ to avoid it being interpreted as a variable. So -d@request-body.json should be -d@@request-body.json instead.

Sagar Bud commented 2 weeks ago

This is the file configured: request-body_2.json

And what is happening here is that if you go to the administration section and test the executor, it succeeds:

Screenshot from 2024-04-16 12-16-09_2.png

But if you run the pipeline then it throws this error:

Screenshot from 2024-04-16 18-50-13_2.png

Robin Shen commented 2 weeks ago

For empty fields, use null instead of "".

To avoid such errors, please configure a working executor with the GUI, then get the executor template via the RESTful API (by running curl -u <user>:<password> http://onedev-server/~api/settings/job-executors), make your customizations on that, and post it back to OneDev.

vipul commented 2 weeks ago

Hello, the above solution helped us and we are no longer facing that error. But what is happening now is that if we authenticate using a token from our local machine, the pipeline works; when we try the same thing through a job, we get an error. I am attaching a video above so you can get a better idea of what is happening.

Screencast 2024-04-17 14:48:15.mp4

Robin Shen commented 2 weeks ago

Do you mean job executors work with a manually filled credential, but not with a job-populated credential? If so, I'd suggest getting the job executor via the RESTful API and making sure it is using the credential generated in the job. You may also use the generated credential from the command line directly to see if it works.
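
Concretely, a couple of checks along these lines (server and credentials are placeholders; oauth2accesstoken is the user name GCP expects with an access token):

  # inspect what the executor actually contains after the job ran
  curl -s -u <user>:<password> http://onedev-server/~api/settings/job-executors
  # try the job-generated token by hand against the registry
  echo "$TOKEN" | docker login -u oauth2accesstoken --password-stdin https://us-central1-docker.pkg.dev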

vipul commented 2 weeks ago

Hi Robin,

Thank you so much for the help so far!

Do you mean job executors work with a manually filled credential, but not with a job-populated credential?

Yes. It looks like the job artifact or data passed doesn't get updated from the test stage (where authentication is done) to the build stage (where Kaniko builds the image).

I'd suggest getting the job executor via the RESTful API and making sure it is using the credential generated in the job.

I did get the job executor details using the RESTful API and also compared them with the credentials in the executor section; they are the same.

You may also use the generated credential from the command line directly to see if it works.

Yes, those credentials work when we add a sleep inside the Kubernetes pod and test auth manually. The only thing that is not working is that the build stage does not pick up the new token from the previous stage. Can we add any authentication for the specific Kaniko build stage? I'm attaching the updated onedev-buildspec.yaml.

Screenshot from 2024-04-17 18-49-59.png

Robin Shen commented 2 weeks ago

Are the credential-populating step and the Kaniko step in two separate jobs connected via a job dependency? (There is no stage concept in OneDev, so please avoid that term to prevent confusion.)

Mitesh commented 2 weeks ago

Hi Robin,

Really appreciate your help so far!

The current Kaniko step (job) doesn't offer any authentication apart from username and password, so can you guide us on how to use that token-based auth within the same Kaniko step (job)? We need some CLI to actually get the token from GCP, as displayed in the last image.

We are fine with any of the authentication methods supported by https://cloud.google.com/artifact-registry/docs

This is actually a very important need for us because most of our projects consume GCP Artifact Registry.

Robin Shen commented 2 weeks ago

@mitesh OneDev does not plan to support that currently, as there can be many vendor-specific auth strategies and this could complicate things a lot. You are almost successful with the approach of generating the user/pass in the executor. You may consider putting that logic in a separate job and scheduling it to run periodically based on the GCP auth expiry time. That way you do not need to generate auth in each project and job.

vipul commented 2 weeks ago

Hi Robin, I had another question: when multiple jobs from different projects run, each job creates its own namespace and then runs in it. Is there anything available so that, since we are using the same Kubernetes executor for different projects, each job runs in the same namespace?

Robin Shen commented 2 weeks ago

OneDev runs each job in a separate namespace so that one can delete all resources of a job in case it is not cleaned up for some reason. Any reason you want all jobs in the same executor to share the same namespace?

Robin Shen changed state to 'Closed' 1 week ago
State: Open → Closed