Sagar Bud opened 3 weeks ago
|||||||
The node shown in your screenshot cannot resolve the DNS name you have configured as the OneDev server URL. The domain is not available in public DNS, so you will have to make the name resolvable somehow, for example by providing your IP address.
OneDev only executes jobs automatically if they have a trigger assigned; otherwise you have to start them manually. So you could have a trigger on your build job that runs whenever code is committed, and a deploy job that depends on the build job and consumes the artifacts the build job publishes. OneDev has job steps to publish and retrieve artifacts. https://docs.onedev.io/tutorials/cicd/understanding-pipeline
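As a sketch of that build-then-deploy setup, a buildspec could look roughly like the following. The field names follow the typical .onedev-buildspec.yml layout but are assumptions here; compare with a spec generated by your own server's GUI before relying on them.

```yaml
version: 38
jobs:
- name: build
  steps:
  - !CommandStep
    name: compile
    runInContainer: true
    image: node:18
    interpreter: !DefaultInterpreter
      commands: |
        npm ci
        npm run build
  - !PublishArtifactStep
    name: publish dist
    artifacts: dist/**
  # Run automatically on every commit to main
  triggers:
  - !BranchUpdateTrigger
    branches: main
- name: deploy
  # Pull the build job's published artifacts into this job's workspace
  jobDependencies:
  - jobName: build
    requireSuccessful: true
    artifacts: '**'
  steps:
  - !CommandStep
    name: deploy
    runInContainer: true
    image: alpine:3
    interpreter: !DefaultInterpreter
      commands: |
        echo "deploy using artifacts retrieved from the build job"
```

The deploy job has no trigger of its own here; it runs when its dependency chain is started, or can be started manually.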
You can build a project hierarchy in OneDev and define job secrets higher up in the hierarchy. For example, a secret defined in a parent project is available to jobs in all of its child projects.
OneDev provides a number of variables that can be used in jobs: https://docs.onedev.io/appendix/job-variables
|||||||
Hello @jbauer Thank you for the update. OK, we will point the node name at the public IP address. Is there any other way to configure the Kubernetes executor? Also, can we set an IP restriction on access to the OneDev URL? I have a few more questions.
|||||||
As long as the server URL specified in the OneDev system settings can be reached by pods in your k8s cluster, it will work. To set an IP restriction, you will need other tooling outside OneDev.
Please refresh the page to see if it gets deleted. If not, please check the server log for errors.
I am not familiar with docker layer caching; will the buildx cache option work here? It allows caching built images in a registry. As for a central cache of node modules, OneDev's npm registry does not support that yet. You may, however, use a per-project node modules cache via the Set Up Cache step.
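For the per-project node modules cache, a Set Up Cache step placed before the install step might look like the fragment below. This is a sketch; the step and field names are assumptions based on OneDev's cache step, so verify them against a spec generated by your server's GUI.

```yaml
steps:
- !SetupCacheStep
  name: cache node modules
  # The cache is stored on the OneDev server, keyed per project
  key: node_modules_cache
  paths:
  - node_modules
- !CommandStep
  name: install
  runInContainer: true
  image: node:18
  interpreter: !DefaultInterpreter
    commands: |
      npm ci
```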
|||||||
Hello @robin Thank you for your quick response. We have set up OneDev on a Linux VM in our local infrastructure and are trying to configure a Kubernetes executor on a GCP cluster. We have set up a VPN tunnel between the local infrastructure and the GCP cluster. Is there any way to resolve a private IP address instead of a public IP address when registering the Kubernetes executor?

We will check the server log for the project deletion issue and update you.

We haven't used the buildx cache option yet. In GitLab we can use docker-dind to store a docker layer cache for the Kubernetes executor; you can refer to https://docs.gitlab.com/ee/ci/docker/docker_layer_caching.html for details. Can we use that with OneDev? Also, how does the per-project node modules cache work with the Kubernetes executor, given that executor pods are created on different Kubernetes nodes each time? You can refer to https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching for what we mean. We need both caching mechanisms because we're using multiple shell executors and a Kubernetes executor.
|||||||
Using a private IP for the OneDev server does not matter, but your VPN tunnel needs to be configured so that containers running inside the GCP cluster can connect to that private IP address. This is required because job pods running in the k8s cluster need to talk to the OneDev server.
If you are building images inside k8s, the Kaniko command allows using cached layers in a registry. Some reference: https://zanechua.com/blog/speed-up-kaniko-builds/ The Kaniko step in OneDev lets you specify additional options, including cache options, under More Settings.
For directory caches, OneDev always stores the cache on the server, so it does not matter whether existing or new nodes are used. If the network connection between the server and the k8s cluster is slow, you may consider running the OneDev server directly in your k8s cluster via helm.
|||||||
Hello @robin We have checked the Kaniko step for a caching option but did not find one. I have attached a screen capture for your reference. Can you please guide me on how we can use the docker caching mechanism?
|||||||
You may specify various cache options in the Kaniko step's More Settings.
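For illustration, the options below are standard Kaniko layer-cache flags (documented in Kaniko's own README); the cache repository path is a placeholder for a repository in your registry. In OneDev they would go into the Kaniko step's additional options under More Settings.

```shell
# Standard Kaniko layer-cache flags; the --cache-repo path is a placeholder.
CACHE_OPTS="--cache=true --cache-repo=europe-west1-docker.pkg.dev/test-team/web/cache --cache-ttl=24h"
echo "$CACHE_OPTS"
```

With `--cache=true`, Kaniko pushes each built layer to the cache repository and reuses it on subsequent builds until the TTL expires.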
|||||||
Hello @robin Thank you for the update. I will try it and update you. Meanwhile, I am facing an issue while pushing the image to GCP Artifact Registry. How can I authenticate to it in OneDev CI/CD?
|||||||
Registry logins can be specified in the executor (in the administration menu) running your job.
|||||||
Hello @robin Thank you for your quick response. Yes, I have checked that, but how can we configure GCP Artifact Registry? Artifact Registry does not provide a username and password; it provides a service account and its JSON key. How can we configure that? Docker Hub provides a username and password and works fine. Do you have any idea how we can configure GCP Artifact Registry?
|||||||
Is there a manual describing how to push from the terminal via the docker CLI? The docker CLI expects a username and password to log in.
|||||||
Hello @robin We're using the commands below to push an image to GCP Artifact Registry with the docker CLI:

gcloud auth activate-service-account --project test-team --key-file=dev_key.json
gcloud auth configure-docker europe-west1-docker.pkg.dev
docker push europe-west1-docker.pkg.dev/test-team/web/asc-web:develop-89

But how can we provide these details in the executor to authenticate against Artifact Registry?
|||||||
Please check if the below helps: https://discuss.circleci.com/t/authenticated-docker-pulls-for-gcp-artifact-registry/42194/2
|||||||
Hello @robin Thank you for the update. We have referred to the URL below and generated a token to authenticate against Artifact Registry, but the token is temporary. Is there any way to generate a token each time and pass it to the executor while running the pipeline? https://cloud.google.com/artifact-registry/docs/docker/authentication#standalone-helper Moreover, can we integrate any other CI/CD tool with OneDev?
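One documented way to work with the short-lived token: Google's Artifact Registry authentication docs also describe logging in with the fixed username `oauth2accesstoken` and a freshly printed access token as the password. A sketch follows, with the live gcloud call left commented out so the snippet is self-contained; the token value below is a placeholder.

```shell
# TOKEN=$(gcloud auth print-access-token)  # real call; requires gcloud with the service account activated
TOKEN="ya29.placeholder"                   # placeholder standing in for the short-lived token

REGISTRY="europe-west1-docker.pkg.dev"

# Artifact Registry accepts a fresh OAuth2 access token as a docker password
# when the username is the literal string "oauth2accesstoken".
LOGIN_CMD="docker login -u oauth2accesstoken --password-stdin $REGISTRY"
echo "the token would be piped into: $LOGIN_CMD"
```

In a job you would run the real gcloud line and pipe `$TOKEN` into the login command, or feed the pair into the executor's registry login.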
|||||||
It is possible to update the registry password from time to time. To do it, run a job on a schedule that obtains a fresh token and updates the executor's registry login via the RESTful API.
Currently OneDev does not integrate with other CI/CD tools.
|||||||
We have tried what you suggested above but it is not working. Is there any way to authenticate to GCP Artifact Registry with the Kubernetes executor? Do you have a sample step that only does this authentication stage?
|||||||
Unfortunately there is no such option. Why is updating the authenticator's password periodically not working?
|||||||
When we run the command to get credentials, i.e. `echo "https://us-central1-docker.pkg.dev" | docker-credential-gcr get`, we get output like:

{"ServerURL":"https://us-central1-docker.pkg.dev","Username":"_dcgcr_0_0_0_token","Secret":"ya29.a0Ad52N380lNks7TVgoxzMjsG0CXfQRxSKFfY9SwClOEqVC0W-RySao4_6Z7rUDC-LdKxsaIwAWMSNSotVVhJBeayNTWGz_CcwkjpKmLY0_V5S2aawhVZxfrkV_8NulsY3p06TWJTLAQGl5ZVYogFgV1nPy-XsO_RZzOJI7AaCgYKARYSARMSFQHGX2Mi39QXBt4meQypCN-roo__A0173"}

Now how do we pass this username and password to this API call through the pipeline: curl -u : -X POST -d@request-body.json -H "Content-Type: application/json" https://code.onedev.io/~api/settings/job-executors

Since we also need to pass the file "-d@request-body.json", and every time the secret (password) should be updated in this file through the pipeline.
|||||||
You may use jq (https://jqlang.github.io/jq/) to do the job; it allows you to extract desired fields from one JSON file and use them to replace fields in another JSON file. This logic can be put in a command step running a docker image with curl and jq installed.
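A sketch of that jq-based extraction, using a sample of the docker-credential-gcr output shown earlier (the Secret value is a placeholder, and the registryLogins field path in the commented lines is an assumption; fetch the real executor JSON from your server to confirm its shape):

```shell
# Sample of the credential helper's JSON output; the Secret is a placeholder.
CRED='{"ServerURL":"https://us-central1-docker.pkg.dev","Username":"_dcgcr_0_0_0_token","Secret":"ya29.placeholder"}'

# Pull the two fields out with jq (jq must be installed in the step's image).
REG_USER=$(echo "$CRED" | jq -r '.Username')
REG_PASS=$(echo "$CRED" | jq -r '.Secret')
echo "$REG_USER"

# Then splice them into the executor template and POST it back. Field names
# below are illustrative; compare with a GET of /~api/settings/job-executors:
# jq --arg u "$REG_USER" --arg p "$REG_PASS" \
#    '.[0].registryLogins[0].userName = $u | .[0].registryLogins[0].password = $p' \
#    request-body.json > request-body-updated.json
# curl -u "$ONEDEV_USER:$ONEDEV_PASS" -X POST -d@request-body-updated.json \
#      -H "Content-Type: application/json" "http://10.220.0.18/~api/settings/job-executors"
```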
|||||||
You may also split the logic across multiple command steps; they will share the same job workspace.
|||||||
Hello, while using the curl API call to update the executor with this call: curl -u vipul:test123 -X POST -d@request-body.json -H "Content-Type: application/json" http://10.220.0.18/~api/settings/job-executors, we get an error while running the job, as below. And also, using the curl command you mentioned above to update the executor from the pipeline, we get an error as below:
|||||||
It looks like the job match field of the executor is configured incorrectly. What is its value? Also, in the step command, please use …
|||||||
This is the file configured: request-body_2.json. What is happening here is: if you go to the administration section and test the executor, it succeeds. But if you run the pipeline, it throws this error:
|||||||
For empty fields, use null instead of "". To avoid such errors, please configure a working executor with the GUI, then get the executor's template via the RESTful API (a GET request against /~api/settings/job-executors, the same endpoint used above for POST).
|||||||
Hello, the above solution helped us; we are no longer facing that error. But what is happening now is: if we authenticate using a token from our local machine, the pipeline works, but when we try the same thing from a job, we get an error. I am attaching a video above so you can get a better idea of what is happening.
|||||||
Do you mean the job executor works with a manually filled credential, but not with a job-populated credential? If so, I'd suggest getting the job executor via the RESTful API and making sure it is using the credential generated in the job. You may also use the generated credential directly from the command line to see if it works.
|||||||
Hi Robin, thank you so much for the help so far!

> Do you mean the job executor works with a manually filled credential, but not with a job-populated credential?

Yes. It looks like job artifacts or data passed from the test stage (where authentication is done) do not reach the build stage (where Kaniko builds the image).

> I'd suggest getting the job executor via the RESTful API and making sure it is using the credential generated in the job. You may also use the generated credential from the command line directly to see if it works.

Yes, those credentials work when we add sleep time inside the Kubernetes pod and test auth. The only thing not working is that the build stage does not take the new token from the previous stage. Can we add any authentication for a specific Kaniko build stage? I'm attaching the updated onedev-buildspec.yaml.
|||||||
Are the credential-populating step and the Kaniko step in two separate jobs connected via a job dependency? (There is no stage concept in OneDev, so please avoid that term to avoid confusion.)
|||||||
Hi Robin, really appreciate your help so far! The current Kaniko step (job) does not offer any authentication other than username and password, so can you guide us on how to use token-based auth within the same Kaniko step (job)? Because we need some CLI to actually get the token from GCP, as displayed in the last image. We are fine with any of the authentication methods supported by https://cloud.google.com/artifact-registry/docs This is actually very important for us because most of our projects consume GCP Artifact Registry.
|||||||
@mitesh OneDev does not currently plan to support that, as there may be many vendor-specific auth strategies, and this can complicate things a lot. You are almost successful with the approach of generating a username/password in the executor. You may consider putting that logic in a separate job scheduled to run periodically based on the GCP auth expiry time. That way you do not need to generate auth in each project and job.
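The periodic refresh job could be wired up with a schedule trigger, roughly as below. This is a sketch: the `!ScheduleTrigger` tag and its cron field are assumptions based on OneDev's trigger types, and `refresh-executor-login.sh` is a hypothetical helper holding the token-refresh and API-update commands discussed earlier; confirm the exact names in your server's GUI.

```yaml
- name: refresh-registry-token
  steps:
  - !CommandStep
    name: refresh token
    runInContainer: true
    image: google/cloud-sdk:slim
    interpreter: !DefaultInterpreter
      commands: |
        # hypothetical helper: obtains a fresh token and updates the
        # executor's registry login via the RESTful API
        ./refresh-executor-login.sh
  triggers:
  # run every 30 minutes, comfortably inside the ~60 minute token lifetime
  - !ScheduleTrigger
    cronExpression: 0 0/30 * * * ?
```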
|||||||
Hi Robin, I had another question: when multiple jobs from different projects run, each job creates its own namespace and runs there. Since we use the same Kubernetes executor for different projects, is there anything available so that each job runs in the same namespace?
|||||||
OneDev runs each job in a separate namespace so that all resources of a job can be deleted in case the job is not cleaned up for some reason. Is there any reason you want all jobs sharing the same executor to run in the same namespace?
|||||||
Robin Shen changed state to 'Closed' 1 week ago
Type: Question
Priority: Major
Assignee: (none)
Labels: No labels
Hello there,
I have a few queries regarding the OneDev CI/CD pipeline.