max opened 2 years ago
Thanks for the feedback. The Helm chart implementation was contributed by another user, and I am working on top of that contribution to make it production ready, with improvements to both the manual and the chart. I will consider your suggestions while improving it. As to the ClusterRole permissions:
Also, the node `patch` permission can be removed. If it is removed, do not forget to disable the "create cache labels" option in the Kubernetes executor.
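As a rough illustration of what trimming that permission looks like (the name and the remaining verbs here are assumptions, not the chart's actual RBAC), the ClusterRole would simply drop the `patch` verb on nodes:

```yaml
# Illustrative sketch only: the ClusterRole name and remaining verbs are
# assumptions. The point is that the "patch" verb on nodes is gone; with
# it removed, the "create cache labels" option must be disabled too.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ci-executor          # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list"]   # "patch" removed
```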
#1: This isn't a good practice. The namespace(s) should be user-defined. For tracking the CI pods, you'd simply add a unique ID suffix to the pod name; this is how other executors, such as Jenkins or Airflow, handle it. Dynamically creating namespaces for ephemeral pods is an anti-pattern.

#2: CI jobs should not be touching K8s resources, or have access to them, by default. Access should be provided as a credential that is maintained by the user. The app (or any app, for that matter) shouldn't be allowed to arbitrarily create K8s resources.

#3: Nothing should be cached on the nodes. If statefulness is needed, a PVC should be used. You risk filling up the node disk, and you punish the OS/kubelet disk with IOPS. Kubernetes nodes, especially those running etcd, are sensitive to disk latency. The practice of writing to the host should be avoided. Most enterprises disable …
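The unique-suffix approach in #1 can be sketched with `metadata.generateName`, which makes the API server append a random suffix to each pod name (the namespace, labels, and image below are hypothetical):

```yaml
# Sketch of #1: a fixed, user-defined namespace plus generateName,
# so each CI pod gets a unique name without creating namespaces.
apiVersion: v1
kind: Pod
metadata:
  generateName: ci-job-   # API server appends a unique suffix
  namespace: ci           # hypothetical, user-defined namespace
  labels:
    ci-run: "12345"       # hypothetical label for tracking and cleanup
spec:
  restartPolicy: Never
  containers:
    - name: job
      image: busybox
      command: ["sh", "-c", "echo running build step"]
```

Cleanup then becomes a label-selector delete rather than a namespace delete.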
#1: Resources with a prefix/suffix can be difficult to clean up if they are left over for some reason. Also, the namespace-per-CI-job approach provides an isolated environment for each CI job. For instance, multiple instances of the same CI job might be running at once, and each can access a defined service name without interfering with the others.

#2: Can you elaborate? If a CI job wants to deploy resources into some namespace, how do we give the job pod deploy permissions with a credential? The target namespace can even be dynamic; for instance, the job might want to deploy into a namespace identified by the git branch.

#3: I thought about mounting a PVC to store the cache, but that is problematic for concurrent jobs, as a single PVC generally cannot be mounted on multiple nodes for read/write, not to mention the configuration overhead. I also thought about uploading/downloading the cache to a particular slot of some file system service, but that is much slower, and can even be slower than running without a cache if you have a fast network. I guess there is no perfect approach to caching.
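On the multi-node read/write concern in #3: a single PVC can be mounted by concurrent jobs on different nodes where the storage class supports the ReadWriteMany access mode (NFS- or CephFS-backed classes, for example). A sketch, with the class name assumed:

```yaml
# Sketch only: requires an RWX-capable provisioner in the cluster;
# the storage class name and size here are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-build-cache
spec:
  accessModes: ["ReadWriteMany"]   # allows mounts from multiple nodes
  storageClassName: nfs-client     # hypothetical RWX-capable class
  resources:
    requests:
      storage: 50Gi
```

The trade-off stands, though: RWX backends add latency and operational overhead that node-local disk does not have.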
For #2, I also want to mention that only authorized jobs can use a particular Kubernetes executor, so one can allow only trusted jobs to run with a Kubernetes executor that has extra cluster roles.
Any resource (including a namespace) could get left over. The K8s API provides a convenient way of monitoring these resources with a …
A Namespace in itself doesn't isolate resources. It's just a way of logically grouping resources together. Further, if a CI Job needs an isolated environment, the last thing I'd want to do is give it the "keys to the castle" of my K8s cluster.
Via a ServiceAccount
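A sketch of that ServiceAccount-based setup (all names and the target namespace are illustrative): the user maintains a namespaced Role and binds it to a ServiceAccount that the job pod runs as, so deploy rights come from a user-managed credential rather than the executor's own ClusterRole:

```yaml
# Illustrative only: names, namespace, and the rule set are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployer
  namespace: staging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy
  namespace: staging
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deploy
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: staging
```

The job pod would then set `spec.serviceAccountName: deployer`; for a dynamic namespace (e.g. per git branch), the same Role/RoleBinding pair could be stamped out per namespace by whoever owns the cluster.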
I wouldn't put too much emphasis on caching; I think for most folks using this, re-downloading a few dependencies isn't that big a deal. At an enterprise scale, maybe. Every major cloud provider these days has … Anyway, I won't belabor these issues any more; I just wanted to give you some feedback. Thanks
I will think about your comments for future development. Thanks a lot for all the feedback.
I thought I'd lend some ideas for improvement for the Helm / Kubernetes install: the `patch` rule on `nodes` and the `clusterrolebindings` rule, and the Service Account for agent builds, which should be predetermined, with a predetermined role. Unfortunately #4 is a showstopper for me. However, the project looks awesome, and if I find some time I will try to address the above via a PR.