-
State changed from Open to Closed
-
I am using the below values.yaml and it works. Please check your custom deploy config to make sure your setting is taking effect.
```
...
ingress:
  ## Optionally specify dns name to access OneDev via ingress controller
  host: "test.onedev.io"
  ## Whether or not to enable TLS for above host
  tls: false
  ## Will be used as ingressClassName of ingress spec to match controller
  class: nginx
  ## Set to false to disable the default certificate issuer from this chart if you want to use a custom one
  enableDefaultIssuer: true
  ## Custom annotations for the ingress
  annotations:
    # kubernetes.io/tls-acme: "true"
    # cert-manager.io/issuer: ""
    # cert-manager.io/cluster-issuer: ""
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
...
```

The NPE when manually uploading an artifact is a bug though, which will be fixed in issue #917.
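To verify the annotation actually landed on the ingress object, you can inspect it with kubectl (a sketch; it assumes the ingress created by the chart is named `onedev`):

```
$ kubectl describe ingress onedev | grep proxy-body-size
```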
-
> I am using the below values.yaml and it works. Please check your custom deploy config to make sure your setting is taking effect.
OK, the problem was... weird, and I think it boils down to 1dev sending everything as one request, hence the somewhat "arbitrary" 50M limit, which didn't match anything... but: we had `nginx.ingress.kubernetes.io/proxy-body-size: "200m"`, and we were producing 4 files that were subsequently deployed, so in the end we hit that 200M limit as the sum of the sizes of all files...
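In case it helps anyone hitting the same thing: a quick way to estimate the size of the single upload request (i.e. the tar of everything the artifact pattern matches) is to tar the directory locally and count the bytes. A sketch, using our artifact output directory:

```
$ tar -cf - -C target _dist | wc -c
```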
-
I'm hitting the issue again and it's kinda weird... I had the limit in 1dev set to 400M and in the nginx/k8s ingress to 768M. Our (filtered) artifacts total 259M, but we were hitting the limit again. Individual files are about 60-70M. We do have an artifacts filter for the `target/_dist` directory, which is only:

```
$ du -sh target/_dist
259M    target/_dist
```

The whole target directory is about ~760M:

```
$ du -sh target/
761M    target/
```

Would it be possible for 1dev to publish (filtered) artifacts individually so we could have saner nginx limits?
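For reference, this is roughly how I checked the individual file sizes (a sketch):

```
$ find target/_dist -type f -exec du -h {} + | sort -h
```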
-
I created a k8s cluster on Google Cloud, but cannot reproduce it. Can you please help to reproduce it in that environment so that I can investigate the issue more easily?
-
Can't share the environment as it's our main deployment and getting access could be problematic.
This is the project that causes issues: https://tigase.dev/tigase/_server/tigase-server/
It's rather simple but packs a lot, and also uses `jib` to build docker images. Nevertheless, `target/_dist` is ~250M and individual files are smaller.
-
OneDev will only tar all files matching the specified patterns in the specified source directory. In your case, 250M is still less than the 768M limit you set on the Nginx side, and this sounds odd to me.
I do not mean to ask you to share your k8s env. If you can reproduce the issue with a simple k8s setup on Google Cloud, it will help a lot.
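If you want to double check where the rejection happens, you can POST an oversized body to any path behind the ingress, since nginx checks the body size before proxying to OneDev. A sketch (the hostname is a placeholder, and the 300M size assumes a 200m limit):

```
$ dd if=/dev/zero of=/tmp/blob bs=1M count=300
$ curl -sk -o /dev/null -w '%{http_code}\n' -X POST --data-binary @/tmp/blob https://<your-onedev-host>/
# expect 413 if the body exceeds proxy-body-size
```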
-
State changed from Closed to Open
-
Type changed from Bug to Question
-
More info for my test on a new k8s cluster on GCP. I deployed OneDev and enabled ingress so that traffic goes through nginx. Initially I set `nginx.ingress.kubernetes.io/proxy-body-size` to `200m`. Then I published a 160m file, which worked fine. Then I published five 160m files altogether, and Nginx complained about "entity too large". After changing the body size to "1000m", it works fine again.
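For anyone who wants to replicate the test, this is roughly how I generated the files (a sketch; sizes match the ones above):

```
$ for i in 1 2 3 4 5; do dd if=/dev/urandom of="artifact-$i.bin" bs=1M count=160; done
```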
-
> OneDev will only tar all files matching the specified patterns in the specified source directory. In your case, 250M is still less than the 768M limit you set on the Nginx side, and this sounds odd to me.

Would you consider NOT tarring the files? In our setup it's relatively simple (only 5 archives), but if we add some more flavours it will balloon again.
Yes, we bumped the limit, but IMHO it should be per-file, so you shouldn't have to set it to very high values.
-
That can slow down transferring considerably, especially when there are many files to publish (such as publishing javadoc etc).
-
Haven't considered publishing javadoc via the 1dev build pipeline - maybe/possibly a valid point.
Maybe an option in Publish / Artifacts whether "to tar or not to tar" (pardon the pun :) )? Though looking at the "Publish" sub-options, I'd argue that this could be dynamic - for site/reports/etc (implies lots of small files) use tar, and for "artifacts" (implies few big files) don't use tar?
-
Sorry, I don't want to complicate things here.
-
You may just define `nginx.ingress.kubernetes.io/proxy-body-size` as `0` to avoid such issues.
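For example, applied directly to an existing ingress (a sketch; the ingress name `onedev` is an assumption, and a value of `0` disables nginx's client body size check). If the ingress is managed by the helm chart, set the annotation in values.yaml instead so it survives upgrades:

```
$ kubectl annotate ingress onedev \
    nginx.ingress.kubernetes.io/proxy-body-size=0 --overwrite
```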
-
> You may just define `nginx.ingress.kubernetes.io/proxy-body-size` as `0` to avoid such issues.

Well, yes. But that somewhat defeats the purpose of the limits?
-
Then this should be set to the upper bound of the total size of all files that might be published, 😂
-
State changed from Open to Closed
| Type | Question |
| Priority | Normal |
| Assignee | |
When trying to publish an artifact bigger than 50M, I'm getting the error "413 Request Entity Too Large" from nginx.
We are running a custom k8s chart, so I already increased nginx's maximum body size in the ingress (`nginx.ingress.kubernetes.io/proxy-body-size: 200m`), but still the issue persists.
I dug a bit in the sources of the PublishArtifact step (https://github.com/theonedev/onedev/blob/main/server-core/src/main/java/io/onedev/server/buildspec/step/PublishArtifactStep.java#L76-L76) and it seems like a regular copy… (though, we are using the Kubernetes Executor, so there's that).
I thought that the file size limit from the performance options might be the problem (though publication of artifacts smaller than 50M works just fine), but after increasing this option from 20M to 100M it still doesn't work. However, manual upload now doesn't complain about the file size limit, though it fails when trying to submit the form just the same:
Any hint as to where this 50M limit could be configured would be highly appreciated.
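In case it helps: one way to see what limit nginx actually has in effect is to grep the generated config inside the ingress controller pod (a sketch; the namespace and deployment name are assumptions based on a default ingress-nginx install):

```
$ kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
    grep client_max_body_size /etc/nginx/nginx.conf
```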