Side Question: I noticed the default persistence.size is 100Gi. Is that still needed when using database.external? Can persistence.size be set to null?
Everything goes well until the onedevserver containers try to spin up. Here are the details:
Name: onedev-0
Namespace: onedev
Priority: 0
Node: refined-salmon-coalesce-75c004fb-ksdpd/192.168.50.186
Start Time: Sun, 04 Feb 2024 16:42:12 +0000
Labels: app.kubernetes.io/instance=onedev
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=onedev
controller-revision-hash=onedev-67f7cd54b9
statefulset.kubernetes.io/pod-name=onedev-0
Annotations: cni.projectcalico.org/containerID: 83eb2e5cfcbe9f60fd041d4887a60542ab86e4c4952948cc431f2cc5b05e8d30
cni.projectcalico.org/podIP: 10.42.1.38/32
cni.projectcalico.org/podIPs: 10.42.1.38/32
Status: Running
IP: 10.42.1.38
IPs:
IP: 10.42.1.38
Controlled By: StatefulSet/onedev
Containers:
onedevserver:
Container ID: containerd://a23ef71572b4bc7facc44fd918ffcdf12a6b923205bcf18fea8835b42325bf94
Image: docker.io/1dev/server:10.0.0
Image ID: docker.io/1dev/server@sha256:fdfb490bca03a7b928f8d77fa6dac59bc519ef55cade7be2a9473ef9f3e97f38
Ports: 6610/TCP, 6611/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 04 Feb 2024 16:44:49 +0000
Finished: Sun, 04 Feb 2024 16:45:04 +0000
Ready: False
Restart Count: 4
Environment:
k8s_service: onedev
max_memory_percent: 50
hibernate_dialect: io.onedev.server.persistence.PostgreSQLDialect
hibernate_connection_driver_class: org.postgresql.Driver
hibernate_connection_url: jdbc:postgresql://onedev-psql-postgresql-ha-pgpool.onedev.svc.cluster.local:5432/onedev
hibernate_connection_username: postgres
hibernate_connection_password: <set to the key 'dbPassword' in secret 'onedev-secrets'> Optional: false
hibernate_hikari_maximumPoolSize: 25
Mounts:
/opt/onedev from data (rw)
/opt/onedev/conf/trust-certs from trust-certs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5phnp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-onedev-0
ReadOnly: false
trust-certs:
Type: Secret (a volume populated by a Secret)
SecretName: onedev-trustcerts
Optional: true
kube-api-access-5phnp:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m9s default-scheduler Successfully assigned onedev/onedev-0 to refined-salmon-coalesce-75c004fb-ksdpd
Normal Pulled 92s (x5 over 4m5s) kubelet Container image "docker.io/1dev/server:10.0.0" already present on machine
Normal Created 92s (x5 over 4m4s) kubelet Created container onedevserver
Normal Started 92s (x5 over 4m4s) kubelet Started container onedevserver
Warning BackOff 48s (x10 over 3m28s) kubelet Back-off restarting failed container onedevserver in pod onedev-0_onedev(57838e46-1223-4d17-bd24-a61bd69929a7)
LAST SEEN TYPE REASON OBJECT MESSAGE
5m48s Normal Killing pod/onedev-0 Stopping container onedevserver
5m42s Normal Scheduled pod/onedev-0 Successfully assigned onedev/onedev-0 to refined-salmon-coalesce-75c004fb-ksdpd
3m6s Normal Pulled pod/onedev-0 Container image "docker.io/1dev/server:10.0.0" already present on machine
3m6s Normal Created pod/onedev-0 Created container onedevserver
3m6s Normal Started pod/onedev-0 Started container onedevserver
37s Warning BackOff pod/onedev-0 Back-off restarting failed container onedevserver in pod onedev-0_onedev(57838e46-1223-4d17-bd24-a61bd69929a7)
5m3s Warning FailedMount pod/onedev-1 Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data trust-certs kube-api-access-p8btv]: timed out waiting for the condition
4m49s Normal Scheduled pod/onedev-1 Successfully assigned onedev/onedev-1 to refined-salmon-coalesce-75c004fb-c8d2l
34s Warning FailedAttachVolume pod/onedev-1 AttachVolume.Attach failed for volume "pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8" : rpc error: code = Aborted desc = volume pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8 is not ready for workloads
33s Warning FailedMount pod/onedev-1 Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data trust-certs kube-api-access-227d2]: timed out waiting for the condition
5m43s Normal SuccessfulCreate statefulset/onedev create Pod onedev-0 in StatefulSet onedev successful
4m50s Normal SuccessfulCreate statefulset/onedev create Pod onedev-1 in StatefulSet onedev successful
5m48s Warning FailedToUpdateEndpoint endpoints/onedev Failed to update endpoint onedev/onedev: Operation cannot be fulfilled on endpoints "onedev": the object has been modified; please apply your changes to the latest version and try again
Because I was getting nowhere, I decided to try a different database, MySQL. I copied and pasted the OneDev instructions from the "Deployed into Kubernetes" guide and had a bit of success. The first onedevserver pod came up, but the second did not. Here's that info:
Name: onedev-1
Namespace: onedev
Priority: 0
Node: refined-salmon-coalesce-75c004fb-c8d2l/192.168.50.187
Start Time: Sun, 04 Feb 2024 15:37:17 +0000
Labels: app.kubernetes.io/instance=onedev
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=onedev
controller-revision-hash=onedev-77d87646b9
statefulset.kubernetes.io/pod-name=onedev-1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/onedev
Containers:
onedevserver:
Container ID:
Image: docker.io/1dev/server:10.0.0
Image ID:
Ports: 6610/TCP, 6611/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
k8s_service: onedev
max_memory_percent: 50
hibernate_dialect: org.hibernate.dialect.MySQL5InnoDBDialect
hibernate_connection_driver_class: com.mysql.cj.jdbc.Driver
hibernate_connection_url: jdbc:mysql://onedev-mysql.onedev.svc.cluster.local:3306/onedev?serverTimezone=UTC&allowPublicKeyRetrieval=true&useSSL=false
hibernate_connection_username: root
hibernate_connection_password: <set to the key 'dbPassword' in secret 'onedev-secrets'> Optional: false
hibernate_hikari_maximumPoolSize: 25
Mounts:
/opt/onedev from data (rw)
/opt/onedev/conf/trust-certs from trust-certs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8btv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-onedev-1
ReadOnly: false
trust-certs:
Type: Secret (a volume populated by a Secret)
SecretName: onedev-trustcerts
Optional: true
kube-api-access-p8btv:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 56m default-scheduler Successfully assigned onedev/onedev-1 to refined-salmon-coalesce-75c004fb-c8d2l
Warning FailedMount 7m2s (x4 over 36m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[trust-certs kube-api-access-p8btv data]: timed out waiting for the condition
Warning FailedMount 2m30s (x5 over 45m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-p8btv data trust-certs]: timed out waiting for the condition
Warning FailedAttachVolume 88s (x35 over 56m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8" : rpc error: code = Aborted desc = volume pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8 is not ready for workloads
Warning FailedMount 13s (x16 over 54m) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data trust-certs kube-api-access-p8btv]: timed out waiting for the condition
100s Warning FailedAttachVolume pod/onedev-1 AttachVolume.Attach failed for volume "pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8" : rpc error: code = Aborted desc = volume pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8 is not ready for workloads
2m4s Warning FailedMount pod/onedev-1 Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data trust-certs kube-api-access-p8btv]: timed out waiting for the condition
8m49s Warning FailedMount pod/onedev-1 Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[kube-api-access-p8btv data trust-certs]: timed out waiting for the condition
4m18s Warning FailedMount pod/onedev-1 Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[trust-certs kube-api-access-p8btv data]: timed out waiting for the condition
I was hoping I could work this out without creating an issue, but I have run into a brick wall. Any help would be much appreciated!
Robin Shen commented 3 months ago
It seems the PVC is not being satisfied. Do you have a default storage class? If not, please specify the persistence.storageClassName value.
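For reference, in a Helm values file that might look like the fragment below (the class name `longhorn` is only a placeholder; use whatever `kubectl get storageclass` reports in your cluster):

```yaml
# values.yaml fragment -- illustrative; key follows the persistence block mentioned above
persistence:
  storageClassName: longhorn   # replace with a storage class that exists in your cluster
```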
I do have a default storage class. In fact, it was used to provision the postgres pods, and when I tested MySQL, one onedevserver pod spun up but the other two would not (as shown above for "onedev-1").
Robin Shen commented 3 months ago
The message indicates that dynamic volume provisioning cannot complete for PVC "data-onedev-1". This looks like a cluster issue rather than a OneDev issue.
PS: I just tested deploying 3 replicas on a GKE cluster, and it works.
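For the next reader: the "not ready for workloads" text comes from the CSI driver, and the provisioner events later in this thread show it is `driver.longhorn.io`. A few diagnostic commands to narrow it down (the PV name is copied from the errors above; `longhorn-system` is Longhorn's default namespace and may differ in your install):

```shell
# Check the PVC and the PV it bound to
kubectl -n onedev get pvc data-onedev-1
kubectl describe pv pvc-28f8006a-ac57-44f2-9e66-248cddb3b2e8

# Inspect the Longhorn volume objects directly (Longhorn exposes them as CRDs)
kubectl -n longhorn-system get volumes.longhorn.io
```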
Johnathan Maravilla commented 3 months ago
Thanks. I'll seek help in a Kubernetes-centered forum. Regarding my side question -
I noticed the default persistence.size is 100Gi. Is that still needed when using database.external? Can persistence.size be set to null?
What is the best practice here?
Thanks again.
Robin Shen commented 3 months ago
It depends on the size of your git repositories: assuming a repository is 1G, you will need at least 3G of space (indexes, etc.). If you need to store artifacts and packages, more will be necessary. Always give it an explicit value.
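Translated into a values fragment, using the 1G-repository example above (the numbers are illustrative, not chart defaults):

```yaml
# values.yaml fragment -- sized for roughly 1G of git data plus indexes
persistence:
  size: 10Gi   # explicit value: ~3x repository size, with headroom for artifacts/packages
```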
Johnathan Maravilla commented 3 months ago
Thanks for that. One more thing before I go. It appears that I no longer have a PVC problem, but the onedevserver container itself is crashing:
kubectl get events -n onedev
LAST SEEN TYPE REASON OBJECT MESSAGE
54m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-0 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
54m Normal Provisioning persistentvolumeclaim/data-onedev-0 External provisioner is provisioning volume for claim "onedev/data-onedev-0"
54m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-0 Successfully provisioned volume pvc-6099d8ae-d6f9-49a2-a4d6-ed26fef6e9ad
53m Normal Provisioning persistentvolumeclaim/data-onedev-1 External provisioner is provisioning volume for claim "onedev/data-onedev-1"
53m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-1 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
53m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-1 Successfully provisioned volume pvc-4c515e16-ae9e-4f75-b065-4792d6071b90
53m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-2 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
53m Normal Provisioning persistentvolumeclaim/data-onedev-2 External provisioner is provisioning volume for claim "onedev/data-onedev-2"
53m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-2 Successfully provisioned volume pvc-69e9ab0b-b700-4dc8-b57a-bc692720264d
56m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-0 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
56m Normal Provisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-0 External provisioner is provisioning volume for claim "onedev/data-onedev-psql-postgresql-ha-postgresql-0"
56m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-0 Successfully provisioned volume pvc-9966d1fb-21df-4f49-ae0c-32f8f4290e5f
56m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-1 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
56m Normal Provisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-1 External provisioner is provisioning volume for claim "onedev/data-onedev-psql-postgresql-ha-postgresql-1"
56m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-1 Successfully provisioned volume pvc-f830b0db-c806-4b4a-a5c0-1730636b28b2
56m Normal ExternalProvisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-2 waiting for a volume to be created, either by external provisioner "driver.longhorn.io" or manually created by system administrator
56m Normal Provisioning persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-2 External provisioner is provisioning volume for claim "onedev/data-onedev-psql-postgresql-ha-postgresql-2"
56m Normal ProvisioningSucceeded persistentvolumeclaim/data-onedev-psql-postgresql-ha-postgresql-2 Successfully provisioned volume pvc-ca38a23d-7e0c-4289-916a-6ae3f75b3f70
54m Warning FailedScheduling pod/onedev-0 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
54m Warning FailedScheduling pod/onedev-0 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
54m Normal Scheduled pod/onedev-0 Successfully assigned onedev/onedev-0 to frank-louse-coalesce-4d41fcd4-vxvtz
54m Normal SuccessfulAttachVolume pod/onedev-0 AttachVolume.Attach succeeded for volume "pvc-6099d8ae-d6f9-49a2-a4d6-ed26fef6e9ad"
54m Normal Pulling pod/onedev-0 Pulling image "docker.io/1dev/server:10.0.0"
53m Normal Pulled pod/onedev-0 Successfully pulled image "docker.io/1dev/server:10.0.0" in 15.80388151s (15.803904592s including waiting)
51m Normal Created pod/onedev-0 Created container onedevserver
51m Normal Started pod/onedev-0 Started container onedevserver
51m Normal Pulled pod/onedev-0 Container image "docker.io/1dev/server:10.0.0" already present on machine
3m56s Warning BackOff pod/onedev-0 Back-off restarting failed container onedevserver in pod onedev-0_onedev(10c97e39-3189-42c3-be89-f5008d6d57bf)
53m Warning FailedScheduling pod/onedev-1 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
53m Warning FailedScheduling pod/onedev-1 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
53m Normal Scheduled pod/onedev-1 Successfully assigned onedev/onedev-1 to frank-louse-coalesce-4d41fcd4-vxvtz
53m Normal SuccessfulAttachVolume pod/onedev-1 AttachVolume.Attach succeeded for volume "pvc-4c515e16-ae9e-4f75-b065-4792d6071b90"
51m Normal Pulled pod/onedev-1 Container image "docker.io/1dev/server:10.0.0" already present on machine
51m Normal Created pod/onedev-1 Created container onedevserver
51m Normal Started pod/onedev-1 Started container onedevserver
3m28s Warning BackOff pod/onedev-1 Back-off restarting failed container onedevserver in pod onedev-1_onedev(335396d0-83bd-4d4a-a0a9-1b2bb2485782)
53m Warning FailedScheduling pod/onedev-2 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
53m Warning FailedScheduling pod/onedev-2 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
53m Normal Scheduled pod/onedev-2 Successfully assigned onedev/onedev-2 to frank-louse-coalesce-4d41fcd4-vxvtz
53m Normal SuccessfulAttachVolume pod/onedev-2 AttachVolume.Attach succeeded for volume "pvc-69e9ab0b-b700-4dc8-b57a-bc692720264d"
50m Normal Pulled pod/onedev-2 Container image "docker.io/1dev/server:10.0.0" already present on machine
50m Normal Created pod/onedev-2 Created container onedevserver
50m Normal Started pod/onedev-2 Started container onedevserver
3m2s Warning BackOff pod/onedev-2 Back-off restarting failed container onedevserver in pod onedev-2_onedev(f3b8a327-8910-4d1a-8a55-f9abfd191628)
56m Normal Scheduled pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Successfully assigned onedev/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 to frank-louse-coalesce-4d41fcd4-vxvtz
56m Normal Pulling pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Pulling image "docker.io/bitnami/pgpool:4.5.0-debian-11-r4"
56m Normal Pulled pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Successfully pulled image "docker.io/bitnami/pgpool:4.5.0-debian-11-r4" in 4.252696426s (4.252705268s including waiting)
56m Normal Created pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Created container pgpool
56m Normal Started pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Started container pgpool
55m Warning Unhealthy pod/onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4 Readiness probe failed: psql: error: connection to server on socket "/opt/bitnami/pgpool/tmp/.s.PGSQL.5432" failed: FATAL: failed to create a backend 0 connection...
56m Normal SuccessfulCreate replicaset/onedev-psql-postgresql-ha-pgpool-79b7d88cfd Created pod: onedev-psql-postgresql-ha-pgpool-79b7d88cfd-lhcv4
56m Normal ScalingReplicaSet deployment/onedev-psql-postgresql-ha-pgpool Scaled up replica set onedev-psql-postgresql-ha-pgpool-79b7d88cfd to 1
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-0 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-0 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Normal Scheduled pod/onedev-psql-postgresql-ha-postgresql-0 Successfully assigned onedev/onedev-psql-postgresql-ha-postgresql-0 to frank-louse-coalesce-4d41fcd4-vxvtz
56m Normal SuccessfulAttachVolume pod/onedev-psql-postgresql-ha-postgresql-0 AttachVolume.Attach succeeded for volume "pvc-9966d1fb-21df-4f49-ae0c-32f8f4290e5f"
56m Normal Pulling pod/onedev-psql-postgresql-ha-postgresql-0 Pulling image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21"
55m Normal Pulled pod/onedev-psql-postgresql-ha-postgresql-0 Successfully pulled image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21" in 6.47917498s (6.479200832s including waiting)
55m Normal Created pod/onedev-psql-postgresql-ha-postgresql-0 Created container postgresql
55m Normal Started pod/onedev-psql-postgresql-ha-postgresql-0 Started container postgresql
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-1 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-1 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Normal Scheduled pod/onedev-psql-postgresql-ha-postgresql-1 Successfully assigned onedev/onedev-psql-postgresql-ha-postgresql-1 to frank-louse-coalesce-4d41fcd4-vxvtz
56m Normal SuccessfulAttachVolume pod/onedev-psql-postgresql-ha-postgresql-1 AttachVolume.Attach succeeded for volume "pvc-f830b0db-c806-4b4a-a5c0-1730636b28b2"
56m Normal Pulling pod/onedev-psql-postgresql-ha-postgresql-1 Pulling image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21"
55m Normal Pulled pod/onedev-psql-postgresql-ha-postgresql-1 Successfully pulled image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21" in 4.830934811s (4.830952982s including waiting)
55m Normal Created pod/onedev-psql-postgresql-ha-postgresql-1 Created container postgresql
55m Normal Started pod/onedev-psql-postgresql-ha-postgresql-1 Started container postgresql
55m Warning Unhealthy pod/onedev-psql-postgresql-ha-postgresql-1 Readiness probe failed: psql: error: connection to server at "127.0.0.1", port 5432 failed: Connection refused...
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-2 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Warning FailedScheduling pod/onedev-psql-postgresql-ha-postgresql-2 0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
56m Normal Scheduled pod/onedev-psql-postgresql-ha-postgresql-2 Successfully assigned onedev/onedev-psql-postgresql-ha-postgresql-2 to frank-louse-coalesce-4d41fcd4-vxvtz
56m Normal SuccessfulAttachVolume pod/onedev-psql-postgresql-ha-postgresql-2 AttachVolume.Attach succeeded for volume "pvc-ca38a23d-7e0c-4289-916a-6ae3f75b3f70"
56m Normal Pulling pod/onedev-psql-postgresql-ha-postgresql-2 Pulling image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21"
55m Normal Pulled pod/onedev-psql-postgresql-ha-postgresql-2 Successfully pulled image "docker.io/bitnami/postgresql-repmgr:16.1.0-debian-11-r21" in 6.461018233s (6.461029269s including waiting)
55m Normal Created pod/onedev-psql-postgresql-ha-postgresql-2 Created container postgresql
55m Normal Started pod/onedev-psql-postgresql-ha-postgresql-2 Started container postgresql
55m Warning Unhealthy pod/onedev-psql-postgresql-ha-postgresql-2 Readiness probe failed: psql: error: connection to server at "127.0.0.1", port 5432 failed: Connection refused...
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Claim data-onedev-psql-postgresql-ha-postgresql-0 Pod onedev-psql-postgresql-ha-postgresql-0 in StatefulSet onedev-psql-postgresql-ha-postgresql success
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Pod onedev-psql-postgresql-ha-postgresql-0 in StatefulSet onedev-psql-postgresql-ha-postgresql successful
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Claim data-onedev-psql-postgresql-ha-postgresql-2 Pod onedev-psql-postgresql-ha-postgresql-2 in StatefulSet onedev-psql-postgresql-ha-postgresql success
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Claim data-onedev-psql-postgresql-ha-postgresql-1 Pod onedev-psql-postgresql-ha-postgresql-1 in StatefulSet onedev-psql-postgresql-ha-postgresql success
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Pod onedev-psql-postgresql-ha-postgresql-2 in StatefulSet onedev-psql-postgresql-ha-postgresql successful
56m Normal SuccessfulCreate statefulset/onedev-psql-postgresql-ha-postgresql create Pod onedev-psql-postgresql-ha-postgresql-1 in StatefulSet onedev-psql-postgresql-ha-postgresql successful
54m Normal SuccessfulCreate statefulset/onedev create Claim data-onedev-0 Pod onedev-0 in StatefulSet onedev success
54m Normal SuccessfulCreate statefulset/onedev create Pod onedev-0 in StatefulSet onedev successful
53m Normal SuccessfulCreate statefulset/onedev create Claim data-onedev-1 Pod onedev-1 in StatefulSet onedev success
53m Normal SuccessfulCreate statefulset/onedev create Pod onedev-1 in StatefulSet onedev successful
53m Normal SuccessfulCreate statefulset/onedev create Claim data-onedev-2 Pod onedev-2 in StatefulSet onedev success
53m Normal SuccessfulCreate statefulset/onedev create Pod onedev-2 in StatefulSet onedev successful
53m Warning FailedToUpdateEndpoint endpoints/onedev Failed to update endpoint onedev/onedev: Operation cannot be fulfilled on endpoints "onedev": the object has been modified; please apply your changes to the latest version and try again
kubectl logs -n onedev onedev-2
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
INFO - Launching application from '/app'...
INFO - Starting application...
INFO - Successfully checked /opt/onedev
INFO - Stopping application...
<-- Wrapper Stopped
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
06:46:23 INFO i.onedev.commons.bootstrap.Bootstrap - Launching application from '/opt/onedev'...
06:46:23 INFO i.onedev.commons.bootstrap.Bootstrap - Cleaning temp directory...
06:46:23 INFO io.onedev.commons.loader.AppLoader - Starting application...
06:46:27 ERROR i.onedev.commons.bootstrap.Bootstrap - Error booting application
java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: unable to get session context
at io.onedev.commons.bootstrap.Bootstrap.unchecked(Bootstrap.java:318)
at io.onedev.commons.utils.ExceptionUtils.unchecked(ExceptionUtils.java:31)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:38)
at io.onedev.server.data.DefaultDataManager.openConnection(DefaultDataManager.java:238)
at io.onedev.server.ee.clustering.DefaultClusterManager.start(DefaultClusterManager.java:102)
at io.onedev.server.OneDev.start(OneDev.java:139)
at io.onedev.commons.loader.DefaultPluginManager.start(DefaultPluginManager.java:44)
at io.onedev.commons.loader.AppLoader.start(AppLoader.java:60)
at io.onedev.commons.bootstrap.Bootstrap.main(Bootstrap.java:199)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:349)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.postgresql.util.PSQLException: FATAL: unable to get session context
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)
at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2618)
at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:135)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:250)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
at org.postgresql.Driver.makeConnection(Driver.java:458)
at org.postgresql.Driver.connect(Driver.java:260)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:36)
... 12 common frames omitted
06:46:27 INFO io.onedev.commons.loader.AppLoader - Stopping application...
<-- Wrapper Stopped
Do I need to create the onedev db within postgres? Edit: Probably not; after testing, the same issue happens with mysql.
Robin Shen commented 3 months ago
Yes, the db needs to be created before OneDev connects to the db server. For MySQL, it will be created automatically via an environment variable in the example mysql deployment spec.
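For PostgreSQL that means creating the database by hand before the first OneDev start. A minimal sketch, assuming the pod name and `postgres` user from the output above (verify both in your environment; `psql` ships in the Bitnami image):

```shell
# Create the "onedev" database on the primary postgres node (names are assumptions; adjust)
kubectl exec -n onedev onedev-psql-postgresql-ha-postgresql-0 -- \
  env PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres -h 127.0.0.1 -c 'CREATE DATABASE onedev;'
```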
Johnathan Maravilla commented 3 months ago
Doesn't seem to be the case with MySQL (9.18.0)...
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
INFO - Launching application from '/app'...
INFO - Starting application...
INFO - Successfully checked /opt/onedev
INFO - Stopping application...
<-- Wrapper Stopped
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
15:17:11 INFO i.onedev.commons.bootstrap.Bootstrap - Launching application from '/opt/onedev'...
15:17:11 INFO i.onedev.commons.bootstrap.Bootstrap - Cleaning temp directory...
15:17:11 INFO io.onedev.commons.loader.AppLoader - Starting application...
15:17:15 ERROR i.onedev.commons.bootstrap.Bootstrap - Error booting application
java.lang.RuntimeException: java.sql.SQLSyntaxErrorException: Unknown database 'onedev'
at io.onedev.commons.bootstrap.Bootstrap.unchecked(Bootstrap.java:318)
at io.onedev.commons.utils.ExceptionUtils.unchecked(ExceptionUtils.java:31)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:38)
at io.onedev.server.data.DefaultDataManager.openConnection(DefaultDataManager.java:238)
at io.onedev.server.ee.clustering.DefaultClusterManager.start(DefaultClusterManager.java:102)
at io.onedev.server.OneDev.start(OneDev.java:139)
at io.onedev.commons.loader.DefaultPluginManager.start(DefaultPluginManager.java:44)
at io.onedev.commons.loader.AppLoader.start(AppLoader.java:60)
at io.onedev.commons.bootstrap.Bootstrap.main(Bootstrap.java:199)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:349)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.sql.SQLSyntaxErrorException: Unknown database 'onedev'
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:832)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:207)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:36)
... 12 common frames omitted
15:17:15 INFO io.onedev.commons.loader.AppLoader - Stopping application...
<-- Wrapper Stopped
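One likely explanation, for what it's worth: the stock `mysql` image only honors its auto-create variables on first initialization of an empty data directory, so if the MySQL PVC was initialized before `MYSQL_DATABASE` was set, the database never gets created. The relevant env fragment of the deployment spec would look roughly like this (the secret name is an assumption, matching the one referenced above):

```yaml
# mysql container env fragment -- illustrative
env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: onedev-secrets   # assumed secret, matching the one in the pod description
        key: dbPassword
  - name: MYSQL_DATABASE
    value: onedev              # created automatically only on first init of an empty data dir
```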
Johnathan Maravilla commented 3 months ago
Back to postgres-ha, it appears I am making some headway here. Getting excited that I am almost at the finish line, haha.
After creating a 'onedev' database in postgres and then installing onedev with a single replica, it appears to work! No crashes, although I am not sure if the "INFO - Stopping application..." is supposed to happen:
> k logs -n onedev onedev-0
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
INFO - Launching application from '/app'...
INFO - Starting application...
INFO - Successfully checked /opt/onedev
INFO - Stopping application...
<-- Wrapper Stopped
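For reference, the manual step described above (creating the 'onedev' database before first boot) can be sketched as follows. This is an untested one-liner that reuses the pgpool service name from the pod description earlier; the client pod and password are placeholders:

```shell
# Throwaway psql client pod; substitute your real postgres password.
kubectl run psql-client -n onedev --rm -i --restart=Never \
  --image=postgres:16 --env="PGPASSWORD=<postgres-password>" -- \
  psql -h onedev-psql-postgresql-ha-pgpool.onedev.svc.cluster.local \
       -U postgres -c "CREATE DATABASE onedev;"
```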
When I do a helm upgrade of onedev with more than one replica (I set it to 3), all 3 pods start crashing again...
> k logs -n onedev onedev-0
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
INFO - Launching application from '/app'...
INFO - Starting application...
INFO - Populating /opt/onedev...
INFO - Successfully populated /opt/onedev
INFO - Stopping application...
<-- Wrapper Stopped
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
16:35:54 INFO i.onedev.commons.bootstrap.Bootstrap - Launching application from '/opt/onedev'...
16:35:54 INFO io.onedev.commons.loader.AppLoader - Starting application...
16:36:14 INFO i.o.s.e.i.DefaultBuildMetricManager - Caching build metric info...
16:36:14 INFO i.o.s.e.i.DefaultBuildParamManager - Caching build param info...
16:36:15 INFO i.o.s.e.impl.DefaultBuildManager - Caching build info...
16:36:15 INFO i.o.s.e.impl.DefaultProjectManager - Checking projects...
16:36:15 INFO i.o.s.e.i.DefaultAgentAttributeManager - Caching agent attribute info...
16:36:15 INFO i.o.s.e.impl.DefaultIssueManager - Caching issue info...
16:36:22 WARN io.onedev.server.OneDev - Please set up the server at http://10.43.2.106
16:40:21 ERROR i.onedev.commons.bootstrap.Bootstrap - Error booting application
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at io.onedev.server.event.Listener.notify(Listener.java:21)
at io.onedev.server.event.DefaultListenerRegistry.invokeListeners(DefaultListenerRegistry.java:103)
at io.onedev.server.event.DefaultListenerRegistry.post(DefaultListenerRegistry.java:153)
at io.onedev.server.persistence.TransactionInterceptor$1.call(TransactionInterceptor.java:23)
at io.onedev.server.persistence.DefaultTransactionManager.lambda$call$0(DefaultTransactionManager.java:59)
at io.onedev.server.persistence.DefaultSessionManager.call(DefaultSessionManager.java:90)
at io.onedev.server.persistence.DefaultTransactionManager.call(DefaultTransactionManager.java:57)
at io.onedev.server.persistence.TransactionInterceptor.invoke(TransactionInterceptor.java:18)
at io.onedev.server.persistence.dao.DefaultDao.persist(DefaultDao.java:52)
at io.onedev.server.persistence.TransactionInterceptor$1.call(TransactionInterceptor.java:23)
at io.onedev.server.persistence.DefaultTransactionManager.lambda$call$0(DefaultTransactionManager.java:59)
at io.onedev.server.persistence.DefaultSessionManager.call(DefaultSessionManager.java:90)
at io.onedev.server.persistence.DefaultTransactionManager.call(DefaultTransactionManager.java:57)
at io.onedev.server.persistence.TransactionInterceptor.invoke(TransactionInterceptor.java:18)
at jdk.internal.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.google.inject.internal.DelegatingInvocationHandler.invoke(DelegatingInvocationHandler.java:50)
at com.sun.proxy.$Proxy29.persist(Unknown Source)
at io.onedev.server.entitymanager.impl.DefaultSettingManager.saveSetting(DefaultSettingManager.java:116)
at io.onedev.server.persistence.TransactionInterceptor$1.call(TransactionInterceptor.java:23)
at io.onedev.server.persistence.DefaultTransactionManager.lambda$call$0(DefaultTransactionManager.java:59)
at io.onedev.server.persistence.DefaultSessionManager.call(DefaultSessionManager.java:90)
at io.onedev.server.persistence.DefaultTransactionManager.call(DefaultTransactionManager.java:57)
at io.onedev.server.persistence.TransactionInterceptor.invoke(TransactionInterceptor.java:18)
at io.onedev.server.entitymanager.impl.DefaultSettingManager.saveEmailTemplates(DefaultSettingManager.java:297)
at io.onedev.server.persistence.TransactionInterceptor$1.call(TransactionInterceptor.java:23)
at io.onedev.server.persistence.DefaultTransactionManager.lambda$call$0(DefaultTransactionManager.java:59)
at io.onedev.server.persistence.DefaultSessionManager.call(DefaultSessionManager.java:90)
at io.onedev.server.persistence.DefaultTransactionManager.call(DefaultTransactionManager.java:57)
> k logs -n onedev onedev-1 --previous
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
INFO - Launching application from '/app'...
INFO - Starting application...
INFO - Successfully checked /opt/onedev
INFO - Stopping application...
<-- Wrapper Stopped
--> Wrapper Started as Console
Java Service Wrapper Standard Edition 64-bit 3.5.51
Copyright (C) 1999-2022 Tanuki Software, Ltd. All Rights Reserved.
http://wrapper.tanukisoftware.com
Licensed to OneDev for Service Wrapping
Launching a JVM...
WrapperManager: Initializing...
default THREAD FACTORY
17:09:17 INFO i.onedev.commons.bootstrap.Bootstrap - Launching application from '/opt/onedev'...
17:09:17 INFO i.onedev.commons.bootstrap.Bootstrap - Cleaning temp directory...
17:09:18 INFO io.onedev.commons.loader.AppLoader - Starting application...
17:09:21 ERROR i.onedev.commons.bootstrap.Bootstrap - Error booting application
java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: Sorry, too many clients already
at io.onedev.commons.bootstrap.Bootstrap.unchecked(Bootstrap.java:318)
at io.onedev.commons.utils.ExceptionUtils.unchecked(ExceptionUtils.java:31)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:38)
at io.onedev.server.data.DefaultDataManager.openConnection(DefaultDataManager.java:238)
at io.onedev.server.ee.clustering.DefaultClusterManager.start(DefaultClusterManager.java:102)
at io.onedev.server.OneDev.start(OneDev.java:139)
at io.onedev.commons.loader.DefaultPluginManager.start(DefaultPluginManager.java:44)
at io.onedev.commons.loader.AppLoader.start(AppLoader.java:60)
at io.onedev.commons.bootstrap.Bootstrap.main(Bootstrap.java:199)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.tanukisoftware.wrapper.WrapperSimpleApp.run(WrapperSimpleApp.java:349)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.postgresql.util.PSQLException: FATAL: Sorry, too many clients already
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:520)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:141)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
at org.postgresql.Driver.makeConnection(Driver.java:458)
at org.postgresql.Driver.connect(Driver.java:260)
at io.onedev.server.persistence.PersistenceUtils.openConnection(PersistenceUtils.java:36)
... 12 common frames omitted
17:09:21 INFO io.onedev.commons.loader.AppLoader - Stopping application...
<-- Wrapper Stopped
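"FATAL: Sorry, too many clients already" is the server (or Pgpool-II in front of it) refusing a connection because its client limit is exhausted. The pod spec above sets hibernate_hikari_maximumPoolSize to 25 per replica, so three replicas can demand 75 pooled connections at peak. That exceeds Pgpool-II's commonly cited default of 32 child processes (num_init_children; verify the actual value in your chart). A back-of-envelope check, where PGPOOL_CHILDREN is an assumed default:

```shell
# Rough capacity check. POOL_SIZE and REPLICAS come from this thread;
# PGPOOL_CHILDREN is an assumed Pgpool-II default (num_init_children).
POOL_SIZE=25
REPLICAS=3
PGPOOL_CHILDREN=32

NEEDED=$((POOL_SIZE * REPLICAS))
echo "peak pooled connections: $NEEDED"
if [ "$NEEDED" -gt "$PGPOOL_CHILDREN" ]; then
  echo "demand exceeds the assumed Pgpool-II limit of $PGPOOL_CHILDREN"
fi
```

Under these assumptions it prints "peak pooled connections: 75" followed by the warning, which matches the symptom of the second and third replicas failing to connect.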
Johnathan Maravilla commented 3 months ago
Closing issue as I believe this to be an issue with bitnami/postgresql-ha. I'll do some investigating on how to fix this. If I come across a fix, I'll comment so that anyone else who runs into this will have a reference.
I know you are super busy, but I'd like to request official documentation on how to prepare a supported db for external connections.
Thanks again, cheers!
Johnathan Maravilla changed state from 'Open' to 'Closed' 3 months ago
Johnathan Maravilla commented 3 months ago
Issue is related to bitnami/postgresql-ha.
Robin Shen commented 3 months ago
> Closing issue as I believe this to be an issue with bitnami/postgresql-ha. I'll do some investigating on how to fix this. If I come across a fix, I'll comment so that anyone else who runs into this will have a reference.
> I know you are super busy, but I'd like to request official documentation on how to prepare a supported db for external connections.
> Thanks again, cheers!
Thanks for being willing to share your further investigations. I will add documentation on how to set up PostgreSQL.
Johnathan Maravilla commented 3 months ago
Ended up abandoning bitnami/postgresql-ha. The issue I ran into when attempting more than one replica and getting "FATAL: Sorry, too many clients already" may be resolved by increasing the number of Pgpool-II replicas (if anyone sees this and wants to go down that road). This ended up being overly complicated for my environment.
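For anyone who does want to try the Pgpool-II sizing route mentioned above, it would look roughly like the following untested sketch. The release name is taken from the pgpool service name in this thread, but the parameter names (pgpool.replicaCount, pgpool.numInitChildren, postgresql.maxConnections) should be verified against your version of the bitnami/postgresql-ha chart's values.yaml:

```shell
# Untested sketch -- verify parameter names against your chart version.
helm upgrade onedev-psql bitnami/postgresql-ha -n onedev \
  --reuse-values \
  --set pgpool.replicaCount=2 \
  --set pgpool.numInitChildren=120 \
  --set postgresql.maxConnections=200
```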
I decided to use stackgres.io instead, and HA worked flawlessly! Cheers!
Robin Shen commented 3 months ago
Thanks for the update. Have you also solved the pvc not ready issue? Can multiple onedev replicas get up and running now?
Hello, Robin!
I am experiencing an issue where my onedev pod and its replicas will not spin up successfully.
Here is my setup (probably more information than necessary):
Here are the steps I take from a fresh install of a 3 node cluster:
Side Question: I noticed the default persistence.size is 100Gi. Is that still needed when using database.external? Can persistence.size be set to null?
Everything goes well until the onedevserver containers try to spin up. Here are the details:
Because I was getting nowhere, I decided to try a different database, mysql. I copied and pasted the OneDev instructions from here: "Deployed into Kubernetes", and had a bit of success. The first onedevserver pod came up, but the second did not. Here's that info:
I was hoping I could work this out without creating an issue, but I have run into a brick wall. Any help would be much appreciated!