Zhou You opened 2 years ago
|||||
Most of them are in the idle state; you can edit hibernate.properties and set maximumPoolSize to a smaller value.
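As a rough sketch (the exact key names are an assumption here; `hibernate.hikari.*` is how the Hibernate HikariCP integration maps pool properties, and the values below are illustrative), the relevant lines in hibernate.properties might look like:

```properties
# Hypothetical sketch: cap the pool so this instance cannot exhaust
# the Postgres max_connections budget shared with other applications.
hibernate.hikari.maximumPoolSize=10
hibernate.hikari.minimumIdle=2
```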
|||||
Also, is this error easy to reproduce?
|||||
At first I tried to specify some settings through environment variables, but they had no effect. Reading the code, I found that only a few hard-coded variables take effect, so I edited hibernate.properties instead.
The database we use is
Connection usage starts at 5 and grows slowly.
|||||
It may be that the Docker (Swarm / K8s) IPVS default idle timeout (900s) is silently dropping idle connections. So when Hikari closes a connection, Postgres is not aware of it at all; from Postgres's point of view the connection still exists. That causes a connection leak. I'll change maxLifetime and observe for a day.
|||||
I'm not very familiar with IPVS. If you find the root cause, please let me know so I can write it into the documentation. Thanks!
|||||
It has been confirmed that this was indeed an IPVS problem. After changing maxLifetime to 10 minutes, there have been no more connection leaks.
That said, I'd still suggest making all of the connection pool settings adjustable through environment variables.
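A hedged sketch of the workaround described above (the property name assumes the Hibernate HikariCP integration; Hikari takes maxLifetime in milliseconds):

```properties
# 10 minutes, comfortably below the 900s IPVS idle timeout,
# so Hikari retires connections before IPVS silently drops them.
hibernate.hikari.maxLifetime=600000
```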
|||||
Robin Shen changed state to 'Closed' 2 years ago
|||||
Thank you. These properties will be added to environment variables. |
|||||
As I also run OneDev in Docker, I encountered that problem too. Instead of changing maxLifetime, in my docker-compose file I have added the following to all services that have such idle behavior:
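A sketch of such a compose entry, reconstructed to match the 300s/30s/10-probe values described in this comment (the service name and other keys are illustrative):

```yaml
services:
  onedev:
    # ... image, ports, volumes ...
    sysctls:
      net.ipv4.tcp_keepalive_time: 300    # first keep-alive probe after 300s of idle
      net.ipv4.tcp_keepalive_intvl: 30    # then a probe every 30s
      net.ipv4.tcp_keepalive_probes: 10   # give up after 10 unanswered probes
```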
The above tells Docker to set these sysctl properties for the onedev service/container. Linux will then send the first TCP keep-alive probe after 300 seconds of inactivity and continue to send probes every 30 seconds, sending up to 10 probes if none are answered. After 10 failed probes the connection is considered dead. This keeps IPVS updated, so the connection will not be silently killed by IPVS for being idle. Because IPVS does not tell anybody that it has killed a connection (it sends no reset packets), both sides of the connection may leak it. In the case of Postgres, the configured max_connections limit can be reached easily.
|||||
By default, Linux sends the first TCP keepalive probe only after two hours of idle time, which is far too slow here.
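For reference, the stock Linux defaults (per the tcp(7) man page), in sysctl form:

```
net.ipv4.tcp_keepalive_time = 7200   # first probe after 2 hours of idle
net.ipv4.tcp_keepalive_intvl = 75    # then a probe every 75 seconds
net.ipv4.tcp_keepalive_probes = 9    # up to 9 unanswered probes before giving up
```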
However, as https://cloud.google.com/compute/docs/troubleshooting/general-tips mentions, idle connections can be dropped much sooner in cloud environments, so they recommend more aggressive keepalive settings.
In k8s we can do this via the initContainer route.
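A minimal sketch of that route (the container name and image tag are illustrative; a privileged init container is needed because the `net.ipv4.tcp_keepalive_*` sysctls are not in the Kubernetes safe set by default):

```yaml
spec:
  initContainers:
    - name: tune-keepalive      # illustrative name
      image: busybox:1.36       # any small image with the sysctl tool works
      securityContext:
        privileged: true        # required to write these sysctls from an init container
      command:
        - sh
        - -c
        - |
          sysctl -w net.ipv4.tcp_keepalive_time=300
          sysctl -w net.ipv4.tcp_keepalive_intvl=30
          sysctl -w net.ipv4.tcp_keepalive_probes=10
```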
The Kubernetes issue tracker discusses this as well: https://github.com/kubernetes/kubernetes/issues/32457#issuecomment-680325785. Since these settings seem to be reasonable defaults, I propose we make them our charts' defaults as well. That said, it would still be simple and convenient to make the HikariCP connection pool configurable through environment variables.
|||||
Sure, it is an easy solution, but at the cost of Postgres having to tear down and spin up a backend process every x minutes; that is something you can avoid by adjusting the keep-alive settings. I still think configuring TCP keep-alive settings for a given k8s/Swarm service is the better approach, because the application should not have to assume or research anything about the underlying default system configuration in a cloud environment. Instead, at deployment time the Linux namespace for that container/service should be configured to better support the application's characteristics. But in the end, all of these solutions work.
Type: Question
Priority: Normal
Assignee:

Connection pool configuration can exhaust all database connections, affecting other applications