-
Nothing has changed in this area; please check your network connection.
-
| Previous Value | Current Value |
| Open | Closed |
-
I tested accessing the ollama API via HTTP and it works fine. Also check whether you have a proxy configured.
-
No proxy is configured; everything is on the same machine within the same Docker network, so the behavior is odd. As you saw, I can reach it directly from inside the onedev container with curl, yet the onedev application times out—the request never even leaves.
-
Not sure what is happening. Maybe a Docker network issue.
-
During this period, I restarted the onedev and lmrouter containers individually, but the issue persists. Other containers on the same network can access it without any problems, such as Open WebUI.
-
I am not an expert on Docker networking, but based on my test here I believe it is not a OneDev issue.
-
You may run OneDev in bare metal mode to see if it works.
-
It seems a bit like a DNS issue; I'm looking into it.
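One quick way to rule DNS in or out is to resolve the container hostname with the same resolver the JVM uses, from inside the onedev container. A minimal sketch (the `DnsCheck` class name is illustrative, and `lmrouter` is the hostname from this thread):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    // Resolve a hostname via the JVM's resolver and report every address
    // it returns, or the failure message if resolution fails.
    static String resolve(String host) {
        try {
            InetAddress[] addrs = InetAddress.getAllByName(host);
            StringBuilder sb = new StringBuilder(host + " ->");
            for (InetAddress a : addrs) {
                sb.append(' ').append(a.getHostAddress());
            }
            return sb.toString();
        } catch (UnknownHostException e) {
            return host + " -> unresolved (" + e.getMessage() + ")";
        }
    }

    public static void main(String[] args) {
        // Inside the Docker network this should print the address Docker's
        // embedded DNS assigns to the lmrouter container.
        System.out.println(resolve(args.length > 0 ? args[0] : "lmrouter"));
    }
}
```

If this resolves to the expected container address but requests still hang, the problem is past name resolution (e.g. at the protocol layer, as it turned out below in this thread).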
-
I wrote a Java program to test this. The lmrouter service does not support HTTP/2; the Java program probably defaulted to HTTP/2, which is why it timed out.

=== Testing HTTP/1.1 ===
✓ Success! Status: 200 Time: 52ms Protocol: HTTP_1_1
=== Testing HTTP/2 ===
✗ Failed: HttpTimeoutException Message: request timed out
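A probe like the one described can be sketched with `java.net.http.HttpClient`, pinning the protocol version per client. This is a minimal reconstruction under stated assumptions, not the original test program; the `ProtocolProbe` name and the `/v1/models` path are illustrative:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ProtocolProbe {
    // Build a client pinned to one HTTP version, so the server's protocol
    // support is tested explicitly instead of relying on the default
    // (Java's HttpClient defaults to HTTP/2).
    static HttpClient clientFor(HttpClient.Version version) {
        return HttpClient.newBuilder()
                .version(version)
                .connectTimeout(Duration.ofSeconds(5))
                .build();
    }

    // Send one GET with the given version and report outcome and timing.
    static void probe(HttpClient.Version version, String url) {
        System.out.println("=== Testing " + version + " ===");
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .timeout(Duration.ofSeconds(5))
                    .GET()
                    .build();
            long start = System.nanoTime();
            HttpResponse<String> response = clientFor(version)
                    .send(request, HttpResponse.BodyHandlers.ofString());
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("✓ Success! Status: %d Time: %dms Protocol: %s%n",
                    response.statusCode(), ms, response.version());
        } catch (Exception e) {
            // A server that ignores the HTTP/2 preface typically surfaces
            // here as an HttpTimeoutException.
            System.out.println("✗ Failed: " + e.getClass().getSimpleName()
                    + " Message: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        String url = args.length > 0 ? args[0] : "http://lmrouter:3000/v1/models";
        probe(HttpClient.Version.HTTP_1_1, url);
        probe(HttpClient.Version.HTTP_2, url);
    }
}
```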
-
Thanks for the feedback. Given HTTP/2's performance advantages, and since the vast majority of LLM APIs support HTTP/2, OneDev does not plan to support HTTP/1.1 anymore.
| Type | Question |
| Priority | Normal |
| Assignee | |
| Labels | No labels |
In the last two or three releases, has the model configuration started blocking plain-HTTP access? I have an LLM gateway on my internal network; previously I only needed to set http://lmrouter:3000/v1 and it worked, but now it fails: requests are not even sent, even though I can reach lmrouter directly from inside the container.