Running the OneDev Agent (and OneDev) as a Docker container, I noticed that exceptions seem to be logged line by line, which causes Docker to produce one log statement per stack frame of the exception.
When using log shippers and a centralized logging stack like Grafana + Promtail + Loki (or Elasticsearch + Kibana + Filebeat), this is unfortunate because you cannot search for part of an exception and find the whole exception. You would only find the single log line whose stack frame matches the search, and would then need to run a time-range search around that line to see the other log lines containing the rest of the exception.
I guess you are using printStackTrace; I would suggest using a real logger instead. That would produce a single, multiline log statement, as it should be.
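As a minimal sketch of what "using a real logger" means here: the example below uses the JDK's built-in java.util.logging so it is self-contained (OneDev itself uses SLF4J/Logback, as noted later in this thread); the class name and message are hypothetical. The key point is that the throwable is attached to a single log record, so the formatter renders the message and the full stack trace together rather than emitting one println per frame.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggerDemo {
    private static final Logger LOG = Logger.getLogger(LoggerDemo.class.getName());

    public static void main(String[] args) {
        try {
            throw new RuntimeException("Name 'onedev-agent-2' already used by another agent");
        } catch (RuntimeException e) {
            // One log record carrying the throwable; the handler's formatter
            // renders message + stack trace as a single formatted unit.
            LOG.log(Level.SEVERE, "Error processing websocket message", e);
        }
    }
}
```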
Robin Shen commented 1 year ago
OneDev logs its exceptions as single statements. Do you mean stack traces from an external tool? In that case, OneDev can only handle them literally line by line.
jbauer commented 1 year ago
No, these are exceptions from the agent.
For example, I currently see the exception
16:30:17 INFO io.onedev.agent.Agent - Connecting to http://onedev:6610...
16:30:24 INFO io.onedev.agent.AgentSocket - Connected to server
16:30:24 ERROR io.onedev.agent.AgentSocket - Error processing websocket message
java.lang.RuntimeException: Name 'onedev-agent-2' already used by another agent
at io.onedev.agent.AgentSocket.onMessage(AgentSocket.java:217)
at jdk.internal.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.eclipse.jetty.websocket.common.events.annotated.CallableMethod.call(CallableMethod.java:70)
at org.eclipse.jetty.websocket.common.events.annotated.OptionalSessionCallableMethod.call(OptionalSessionCallableMethod.java:72)
at org.eclipse.jetty.websocket.common.events.JettyAnnotatedEventDriver.onBinaryMessage(JettyAnnotatedEventDriver.java:143)
at org.eclipse.jetty.websocket.common.message.SimpleBinaryMessage.messageComplete(SimpleBinaryMessage.java:75)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:67)
at org.eclipse.jetty.websocket.common.events.JettyAnnotatedEventDriver.onBinaryFrame(JettyAnnotatedEventDriver.java:130)
at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:147)
at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:326)
at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:202)
at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:225)
at org.eclipse.jetty.websocket.common.Parser.parseSingleFrame(Parser.java:259)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:459)
at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:440)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
at java.base/java.lang.Thread.run(Thread.java:829)
being logged on the console of the Docker container. By default, Docker uses the json-file log driver to store the container process's output on disk, in /var/lib/docker/containers/<container-id>/<container-id>-json.log. In that file, each line of the above exception is a dedicated JSON object containing only a single row. This indicates that the OneDev Agent calls printStackTrace, which internally loops through each StackTraceElement and calls outputStream.println(stackTraceElement); each call to the stderr stream's print function then produces a separate log entry. When doing something like Logger.error("oops", exception) instead, a string containing everything is built first and written with a single print statement.
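The difference described above can be sketched with the JDK alone (class and helper names here are hypothetical, not OneDev code): render the whole stack trace into one String first, then hand it to the stream in a single call, which mirrors what a Logger.error(msg, throwable) call does internally before writing.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class SingleWriteDemo {
    // Hypothetical helper: captures the entire stack trace into one String,
    // instead of letting printStackTrace issue one println per frame on stderr.
    static String stackTraceAsString(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        RuntimeException e =
            new RuntimeException("Name 'onedev-agent-2' already used by another agent");

        // A single print call for the whole trace, rather than one call per frame.
        System.err.print("Error processing websocket message\n" + stackTraceAsString(e));
    }
}
```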
In Grafana Loki, that same exception then looks like this:
These are all individual log statements, each containing only a single line of the stack trace (because it is just the Docker container's JSON log, parsed). It should be one log statement containing the full stack trace.
jbauer commented 1 year ago
Hmmm, I am not quite sure which agent version is running (we are using OneDev 7.7.2), but I looked at the agent source code on the main branch and it seems to use logger.error. Strange.
Robin Shen changed state from 'Open' to 'Closed' 1 year ago
Robin Shen commented 1 year ago
Just as you've discovered, OneDev is logging its message with logback as expected.