How to send your Spring Batch Job log messages to a separate file

by Joris Kuipers, April 14, 2017

In one of my current projects we’re developing a web application which also has a couple of dozen batch jobs that perform all sorts of tasks at particular times. These jobs produce quite a bit of logging output when they run, which is important for seeing exactly what happened during a job. What we noticed, however, is that the batch logging made it hard to quickly spot the other logging performed by the application while a batch job was running. In addition to that, it wasn’t always clear in the context of which job a log statement was issued.
To address these issues I came up with a simple solution based on Logback Filters, which I’ll describe in this blog.

Logback Appenders

We’re using Logback as our logging framework. Logback defines the concept of appenders: components that are responsible for handling the actual log messages emitted by the loggers in the application, by writing them to the console, to a file, to a socket, etc.
Many applications define one or more appenders and then simply list them all as part of their root logger section in the logback.xml configuration file:

<configuration scan="true">
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-server</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>log/server.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>log/server.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %mdc %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="LOGSTASH"/>
    <appender-ref ref="FILE"/>
  </root>
</configuration>

This setup will send all log messages to both of the configured appenders.

You can also configure loggers to use a specific appender. We use this in some applications to, for example, write all SOAP-related logging to a dedicated file: we don’t want to pollute our regular logs with tons of SOAP messages, but we still want to capture them:

...
<appender name="SOAP" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>log/soap.log</file>
  ...
</appender>
<logger name="org.springframework.ws.client.MessageTracing" level="trace" additivity="false">
  <appender-ref ref="SOAP"/>
</logger>
<logger name="org.springframework.ws.soap.server.endpoint.interceptor.SoapEnvelopeLoggingInterceptor" level="debug" additivity="false">
  <appender-ref ref="SOAP"/>
</logger>
<root level="info">
  <appender-ref ref="FILE"/>
</root>

By specifying the appender-ref inside the logger element and setting additivity="false" we ensure that the output of the SOAP-related loggers is sent only to the SOAP appender. Note that this appender is not listed inside the root element.

In many situations these options provide you with sufficient ammunition to handle your problems.

In the situation with Batch Job logging, however, this won’t work: there isn’t just a single logger category that will be used for all batch-related logging, and many loggers could be used both in the context of a batch job and outside such a context. So how to tackle this? Enter Logback Filters.

Logback Filters

Another feature provided by Logback is Filters. You can declare one or more filters inside an appender: each filter can determine whether a log message (more precisely: a log event) should be passed on to the appender, or whether it should be filtered out. Several useful implementations are provided out of the box, but it’s easy to define your own as well.
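For instance, Logback ships with a ThresholdFilter that denies every event below a configured level; a hypothetical errors-only appender could use it like this:

<appender name="ERRORS-ONLY" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <!-- built-in filter: DENY events below WARN, NEUTRAL otherwise -->
  <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
    <level>WARN</level>
  </filter>
  <file>log/errors.log</file>
  ...
</appender>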

In our situation, we want to have a filter that we can configure to do one of two things: filter out all batch-related log messages, or filter out all non-batch-related logging. Then we can add that filter to two separate file appenders: one exclusively for all the batch logging, and one for all the other logging.

Here’s an initial implementation of our Logback Filter:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class LogbackBatchFilter extends Filter<ILoggingEvent> {

  static final String MDC_KEY = "batch-job";

  private Mode mode = Mode.EXCLUDE_BATCH;

  public void setMode(Mode mode) {
    this.mode = mode;
  }

  @Override
  public FilterReply decide(ILoggingEvent iLoggingEvent) {
    boolean runningBatchJob = ...; // how to determine this?
    if (mode == Mode.EXCLUDE_BATCH && runningBatchJob ||
        mode == Mode.BATCH_ONLY && !runningBatchJob) {
      return FilterReply.DENY;
    }
    return FilterReply.NEUTRAL;
  }

  // public so that Logback can convert the <mode> config value via reflection
  public enum Mode {
    BATCH_ONLY,
    EXCLUDE_BATCH
  }
}

This filter can be configured to run in two modes: by default it filters out all batch-related messages, using the EXCLUDE_BATCH mode. When it’s running in BATCH_ONLY mode it does exactly the opposite and filters out all non-batch-related messages.
A NEUTRAL reply means that our filter doesn’t drop the given logging event itself, but another filter further down the filter chain may still decide to drop it. Replying ACCEPT instead of NEUTRAL would short-circuit such a filter chain when additional filters are present, which is not what we’re after here.

To use the filter, you simply add it within your appender config:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <filter class="nl.trifork.batch.LogbackBatchFilter"/>
  <file>log/server.log</file>
  ...
</appender>
<appender name="FILE-BATCH" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <filter class="nl.trifork.batch.LogbackBatchFilter">
     <mode>BATCH_ONLY</mode>
  </filter>
  <file>log/batch.log</file>
  ...
</appender>
<root level="info">
  <appender-ref ref="FILE" />
  <appender-ref ref="FILE-BATCH" />
</root>

Easy, right?
There’s just one little unresolved detail: how do we determine the value of the runningBatchJob flag inside the decide method?

Revisiting the Mapped Diagnostic Context

SLF4J, the logging API implemented by Logback, defines the concept of a Mapped Diagnostic Context (MDC): a set of String-based key-value pairs associated with the current thread that can be included in your logging output. I’ve blogged about this before in the context of security, to ensure that every log message contains the current user name.
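As a quick refresher, basic MDC usage looks like this (the user key is just an example; a %X{user} reference in an appender pattern would then print its value):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcExample {
  private static final Logger log = LoggerFactory.getLogger(MdcExample.class);

  void handleRequest(String userName) {
    MDC.put("user", userName); // associate a value with the current thread
    try {
      log.info("Handling request"); // this event carries the MDC entry
    } finally {
      MDC.remove("user"); // clean up: threads are typically pooled and reused
    }
  }
}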

You can also access the MDC properties in a Logback Filter: they’re available directly from the ILoggingEvent that’s passed in. So, if we could set an MDC key whenever we’re running a batch job, then that could tell us whether we should be excluding log messages inside the filter.
How to make this happen?

In our application we’re using Spring Batch’s SimpleJobLauncher to start all Batch Jobs. This launcher can be configured with a TaskExecutor to run the job on a separate thread, typically obtained from some thread pool.
We want to achieve two things:

  1. When the launcher’s run method is called, we want to set the name of the job that will run as an MDC entry, and clear that entry when the method is done;
  2. When the job runs in a separate thread, we want to have the same MDC entry set there as well.

These two things combined will ensure that all logging performed in the context of running a batch job will have the expected MDC entry.
What I ended up doing was subclassing that SimpleJobLauncher to inject some MDC-related code:

import org.slf4j.MDC;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersInvalidException;
import org.springframework.batch.core.launch.support.SimpleJobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;
import org.springframework.batch.core.repository.JobRestartException;
import org.springframework.core.task.TaskExecutor;

/**
 * SimpleJobLauncher subclass which ensures that an MDC key {@code batch-job}
 * will be set with the name of the job to be run, both in the current thread
 * as well as in the thread used by a custom configured {@link TaskExecutor}.
 * This in turn enables filtering of log messages issued in the context of
 * batch jobs.
 */
public class MDCPopulatingJobLauncher extends SimpleJobLauncher {

  static final String MDC_KEY = "batch-job";

  @Override
  public JobExecution run(Job job, JobParameters jobParameters)
      throws JobExecutionAlreadyRunningException, JobRestartException,
             JobInstanceAlreadyCompleteException, JobParametersInvalidException {
    MDC.put(MDC_KEY, job.getName());
    try {
      return super.run(job, jobParameters);
    } finally {
      MDC.remove(MDC_KEY);
    }
  }

  @Override
  public void setTaskExecutor(TaskExecutor taskExecutor) {
    // wrap the configured executor so the MDC entry crosses the thread boundary
    super.setTaskExecutor(new MDCPopulatingTaskExecutorDecorator(taskExecutor));
  }

  private static class MDCPopulatingTaskExecutorDecorator implements TaskExecutor {

    private final TaskExecutor targetExecutor;

    MDCPopulatingTaskExecutorDecorator(TaskExecutor targetExecutor) {
      this.targetExecutor = targetExecutor;
    }

    @Override
    public void execute(Runnable task) {
      // capture the MDC value on the submitting thread...
      String mdcValue = MDC.get(MDC_KEY);
      targetExecutor.execute(() -> {
        // ...and restore it on the worker thread for the duration of the task
        MDC.put(MDC_KEY, mdcValue);
        try {
          task.run();
        } finally {
          MDC.remove(MDC_KEY);
        }
      });
    }
  }
}

So, this gives us the missing piece of the puzzle to complete the Filter:

boolean runningBatchJob = iLoggingEvent.getMDCPropertyMap().containsKey(MDC_KEY);

(MDC_KEY here is the constant defined in the filter itself; its value must match the key set by the job launcher, so you may want to share a single constant between the two classes.)
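Putting it together, the completed decide method reads:

@Override
public FilterReply decide(ILoggingEvent iLoggingEvent) {
  // the MDC key is only present while a batch job is being executed
  boolean runningBatchJob = iLoggingEvent.getMDCPropertyMap().containsKey(MDC_KEY);
  if (mode == Mode.EXCLUDE_BATCH && runningBatchJob ||
      mode == Mode.BATCH_ONLY && !runningBatchJob) {
    return FilterReply.DENY;
  }
  return FilterReply.NEUTRAL;
}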

Now you simply configure a Spring Bean of type MDCPopulatingJobLauncher instead of SimpleJobLauncher in your Batch configuration class or XML file and you’re done! As a bonus, you can include the name of the currently executing batch job in your batch logging by referencing it in your pattern through %X{batch-job}.
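A minimal Java-config sketch could look like this (the bean method name and the choice of executor are just examples, not part of the original setup):

import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;

@Configuration
public class BatchLauncherConfig {

  @Bean
  public JobLauncher jobLauncher(JobRepository jobRepository) throws Exception {
    MDCPopulatingJobLauncher launcher = new MDCPopulatingJobLauncher();
    launcher.setJobRepository(jobRepository);
    // optional: run jobs asynchronously; our decorator propagates the MDC entry
    launcher.setTaskExecutor(new SimpleAsyncTaskExecutor());
    launcher.afterPropertiesSet();
    return launcher;
  }
}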

Note that for appenders like the Logstash one shown earlier, which include the full MDC, you probably don’t need the filter: you could filter in Kibana or Graylog or whatever you’re using based on the presence of the batch-job MDC entry directly.