If you run plain-text logs in production, you’ve probably hit the situation where you want to trace a specific request in CloudWatch Logs or Elasticsearch, but can’t search for it without first parsing the lines with regular expressions.

Switching to JSON structured logging turns your logs into structured data, enabling field-based search and aggregation. Request IDs and user IDs can be stored as dedicated fields, dramatically improving traceability.

Why JSON Structured Logging

With text logs, collection agents must extract fields using regular expressions, and even a slight format change breaks the parsing configuration.

With JSON, fields are already separated from the start, so you can instantly query in CloudWatch Logs Insights with expressions like fields @timestamp, requestId | filter level = "ERROR". In container environments, writing JSON to stdout also plays nicely with log collection agents.

Adding the logstash-logback-encoder Dependency

This library is not covered by Spring Boot’s dependency management, so you need to specify the version explicitly.

Maven

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>8.0</version>
</dependency>

Gradle

implementation 'net.logstash.logback:logstash-logback-encoder:8.0'

For Kotlin DSL, write implementation("net.logstash.logback:logstash-logback-encoder:8.0"). You can check the latest version on GitHub Releases.

Configuring JSON Output with logback-spring.xml

LogstashEncoder outputs a logstash-compatible format that includes fields such as @version and @timestamp. Logback’s built-in JsonEncoder (ch.qos.logback.classic.encoder.JsonEncoder), on the other hand, produces simpler JSON without these fields. Use LogstashEncoder for ELK stacks and JsonEncoder for simpler use cases where a lighter footprint is preferred.

Create src/main/resources/logback-spring.xml:

<configuration>
    <appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"app":"my-service","env":"production"}</customFields>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE_JSON"/>
    </root>
</configuration>

Use customFields to add fixed fields such as the application name and environment name.
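If you go with the lighter JsonEncoder route instead, the encoder line is the only change. A minimal sketch, assuming Logback 1.3+ where ch.qos.logback.classic.encoder.JsonEncoder is available:

```xml
<appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
    <!-- Logback's built-in JSON encoder: no @version/@timestamp logstash fields,
         and no customFields support -->
    <encoder class="ch.qos.logback.classic.encoder.JsonEncoder"/>
</appender>
```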

Understanding MDC

MDC (Mapped Diagnostic Context) is a thread-local key-value store. Once you call MDC.put("requestId", "abc123"), the requestId field is automatically appended to every log entry emitted on that thread.

LogstashEncoder automatically outputs MDC contents as JSON fields, so no special configuration is needed.

Auto-Attaching Request IDs with OncePerRequestFilter

@Component
public class MdcRequestFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain)
            throws ServletException, IOException {
        String raw = request.getHeader("X-Request-ID");
        // Log injection protection: length cap (128 chars) and control character check
        String requestId = (raw != null && raw.length() <= 128
                && raw.chars().noneMatch(c -> c < 0x20))
                ? raw : UUID.randomUUID().toString();
        try {
            MDC.put("requestId", requestId);
            response.setHeader("X-Request-ID", requestId);
            filterChain.doFilter(request, response);
        } finally {
            MDC.clear();
        }
    }
}

MDC.clear() in the finally block is critical. Because threads are reused in thread pools, failing to clear MDC will leak values from previous requests into subsequent ones.
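The leak is easy to reproduce with plain JDK classes. A minimal sketch, using a ThreadLocal as a stand-in for a single MDC key:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MdcLeakDemo {
    // Stand-in for one MDC key: one value per thread
    static final ThreadLocal<String> REQUEST_ID = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one reused thread

        // "Request 1" sets a value and never clears it
        pool.submit(() -> REQUEST_ID.set("request-1")).get();

        // "Request 2" runs on the same pooled thread and sees the stale value
        String leaked = pool.submit(REQUEST_ID::get).get();
        System.out.println("leaked: " + leaked); // prints "leaked: request-1"

        pool.shutdown();
    }
}
```

The second task never set a value, yet it observes request-1 because the pool reused the thread. MDC.clear() in the filter’s finally block prevents exactly this.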

Propagating MDC in @Async Methods

@Async methods run on a different thread, so MDC is not propagated by default. You can propagate it by implementing TaskDecorator and configuring it on ThreadPoolTaskExecutor.

public class MdcTaskDecorator implements TaskDecorator {
    @Override
    public Runnable decorate(Runnable runnable) {
        Map<String, String> ctx = MDC.getCopyOfContextMap();
        return () -> {
            try {
                if (ctx != null) MDC.setContextMap(ctx);
                runnable.run();
            } finally {
                MDC.clear();
            }
        };
    }
}

@Configuration
@EnableAsync
public class AsyncConfig {
    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setTaskDecorator(new MdcTaskDecorator());
        executor.initialize();
        return executor;
    }
}
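Stripped of Spring, the decorator boils down to “snapshot on the submitting thread, restore on the worker.” A minimal JDK-only sketch, with a thread-local map standing in for MDC:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MdcDecoratorDemo {
    // Stand-in for MDC's per-thread map
    static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    // Same shape as MdcTaskDecorator: capture now, restore inside the task
    static Runnable decorate(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(CONTEXT.get());
        return () -> {
            try {
                CONTEXT.set(snapshot);
                task.run();
            } finally {
                CONTEXT.remove(); // plays the role of MDC.clear()
            }
        };
    }

    public static void main(String[] args) throws Exception {
        CONTEXT.get().put("requestId", "abc123");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(decorate(() ->
                System.out.println("worker sees " + CONTEXT.get().get("requestId"))
        )).get();
        pool.shutdown();
    }
}
```

Without decorate(...), the worker thread’s map would be empty and requestId would be missing from any logs the async method writes.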

Attaching User IDs with HandlerInterceptor

Filters operate at the Servlet container level, whereas Interceptors run after Spring MVC’s DispatcherServlet and therefore have access to the SecurityContext. For guidance on when to use Filters vs. Interceptors, see this article.

@Component
public class MdcUserInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        // isAuthenticated() is also true for the anonymous token, so exclude it
        if (auth != null && auth.isAuthenticated()
                && !(auth instanceof AnonymousAuthenticationToken)) {
            MDC.put("userId", auth.getName());
        }
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request,
                                HttpServletResponse response,
                                Object handler, Exception ex) {
        MDC.remove("userId");
    }
}

Adding @Component alone does not register the interceptor with Spring MVC. You must explicitly register it in a class that implements WebMvcConfigurer.

@Configuration
public class WebMvcConfig implements WebMvcConfigurer {

    @Autowired
    private MdcUserInterceptor mdcUserInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(mdcUserInterceptor);
    }
}

MDC.remove("userId") removes only the userId key. Since MDC.clear() is handled by the Filter, removing only the specific key is sufficient here.

Native Structured Logging in Spring Boot 3.4

Starting with Spring Boot 3.4, structured logging can be enabled without any additional libraries.

logging.structured.format.console=logstash

You can also choose ecs (Elastic Common Schema) or gelf (Graylog Extended Log Format). However, customization options are more limited than with logstash-logback-encoder. It is worth trying for new projects where simplicity is the priority.
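The property value is the only thing that changes between formats, and console and file output can be configured independently. A sketch, assuming Spring Boot 3.4+ property names:

```properties
# application.properties: pick a structured format per output stream
logging.structured.format.console=ecs
logging.structured.format.file=logstash
```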

Switching Log Format by Environment

A practical pattern is human-readable text logs locally and JSON in production. You can switch between them using the <springProfile> tag in logback-spring.xml.

<configuration>
    <springProfile name="prod">
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
            <encoder class="net.logstash.logback.encoder.LogstashEncoder">
                <customFields>{"app":"my-service"}</customFields>
            </encoder>
        </appender>
        <root level="INFO"><appender-ref ref="CONSOLE"/></root>
    </springProfile>

    <springProfile name="!prod">
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%d{HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>
        <root level="DEBUG"><appender-ref ref="CONSOLE"/></root>
    </springProfile>
</configuration>

When using Spring Boot 3.4’s native support, logback-spring.xml is not needed. Simply add the following to application-prod.properties to enable JSON logging in production:

logging.structured.format.console=logstash

Verifying the JSON Output

Here is a sample output when combining the Filter and Interceptor:

{
  "@timestamp": "2026-04-04T12:34:56.789Z",
  "level": "INFO",
  "message": "Order created successfully",
  "logger_name": "com.example.OrderService",
  "requestId": "550e8400-e29b-41d4-a716-446655440000",
  "userId": "user-123",
  "app": "my-service"
}

In CloudWatch Logs Insights, you can instantly narrow down logs for a specific request with fields @timestamp, requestId, userId, message | filter requestId = "550e8400-e29b-41d4-a716-446655440000". In Elasticsearch’s Kibana, the requestId field is automatically indexed, so you can use it directly in field searches.

Configuring the log collection agent (CloudWatch Agent, Fluent Bit, etc.) is infrastructure work, but as long as your application outputs JSON, the collection-side configuration becomes significantly simpler.

Summary

Adopting JSON structured logging is not that difficult. Add logstash-logback-encoder, configure logback-spring.xml, and set the request ID in MDC via OncePerRequestFilter — that alone makes log searching in Elasticsearch and CloudWatch Logs far more manageable.

With Spring Boot 3.4 or later, you can get started with just one line: logging.structured.format.console=logstash. Try the simple setup first, and switch to logstash-logback-encoder when you need more customization.

If you haven’t set up Logback basics yet, check that out as well. For an overview of observability combining JSON structured logging with Micrometer metrics, see this article. For content that also covers distributed tracing, refer to this article.