Even if you can visualize metrics with Prometheus and Grafana, pinpointing which service is causing latency can still be difficult. That’s exactly what distributed tracing solves.

This article walks through the steps to propagate trace IDs between two services using Micrometer Tracing and Zipkin, and verify the results in the Zipkin UI. The guide assumes Spring Boot 3.2 or later. RestClient was introduced in Spring Boot 3.2 — if you’re on 3.0/3.1, use WebClient as an alternative (see also: RestTemplate & WebClient guide).

What Is Distributed Tracing?

The three pillars of observability are metrics, logs, and traces.

Tracing tracks the “journey” of a request as it flows through multiple services. A single trace ID is assigned to each request, and child spans are created as the request crosses service boundaries. When visualized as a waterfall chart in Zipkin, you can see at a glance which service consumed how much time.

From Spring Cloud Sleuth to Micrometer Tracing

In Spring Boot 2.x, Spring Cloud Sleuth was the go-to tracing solution. However, with the move to Spring Boot 3.x (Spring Framework 6), Sleuth was deprecated and its successor, Micrometer Tracing, was integrated into the ecosystem.

Micrometer Tracing supports two tracers — Brave and OpenTelemetry — selectable via a bridge. In this article, we’ll use Brave to send traces to Zipkin.

Sample Architecture

This article uses two services:

  • order-service (port 8080) — calls inventory-service via RestClient
  • inventory-service (port 8081) — provides an inventory check API

Zipkin will be run locally via Docker.

Adding Dependencies

You need two tracing dependencies — micrometer-tracing-bridge-brave and zipkin-reporter-brave — in addition to spring-boot-starter-actuator, which pulls in the tracing auto-configuration.

<!-- Micrometer Tracing (Brave bridge) -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<!-- Zipkin reporter -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
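If you're on Gradle instead of Maven, the equivalent declarations would look roughly like this (a sketch; version management is again deferred to the Spring Boot plugin's BOM):

```groovy
// Gradle equivalent of the Maven dependencies above (versions managed by the Spring Boot BOM)
dependencies {
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
```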

For comparison, here’s what it looked like with Spring Boot 2.x (Sleuth):

<!-- Spring Boot 2.x (Sleuth) -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

Version management can be left to the Spring Boot BOM.
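If you'd rather use the OpenTelemetry bridge mentioned earlier, the Brave pair would be swapped out roughly like this (a sketch; artifact IDs as published by Micrometer and OpenTelemetry):

```xml
<!-- Micrometer Tracing (OpenTelemetry bridge) -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<!-- Zipkin exporter for OpenTelemetry -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
</dependency>
```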

application.properties Configuration

spring.application.name=order-service

# Sample all traces during development (reduce to 0.1–0.3 in production)
management.tracing.sampling.probability=1.0

# Default is localhost:9411. Specify explicitly when changing for Kubernetes, etc.
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans

Make sure to set spring.application.name — it becomes the service name shown in the Zipkin UI.
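inventory-service needs the same kind of configuration with its own name and port (a minimal sketch mirroring the properties above):

```properties
# inventory-service side of the same configuration
spring.application.name=inventory-service
server.port=8081
management.tracing.sampling.probability=1.0
```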

Starting Zipkin with Docker

docker run -d -p 9411:9411 openzipkin/zipkin

The UI will be available at http://localhost:9411. If you prefer managing it with docker-compose (see also: Docker containerization guide):

services:
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"

Implementing order-service

In Spring Boot 3.2+, the RestClient.Builder Bean is automatically instrumented by ObservationRestClientCustomizer, so trace IDs are injected into HTTP headers with no additional configuration.

@Configuration
public class RestClientConfig {
    @Bean
    public RestClient restClient(RestClient.Builder builder) {
        return builder.baseUrl("http://localhost:8081").build();
    }
}
@RestController
public class OrderController {

    private final RestClient restClient;

    public OrderController(RestClient restClient) {
        this.restClient = restClient;
    }

    @GetMapping("/orders/{id}")
    public String getOrder(@PathVariable String id) {
        String inventory = restClient.get()
                .uri("/inventory/{id}", id)
                .retrieve()
                .body(String.class);
        return "Order: " + id + ", Inventory: " + inventory;
    }
}

When using Brave, trace IDs are propagated via B3 headers (b3). If you use the OpenTelemetry bridge instead, the W3C Trace Context format (traceparent) is used by default.
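For reference, the single b3 header packs the trace context into one value of the form {traceId}-{spanId}-{samplingState}. A tiny illustration of pulling such a value apart (a hypothetical helper for inspection only, not part of Brave or Micrometer):

```java
// Illustrative only: splits a single "b3" header value into its parts.
// Example value: 80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1
public class B3HeaderDemo {
    static String[] parse(String b3) {
        // traceId, spanId, sampling flag (and optionally a parent span ID)
        return b3.split("-");
    }

    public static void main(String[] args) {
        String[] p = parse("80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1");
        System.out.println("traceId=" + p[0] + " spanId=" + p[1] + " sampled=" + p[2]);
    }
}
```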

Implementing inventory-service

@RestController
public class InventoryController {

    @GetMapping("/inventory/{id}")
    public String getInventory(@PathVariable String id) throws InterruptedException {
        Thread.sleep(200); // Add a delay to make the trace more visible
        return "in-stock";
    }
}

The trace ID is automatically carried over from the incoming B3 headers. No special configuration is needed on this side.

Embedding traceId and spanId in Logs

Micrometer Tracing automatically sets traceId and spanId in the MDC. Simply add them to your logback-spring.xml pattern:

<pattern>%d{HH:mm:ss} [%X{traceId},%X{spanId}] %-5level %logger{36} - %msg%n</pattern>

For more on log configuration, see the Logback & SLF4J article. This lets you cross-reference logs and Zipkin traces using the traceId.
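In context, a minimal logback-spring.xml using that pattern might look like this (a sketch with a plain console appender):

```xml
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <!-- traceId and spanId come from the MDC populated by Micrometer Tracing -->
            <pattern>%d{HH:mm:ss} [%X{traceId},%X{spanId}] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```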

Verifying the Setup

Start both services and send a request with curl:

curl http://localhost:8080/orders/123

Check that the traceId matches across both services’ logs:

# order-service
[abc123def,111aaa] INFO  OrderController - ...

# inventory-service (same traceId propagated)
[abc123def,222bbb] INFO  InventoryController - ...

Open http://localhost:9411 and click Run Query to view the traces.

Reading the Zipkin UI

  • Filtering — Specify a service name (e.g., order-service) and time range, then click Find Traces. When looking for slow requests, filtering by Duration (minimum latency) is convenient.
  • Waterfall view — Click a trace to see each span’s start and end times as a horizontal bar chart. The 200ms delay from Thread.sleep(200) in inventory-service should be clearly visible.
  • Error inspection — Spans with errors are highlighted in red. Click a span to view details such as exception messages and HTTP status codes.

Asynchronous Processing (Kafka, etc.)

For asynchronous processing with Kafka or Spring’s @Async, there are no HTTP headers to carry the trace context. With spring-kafka, the context is propagated via Kafka record headers instead, using Micrometer’s observation support. See the Spring Boot + Kafka article for details.
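Depending on your setup, spring-kafka’s observation support may need to be switched on explicitly via properties (a sketch, assuming Spring Boot 3.x with spring-kafka on the classpath):

```properties
# Trace KafkaTemplate sends (producer side)
spring.kafka.template.observation-enabled=true
# Trace @KafkaListener processing (consumer side)
spring.kafka.listener.observation-enabled=true
```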

Production Configuration

sampling.probability=1.0 traces every request and is intended for development. In production, reduce it to around 0.1–0.3 to minimize overhead.
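One way to keep full sampling locally while lowering it in production is a profile-specific properties file (a sketch; the filename follows Spring’s application-{profile} convention):

```properties
# application-prod.properties — activated with spring.profiles.active=prod
management.tracing.sampling.probability=0.1
```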

In a Kubernetes environment, specify the Zipkin endpoint using the Service domain name:

management.zipkin.tracing.endpoint=http://zipkin.monitoring.svc.cluster.local:9411/api/v2/spans

To switch from B3 to W3C Trace Context, set management.tracing.propagation.type=w3c. This is useful when other services expect the W3C format or when forwarding through an OpenTelemetry Collector.

For Kubernetes deployment, see the Kubernetes deployment guide.

Summary

Although Sleuth was retired with the move to Spring Boot 3.x, migrating to Micrometer Tracing means swapping just two dependencies, with almost no code changes.

With the three pillars in place — metrics (Prometheus + Grafana), logs (Logback), and traces (Micrometer + Zipkin) — troubleshooting microservice issues becomes significantly easier. Start by trying it out locally.