Spring Boot Actuator Introduction covered health checks and endpoint exposure. The next step is “I want to properly monitor metrics in production.”
This article walks through building a Micrometer → Prometheus → Grafana monitoring pipeline locally, all the way to viewing custom metrics on a dashboard.
Overview
The setup consists of three components.
- Spring Boot + Micrometer — auto-instruments JVM and HTTP metrics and exposes them at /actuator/prometheus
- Prometheus — periodically scrapes that endpoint and stores the data as time series
- Grafana — uses Prometheus as a data source to render dashboards
Adding Dependencies
Both spring-boot-starter-actuator and micrometer-registry-prometheus are required. If you plan to use the @Timed annotation described later, spring-boot-starter-aop is also needed.
```groovy
// build.gradle
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    implementation 'io.micrometer:micrometer-registry-prometheus'
    implementation 'org.springframework.boot:spring-boot-starter-aop' // Required when using @Timed
}
```
For Maven:
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<!-- Required when using @Timed -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
```
In Spring Boot 3.x, the group ID for micrometer-registry-prometheus is io.micrometer. You can leave version management to the Spring Boot BOM.
Exposing Endpoints via application.yml
The endpoints are not exposed to the web by default, so you need to specify them explicitly in application.yml.
```yaml
management:
  endpoints:
    web:
      exposure:
        include: health, info, prometheus
        # Same behavior with YAML array syntax: include: [health, info, prometheus]
```
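If several services will be scraped by the same Prometheus instance, it is also common to attach a static tag identifying the application so that dashboards can filter by it. A minimal sketch, assuming Spring Boot 3.x property names; the `order-service` value is just a placeholder:

```yaml
management:
  metrics:
    tags:
      application: order-service  # hypothetical service name, becomes a label on every metric
```

Every exported time series then carries an application="order-service" label, which the official dashboards' application dropdowns can key off.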
After starting the application, running curl http://localhost:8080/actuator/prometheus will return a response in the Prometheus text exposition format, like this:
```text
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="G1 Eden Space"} 2.3068672E7
```
Starting Prometheus and Grafana with Docker Compose
Prepare two files: docker-compose.yml and prometheus.yml.
```yaml
# docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:v2.51.0
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus # Persist data so metrics history survives restarts
  grafana:
    image: grafana/grafana:10.4.0
    ports:
      - "3000:3000"
    environment:
      # For local development only. Always use a strong password in production.
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana

volumes:
  prometheus_data:
  grafana_data:
```
Defining both prometheus_data and grafana_data ensures data is not lost when containers are restarted. Image versions are pinned to those available at the time of writing.
```yaml
# prometheus.yml
global:
  scrape_interval: 30s # 30s for local development; 15s is common in production
scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:8080']
      # On Linux:
      # - Docker Desktop for Linux: host.docker.internal is available
      # - Docker Engine installed directly: use 172.17.0.1:8080 (check the gateway IP via docker network inspect bridge)
      # - Alternatively, use localhost:8080 together with network_mode: host
```
After running docker compose up -d, open http://localhost:9090/targets and confirm the spring-boot State shows UP.
Registering Prometheus as a Grafana Data Source
Log in to Grafana at http://localhost:3000 (admin/admin). A 502 error may appear immediately after startup — wait a few seconds and reload.
- Go to Connections > Data sources > Add new data source and select Prometheus
- Enter http://prometheus:9090 as the URL (resolved via DNS within the Docker network)
- Click Save & Test and confirm "Data source is working"
Importing Official Dashboards
Go to Dashboards > Import and enter a dashboard ID to get a full-featured dashboard instantly.
- 4701 — JVM Micrometer (heap memory, GC, thread count, etc.). Works fully with Prometheus alone.
- 17175 — Spring Boot Observability (integrated dashboard for Spring Boot 3.x). Log-related panels will show No data without Loki configured, but JVM metrics and HTTP request panels display correctly.
Note: The commonly referenced 10280 (Spring Boot 2.1 Statistics) is designed for Spring Boot 2.x. In Spring Boot 3.x, HTTP metric names and tag structures have changed, so HTTP request panels will show “No data” after importing. If you are on 3.x, use 4701 or 17175 instead.
After importing, select the Prometheus data source you registered earlier and metrics will start flowing in immediately.
Adding Custom Metrics
Injecting MeterRegistry is all you need to add metrics for your business logic.
```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Service;

import java.util.concurrent.atomic.AtomicInteger;

@Service
public class OrderService {

    private final Counter orderCounter;
    private final AtomicInteger pendingOrders;

    public OrderService(MeterRegistry registry) {
        // Counter: a monotonically increasing value
        this.orderCounter = Counter.builder("order.created.total")
                .description("Total number of orders created")
                .register(registry);
        // Gauge: samples the current value of pendingOrders on each scrape
        this.pendingOrders = new AtomicInteger(0);
        Gauge.builder("order.pending", pendingOrders, AtomicInteger::get)
                .description("Number of pending orders")
                .register(registry);
    }

    public void createOrder(Order order) {
        orderCounter.increment();
        pendingOrders.incrementAndGet();
    }
}
```
The @Timed annotation is a convenient way to measure method execution time.
```java
import io.micrometer.core.annotation.Timed;

@Timed(value = "order.process.time", description = "Time taken to process order")
public void processOrder(Long orderId) {
    // Order processing
}
```
Using @Timed requires registering a TimedAspect Bean. Without it, @Timed is completely ignored.
```java
@Bean
public TimedAspect timedAspect(MeterRegistry registry) {
    return new TimedAspect(registry);
}
```
Also note that self-invocation within the same class (calls that bypass the AOP proxy) will not be measured.
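If you want to chart latency percentiles (p95, p99) in Grafana, the timer needs to publish histogram buckets. A hedged sketch, assuming Spring Boot 3.x property names and the order.process.time metric from above:

```yaml
management:
  metrics:
    distribution:
      percentiles-histogram:
        "[order.process.time]": true  # brackets preserve the dots in the metric name
```

With the buckets exported, a p95 can then be computed in PromQL with histogram_quantile(0.95, rate(order_process_time_seconds_bucket[5m])).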
Viewing Custom Metrics in Grafana
Metrics you add can be checked immediately from Grafana’s Explore view.
- Open Explore and search for order_created_total in the Metrics Browser
- For Counters, the standard approach is to use the rate() function to view the rate of increase rather than the raw value:

```promql
rate(order_created_total[5m])
```

[5m] means "average rate of increase over the last 5 minutes." A good rule of thumb is to use a time window at least 4× the scrape_interval — a narrow window like [1m] produces an unstable graph because each point covers too few samples.

- Once the graph is rendered, click Add to dashboard in the top right to add the panel to a dashboard
- For the current value of a Gauge, the Stat or Gauge visualization type is the clearest choice
Click Save dashboard to make it shareable with your team.
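Besides rate(), the increase() function is useful for "how many orders in the last hour" style panels. A sketch using the counter defined earlier:

```promql
sum(increase(order_created_total[1h]))
```

Note that increase() extrapolates from the sampled data points, so the result can be a non-integer even for a counter; it is an estimate, not an exact count.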
Security Configuration for Production
The Prometheus endpoint contains information such as memory usage and thread counts, so take care not to expose it publicly.
The simplest approach is management port isolation.
```yaml
management:
  server:
    port: 8081
  endpoints:
    web:
      exposure:
        include: health, prometheus
```
A reliable setup is to restrict port 8081 to the cluster’s internal network where Prometheus runs, and only expose port 8080 (the application port) to the outside via a firewall.
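Once the management port is moved, Prometheus must scrape 8081 instead of 8080. A sketch of the corresponding change to the scrape config, assuming the same Docker-based setup as earlier:

```yaml
scrape_configs:
  - job_name: 'spring-boot'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['host.docker.internal:8081']  # management port, not the application port
```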
If you already have Spring Security configured, you can also use a SecurityFilterChain to restrict access to Actuator endpoints to authenticated users only.
```java
@Bean
public SecurityFilterChain actuatorSecurityFilterChain(HttpSecurity http) throws Exception {
    http.securityMatcher("/actuator/**")
        .authorizeHttpRequests(auth -> auth
            .requestMatchers("/actuator/health").permitAll()
            .anyRequest().authenticated()
        )
        .httpBasic(Customizer.withDefaults())
        .csrf(csrf -> csrf.disable());
    return http.build();
}
```
Combining this with port isolation provides defense in depth. The cardinal rule is to list only the necessary endpoints in exposure.include — reserve * (expose all) for development environments only.
Summary
Once the Micrometer → Prometheus → Grafana pipeline is in place, all that remains is growing your dashboards over time. Importing the official dashboards makes JVM metrics visible immediately, and adding custom metrics requires nothing more than injecting MeterRegistry.
Combined with the Docker containerization article, you can reproduce a production-equivalent monitored container environment locally. For log visualization, see the Logback and SLF4J article as well.