Have you ever noticed slow responses because you’re repeatedly fetching the same user data from the database? Hitting the DB every time is wasteful. Spring Boot’s caching feature lets you solve this problem with ease.
This article walks through practical examples of how to improve application performance using Spring Cache Abstraction.
What Is Spring Cache Abstraction?
Spring Cache Abstraction is a unified caching interface provided by the Spring Framework. It lets you easily cache method return values using annotations.
Key features include:
- Easy provider switching — swap implementations like Caffeine or Redis without changing your code
- Annotation-based — just add @Cacheable and it works
- Spring Boot integration — auto-configured by simply adding the dependency
Since you can swap cache providers later, you can start with a lightweight in-memory cache and migrate to a distributed cache down the line.
Enabling Caching
First, add the dependency to your pom.xml.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
Then add @EnableCaching to your configuration class.
@SpringBootApplication
@EnableCaching
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
That’s all it takes — caching is now enabled, backed by the default ConcurrentHashMap-based cache (ConcurrentMapCacheManager).
Caching Methods with @Cacheable
@Cacheable automatically caches the return value of a method. On subsequent calls with the same arguments, the value is returned from cache without hitting the DB.
@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    @Cacheable("users")
    public User findById(Long id) {
        System.out.println("DB accessed for user: " + id);
        return userRepository.findById(id).orElse(null);
    }
}
The log statement is printed on the first call, but not on subsequent ones — because the value is being returned from cache.
The "users" in @Cacheable("users") is the cache name, used to distinguish between multiple caches. When there is a single argument, that value becomes the cache key. When there are multiple arguments, a SimpleKey object is automatically generated and used as the key.
Cases Where @Cacheable Does Not Work
@Cacheable is implemented using Spring’s AOP proxy, so it will not work when a method is called from within the same class.
// This will NOT work
@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    @Cacheable("users")
    public User findById(Long id) {
        return userRepository.findById(id).orElse(null);
    }

    public User getUserWithCache(Long id) {
        // Cache does not apply when called from within the same class
        return this.findById(id);
    }
}
The workaround is to move the cached method to a separate class, or to call the method through the bean's own proxy — for example via self-injection, or by looking the bean up from the ApplicationContext.
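As a sketch of the self-injection workaround (reusing the UserService and UserRepository from the examples above), you can inject the bean's own proxy into the class so that internal calls go through it:

```java
@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    // Inject this bean's own proxy; @Lazy avoids the circular-reference problem
    @Lazy
    @Autowired
    private UserService self;

    @Cacheable("users")
    public User findById(Long id) {
        return userRepository.findById(id).orElse(null);
    }

    public User getUserWithCache(Long id) {
        // Calling through the proxy (self), so @Cacheable now applies
        return self.findById(id);
    }
}
```

Because self.findById(id) goes through the AOP proxy rather than this, the cache is consulted as expected.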
Customizing Cache Keys
When a method has multiple parameters, or when you want to use only specific fields as the key, you can customize the key using SpEL (Spring Expression Language).
@Cacheable(value = "users", key = "#email")
public User findByEmail(String email) {
    return userRepository.findByEmail(email);
}

@Cacheable(value = "users", key = "#user.id + '-' + #user.email")
public User findByUser(User user) {
    return userRepository.findById(user.getId()).orElse(null);
}
If you don’t explicitly specify a key for a method with multiple arguments, a SimpleKey combining all arguments is used automatically.
// Without key attribute, the combination of id and name becomes the key
@Cacheable("users")
public User findByIdAndName(Long id, String name) {
    return userRepository.findByIdAndName(id, name);
}

// With explicit key
@Cacheable(value = "users", key = "#id + '-' + #name")
public User findByIdAndName(Long id, String name) {
    return userRepository.findByIdAndName(id, name);
}
You can also apply caching conditionally.
@Cacheable(value = "users", condition = "#id > 10")
public User findById(Long id) {
    return userRepository.findById(id).orElse(null);
}
In this example, the result is only cached when the ID is greater than 10.
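A related attribute (not shown in the original setup, so treat this as a sketch) is unless, which is evaluated after the method runs and can reference the return value via #result — handy for not caching null when a user is not found:

```java
// Only cache non-null results; unlike condition, unless can see #result
@Cacheable(value = "users", unless = "#result == null")
public User findById(Long id) {
    return userRepository.findById(id).orElse(null);
}
```

The difference: condition is checked before the method runs (using only the arguments), while unless vetoes caching after the result is known.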
Clearing Cache on Updates with @CacheEvict
When data is updated, you need to clear the stale cache. Use @CacheEvict for this.
@CacheEvict(value = "users", key = "#id")
public void updateUser(Long id, String name) {
    User user = userRepository.findById(id).orElse(null);
    if (user != null) {
        user.setName(name);
        userRepository.save(user);
    }
}

@CacheEvict(value = "users", key = "#id")
public void deleteUser(Long id) {
    userRepository.deleteById(id);
}
To clear all entries in a cache, use allEntries = true.
@CacheEvict(value = "users", allEntries = true)
public void deleteAllUsers() {
    userRepository.deleteAll();
}
By default, the cache is cleared after the method executes, so if the method throws an exception, the cache entry remains. To clear the cache before execution, specify beforeInvocation = true.
@CacheEvict(value = "users", key = "#id", beforeInvocation = true)
public void updateUser(Long id, String name) {
    // Cache is cleared even if an exception occurs
    User user = userRepository.findById(id).orElseThrow();
    user.setName(name);
    userRepository.save(user);
}
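Since the default ConcurrentHashMap-based cache has no TTL, a common pattern is to combine allEntries = true with periodic execution. A minimal sketch (assuming @EnableScheduling is active on a configuration class):

```java
@Component
public class CacheEvictionScheduler {

    // Clear all "users" entries every 10 minutes (600,000 ms)
    @Scheduled(fixedRate = 600_000)
    @CacheEvict(value = "users", allEntries = true)
    public void evictUsersCache() {
        // The method body can stay empty; the annotations do the work
    }
}
```

This is a coarse substitute for a real TTL — providers like Caffeine (covered below) handle expiration natively.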
Updating Cache with @CachePut
@CachePut always executes the method and updates the cache with its result. Unlike @Cacheable, the method is always invoked regardless of whether a cached value exists.
@CachePut(value = "users", key = "#user.id")
public User updateUser(User user) {
    return userRepository.save(user);
}
While @Cacheable skips method execution on a cache hit, @CachePut always runs the method and stores the result in the cache. This is useful when you want update methods to push fresh data into the cache.
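As a sketch of that pattern (reusing the UserService from earlier), @Cacheable and @CachePut cooperate when they share the same cache name and key — reads after an update then hit the freshly written entry:

```java
@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    // Read path: served from cache after the first call
    @Cacheable(value = "users", key = "#id")
    public User findById(Long id) {
        return userRepository.findById(id).orElse(null);
    }

    // Write path: always runs, and refreshes the same cache entry
    @CachePut(value = "users", key = "#user.id")
    public User updateUser(User user) {
        return userRepository.save(user);
    }
}
```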
Boosting Performance with Caffeine
The default ConcurrentHashMap-based cache has no eviction features — no TTL and no size limit, so entries live forever. Switching to Caffeine lets you configure a TTL (expiration time) and a maximum cache size.
First, add the dependency.
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>
Configure it in application.yml.
spring:
  cache:
    type: caffeine
    caffeine:
      spec: maximumSize=1000,expireAfterWrite=10m
This caches up to 1,000 entries and automatically evicts them 10 minutes after being written.
You can also configure the cache manager in a @Configuration class instead of application.yml.
@Configuration
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("users", "products");
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(10, TimeUnit.MINUTES));
        return cacheManager;
    }
}
Note that setCaffeine applies one shared configuration to every cache this manager creates.
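The setCaffeine approach applies one shared configuration. To genuinely use different settings per cache, one option (a sketch — the sizes and TTLs here are illustrative) is CaffeineCacheManager.registerCustomCache:

```java
@Configuration
public class PerCacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        // "users": larger, longer-lived cache
        cacheManager.registerCustomCache("users", Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .build());
        // "products": smaller, shorter-lived cache
        cacheManager.registerCustomCache("products", Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterWrite(5, TimeUnit.MINUTES)
                .build());
        return cacheManager;
    }
}
```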
Caffeine is a high-performance caching library, widely used as the successor to Google Guava's cache.
Distributed Caching with Redis
When running an application across multiple servers, each server having its own independent cache can cause data inconsistencies. Redis lets all servers share the same cache.
Add the dependency.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Configure the Redis connection in application.yml.
spring:
  data:
    redis:
      host: localhost
      port: 6379
  cache:
    type: redis
    redis:
      time-to-live: 10m
Preparing Entities for Redis Caching
Redis serializes data before storing it. With the default JDK serialization, entity classes you want to cache must implement Serializable.
@Entity
public class User implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private String email;

    // getter/setter
}
To use different TTLs per cache, customize the RedisCacheManager.
@Configuration
public class RedisCacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        RedisCacheConfiguration defaultConfig = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10));

        Map<String, RedisCacheConfiguration> cacheConfigurations = new HashMap<>();
        cacheConfigurations.put("users", defaultConfig.entryTtl(Duration.ofMinutes(30)));
        cacheConfigurations.put("products", defaultConfig.entryTtl(Duration.ofMinutes(5)));

        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(defaultConfig)
                .withInitialCacheConfigurations(cacheConfigurations)
                .build();
    }
}
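If you would rather not make entities implement Serializable, a common alternative is to store cached values as JSON. A sketch, assuming Jackson is on the classpath (it is with spring-boot-starter-web):

```java
@Configuration
public class RedisJsonCacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Serialize cached values as JSON instead of JDK serialization
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));

        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(config)
                .build();
    }
}
```

JSON values are also human-readable when inspecting Redis directly, which helps with debugging.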
Choosing Between Caffeine and Redis
Use the following criteria to decide which to use.
Choose Caffeine when:
- Running on a single server
- Latency must be minimized (even a few milliseconds of overhead is unacceptable)
- Data is small enough to fit in memory
Choose Redis when:
- Running on multiple servers (e.g., behind a load balancer)
- Cache data is large and exceeds the memory of a single server
- Cache persistence or replication is required
The recommended approach is to start with Caffeine and migrate to Redis when you need to scale out.
Measuring Cache Effectiveness and Practical Strategies
Let’s verify how much your cache is actually helping. The simplest method is to add logging inside the method.
@Cacheable("users")
public User findById(Long id) {
    logger.info("Cache miss - loading user from DB: {}", id);
    return userRepository.findById(id).orElse(null);
}
Each time this log appears, a DB access is occurring. If the log stops appearing on subsequent calls, your cache is working.
Spring Boot Actuator provides more detailed metrics. See the Spring Boot Actuator article for details.
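For example (a sketch assuming spring-boot-starter-actuator is on the classpath and Caffeine is the cache provider), enabling statistics recording exposes hit/miss counts through Micrometer:

```yaml
spring:
  cache:
    type: caffeine
    caffeine:
      # recordStats enables hit/miss statistics for Micrometer
      spec: maximumSize=1000,expireAfterWrite=10m,recordStats
management:
  endpoints:
    web:
      exposure:
        include: metrics
```

Cache statistics then appear under /actuator/metrics/cache.gets, tagged by cache name and by result=hit or result=miss.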
Here is a simple benchmark example (results will vary significantly by environment and implementation).
[Without cache]
1st call: 152ms
2nd call: 148ms
3rd call: 151ms
Average: 150ms
[With cache]
1st call: 153ms (cache miss)
2nd call: 2ms (cache hit)
3rd call: 1ms (cache hit)
Average: 52ms
On a cache hit, performance can be 50–100x faster.
Practical Caching Strategies
When choosing which methods to cache, consider the following points.
Methods well-suited for caching:
- Frequently read methods (user info, product info, etc.)
- Methods with high computation cost
- Methods that make external API calls
Methods not suited for caching:
- Data that changes frequently
- Data that requires real-time accuracy
- Data that differs per user (authentication info, etc.)
Tune cache size based on the characteristics of your data. Monitor memory usage and set an appropriate maximum size.
For solving the N+1 problem, JPA fetch strategies are just as important as caching. See the Spring Data JPA performance optimization article for details.
Summary
With Spring Cache Abstraction, you can introduce caching with nothing but annotations. Start with the basic pattern: use @Cacheable to cache frequently read data, and @CacheEvict to clear it on updates.
Caffeine is sufficient to begin with. If you need to run on multiple servers, consider migrating to Redis — very little code change is required.
Caching is powerful, but it involves a trade-off with data freshness. Set an appropriate TTL and tune it to fit your application’s requirements.