implementation("io.micronaut.cache:micronaut-cache-core")
Micronaut Cache
Cache support for Micronaut
Version: 5.0.1
1 Introduction
This project brings additional cache implementations to Micronaut.
To get started, you need to declare the following dependency:
implementation("io.micronaut.cache:micronaut-cache-core")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-core</artifactId>
</dependency>
The configuration implementations in this module require at least Micronaut version 1.3.0. Each implementation is a separate dependency.
To use the BUILD-SNAPSHOT version of this library, check the documentation on using snapshots.
2 Release History
For this project, you can find a list of releases (with release notes) here:
3 Cache Abstraction
Similar to Spring and Grails, Micronaut provides a set of caching annotations within the io.micronaut.cache package.
The CacheManager interface allows different cache implementations to be plugged in as necessary.
The SyncCache interface provides a synchronous API for caching, whilst the AsyncCache API allows non-blocking operation.
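For example, a bean can interact with a named cache programmatically through these interfaces. The following is a minimal sketch, assuming a cache named "my-cache" has been configured; the service and its lookup are illustrative only.
import io.micronaut.cache.CacheManager;
import io.micronaut.cache.SyncCache;
import jakarta.inject.Singleton;

// A minimal sketch, assuming a cache named "my-cache" is configured; the service
// and the computed value are illustrative only.
@Singleton
public class BookTitleService {

    private final SyncCache<?> cache;

    public BookTitleService(CacheManager<?> cacheManager) {
        // Look up the configured cache by name
        this.cache = cacheManager.getCache("my-cache");
    }

    public String findTitle(String isbn) {
        // Read from the cache, computing and storing the value on a miss
        return cache.get(isbn, String.class, () -> "Title for " + isbn);
    }
}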
4 Cache Annotations
The following cache annotations are supported:
- @Cacheable - Indicates a method is cacheable within the given cache name.
- @CachePut - Indicates that the return value of a method invocation should be cached. Unlike @Cacheable, the original operation is never skipped.
- @CacheInvalidate - Indicates that the invocation of a method should cause the invalidation of one or many caches.
By using one of the annotations, the CacheInterceptor is activated, which in the case of @Cacheable will cache the return value of the method.
If the return type of the method is a non-blocking type (either CompletableFuture or an instance of Publisher), the emitted result will be cached.
In addition, if the underlying cache implementation supports non-blocking cache operations, cache values will be read from the cache without blocking, resulting in the ability to implement completely non-blocking cache operations.
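For example, the following sketch caches the value emitted by a CompletableFuture-returning method; the cache name "headlines" and the service itself are assumptions for illustration.
import io.micronaut.cache.annotation.Cacheable;
import jakarta.inject.Singleton;
import java.util.concurrent.CompletableFuture;

// A minimal sketch, assuming a cache named "headlines" is configured; the service
// and the computed value are illustrative only.
@Singleton
public class HeadlineService {

    // Because the method returns a CompletableFuture, the emitted value (not the
    // future itself) is cached under a key derived from the 'year' argument.
    @Cacheable("headlines")
    public CompletableFuture<String> headline(int year) {
        return CompletableFuture.supplyAsync(() -> "Headline for " + year);
    }
}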
4.1 Conditional Caching
Since Micronaut Cache 4.2.0, the above annotations can be conditionally disabled via an Expression Language expression in the condition attribute.
For example, we can cache the result of a method invocation only if the id parameter's value is greater than 5:
public record Id(Integer value) {
}

@Cacheable(condition = "#{id.value > 5}")
public String get(Id id) {
    return repository.get(id);
}

@Immutable
static class Id {
    int value
}

@Cacheable(condition = "#{id.value > 5}")
String get(Id id) {
    return repository.get(id)
}

data class Id(val value: Int)

@Cacheable(condition = "#{id.value > 5}")
open fun get(id: Id) = repository[id]
5 Caching with Caffeine
To cache using Caffeine, add the following dependency to your application:
This module is built and tested with Caffeine 3.1.8
implementation("io.micronaut.cache:micronaut-cache-caffeine")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-caffeine</artifactId>
</dependency>
Then configure one or many caches. For example, with application.yml:
micronaut.caches.my-cache.maximum-size=20
micronaut:
  caches:
    my-cache:
      maximum-size: 20
[micronaut]
[micronaut.caches]
[micronaut.caches.my-cache]
maximum-size=20
micronaut {
caches {
myCache {
maximumSize = 20
}
}
}
{
micronaut {
caches {
my-cache {
maximum-size = 20
}
}
}
}
{
"micronaut": {
"caches": {
"my-cache": {
"maximum-size": 20
}
}
}
}
The above example will configure a cache called "my-cache" with a maximum size of 20.
micronaut.caches.my-cache.listen-to-removals=true
micronaut.caches.my-cache.listen-to-evictions=true
micronaut:
  caches:
    my-cache:
      listen-to-removals: true
      listen-to-evictions: true
[micronaut]
[micronaut.caches]
[micronaut.caches.my-cache]
listen-to-removals=true
listen-to-evictions=true
micronaut {
caches {
myCache {
listenToRemovals = true
listenToEvictions = true
}
}
}
{
micronaut {
caches {
my-cache {
listen-to-removals = true
listen-to-evictions = true
}
}
}
}
{
"micronaut": {
"caches": {
"my-cache": {
"listen-to-removals": true,
"listen-to-evictions": true
}
}
}
}
This example configures a cache with removal/eviction listeners. To use them, implement the com.github.benmanes.caffeine.cache.RemovalListener interface as shown in the examples below.
@Singleton
public class RemovalListenerImpl implements RemovalListener<String, Integer> {

    private final MyRemovalHandler handler;

    RemovalListenerImpl(MyRemovalHandler handler) {
        this.handler = handler;
    }

    @Override
    public void onRemoval(@Nullable String key, @Nullable Integer value, @NonNull RemovalCause cause) {
        handler.handle(key, value, cause);
    }
}

@Singleton
class RemovalListenerImpl implements RemovalListener<String, Integer> {

    private final MyRemovalHandler handler

    RemovalListenerImpl(MyRemovalHandler handler) {
        this.handler = handler
    }

    @Override
    public void onRemoval(@Nullable String key, @Nullable Integer value, @NonNull RemovalCause cause) {
        handler.handle(key, value, cause)
    }
}

@Singleton
class RemovalListenerImpl internal constructor(private val handler: MyRemovalHandler) : RemovalListener<String?, Int?> {

    override fun onRemoval(key: String?, value: Int?, cause: RemovalCause) {
        handler.handle(key!!, value!!, cause)
    }
}
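The listener examples above delegate to a MyRemovalHandler collaborator, which is application code rather than part of the library. A minimal sketch might simply log the event:
import com.github.benmanes.caffeine.cache.RemovalCause;
import jakarta.inject.Singleton;

// MyRemovalHandler is application code referenced by the listeners above, not part
// of the library; this sketch simply logs the removal.
@Singleton
public class MyRemovalHandler {

    public void handle(String key, Integer value, RemovalCause cause) {
        // React to the removal/eviction, e.g. log it or record a metric
        System.out.printf("Cache entry %s=%s removed (%s)%n", key, value, cause);
    }
}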
Naming Caches
Names of caches under micronaut.caches should be in kebab case (lowercase and hyphen separated). If camel case is used, the name is normalized to kebab case; for example, specifying myCache results in a cache named my-cache, and the kebab-case name is the one to reference from the caching annotations.
To configure a weigher to be used with the maximumWeight configuration, create a bean that implements com.github.benmanes.caffeine.cache.Weigher. To associate a given weigher with only a specific cache, annotate the bean with @Named(<cache name>). Weighers without a named qualifier will apply to all caches that don't have a named weigher. If no beans are found, a default implementation will be used.
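For example, the following is a minimal weigher sketch, assuming a cache named "my-cache" configured with a maximum weight; the class name and weighing strategy are illustrative only.
import com.github.benmanes.caffeine.cache.Weigher;
import jakarta.inject.Named;
import jakarta.inject.Singleton;

// A hypothetical weigher associated only with the cache named "my-cache"; it charges
// one unit per character of the cached value's string form.
@Singleton
@Named("my-cache")
public class StringLengthWeigher implements Weigher<Object, Object> {

    @Override
    public int weigh(Object key, Object value) {
        return String.valueOf(value).length();
    }
}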
Native compilation
When using Caffeine with Native Compilation, the most commonly used caches will be automatically registered. If you require additional caches, you will need to register them with Graal yourself as shown in the guide.
6 JCache API support
When there is a JSR 107 (JCache) implementation in the classpath (Ehcache, Hazelcast, Infinispan, etc), the caching abstraction will use the JCache API internally by default. If you want Micronaut to use the concrete implementation API, JCache needs to be disabled:
micronaut.jcache.enabled=false
micronaut:
  jcache:
    enabled: false
[micronaut]
[micronaut.jcache]
enabled=false
micronaut {
jcache {
enabled = false
}
}
{
micronaut {
jcache {
enabled = false
}
}
}
{
"micronaut": {
"jcache": {
"enabled": false
}
}
}
7 Redis Support
Using the CLI
If you are creating your project using the Micronaut CLI, supply the redis-lettuce feature to configure Redis support in your project:
$ mn create-app my-app --features redis-lettuce
If you wish to use Redis to cache results, the Micronaut Redis module provides a CacheManager implementation that allows using Redis as a backing cache.
8 Ehcache Support
To use Ehcache as the caching implementation, add it as a dependency to your application:
This module is built and tested with Ehcache 3.10.8
implementation("io.micronaut.cache:micronaut-cache-ehcache")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-ehcache</artifactId>
</dependency>
To have Micronaut create a cache, the minimum configuration is:
ehcache.caches.my-cache.enabled=true
ehcache:
  caches:
    my-cache:
      enabled: true
[ehcache]
[ehcache.caches]
[ehcache.caches.my-cache]
enabled=true
ehcache {
caches {
myCache {
enabled = true
}
}
}
{
ehcache {
caches {
my-cache {
enabled = true
}
}
}
}
{
"ehcache": {
"caches": {
"my-cache": {
"enabled": true
}
}
}
}
Then, you can use any of the caching annotations with my-cache as the cache name.
See the configuration reference to check all possible configuration options.
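For example, a service might read through and invalidate the "my-cache" cache configured above. The following is a minimal sketch in which the service and its lookup are illustrative only.
import io.micronaut.cache.annotation.CacheInvalidate;
import io.micronaut.cache.annotation.Cacheable;
import jakarta.inject.Singleton;

// A minimal sketch against the "my-cache" cache configured above; the lookup logic
// is illustrative only.
@Singleton
public class BookService {

    // The result is cached under a key derived from the 'isbn' argument
    @Cacheable("my-cache")
    public String findTitle(String isbn) {
        return "Title for " + isbn; // placeholder for an expensive lookup
    }

    // Removes the entry cached under the same key
    @CacheInvalidate("my-cache")
    public void evictTitle(String isbn) {
    }
}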
Tiering options
Ehcache supports the concept of tiered caching. This library allows you to configure tiering caching options on a per-cache basis.
If no tier is explicitly configured, the cache will be configured with a heap tier of 100 entries maximum.
Heap tier
It can be sized by a maximum number of entries:
ehcache.caches.my-cache.heap.max-entries=5000
ehcache:
  caches:
    my-cache:
      heap:
        max-entries: 5000
[ehcache]
[ehcache.caches]
[ehcache.caches.my-cache]
[ehcache.caches.my-cache.heap]
max-entries=5000
ehcache {
caches {
myCache {
heap {
maxEntries = 5000
}
}
}
}
{
ehcache {
caches {
my-cache {
heap {
max-entries = 5000
}
}
}
}
}
{
"ehcache": {
"caches": {
"my-cache": {
"heap": {
"max-entries": 5000
}
}
}
}
}
Or by size:
ehcache.caches.my-cache.heap.max-size=200Mb
ehcache:
  caches:
    my-cache:
      heap:
        max-size: 200Mb
[ehcache]
[ehcache.caches]
[ehcache.caches.my-cache]
[ehcache.caches.my-cache.heap]
max-size="200Mb"
ehcache {
caches {
myCache {
heap {
maxSize = "200Mb"
}
}
}
}
{
ehcache {
caches {
my-cache {
heap {
max-size = "200Mb"
}
}
}
}
}
{
"ehcache": {
"caches": {
"my-cache": {
"heap": {
"max-size": "200Mb"
}
}
}
}
}
Off-heap tier
ehcache.caches.my-cache.offheap.max-size=1Gb
ehcache:
  caches:
    my-cache:
      offheap:
        max-size: 1Gb
[ehcache]
[ehcache.caches]
[ehcache.caches.my-cache]
[ehcache.caches.my-cache.offheap]
max-size="1Gb"
ehcache {
caches {
myCache {
offheap {
maxSize = "1Gb"
}
}
}
}
{
ehcache {
caches {
my-cache {
offheap {
max-size = "1Gb"
}
}
}
}
}
{
"ehcache": {
"caches": {
"my-cache": {
"offheap": {
"max-size": "1Gb"
}
}
}
}
}
Do not forget to set the -XX:MaxDirectMemorySize JVM option according to the off-heap size you intend to use.
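For example, assuming the 1Gb off-heap tier configured above, the application could be started with an explicit direct memory limit (the jar name is illustrative):
$ java -XX:MaxDirectMemorySize=1g -jar application.jar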
Disk tier
ehcache.storage-path=/var/caches
ehcache.caches.my-cache.disk.max-size=10Gb
ehcache:
  storage-path: /var/caches
  caches:
    my-cache:
      disk:
        max-size: 10Gb
[ehcache]
storage-path="/var/caches"
[ehcache.caches]
[ehcache.caches.my-cache]
[ehcache.caches.my-cache.disk]
max-size="10Gb"
ehcache {
storagePath = "/var/caches"
caches {
myCache {
disk {
maxSize = "10Gb"
}
}
}
}
{
ehcache {
storage-path = "/var/caches"
caches {
my-cache {
disk {
max-size = "10Gb"
}
}
}
}
}
{
"ehcache": {
"storage-path": "/var/caches",
"caches": {
"my-cache": {
"disk": {
"max-size": "10Gb"
}
}
}
}
}
Clustered tier
Ehcache supports distributed caching with Terracotta.
This is a complete example configuration:
ehcache.cluster.uri=terracotta://localhost/my-application
ehcache.cluster.default-server-resource=offheap-1
ehcache.cluster.resource-pools.resource-pool-a.max-size=8Mb
ehcache.cluster.resource-pools.resource-pool-a.server-resource=offheap-2
ehcache.cluster.resource-pools.resource-pool-b.max-size=10Mb
ehcache.caches.clustered-cache.clustered-dedicated.server-resource=offheap-1
ehcache.caches.clustered-cache.clustered-dedicated.max-size=8Mb
ehcache.caches.shared-cache-1.clustered-shared.server-resource=resource-pool-a
ehcache.caches.shared-cache-3.clustered-shared.server-resource=resource-pool-b
ehcache:
  cluster:
    uri: terracotta://localhost/my-application
    default-server-resource: offheap-1
    resource-pools:
      resource-pool-a:
        max-size: 8Mb
        server-resource: offheap-2
      resource-pool-b:
        max-size: 10Mb
  caches:
    clustered-cache:
      clustered-dedicated:
        server-resource: offheap-1
        max-size: 8Mb
    shared-cache-1:
      clustered-shared:
        server-resource: resource-pool-a
    shared-cache-3:
      clustered-shared:
        server-resource: resource-pool-b
[ehcache]
[ehcache.cluster]
uri="terracotta://localhost/my-application"
default-server-resource="offheap-1"
[ehcache.cluster.resource-pools]
[ehcache.cluster.resource-pools.resource-pool-a]
max-size="8Mb"
server-resource="offheap-2"
[ehcache.cluster.resource-pools.resource-pool-b]
max-size="10Mb"
[ehcache.caches]
[ehcache.caches.clustered-cache]
[ehcache.caches.clustered-cache.clustered-dedicated]
server-resource="offheap-1"
max-size="8Mb"
[ehcache.caches.shared-cache-1]
[ehcache.caches.shared-cache-1.clustered-shared]
server-resource="resource-pool-a"
[ehcache.caches.shared-cache-3]
[ehcache.caches.shared-cache-3.clustered-shared]
server-resource="resource-pool-b"
ehcache {
cluster {
uri = "terracotta://localhost/my-application"
defaultServerResource = "offheap-1"
resourcePools {
resourcePoolA {
maxSize = "8Mb"
serverResource = "offheap-2"
}
resourcePoolB {
maxSize = "10Mb"
}
}
}
caches {
clusteredCache {
clusteredDedicated {
serverResource = "offheap-1"
maxSize = "8Mb"
}
}
sharedCache1 {
clusteredShared {
serverResource = "resource-pool-a"
}
}
sharedCache3 {
clusteredShared {
serverResource = "resource-pool-b"
}
}
}
}
{
ehcache {
cluster {
uri = "terracotta://localhost/my-application"
default-server-resource = "offheap-1"
resource-pools {
resource-pool-a {
max-size = "8Mb"
server-resource = "offheap-2"
}
resource-pool-b {
max-size = "10Mb"
}
}
}
caches {
clustered-cache {
clustered-dedicated {
server-resource = "offheap-1"
max-size = "8Mb"
}
}
shared-cache-1 {
clustered-shared {
server-resource = "resource-pool-a"
}
}
shared-cache-3 {
clustered-shared {
server-resource = "resource-pool-b"
}
}
}
}
}
{
"ehcache": {
"cluster": {
"uri": "terracotta://localhost/my-application",
"default-server-resource": "offheap-1",
"resource-pools": {
"resource-pool-a": {
"max-size": "8Mb",
"server-resource": "offheap-2"
},
"resource-pool-b": {
"max-size": "10Mb"
}
}
},
"caches": {
"clustered-cache": {
"clustered-dedicated": {
"server-resource": "offheap-1",
"max-size": "8Mb"
}
},
"shared-cache-1": {
"clustered-shared": {
"server-resource": "resource-pool-a"
}
},
"shared-cache-3": {
"clustered-shared": {
"server-resource": "resource-pool-b"
}
}
}
}
}
Multiple tier setup
A cache can be configured with multiple tiers. Read the Ehcache documentation on the valid configuration options.
For example, to configure a heap + offheap + disk cache:
ehcache.storage-path=/var/caches
ehcache.caches.my-cache.heap.max-size=200Mb
ehcache.caches.my-cache.offheap.max-size=1Gb
ehcache.caches.my-cache.disk.max-size=10Gb
ehcache:
  storage-path: /var/caches
  caches:
    my-cache:
      heap:
        max-size: 200Mb
      offheap:
        max-size: 1Gb
      disk:
        max-size: 10Gb
[ehcache]
storage-path="/var/caches"
[ehcache.caches]
[ehcache.caches.my-cache]
[ehcache.caches.my-cache.heap]
max-size="200Mb"
[ehcache.caches.my-cache.offheap]
max-size="1Gb"
[ehcache.caches.my-cache.disk]
max-size="10Gb"
ehcache {
storagePath = "/var/caches"
caches {
myCache {
heap {
maxSize = "200Mb"
}
offheap {
maxSize = "1Gb"
}
disk {
maxSize = "10Gb"
}
}
}
}
{
ehcache {
storage-path = "/var/caches"
caches {
my-cache {
heap {
max-size = "200Mb"
}
offheap {
max-size = "1Gb"
}
disk {
max-size = "10Gb"
}
}
}
}
}
{
"ehcache": {
"storage-path": "/var/caches",
"caches": {
"my-cache": {
"heap": {
"max-size": "200Mb"
},
"offheap": {
"max-size": "1Gb"
},
"disk": {
"max-size": "10Gb"
}
}
}
}
}
9 Hazelcast Support
Hazelcast caching is supported. Micronaut will create a Hazelcast client instance to connect to an existing Hazelcast server cluster, or create a standalone embedded Hazelcast member instance.
This module is built and tested with Hazelcast 5.3.7
Add the Micronaut Hazelcast module as a dependency:
implementation("io.micronaut.cache:micronaut-cache-hazelcast")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-hazelcast</artifactId>
</dependency>
You can also add the Hazelcast module to your project using the CLI feature, as shown below:
$ mn create-app hello-world -f hazelcast
The minimal configuration to use Hazelcast is to simply declare hazelcast: with a network configuration for the addresses of the Hazelcast cluster (example below).
hazelcast.client.network.addresses[0]=121.0.0.1:5701
hazelcast:
  client:
    network:
      addresses: ['121.0.0.1:5701']
[hazelcast]
[hazelcast.client]
[hazelcast.client.network]
addresses=[
"121.0.0.1:5701"
]
hazelcast {
client {
network {
addresses = ["121.0.0.1:5701"]
}
}
}
{
hazelcast {
client {
network {
addresses = ["121.0.0.1:5701"]
}
}
}
}
{
"hazelcast": {
"client": {
"network": {
"addresses": ["121.0.0.1:5701"]
}
}
}
}
If you provide a Hazelcast configuration file (e.g. hazelcast.xml, hazelcast.yml, hazelcast-client.xml, or hazelcast-client.yml) in the working directory or classpath, Micronaut will use this configuration file to configure the Hazelcast instance.
When using the @Cacheable and other Cache Annotations, Micronaut will create the Hazelcast client and use the underlying IMap Cache Datastore on the server.
See the configuration reference for the full list of configurable options. Notably, you can set the path to a Hazelcast XML or YAML configuration file; if non-null, the contents of that file will override this configuration, and the path is used to set the hazelcast.client.config system property.
For settings not available as configuration properties, a BeanCreatedEventListener can be registered for HazelcastClientConfiguration or HazelcastMemberConfiguration. The listener will allow all properties to be set directly on the configuration instance.
@Singleton
public class HazelcastAdditionalSettings implements BeanCreatedEventListener<HazelcastClientConfiguration> {

    @Override
    public HazelcastClientConfiguration onCreated(@NonNull BeanCreatedEvent<HazelcastClientConfiguration> event) {
        HazelcastClientConfiguration configuration = event.getBean();
        // Set anything on the configuration
        configuration.setClusterName("dev");
        return configuration;
    }
}

@Singleton
class HazelcastAdditionalSettings implements BeanCreatedEventListener<HazelcastClientConfiguration> {

    @Override
    HazelcastClientConfiguration onCreated(@NonNull BeanCreatedEvent<HazelcastClientConfiguration> event) {
        event.bean.tap {
            // Set anything on the configuration
            clusterName = "dev"
        }
    }
}

@Singleton
class HazelcastAdditionalSettings : BeanCreatedEventListener<HazelcastClientConfiguration> {

    override fun onCreated(event: BeanCreatedEvent<HazelcastClientConfiguration>) = event.bean.apply {
        // Set anything on the configuration
        clusterName = "dev"
    }
}
Alternatively, the HazelcastClientConfiguration or HazelcastMemberConfiguration bean may be replaced with your own implementation.
To disable Hazelcast:
hazelcast.enabled=false
hazelcast:
  enabled: false
[hazelcast]
enabled=false
hazelcast {
enabled = false
}
{
hazelcast {
enabled = false
}
}
{
"hazelcast": {
"enabled": false
}
}
10 Infinispan Support
Infinispan caching is supported. Micronaut will create an Infinispan client instance to connect to an existing Infinispan server using the HotRod protocol.
This module is built and tested with Infinispan 15.0.8.Final
To get started, add the Micronaut Infinispan module as a dependency:
implementation("io.micronaut.cache:micronaut-cache-infinispan")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-infinispan</artifactId>
</dependency>
By default, Micronaut will set up a RemoteCacheManager over 127.0.0.1:11222. To define custom addresses:
infinispan.client.hotrod.server.host=infinispan.example.com
infinispan.client.hotrod.server.port=10222
infinispan:
  client:
    hotrod:
      server:
        host: infinispan.example.com
        port: 10222
[infinispan]
[infinispan.client]
[infinispan.client.hotrod]
[infinispan.client.hotrod.server]
host="infinispan.example.com"
port=10222
infinispan {
client {
hotrod {
server {
host = "infinispan.example.com"
port = 10222
}
}
}
}
{
infinispan {
client {
hotrod {
server {
host = "infinispan.example.com"
port = 10222
}
}
}
}
}
{
"infinispan": {
"client": {
"hotrod": {
"server": {
"host": "infinispan.example.com",
"port": 10222
}
}
}
}
}
By default, Micronaut will attempt to read a /hotrod-client.properties file from the classpath and, if found, use it. This file is expected to be in the Infinispan configuration format, for example:
# Hot Rod client configuration
infinispan.client.hotrod.server_list = 127.0.0.1:11222
infinispan.client.hotrod.marshaller = org.infinispan.commons.marshall.ProtoStreamMarshaller
infinispan.client.hotrod.async_executor_factory = org.infinispan.client.hotrod.impl.async.DefaultAsyncExecutorFactory
infinispan.client.hotrod.default_executor_factory.pool_size = 1
infinispan.client.hotrod.hash_function_impl.2 = org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV2
infinispan.client.hotrod.tcp_no_delay = true
infinispan.client.hotrod.tcp_keep_alive = false
infinispan.client.hotrod.request_balancing_strategy = org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy
infinispan.client.hotrod.key_size_estimate = 64
infinispan.client.hotrod.value_size_estimate = 512
infinispan.client.hotrod.force_return_values = false
## Connection pooling configuration
maxActive = -1
maxIdle = -1
whenExhaustedAction = 1
minEvictableIdleTimeMillis=300000
minIdle = 1
To read this file from a different classpath location:
infinispan.client.hotrod.config-file=classpath:my-infinispan.properties
infinispan:
  client:
    hotrod:
      config-file: classpath:my-infinispan.properties
[infinispan]
[infinispan.client]
[infinispan.client.hotrod]
config-file="classpath:my-infinispan.properties"
infinispan {
client {
hotrod {
configFile = "classpath:my-infinispan.properties"
}
}
}
{
infinispan {
client {
hotrod {
config-file = "classpath:my-infinispan.properties"
}
}
}
}
{
"infinispan": {
"client": {
"hotrod": {
"config-file": "classpath:my-infinispan.properties"
}
}
}
}
You can use both an Infinispan properties file and Micronaut configuration properties. The latter will complement/override values from the former.
See the configuration reference for the full list of options configurable via Micronaut properties.
To disable Infinispan:
infinispan.enabled=false
infinispan:
  enabled: false
[infinispan]
enabled=false
infinispan {
enabled = false
}
{
infinispan {
enabled = false
}
}
{
"infinispan": {
"enabled": false
}
}
11 MicroStream Support
To use MicroStream as the caching implementation, set up the Micronaut MicroStream module.
12 No Operation Cache Support
Depending on the environment, or when testing, it might be undesirable to actually cache items. In such situations, a no-operation cache manager can be used that simply accepts items into the cache without actually storing them.
Add the Micronaut no operation cache module as a dependency:
implementation("io.micronaut.cache:micronaut-cache-noop")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-noop</artifactId>
</dependency>
The no operation cache manager needs to be enabled explicitly:
noop-cache.enabled=true
noop-cache:
  enabled: true
"noop-cache.enabled"=true
noopCache.enabled = true
{
"noop-cache.enabled" = true
}
{
"noop-cache.enabled": true
}
13 Endpoint
The caches endpoint returns information about the caches in the application and permits invalidating them.
To use this endpoint, you need the following dependency:
implementation("io.micronaut.cache:micronaut-cache-management")
<dependency>
<groupId>io.micronaut.cache</groupId>
<artifactId>micronaut-cache-management</artifactId>
</dependency>
Also note it is disabled by default. To enable it:
endpoints.caches.enabled=true
endpoints:
  caches:
    enabled: true
[endpoints]
[endpoints.caches]
enabled=true
endpoints {
caches {
enabled = true
}
}
{
endpoints {
caches {
enabled = true
}
}
}
{
"endpoints": {
"caches": {
"enabled": true
}
}
}
To get a collection of all caches by name with their configuration, send a GET request to /caches.
$ curl http://localhost:8080/caches
To get the configuration of a particular cache, include the cache name in your GET request. For example, to access the configuration of the cache 'book-cache':
$ curl http://localhost:8080/caches/book-cache
To retrieve a specific cache entry within a single cache, include both cache name and key in your GET request. For example, to access the entry under key '123' in cache 'book-cache':
$ curl http://localhost:8080/caches/book-cache/123
To invalidate a specific cache entry within a single cache, send a DELETE request to the named cache URL with the desired key.
This only works for caches which have keys of type String.
$ curl -X DELETE http://localhost:8080/caches/book-cache/key
To invalidate all cached values within a single cache, send a DELETE request to the named cache URL.
$ curl -X DELETE http://localhost:8080/caches/book-cache
To invalidate all caches, send a DELETE request to /caches.
$ curl -X DELETE http://localhost:8080/caches
Configuration
To configure the caches endpoint, supply configuration through endpoints.caches.
endpoints.caches.enabled=Boolean
endpoints.caches.sensitive=Boolean
endpoints:
  caches:
    enabled: Boolean
    sensitive: Boolean
[endpoints]
[endpoints.caches]
enabled="Boolean"
sensitive="Boolean"
endpoints {
caches {
enabled = "Boolean"
sensitive = "Boolean"
}
}
{
endpoints {
caches {
enabled = "Boolean"
sensitive = "Boolean"
}
}
}
{
"endpoints": {
"caches": {
"enabled": "Boolean",
"sensitive": "Boolean"
}
}
}
See the section on Built-in endpoints in the user guide for more information.
Customization
The caches endpoint is composed of a cache data collector and a cache data implementation. The cache data collector (CacheDataCollector) is responsible for returning a publisher that will return the data used in the response. The cache data (CacheData) is responsible for returning data about an individual cache.
To override the default behavior for either of the helper classes, either extend the default implementations (RxJavaRouteDataCollector, DefaultRouteData), or implement the relevant interface directly. To ensure your implementation is used instead of the default, add the @Replaces annotation to your class with the value being the default implementation.
14 Guides
See the guide for Micronaut Cache to learn more.
15 Repository
You can find the source code of this project in this repository: