@Documented
@Retention(value=RUNTIME)
@Target(value={TYPE,METHOD,ANNOTATION_TYPE})
@MessageListener
public @interface KafkaListener
Annotation applied at the class level to indicate that a bean is a Kafka Consumer.
Modifier and Type | Optional Element and Description
---|---
boolean | autoStartup: Setting to allow starting the consumer in paused mode.
boolean | batch: By default each listener will consume a single ConsumerRecord.
java.lang.String | clientId: Sets the client id of the Kafka consumer.
ErrorStrategy | errorStrategy: Setting the error strategy allows you to resume at the next offset, or to seek the consumer back to the failed offset (stop on error) so that it can retry if an error occurs. The consumer bean can still implement a custom exception handler to replace DefaultKafkaListenerExceptionHandler and set the error strategy.
java.lang.String | groupId: The same as value().
java.lang.String | heartbeatInterval: The heartbeat interval for the consumer.
org.apache.kafka.common.IsolationLevel | isolation: The Kafka consumer isolation level, controlling how to read messages written transactionally.
OffsetReset | offsetReset: The OffsetReset strategy to use for the consumer.
OffsetStrategy | offsetStrategy: The OffsetStrategy to use for the consumer.
java.lang.String | pollTimeout: The timeout to use for calls to Consumer.poll(java.time.Duration).
java.lang.String | producerClientId: The client id of the producer that is used for @SendTo.
java.lang.String | producerTransactionalId: This setting applies only to the producer that is used for @SendTo.
io.micronaut.context.annotation.Property[] | properties: Additional properties to configure the Consumer with.
boolean | redelivery: For listeners that return reactive types, message offsets are committed without blocking the consumer.
java.lang.String | sessionTimeout: The session timeout for a consumer.
int | threads: Kafka consumers are by default single threaded.
boolean | uniqueGroupId: A unique string (UUID) can be appended to the group ID.
java.lang.String | value: Sets the consumer group id of the Kafka consumer.
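A minimal usage sketch of the class-level annotation. The group id, client id, topic, and class names below are illustrative, not defaults defined by the annotation:

```java
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.Topic;

// Marks the bean as a Kafka consumer. "demo-group", "demo-client", and
// "demo-topic" are hypothetical names chosen for this example.
@KafkaListener(
    groupId = "demo-group",
    clientId = "demo-client",
    offsetReset = OffsetReset.EARLIEST
)
public class DemoListener {

    // Each ConsumerRecord from the topic is delivered to this method
    // individually (batch = false, the default).
    @Topic("demo-topic")
    public void receive(String message) {
        System.out.println("Received: " + message);
    }
}
```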
@AliasFor(member="groupId") public abstract java.lang.String value
Sets the consumer group id of the Kafka consumer. If not specified, the group id is configured to be the value of ApplicationConfiguration.getName(), otherwise the name of the class is used.

@AliasFor(member="value") public abstract java.lang.String groupId
The same as value().

public abstract boolean uniqueGroupId
A unique string (UUID) can be appended to the group ID.

public abstract java.lang.String clientId
Sets the client id of the Kafka consumer. If not specified, the client id is configured from ApplicationConfiguration.getName().

public abstract OffsetStrategy offsetStrategy
The OffsetStrategy to use for the consumer.

public abstract OffsetReset offsetReset
The OffsetReset strategy to use for the consumer.
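A sketch of how these elements combine: with uniqueGroupId set, each application instance effectively gets its own consumer group, so every instance receives every record (a broadcast pattern). The group id, topic, and class names are illustrative:

```java
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.OffsetStrategy;
import io.micronaut.configuration.kafka.annotation.Topic;

// uniqueGroupId = true appends a UUID to the group id, so each running
// instance forms its own consumer group and sees every record.
// OffsetStrategy.SYNC commits offsets synchronously after processing.
@KafkaListener(
    groupId = "price-watchers",          // illustrative group id
    uniqueGroupId = true,
    offsetReset = OffsetReset.LATEST,
    offsetStrategy = OffsetStrategy.SYNC
)
public class PriceBroadcastListener {

    @Topic("prices")                     // illustrative topic
    public void onPrice(String price) {
        // every instance receives every price update
    }
}
```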
public abstract java.lang.String producerClientId
The client id of the producer that is used for @SendTo.

public abstract java.lang.String producerTransactionalId
This setting applies only to the producer that is used for @SendTo.
The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions, since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default, the TransactionalId is not configured, which means transactions cannot be used.

public abstract org.apache.kafka.common.IsolationLevel isolation
The Kafka consumer isolation level, controlling how to read messages written transactionally. See ConsumerConfig.ISOLATION_LEVEL_CONFIG.

public abstract ErrorStrategy errorStrategy
Setting the error strategy allows you to resume at the next offset, or to seek the consumer back to the failed offset (stop on error) so that it can retry if an error occurs. The consumer bean can still implement a custom exception handler to replace DefaultKafkaListenerExceptionHandler and set the error strategy.

public abstract int threads
Kafka consumers are by default single threaded. NOTE: When using this setting, if your bean is a Singleton then local state will be shared between invocations from different consumer threads.
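A sketch combining errorStrategy and threads, assuming the @ErrorStrategy annotation's value and retryCount elements from micronaut-kafka; the group id, topic, and retry count are illustrative:

```java
import io.micronaut.configuration.kafka.annotation.ErrorStrategy;
import io.micronaut.configuration.kafka.annotation.ErrorStrategyValue;
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.Topic;

// RETRY_ON_ERROR seeks the consumer back to the failed offset and retries;
// retryCount bounds the retries before moving on. threads = 3 creates three
// consumers, so a @Singleton bean's local state is shared across them and
// the handler must be thread-safe.
@KafkaListener(
    groupId = "orders",                  // illustrative group id
    errorStrategy = @ErrorStrategy(
        value = ErrorStrategyValue.RETRY_ON_ERROR,
        retryCount = 3
    ),
    threads = 3
)
public class OrderListener {

    @Topic("orders")                     // illustrative topic
    public void process(String order) {
        // processing that may throw; failed offsets are retried up to 3 times
    }
}
```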
public abstract java.lang.String pollTimeout
The timeout to use for calls to Consumer.poll(java.time.Duration).

public abstract java.lang.String sessionTimeout
The session timeout for a consumer. Maps to the session.timeout.ms property (ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG).

public abstract java.lang.String heartbeatInterval
The heartbeat interval for the consumer. Maps to the heartbeat.interval.ms property (ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG).

public abstract boolean redelivery
For listeners that return reactive types, message offsets are committed without blocking the consumer.

public abstract boolean batch
By default each listener will consume a single ConsumerRecord. By setting this value to true and specifying a container type in the method signature, you can indicate that the method should instead receive all the records at once in a batch.
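A batch-consumption sketch: with batch = true and a container type (List here) in the method signature, the listener receives all the records from a poll in one invocation. The group id, topic, and poll timeout are illustrative:

```java
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.Topic;
import java.util.List;

// batch = true plus the List<String> parameter switches the listener from
// one-record-at-a-time delivery to receiving a whole poll's worth of records.
@KafkaListener(
    batch = true,
    groupId = "bulk-import",             // illustrative group id
    pollTimeout = "500ms"                // illustrative poll timeout
)
public class BulkListener {

    @Topic("bulk-events")                // illustrative topic
    public void receiveBatch(List<String> events) {
        // process the entire batch of records together
    }
}
```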