Micronaut Data
Data Repository Support for Micronaut
Version: 4.8.1
1 Introduction
Micronaut Data is a database access toolkit that uses Ahead of Time (AoT) compilation to pre-compute queries for repository interfaces that are then executed by a thin, lightweight runtime layer.
Micronaut Data is inspired by GORM and Spring Data, but improves on those solutions in the following ways:
- No runtime model - Both GORM and Spring Data maintain a runtime metamodel that uses reflection to model relationships between entities. This model consumes significant memory and memory requirements grow as your application size grows. The problem is worse when combined with Hibernate, which maintains its own metamodel, as you end up with duplicate meta-models.
- No query translation - Both GORM and Spring Data use regular expressions and pattern matching in combination with runtime generated proxies to translate a method definition on a Java interface into a query at runtime. No such runtime translation exists in Micronaut Data; this work is carried out by the Micronaut compiler at compilation time.
- No Reflection or Runtime Proxies - Micronaut Data uses no reflection or runtime proxies, resulting in better performance, smaller stack traces and reduced memory consumption due to a complete lack of reflection caches (note that the backing implementation, for example Hibernate, may use reflection).
- Type Safety - Micronaut Data will actively check at compile time that a repository method can be implemented and fail compilation if it cannot.
Micronaut Data provides a general API for translating a compile time Query model into a query at compilation time, and provides runtime support for backends including JPA (Hibernate), SQL databases (JDBC and R2DBC), MongoDB and Azure Cosmos DB.
Further implementations for other databases are planned in the future.
The following sections will take you through the basics of querying and using Micronaut Data. If you wish to understand in more detail how Micronaut Data works, check out the How Micronaut Data Works section.
At a fundamental level, however, what Micronaut Data does can be summed up in the following snippets. Given the following interface:
package example;
import io.micronaut.context.annotation.Parameter;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.ParameterExpression;
import io.micronaut.data.annotation.Query;
import io.micronaut.data.annotation.QueryHint;
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.model.Page;
import io.micronaut.data.model.Pageable;
import io.micronaut.data.model.Slice;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@Repository // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
Book find(String title);
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.context.annotation.Parameter
import io.micronaut.data.annotation.*
import io.micronaut.data.model.*
import io.micronaut.data.repository.CrudRepository
@Repository // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
@Executable
Book find(String title)
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.context.annotation.Parameter
import io.micronaut.data.annotation.*
import io.micronaut.data.model.*
import io.micronaut.data.repository.CrudRepository
@Repository // (1)
interface BookRepository : CrudRepository<Book, Long> { // (2)
@Executable
fun find(title: String): Book
}
1 | The @Repository annotation designates BookRepository as a data repository. Since it is an interface, the @Repository annotation provides the implementation at compilation time. |
2 | By extending CrudRepository you enable automatic generation of CRUD (Create, Read, Update, Delete) operations. |
Micronaut Data computes the query for the find method automatically at compilation time, making it available at runtime via annotation metadata:
@Inject
BeanContext beanContext;
@Test
void testAnnotationMetadata() {
String query = beanContext.getBeanDefinition(BookRepository.class) // (1)
.getRequiredMethod("find", String.class) // (2)
.getAnnotationMetadata().stringValue(Query.class) // (3)
.orElse(null);
assertEquals( // (4)
"SELECT book_ FROM example.Book AS book_ WHERE (book_.title = :p1)", query);
}
@Inject
BeanContext beanContext
void "test annotation metadata"() {
given:"The value of the Query annotation"
String query = beanContext.getBeanDefinition(BookRepository.class) // (1)
.getRequiredMethod("find", String.class) // (2)
.getAnnotationMetadata()
.stringValue(Query.class) // (3)
.orElse(null)
expect:"The JPA-QL query to be correct" // (4)
query == "SELECT book_ FROM example.Book AS book_ WHERE (book_.title = :p1)"
}
@Inject
lateinit var beanContext: BeanContext
@Test
fun testAnnotationMetadata() {
val query = beanContext.getBeanDefinition(BookRepository::class.java) // (1)
.getRequiredMethod<Any>("find", String::class.java) // (2)
.annotationMetadata
.stringValue(Query::class.java) // (3)
.orElse(null)
assertEquals( // (4)
"SELECT book_ FROM example.Book AS book_ WHERE (book_.title = :p1)",
query
)
}
1 | The BeanDefinition is retrieved from the BeanContext |
2 | The find method is retrieved |
3 | The value of the @Query annotation is retrieved |
4 | The JPA-QL query for the method is correct |
1.1 What's New?
Micronaut Data 4.2
- Procedure invocations in repositories for Data JPA and Data JDBC/R2DBC
- Added possibility to have associations (JOINs) in DTOs
- Support for inserts, updates and deletes with a RETURNING clause in repositories
- MongoDB: Support arrayFilters
- Kotlin: New coroutine variations of connection / transaction operations:
  - io.micronaut.data.connection.kotlin.CoroutineConnectionOperations
  - io.micronaut.transaction.kotlin.CoroutineTransactionOperations
- R2DBC: New connection status callback. Corrected cancellation.
Micronaut Data 4.1
- Support for NESTED transaction propagation
- Bugfixes
Micronaut Data 4.0
- Hibernate Reactive 2 (Hibernate 6 compatible)
- New implementation of the transaction and connection management
- JPA repository merge method
- Oracle JSON-Relational Duality Views support
Micronaut Data 3.5
- Hibernate Reactive
- Type-safe Java Criteria
- Type-safe Kotlin Criteria and builders
- Improved transaction handling
Micronaut Data 3.4
- New async, reactive and coroutines repositories to support pagination
- Propagating synchronous transaction state in Kotlin's coroutines
- R2DBC upgraded to 1.0.0.RELEASE
Micronaut Data 3.3
- Support for MongoDB repositories
- R2DBC upgraded to Arabba-SR12 and OracleDB R2DBC 0.4.0
- Propagating JDBC transaction context in Kotlin's coroutines
Micronaut Data 3.2
- Repositories with JPA Criteria API specification for Micronaut JDBC/R2DBC
Micronaut Data 3.1
- Kotlin's coroutines support. New repository interface CoroutineCrudRepository
- Support for AttributeConverter
- R2DBC upgraded to Arabba-SR11
- JPA Criteria specifications
Micronaut Data 3.0
- Micronaut 3.0
- Hibernate optimizations
Micronaut Data 2.5.0
- Repositories now support batch insert/update/delete even with a custom query
- Rewritten entity mapper allows more complex mapping for JDBC/R2DBC entities
- Support for @JoinTable and @JoinColumn annotations
Micronaut Data 2.4.0
- Full support for immutable entities. You can use Java 16 records or Kotlin immutable data classes
- Integrated support for R2DBC; the data-r2dbc module is now a part of the data project and shares the same code with JDBC
- Optimistic locking for JDBC/R2DBC
1.2 Breaking Changes
This section documents breaking changes between Micronaut Data versions.
4.0.0
Repositories validation
Default repository interfaces no longer have Jakarta Validation annotations to validate the entity and the ID. To add the validation, annotate the repository’s generic type argument with Jakarta Validation annotations:
@Repository
public interface BookRepository extends CrudRepository<@jakarta.validation.Valid Book, @jakarta.validation.constraints.NotNull Long> {
}
Repositories now return List
The findAll return type has changed to List instead of Iterable.
Hibernate transaction manager
The signature of the Hibernate transaction manager has changed to include org.hibernate.Session instead of a data source connection:
@Inject
public TransactionOperations<org.hibernate.Session> hibernateTransactionOperations;
Transaction manager
Micronaut Data 4 comes with rewritten transaction propagation and management, replacing the previous implementation forked from the Spring Framework.
The new implementation adds connection management, allowing a connection to be shared between multiple repositories and services without an open transaction. A new method, TransactionOperations#findTransactionStatus, supports extracting the current transaction status. The TransactionStatus now includes information about the connection and the transaction definition.
Async and Reactive repositories
Async and reactive repositories no longer throw EmptyResultException if the entity is not found.
1.3 Release History
For this project, you can find a list of releases (with release notes) on the project's GitHub releases page.
2 Build Configuration
Since Micronaut Data is a build time tool, it will not work correctly unless your build is configured correctly.
There are two important aspects to Micronaut Data:
- The build time annotation processors
- The runtime APIs
The build time processor is added by including the micronaut-data-processor module in your annotation processor configuration in either Gradle or Maven:
annotationProcessor("io.micronaut.data:micronaut-data-processor")
<annotationProcessorPaths>
<path>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-processor</artifactId>
</path>
</annotationProcessorPaths>
For document databases like MongoDB or Azure Cosmos DB you need the following dependency instead of the one above:
annotationProcessor("io.micronaut.data:micronaut-data-document-processor")
<annotationProcessorPaths>
<path>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-document-processor</artifactId>
</path>
</annotationProcessorPaths>
For Kotlin, add the micronaut-data-processor or micronaut-data-document-processor dependency in kapt or ksp scope, and for Groovy add micronaut-data-processor or micronaut-data-document-processor in compileOnly scope.
|
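For example, a Gradle sketch of those scopes (coordinates taken from the snippets above; substitute micronaut-data-document-processor when targeting document databases):
ksp("io.micronaut.data:micronaut-data-processor")        // Kotlin projects using KSP
// or
kapt("io.micronaut.data:micronaut-data-processor")       // Kotlin projects using kapt
// or
compileOnly("io.micronaut.data:micronaut-data-processor") // Groovy projects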
You can use Micronaut Launch to create a pre-configured project.
Micronaut Data and Lombok
If you intend to use Lombok with Micronaut Data then you must place the Lombok annotation processor before the Micronaut processors in your build configuration since Micronaut needs to see the mutations to the AST that Lombok applies.
Lombok plugins like the Gradle plugin io.franzbecker.gradle-lombok are not supported as they place the annotation processors in an incorrect order.
|
3 Shared Concepts
The following sections describe shared concepts of all Micronaut Data modules:
- Repositories - use existing or create a custom repository
- Querying - define a repository method to access your data
- Data access - data access operations
- Transactions - transactional access support
3.1 Repository Interfaces
Micronaut Data repositories are defined as interfaces that are annotated with the @Repository annotation.
The @Repository annotation accepts an optional string value which represents the name of the connection or datasource in a multiple datasource scenario. By default, Micronaut Data will look for the default datasource.
It’s possible to annotate the repository injection point with @Repository and set the data source name. Note that you cannot inject generic repositories, each repository needs to be bound to an entity.
The entity to treat as the root entity for the purposes of querying is established either from the method signature or from the generic type parameter specified to the GenericRepository interface.
If no root entity can be established then a compilation error will occur.
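As an illustrative sketch (not taken from the guide), the root entity below is established from the GenericRepository type arguments; the Person entity and findByName method are assumptions:
package example;

import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.GenericRepository;
import java.util.List;

@Repository
public interface PersonRepository extends GenericRepository<Person, Long> { // root entity = Person, ID = Long

    // implemented at compilation time against the Person root entity
    List<Person> findByName(String name);
}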
The following table summarizes the repository interfaces that come with Micronaut Data:
Interface | Description
GenericRepository | A root interface that features no methods but defines the entity type and ID type as generic arguments
CrudRepository | Extends GenericRepository and adds methods to perform CRUD
JpaRepository | Extends CrudRepository and adds JPA specific methods like merge
PageableRepository | Extends CrudRepository and adds methods for pagination
AsyncCrudRepository | Extends GenericRepository and adds methods for asynchronous CRUD execution
AsyncPageableRepository | Extends AsyncCrudRepository and adds methods for pagination
ReactiveStreamsCrudRepository | Extends GenericRepository and adds CRUD methods that return Publisher
ReactiveStreamsPageableRepository | Extends ReactiveStreamsCrudRepository and adds methods for pagination
ReactorCrudRepository | Extends ReactiveStreamsCrudRepository and is using Reactor return types
ReactorPageableRepository | Extends ReactorCrudRepository and adds methods for pagination
RxJavaCrudRepository | Extends GenericRepository and adds CRUD methods that return RxJava 2 types
CoroutineCrudRepository | Extends GenericRepository and is using Kotlin coroutines for reactive CRUD operations
CoroutinePageableCrudRepository | Extends CoroutineCrudRepository and adds methods for pagination
Note that in addition to interfaces you can also define repositories as abstract classes:
package example;
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.CrudRepository;
import jakarta.persistence.EntityManager;
import java.util.List;
@Repository
public abstract class AbstractBookRepository implements CrudRepository<Book, Long> {
private final EntityManager entityManager;
public AbstractBookRepository(EntityManager entityManager) {
this.entityManager = entityManager;
}
public List<Book> findByTitle(String title) {
return entityManager.createQuery("FROM Book AS book WHERE book.title = :title", Book.class)
.setParameter("title", title)
.getResultList();
}
}
package example
import io.micronaut.data.annotation.Repository
import io.micronaut.data.repository.CrudRepository
import jakarta.persistence.EntityManager
@Repository
abstract class AbstractBookRepository implements CrudRepository<Book, Long> {
private final EntityManager entityManager
AbstractBookRepository(EntityManager entityManager) {
this.entityManager = entityManager
}
List<Book> findByTitle(String title) {
return entityManager.createQuery("FROM Book AS book WHERE book.title = :title", Book)
.setParameter("title", title)
.getResultList()
}
}
package example
import io.micronaut.data.annotation.Repository
import io.micronaut.data.repository.CrudRepository
import jakarta.persistence.EntityManager
@Repository
abstract class AbstractBookRepository(private val entityManager: EntityManager) : CrudRepository<Book, Long> {
fun findByTitle(title: String): List<Book> {
return entityManager.createQuery("FROM Book AS book WHERE book.title = :title", Book::class.java)
.setParameter("title", title)
.resultList
}
}
As you can see from the above example, using abstract classes can be useful as it allows you to combine custom code that interacts with a repository interface implemented automatically by Micronaut Data.
3.2 Validation
Repositories can have the entity and the ID values validated. To add the validation, annotate the repository’s generic type argument with Jakarta Validation annotations:
package example;
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.repository.CrudRepository;
import jakarta.validation.Valid;
import jakarta.validation.constraints.Min;
@Repository
public interface AccountRepository extends CrudRepository<@Valid Account, @Min(0) Long> {
}
package example
import io.micronaut.data.annotation.Repository
import io.micronaut.data.repository.CrudRepository
@Repository
interface AccountRepository extends CrudRepository<@jakarta.validation.Valid Account, @jakarta.validation.constraints.Min(0) Long> {
}
package example
import io.micronaut.data.annotation.Repository
import io.micronaut.data.repository.CrudRepository
@Repository
interface AccountRepository : CrudRepository<@jakarta.validation.Valid Account, @jakarta.validation.constraints.Min(0) Long>
3.3 Writing Queries
The implementation of querying in Micronaut Data is based on the dynamic finders in GORM.
A pattern matching approach is taken at compilation time. The general pattern of query methods is a query stem, followed by an optional projection, the By keyword with one or more criteria, and optional ordering. The most common query stem is find, but you can also use search, query, get, read or retrieve.
The projection and ordering parts of the query pattern are optional (more on those later). The following snippet demonstrates three simple queries that use different stems but perform the same query:
Book findByTitle(String title);
Book getByTitle(String title);
Book retrieveByTitle(String title);
Book findByTitle(String title)
Book getByTitle(String title)
Book retrieveByTitle(String title)
fun findByTitle(title: String): Book
fun getByTitle(title: String): Book
fun retrieveByTitle(title: String): Book
The above examples return a single instance of an entity. The supported return types are described in the following table:
Return Type |
Description |
|
If |
|
A |
|
A Java 8 |
|
An optional value |
|
An instance of Page for pagination. |
|
An instance of Slice for pagination. |
|
A |
|
A Reactive Streams compatible type |
|
A Kotlin reactive type. Requires |
Primitive/Simple Types |
In the case of projections primitive/basic types can be returned |
Methods with Stream<Book> results need to be used with a 'try-with-resources' block and should be executed within a transaction.
|
In addition to the standard findBy* pattern, a few other patterns exist that have special return type requirements.
The following table summarizes the possible alternative patterns, behaviour and expected return types:
Method Prefix |
Supported Return Types |
Description |
|
An entity or any common |
Find one or many records matching criteria |
|
An entity or any common |
Find one or many records matching properties (every method parameter should be named after the property it matches) |
|
A primitive number or an instance of Number |
Counts the number of records matching criteria |
|
A primitive number or an instance of Number |
Counts the number of records matching properties |
|
A primitive or wrapper |
Checks whether a record exists matching criteria |
|
A primitive or wrapper |
Checks whether a record exists matching properties |
|
A |
Inserts one or many instances |
|
A |
Delete one or many entries |
|
A |
Batch delete matching criteria |
|
A |
Batch delete where parameter/s represents an entity’s property (must be named the same) |
|
A |
Update one or many entities |
|
A |
Batch update by properties |
|
A |
Batch update where parameter/s represents an entity’s property (must be named the same) |
|
An entity type or any types that can be returned by the query |
Insert with a returning clause (Might not be supported by a DIALECT or an implementation) |
|
An entity type or any types that can be returned by the query |
Update with a returning clause (Might not be supported by a DIALECT or an implementation) |
|
An entity type or any types that can be returned by the query |
Delete with a returning clause (Might not be supported by a DIALECT or an implementation) |
Note that every method prefix can have a One or All suffix: findOneByTitle, countAllByTitle, etc.
More details about the batch update variants of these methods are covered in the Data Updates section. |
Finally, as an alternative to the By syntax you can also define simple finders that use the parameter names to match the properties to query. This syntax is less flexible, but is more readable in certain circumstances. For example, the following can be used as an alternative to findByTitle:
Book find(String title);
@Executable
Book find(String title)
@Executable
fun find(title: String): Book
Note that in this case if the title parameter does not exist as a property in the entity being queried, or the type does not match up, a compilation error will occur. Also, you can specify more than one parameter to perform a logical AND.
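For instance, a sketch of a two-parameter simple finder (assuming the Book entity has title and pages properties), which matches both values with a logical AND:
Book find(String title, int pages);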
3.3.1 Query Criteria
The previous example presented a simple findByTitle query which searches for all Book instances that have a title property equal to the given value.
This is the simplest type of query supported by Micronaut Data, but you can use an optional suffix on the property name to modify the type of criterion to apply.
For example, the following query pattern will execute a query that finds only Book instances that have a page count greater than the given value:
List<Book> findByPagesGreaterThan(int pageCount);
List<Book> findByPagesGreaterThan(int pageCount)
fun findByPagesGreaterThan(pageCount: Int): List<Book>
The following table summarizes the possible expressions and behaviour:
Example Suffix |
Description |
Sample |
|
Find results where the property is after the given value |
|
|
Find results where the property is before the given value |
|
|
Find results where the property contains the given value |
|
|
Find results where the property starts with the given value |
|
|
Find results where the property ends with the given value |
|
|
Find results equal to the given value |
|
|
Find results not equal to the given value |
|
|
Find results where the property is greater than the given value |
|
|
Find results where the property is greater than or equal to the given value |
|
|
Find results where the property is less than the given value |
|
|
Find results where the property is less than or equal to the given value |
|
|
Finds string values "like" the given expression |
|
|
Case insensitive "like" query |
|
|
Find results where the property is contained within the given list |
|
|
Find results where the property is between the given values |
|
|
Finds results where the property is null |
|
|
Finds results where the property is not null |
|
|
Finds results where the property is empty or null |
|
|
Finds results where the property is not empty or null |
|
|
Finds results where the property is true |
|
|
Finds results where the property is false |
|
|
Finds results where the property, which is an array or list, contains the given element. Supported only by Micronaut Data MongoDB and Azure Cosmos DB. |
|
Any of these criterion expressions can be negated by adding the word Not before the expression (for example NotInList ).
|
You can combine multiple criteria by separating them with the And or Or logical operators. For example:
List<Book> findByPagesGreaterThanOrTitleLike(int pageCount, String title);
List<Book> findByPagesGreaterThanOrTitleLike(int pageCount, String title)
fun findByPagesGreaterThanOrTitleLike(pageCount: Int, title: String): List<Book>
The above example uses Or to express a greater-than condition and a like condition.
You can also negate any of the aforementioned expressions by adding Not before the name of the expression (for example NotTrue or NotContain).
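As a sketch, a negated criterion using the NotInList form mentioned above (Book entity assumed from the earlier examples):
List<Book> findByTitleNotInList(List<String> titles);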
3.3.2 Pagination
Typically, when returning multiple records you need some control over paging the data. Micronaut Data includes the ability to specify pagination requirements with the Pageable type (inspired by GORM’s PagedResultList and Spring Data’s Pageable).
In addition, methods can return a Page object which includes the execution of an additional query to obtain the total number of results for a given query.
The following are some example signatures:
List<Book> findByPagesGreaterThan(int pageCount, Pageable pageable);
Page<Book> findByTitleLike(String title, Pageable pageable);
Slice<Book> list(Pageable pageable);
List<Book> findByPagesGreaterThan(int pageCount, Pageable pageable)
Page<Book> findByTitleLike(String title, Pageable pageable)
Slice<Book> list(Pageable pageable)
fun findByPagesGreaterThan(pageCount: Int, pageable: Pageable): List<Book>
fun findByTitleLike(title: String, pageable: Pageable): Page<Book>
fun list(pageable: Pageable): Slice<Book>
And some test data:
bookRepository.saveAll(Arrays.asList(new Book("The Stand", 1000), new Book("The Shining", 600),
new Book("The Power of the Dog", 500), new Book("The Border", 700),
new Book("Along Came a Spider", 300), new Book("Pet Cemetery", 400), new Book("A Game of Thrones", 900),
new Book("A Clash of Kings", 1100)));
bookRepository.saveAll(Arrays.asList(
new Book("The Stand", 1000),
new Book("The Shining", 600),
new Book("The Power of the Dog", 500),
new Book("The Border", 700),
new Book("Along Came a Spider", 300),
new Book("Pet Cemetery", 400),
new Book("A Game of Thrones", 900),
new Book("A Clash of Kings", 1100)
))
bookRepository.saveAll(Arrays.asList(
Book(0,"The Stand", 1000),
Book(0,"The Shining", 600),
Book(0,"The Power of the Dog", 500),
Book(0,"The Border", 700),
Book(0,"Along Came a Spider", 300),
Book(0,"Pet Cemetery", 400),
Book(0,"A Game of Thrones", 900),
Book(0,"A Clash of Kings", 1100)
))
You can execute queries and return paginated data using the from method of Pageable and specifying an appropriate return type:
Slice<Book> slice = bookRepository.list(Pageable.from(0, 3));
List<Book> resultList = bookRepository.findByPagesGreaterThan(500, Pageable.from(0, 3));
Page<Book> page = bookRepository.findByTitleLike("The%", Pageable.from(0, 3));
Slice<Book> slice = bookRepository.list(Pageable.from(0, 3))
List<Book> resultList =
bookRepository.findByPagesGreaterThan(500, Pageable.from(0, 3))
Page<Book> page = bookRepository.findByTitleLike("The%", Pageable.from(0, 3))
val slice = bookRepository.list(Pageable.from(0, 3))
val resultList = bookRepository.findByPagesGreaterThan(500, Pageable.from(0, 3))
val page = bookRepository.findByTitleLike("The%", Pageable.from(0, 3))
The from method accepts index and size arguments, which are the page number to begin from and the number of records to return per page.
A Slice is the same as a Page but results in one less query as it excludes the total number of pages calculation.
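For illustration, a sketch of consuming the returned Page (the accessors come from the Page and Slice APIs; the follow-up request is illustrative only):
Page<Book> page = bookRepository.findByTitleLike("The%", Pageable.from(0, 3));
List<Book> books = page.getContent();  // the records in this page
long total = page.getTotalSize();      // total matching records (from the extra count query)
if (!books.isEmpty()) {
    // request the following page using the pageable derived from the current one
    Page<Book> next = bookRepository.findByTitleLike("The%", page.nextPageable());
}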
3.3.3 Cursored Pagination
Micronaut Data includes the ability to specify cursored pagination with the CursoredPageable type. Cursored page methods return a CursoredPage type (inspired by CursoredPage in Jakarta Data).
Cursored pagination is currently only supported with Micronaut Data JDBC and R2DBC. |
The following are some example signatures:
CursoredPage<Book> find(CursoredPageable pageable); // (1)
CursoredPage<Book> findByPagesBetween(int minPageCount, int maxPageCount, Pageable pageable); // (2)
Page<Book> findByTitleStartingWith(String title, Pageable pageable); // (3)
CursoredPage<Book> find(CursoredPageable pageable) // (1)
CursoredPage<Book> findByPagesBetween(int minPageCount, int maxPageCount, Pageable pageable) // (2)
Page<Book> findByTitleStartingWith(String title, Pageable pageable) // (3)
fun find(pageable: CursoredPageable): CursoredPage<Book> // (1)
fun findByPagesBetween(minPageCount: Int, maxPageCount: Int, pageable: Pageable): CursoredPage<Book> // (2)
fun findByTitleStartingWith(title: String, pageable: Pageable): Page<Book> // (3)
1 | The signature defines a CursoredPageable parameter and CursoredPage return type. |
2 | The method signature defines a CursoredPage return type, therefore the method will throw an error if the request is not for the first page or is not cursored. |
3 | The method will return a CursoredPage only when a CursoredPageable is supplied. |
Therefore, you can use the repository methods to retrieve data with cursored pagination using the following queries:
CursoredPage<Book> page = // (1)
bookRepository.find(CursoredPageable.from(5, Sort.of(Order.asc("title"))));
CursoredPage<Book> page2 = bookRepository.find(page.nextPageable()); // (2)
CursoredPage<Book> pageByPagesBetween = // (3)
bookRepository.findByPagesBetween(400, 700, Pageable.from(0, 3));
Page<Book> pageByTitleStarts = // (4)
bookRepository.findByTitleStartingWith("The", CursoredPageable.from( 3, Sort.unsorted()));
CursoredPage<Book> page = // (1)
bookRepository.find(CursoredPageable.from(5, Sort.of(Sort.Order.asc("title"))))
CursoredPage<Book> page2 = bookRepository.find(page.nextPageable()) // (2)
CursoredPage<Book> pageByPagesBetween = // (3)
bookRepository.findByPagesBetween(400, 700, Pageable.from(0, 3))
Page<Book> pageByTitleStarts = // (4)
bookRepository.findByTitleStartingWith("The", CursoredPageable.from( 3, Sort.unsorted()))
val page = // (1)
bookRepository.find(CursoredPageable.from(5, Sort.of(Sort.Order.asc("title"))))
val page2 = bookRepository.find(page.nextPageable()) // (2)
val pageByPagesBetween = // (3)
bookRepository.findByPagesBetween(400, 700, Pageable.from(0, 3))
val pageByTitleStarts = // (4)
bookRepository.findByTitleStartingWith("The", CursoredPageable.from(3, Sort.unsorted()))
1 | Create a cursored pageable with a desired size and sorting and get a cursored page. |
2 | Get the next cursored pageable by calling CursoredPage.getNextPageable() . |
3 | Request first cursored page. |
4 | Supply a CursoredPageable to the repository method and a CursoredPage will be returned. |
The cursor of pagination is based on the supplied sorting. If the supplied Sort in pageable does not produce a unique sorting, Micronaut Data internally will additionally sort by the identity column and extend the cursor with the column value to make sure pagination works correctly. |
3.3.4 Ordering
You can control the ordering of results by appending an OrderBy* expression to the end of the method name:
List<Book> listOrderByTitle();
List<Book> listOrderByTitleDesc();
List<Book> listOrderByTitle()
List<Book> listOrderByTitleDesc()
fun listOrderByTitle(): List<Book>
fun listOrderByTitleDesc(): List<Book>
The OrderBy* expression refers to the property name to order by and can optionally be appended with either Asc or Desc to control ascending or descending order. Multiple conditions can be used by joining them with And, for example findByTypeOrderByNameAndDate.
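For example, a sketch that combines a criterion with ordering (Book entity assumed from the earlier examples):
List<Book> findByPagesGreaterThanOrderByTitleDesc(int pageCount);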
3.3.5 Query Projections
Frequently, rather than retrieving all the data for a particular entity, you may only want a single property or association of an entity or to perform some kind of computation and obtain just that result. This is where query projections come in.
The simplest form of projection is to retrieve a property or association. For example:
List<String> findTitleByPagesGreaterThan(int pageCount);
List<String> findTitleByPagesGreaterThan(int pageCount)
fun findTitleByPagesGreaterThan(pageCount: Int): List<String>
In the above example the findTitleByPagesGreaterThan method resolves the title property of the Book entity and returns the data as a List of String.
If the projected property type and the return generic type do not match up then Micronaut Data will fail to compile the method. |
You can also use projections on association paths. For example, if an author association were present you could write findAuthorNameByPagesGreaterThan to retrieve the names of all the authors.
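A sketch of such an association-path projection, assuming the Book entity has an author association with a name property:
List<String> findAuthorNameByPagesGreaterThan(int pageCount);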
In addition to this, Micronaut Data also supports projection expressions. The following table summarizes the possible expressions with an example and description:
Expression |
Example |
Description |
|
|
Counts the values |
|
|
Counts the distinct values |
|
|
Finds the distinct property values |
|
|
Finds the maximum property value |
|
|
Finds the minimum property value |
|
|
Finds the sum of all the property values |
|
|
Finds the average of all the property values |
You can also use top or first to limit the results returned (as a simple alternative to pagination):
List<Book> findTop3ByTitleLike(String title);
List<Book> findTop3ByTitleLike(String title)
fun findTop3ByTitleLike(title: String): List<Book>
The above query will return the first 3 results for the given query expression.
3.3.6 DTO Projections
Micronaut Data supports reflection-free Data Transfer Object (DTO) projections if the return type is annotated with @Introspected.
For example, if you wanted to project on an entity called Book you could define a DTO as follows:
package example;
import io.micronaut.core.annotation.Introspected;
@Introspected
public class BookDTO {
private String title;
private int pages;
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public int getPages() {
return pages;
}
public void setPages(int pages) {
this.pages = pages;
}
}
package example
import io.micronaut.core.annotation.Introspected
@Introspected
class BookDTO {
String title
int pages
}
package example
import io.micronaut.core.annotation.Introspected
@Introspected
data class BookDTO(
var title: String,
var pages: Int
)
The DTO should include properties that match the property names you wish to project on (in this case title and pages). If any properties do not match, a compilation error will occur.
You can then use the DTO object as return type in query methods:
BookDTO findOne(String title);
BookDTO findOne(String title);
fun findOne(title: String): BookDTO
Micronaut Data will optimize the query to only select the necessary properties from the database.
You can use @NamingStrategy annotation to override the default naming strategy. |
3.3.7 Explicit Queries
If you want to have more control over the JPA-QL query then you can use the @Query annotation to specify an explicit query:
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
List<Book> listBooks(String t);
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
List<Book> listBooks(String t)
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
fun listBooks(t: String): List<Book>
You specify named parameters using a colon (:) followed by the name, and these must match a parameter specified to the method, otherwise a compilation error will occur. Use a backslash (\:) to escape a colon that is not a parameter specification.
Currently, Micronaut Data does not parse the JPA-QL AST and perform any further type checking hence greater care should be taken when using explicit queries. This may change in a future version of Micronaut Data. |
Note that if the method returns a Page for pagination then you must additionally specify a query that performs the equivalent count using the countQuery member of the @Query annotation.
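A sketch of such a paginated explicit query with its matching count query (JPA-QL; the Book entity and parameter name are assumptions based on the earlier examples):
@Query(value = "FROM Book b WHERE b.title LIKE :title",
       countQuery = "SELECT count(b) FROM Book b WHERE b.title LIKE :title")
Page<Book> findBooks(String title, Pageable pageable);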
3.3.8 Modifying Queries with @Where
You can use the @Where annotation to modify the compile time generated query with additional query criteria.
A common use case for this is to implement soft delete. For example, consider the following User entity which declares an enabled property:
package example;
import io.micronaut.data.annotation.*;
import io.micronaut.data.model.naming.NamingStrategies;
@MappedEntity(namingStrategy = NamingStrategies.Raw.class)
@Where("@.userEnabled = true") // (1)
public class User {
@GeneratedValue
@Id
private Long id;
private String userName;
private boolean userEnabled = true; // (2)
public User(String userName) {
this.userName = userName;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getUserName() {
return userName;
}
public void setUserName(String userName) {
this.userName = userName;
}
public boolean isUserEnabled() {
return userEnabled;
}
public void setUserEnabled(boolean userEnabled) {
this.userEnabled = userEnabled;
}
}
package example
import groovy.transform.EqualsAndHashCode
import io.micronaut.data.annotation.*
@MappedEntity
@Where("@.enabled = true") // (1)
@EqualsAndHashCode(includes = "name")
class User {
@GeneratedValue
@Id
Long id
String name
boolean enabled = true // (2)
User(String name) {
this.name = name
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Where
@MappedEntity
@Where("@.enabled = true") // (1)
data class User(
@GeneratedValue
@field:Id
var id: Long,
val name: String,
val enabled: Boolean // (2)
)
1 | The @Where annotation is used to declare that all queries should include enabled = true and @ is a placeholder for the query’s alias. |
2 | An enabled property exists on the entity |
You can then easily modify the delete operations to instead issue an update. For example, consider the following repository implementation:
package example;
import io.micronaut.core.annotation.NonNull;
import io.micronaut.data.annotation.Query;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;
import jakarta.validation.constraints.NotNull;
import java.util.List;
@JdbcRepository(dialect = Dialect.H2)
public interface UserRepository extends CrudRepository<User, Long> { // (1)
@Override
@Query("UPDATE user SET userEnabled = false WHERE id = :id") // (2)
void deleteById(@NonNull @NotNull Long id);
@Query("SELECT * FROM user WHERE userEnabled = false") // (3)
List<User> findDisabled();
}
package example
import io.micronaut.core.annotation.NonNull
import io.micronaut.data.annotation.Query
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
import jakarta.validation.constraints.NotNull
@JdbcRepository(dialect = Dialect.H2)
interface UserRepository extends CrudRepository<User, Long> { // (1)
@Override
@Query("UPDATE user SET enabled = false WHERE id = :id") // (2)
void deleteById(@NonNull @NotNull Long id)
@Query("SELECT * FROM user WHERE enabled = false") // (3)
List<User> findDisabled()
}
package example
import io.micronaut.data.annotation.Query
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
@JdbcRepository(dialect = Dialect.H2)
interface UserRepository : CrudRepository<User, Long> { // (1)
@Query("UPDATE user SET enabled = false WHERE id = :id") // (2)
override fun deleteById(id: Long)
@Query("SELECT * FROM user WHERE enabled = false") // (3)
fun findDisabled(): List<User>
}
1 | The interface extends CrudRepository |
2 | The deleteById is overridden to perform a soft delete by setting enabled to false. |
3 | An additional method is added to return disabled entities if needed using an explicit query. |
All other queries performed on the entity will include enabled = true in the query statement.
It is also possible to override an entity's @Where annotation by annotating a repository method with it. The findDisabled example would then be:
package example;
import io.micronaut.data.annotation.Where;
import java.util.List;
public interface UserRepositoryWithWhere {
// ...
@Where("@.enabled = false")
List<User> findDisabled();
}
package example
import io.micronaut.data.annotation.Where
interface UserRepositoryWithWhere {
// ...
@Where("@.enabled = false")
List<User> findDisabled()
}
package example
import io.micronaut.data.annotation.Where
interface UserRepositoryWithWhere {
// ...
@Where("@.enabled = false")
fun findDisabled(): List<User>
}
If you want to remove a @Where criteria from a particular repository method, you can use @IgnoreWhere.
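As a sketch, a repository method that ignores the entity-level @Where filter (a parameterless findAll finder simply lists every record):
@IgnoreWhere
List<User> findAll(); // returns all users, including those with enabled = false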
3.3.9 Asynchronous Queries
Micronaut Data supports asynchronous query execution by defining methods that return either CompletionStage, CompletableFuture or Future.
In the case of asynchronous execution and if the backing implementation is blocking, Micronaut Data will use the Configured I/O thread pool to schedule the query execution on a different thread.
The following is an example of a couple of asynchronous methods:
@Repository
public interface ProductRepository extends CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join("manufacturer")
CompletableFuture<Product> findByNameContains(String str);
CompletableFuture<Long> countByManufacturerName(String name);
}
@Repository
abstract class ProductRepository implements CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join("manufacturer")
abstract CompletableFuture<Product> findByNameContains(String str)
abstract CompletableFuture<Long> countByManufacturerName(String name)
}
@Repository
interface ProductRepository : CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join("manufacturer")
fun findByNameContains(str: String): CompletableFuture<Product>
fun countByManufacturerName(name: String): CompletableFuture<Long>
}
The above example defines two methods that use CompletableFuture as the return type, the API of which you can use to compose query operations:
long total = productRepository.findByNameContains("o")
.thenCompose(product -> productRepository.countByManufacturerName(product.getManufacturer().getName()))
.get(1000, TimeUnit.SECONDS);
Assertions.assertEquals(
2,
total
);
when:"A result is retrieved using async composition"
long total = productRepository.findByNameContains("o")
.thenCompose { product -> productRepository.countByManufacturerName(product.manufacturer.name) }
.get(1000, TimeUnit.SECONDS)
then:"the result is correct"
total == 2
val total = productRepository.findByNameContains("o")
.thenCompose { product -> productRepository.countByManufacturerName(product.manufacturer.name) }
.get(1000, TimeUnit.SECONDS)
assertEquals(
2,
total
)
In the case of JPA each operation will run with its own transaction and session, hence care needs to be taken to fetch the correct data and avoid detached objects. In addition, for more complex operations it may be more efficient to write custom code that uses a single session. |
3.3.10 Reactive Queries
Micronaut Data supports reactive query execution by defining methods that return either a Publisher, Reactor or RxJava 2 type. If you use Kotlin, you can use coroutines and Flow.
In the case of reactive execution and if the backing implementation is blocking, Micronaut Data will use the Configured I/O thread pool to schedule the query execution on a different thread.
If the backing implementation natively supports reactive types at the driver level then the I/O thread pool is not used, and instead it is assumed the driver will handle the query in a non-blocking manner.
The following is an example of a couple of reactive methods:
@Join("manufacturer")
Maybe<Product> queryByNameContains(String str);
Single<Long> countDistinctByManufacturerName(String name);
@Join("manufacturer")
abstract Maybe<Product> queryByNameContains(String str)
abstract Single<Long> countDistinctByManufacturerName(String name)
@Join("manufacturer")
fun queryByNameContains(str: String): Maybe<Product>
fun countDistinctByManufacturerName(name: String): Single<Long>
The above example defines two methods that use reactive return types from RxJava 2, the API of which you can use to compose query operations:
long total = productRepository.queryByNameContains("o")
.flatMap(product -> productRepository.countDistinctByManufacturerName(product.getManufacturer().getName())
.toMaybe())
.defaultIfEmpty(0L)
.blockingGet();
Assertions.assertEquals(
2,
total
);
when:"A result is retrieved with reactive composition"
long total = productRepository.queryByNameContains("o")
.flatMap { product -> productRepository.countDistinctByManufacturerName(product.manufacturer.name).toMaybe() }
.defaultIfEmpty(0L)
.blockingGet()
then:"The result is correct"
total == 2
val total = productRepository.queryByNameContains("o")
.flatMap { product ->
productRepository.countDistinctByManufacturerName(product.manufacturer.name)
.toMaybe()
}
.defaultIfEmpty(0L)
.blockingGet()
assertEquals(
2,
total
)
In the case of JPA each operation will run with its own transaction and session, hence care needs to be taken to fetch the correct data and avoid detached objects.
In addition, for more complex operations it may be more efficient to write custom code that uses a single session.
3.4 Accessing data
There are various ways to perform read/write operations with Micronaut Data interfaces:
- extend one of the builtin repository interfaces
- simply define a new method with a criteria query naming convention
- define a new method with a custom query using the @Query annotation
3.4.1 Inserting
To insert data the simplest form is to define a method that accepts the type of the entity, the same way as the CrudRepository interface does:
@Override
Book save(Book entity);
Book save(Book entity)
fun save(entity: Book): Book
The method must accept a single argument that is the entity and must start with either save, persist, insert or store. To persist multiple entities, the method needs to accept a java.lang.Iterable of the entity.
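For example, a sketch of a batch persist method (the saveBooks name is arbitrary; any of the save/persist/insert/store stems works):
List<Book> saveBooks(Iterable<Book> books);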
Alternatively, you can also define a method with parameter names that match the properties of the entity:
Book persist(String title, int pages);
Book persist(String title, int pages)
fun persist(title: String, pages: Int): Book
In this case you must specify parameters for all properties other than those that are declared as @Nullable or as a @GeneratedValue; if you do not, a compilation error will occur.
The insert method can have a custom query defined by @Query annotation:
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
@ParameterExpression(name = "title", expression = "#{book.title + 'ABC'}")
@ParameterExpression(name = "pages", expression = "#{book.pages}")
void insertCustomExp(Book book);
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
void insertOne(Book book);
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
void insertMany(Iterable<Book> books);
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
void insert(String title, int pages)
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
void insertOne(Book entity)
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
void insertMany(Iterable<Book> entities)
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
fun insert(title: String, pages: Int)
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
fun insertOne(book: Book)
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
fun insertMany(books: Iterable<Book>)
It is not possible to use the entity as the return type in partial updates because it would require an additional select to retrieve the additional information. A number type (int, long, etc.) can be returned to indicate the number of rows updated. The updated row count should be checked in most scenarios to ensure the update actually affected the row. |
3.4.2 Updating
To update an entity you can once again pass the entity to the update method:
@Override
Book update(Book newBook);
Book update(Book newBook)
fun update(newBook: Book): Book
However, generally it is more efficient to use batch updates to only update the properties that have actually changed.
There are a couple of ways to achieve batch updates. One way is to define a method that features an argument annotated with @Id and starts with the stem update:
void update(@Id Long id, int pages);
void update(@Id Long id, int pages)
fun update(@Id id: Long?, pages: Int)
In this case the ID of the entity will be used to query and perform an update on the entity with all the remaining arguments (in this case pages). If an argument does not match an existing property of the entity, a compilation error will occur.
Another alternative is to use updateBy* (the method should again return void or a Number indicating the number of records that were updated):
void updateByTitle(String title, int pages);
void updateByTitle(String title, int pages)
fun updateByTitle(title: String, pages: Int)
In this case you can use any finder expression to query on arbitrary properties and any remaining arguments that don’t form part of the query expression are used for the update. Once again if one of the remaining arguments does not match an existing property of the entity a compilation error will occur.
You can also specify a custom query for the update methods:
@Query("UPDATE book SET title = :title where id = :id")
void updateOne(Book book);
@Query("UPDATE book SET title = :title where id = :id")
void updateMany(Iterable<Book> books);
@Query("UPDATE book SET title = :title where id = :id")
void updateOne(Book book)
@Query("UPDATE book SET title = :title where id = :id")
void updateMany(Iterable<Book> books)
@Query("UPDATE book SET title = :title where id = :id")
fun updateOne(book: Book)
@Query("UPDATE book SET title = :title where id = :id")
fun updateMany(books: Iterable<Book>)
@Query("UPDATE book SET title = :title where id = :id")
void updateOne(Book book);
@Query("UPDATE book SET title = :title where id = :id")
void updateMany(Iterable<Book> books);
@Query("UPDATE book SET title = :title where id = :id")
void updateOne(Book book)
@Query("UPDATE book SET title = :title where id = :id")
void updateMany(Iterable<Book> books)
@Query("UPDATE book SET title = :title where id = :id")
fun updateOne(book: Book)
@Query("UPDATE book SET title = :title where id = :id")
fun updateMany(books: Iterable<Book>)
3.4.3 Deleting
Deleting can be performed in a number of ways. To delete everything (use with care!) you can use deleteAll:
@Override
void deleteAll();
void deleteAll()
override fun deleteAll()
deleteAll does not cascade. Delete all foreign key references first or use delete on all individual items.
|
To delete by ID or by the value of a property you can specify a parameter that matches a property of an entity:
void delete(String title);
void delete(String title)
fun delete(title: String)
Finally, you can also use the deleteBy* pattern (the method must start with delete, remove, erase or eliminate) and any finder expression, for example:
void deleteByTitleLike(String title);
void deleteByTitleLike(String title)
fun deleteByTitleLike(title: String)
You can also specify a custom query for a delete method:
@Query("DELETE FROM Book WHERE title = :title")
void deleteOne(Book book);
@Query("DELETE FROM Book WHERE title = :title")
void deleteMany(Iterable<Book> books);
@Query("DELETE FROM Book WHERE title = :title")
void deleteOne(Book book)
@Query("DELETE FROM Book WHERE title = :title")
void deleteMany(Iterable<Book> books)
@Query("DELETE FROM Book WHERE title = :title")
fun deleteOne(book: Book)
@Query("DELETE FROM Book WHERE title = :title")
fun deleteMany(books: Iterable<Book>)
3.4.4 Entity Timestamps
It is common to want to add a field that represents the time when an entity was first persisted and the time when it was last updated.
You can annotate a date-typed property of an entity with @DateCreated, which will be automatically populated when saving entities and indicates the date a record was created.
You can also annotate a date-typed property of an entity with @DateUpdated, which will be automatically populated whenever the entity is updated either via the persist method or when using one of the batch update methods of Micronaut Data.
If you update the entity with an external SQL statement or custom logic you will need to update the underlying DateUpdated column manually.
|
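A sketch of an entity using these annotations (the entity and property names are purely illustrative):
package example;

import io.micronaut.data.annotation.DateCreated;
import io.micronaut.data.annotation.DateUpdated;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import java.time.Instant;

@MappedEntity
public class Purchase {

    @Id
    @GeneratedValue
    private Long id;

    @DateCreated
    private Instant dateCreated; // populated automatically on first save

    @DateUpdated
    private Instant dateUpdated; // refreshed automatically on updates

    // getters and setters omitted for brevity
}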
JPA Hibernate and Entity Timestamps
The @DateCreated and @DateUpdated annotations require a ValidatorFactory bean to be present in the application context when used with JPA Hibernate.
This can be provided by ensuring one of
implementation("io.micronaut:io.micronaut.validation:micronaut-validation")
<dependency>
<groupId>io.micronaut</groupId>
<artifactId>io.micronaut.validation</artifactId>
<version>micronaut-validation</version>
</dependency>
or
implementation("io.micronaut:io.micronaut.beanvalidation:micronaut-hibernate-validator")
<dependency>
<groupId>io.micronaut</groupId>
<artifactId>io.micronaut.beanvalidation</artifactId>
<version>micronaut-hibernate-validator</version>
</dependency>
is added to your project.
3.4.5 Entity Events
Since 2.3, Micronaut Data supports defining entity event listeners for either JPA or JDBC using either annotations or by implementing the EntityEventListener interface.
The following table lists the available event annotations:
Annotation | Description
@PrePersist | Triggered prior to persisting an object
@PostPersist | Triggered after persisting an object
@PreRemove | Triggered prior to deleting an object (note: doesn't apply to batch deletes)
@PostRemove | Triggered after deleting an object (note: doesn't apply to batch deletes)
@PreUpdate | Triggered prior to updating an object (note: doesn't apply to batch updates)
@PostUpdate | Triggered after updating an object (note: doesn't apply to batch updates)
You can also use the JPA annotations in the javax.persistence package if you prefer.
|
Each event listener annotation can be applied to an instance method of an entity class (a JPA entity or a class annotated with @MappedEntity), in which case the method must return void and have zero arguments. For example:
package example;
import jakarta.persistence.Column;
import jakarta.persistence.Convert;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.PrePersist;
import java.nio.charset.StandardCharsets;
import java.time.MonthDay;
import java.util.Base64;
@Entity
public class Account {
@GeneratedValue
@Id
private Long id;
private String username;
private String password;
@Column(columnDefinition = "date")
@Convert(converter = MonthDayDateAttributeConverter.class)
private MonthDay paymentDay;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getUsername() {
return username;
}
public void setUsername(String username) {
this.username = username;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public MonthDay getPaymentDay() {
return paymentDay;
}
public void setPaymentDay(MonthDay paymentDay) {
this.paymentDay = paymentDay;
}
@PrePersist
void encodePassword() {
this.password = Base64.getEncoder()
.encodeToString(this.password.getBytes(StandardCharsets.UTF_8));
}
}
package example
import jakarta.persistence.*
import java.nio.charset.StandardCharsets
@Entity
class Account {
@GeneratedValue
@Id
Long id
String username
String password
@PrePersist
void encodePassword() {
this.password = Base64.encoder
.encodeToString(this.password.getBytes(StandardCharsets.UTF_8))
}
}
package example
import java.nio.charset.StandardCharsets
import java.util.*
import jakarta.persistence.*
@Entity
data class Account(@GeneratedValue @Id
var id: Long? = null,
val username: String,
var password: String) {
@PrePersist
fun encodePassword() {
password = Base64.getEncoder()
.encodeToString(password.toByteArray(StandardCharsets.UTF_8))
}
}
The above example defines a @PrePersist listener that encodes the password (in a not very secure base64 format, clearly not recommended!) prior to inserting into the database.
In addition, the annotations can be applied to any instance method of a Micronaut bean, in which case the method must return void and have a single argument that is the entity type (note the type can be Object to listen for all events). For example:
package example;
import io.micronaut.data.annotation.event.PrePersist;
import jakarta.inject.Singleton;
@Singleton
public class AccountUsernameValidator {
@PrePersist
void validateUsername(Account account) {
final String username = account.getUsername();
if (username == null || !username.matches("[a-z0-9]+")) {
throw new IllegalArgumentException("Invalid username");
}
}
}
package example
import io.micronaut.data.annotation.event.PrePersist
import jakarta.inject.Singleton
@Singleton
class AccountUsernameValidator {
@PrePersist
void validateUsername(Account account) {
final String username = account.username
if (!username || !(username ==~ /[a-z0-9]+/)) {
throw new IllegalArgumentException("Invalid username")
}
}
}
package example
import io.micronaut.data.annotation.event.PrePersist
import jakarta.inject.Singleton
@Singleton
class AccountUsernameValidator {
@PrePersist
fun validateUsername(account: Account) {
val username: String = account.username
require(username.matches("[a-z0-9]+".toRegex())) { "Invalid username" }
}
}
The above listener serves to validate the account username prior to any insert.
Finally, it is also possible to define a Micronaut bean that implements the EntityEventListener interface or one of the functional interfaces that are sub-interfaces of the EntityEventListener listed in the following table:
Interface | Description
PrePersistEventListener | Triggered prior to persisting an object
PostPersistEventListener | Triggered after persisting an object
PreRemoveEventListener | Triggered prior to deleting an object (note: doesn't apply to batch deletes)
PostRemoveEventListener | Triggered after deleting an object (note: doesn't apply to batch deletes)
PreUpdateEventListener | Triggered prior to updating an object (note: doesn't apply to batch updates)
PostUpdateEventListener | Triggered after updating an object (note: doesn't apply to batch updates)
For example, the following Micronaut factory bean defines listeners that are executed before and after the Book entity is persisted:
package example;
import io.micronaut.context.annotation.Factory;
import io.micronaut.data.event.listeners.PostPersistEventListener;
import io.micronaut.data.event.listeners.PrePersistEventListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import jakarta.inject.Singleton;
@Factory
public class BookListeners {
private static final Logger LOG = LoggerFactory.getLogger(BookListeners.class);
@Singleton
PrePersistEventListener<Book> beforeBookPersist() { // (1)
return (book) -> {
LOG.debug("Inserting book: {}", book.getTitle() );
return true; // (2)
};
}
@Singleton
PostPersistEventListener<Book> afterBookPersist() { // (3)
return (book) -> LOG.debug("Book inserted: {}", book.getTitle() );
}
}
package example
import io.micronaut.context.annotation.Factory
import io.micronaut.data.event.listeners.PostPersistEventListener
import io.micronaut.data.event.listeners.PrePersistEventListener
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import jakarta.inject.Singleton
@Factory
class BookListeners {
private static final Logger LOG = LoggerFactory.getLogger(BookListeners)
@Singleton
PrePersistEventListener<Book> beforeBookPersist() { // (1)
return (book) -> {
LOG.debug "Inserting book: ${book.title}"
return true // (2)
}
}
@Singleton
PostPersistEventListener<Book> afterBookPersist() { // (3)
return (book) -> LOG.debug("Book inserted: ${book.title}")
}
}
package example
import io.micronaut.context.annotation.Factory
import io.micronaut.data.event.listeners.PostPersistEventListener
import io.micronaut.data.event.listeners.PrePersistEventListener
import org.slf4j.LoggerFactory
import jakarta.inject.Singleton
@Factory
class BookListeners {
@Singleton
fun beforeBookPersist(): PrePersistEventListener<Book> { // (1)
return PrePersistEventListener { book: Book ->
LOG.debug("Inserting book: ${book.title}")
true // (2)
}
}
@Singleton
fun afterBookPersist(): PostPersistEventListener<Book> { // (3)
return PostPersistEventListener { book: Book ->
LOG.debug("Book inserted: ${book.title}")
}
}
companion object {
private val LOG = LoggerFactory.getLogger(BookListeners::class.java)
}
}
1 | The factory returns a bean of type PrePersistEventListener that includes Book as the generic argument |
2 | The PrePersistEventListener can return false if the operation should not proceed; in this case true is returned |
3 | An additional PostPersistEventListener event listener is defined |
3.5 Transactions
Micronaut Data will automatically manage transactions for you. You can simply declare a method as transactional with the jakarta.transaction.Transactional
annotation.
Micronaut Data maps the declared transaction annotation to the correct underlying semantics at compilation time.
Starting with Micronaut Data 4, repositories are no longer executed using a new transaction and will create a new connection if none is present. |
If you prefer Spring-managed transactions for Hibernate or JDBC you can add the micronaut-data-spring dependency and Spring-managed transactions will be used instead. See the section on Spring Support for more information.
|
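For example, a minimal sketch of a declaratively transactional service (the BookService class and the repository calls are illustrative, not part of the documented API):
package example;
import jakarta.inject.Singleton;
import jakarta.transaction.Transactional;
@Singleton
public class BookService {
    private final BookRepository bookRepository;
    public BookService(BookRepository bookRepository) {
        this.bookRepository = bookRepository;
    }
    @Transactional // the read and the update run in a single transaction; a failure rolls both back
    public void renameBook(Long id, String newTitle) {
        Book book = bookRepository.findById(id).orElseThrow(); // illustrative usage of the CrudRepository API
        book.setTitle(newTitle);
        bookRepository.update(book);
    }
}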
3.5.1 Programmatic Transactions
You can use the TransactionOperations API to perform programmatic transactions.
The following demonstrates an example:
package example;
import io.micronaut.transaction.TransactionOperations;
import jakarta.inject.Singleton;
import jakarta.persistence.EntityManager;
import org.hibernate.Session;
@Singleton
public class ProductManager {
private final EntityManager entityManager;
private final TransactionOperations<Session> transactionManager;
public ProductManager(EntityManager entityManager,
TransactionOperations<Session> transactionManager) { // (1)
this.entityManager = entityManager;
this.transactionManager = transactionManager;
}
Product save(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite(status -> { // (2)
final Product product = new Product(name, manufacturer);
entityManager.persist(product);
return product;
});
}
Product find(String name) {
return transactionManager.executeRead(status -> // (3)
status.getConnection().createQuery("from Product p where p.name = :name", Product.class)
.setParameter("name", name)
.getSingleResult()
);
}
}
package example
import io.micronaut.transaction.TransactionOperations
import jakarta.inject.Singleton
import jakarta.persistence.EntityManager
import org.hibernate.Session
@Singleton
class ProductManager {
private final EntityManager entityManager
private final TransactionOperations<Session> transactionManager
ProductManager(EntityManager entityManager,
TransactionOperations<Session> transactionManager) { // (1)
this.entityManager = entityManager
this.transactionManager = transactionManager
}
Product save(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite { // (2)
Product product = new Product(name, manufacturer)
entityManager.persist(product)
return product
}
}
Product find(String name) {
return transactionManager.executeRead { status -> // (3)
status.getConnection().createQuery("from Product p where p.name = :name", Product)
.setParameter("name", name)
.singleResult
}
}
}
package example
import io.micronaut.transaction.TransactionOperations
import jakarta.inject.Singleton
import jakarta.persistence.EntityManager
import org.hibernate.Session
@Singleton
class ProductManager(
private val entityManager: EntityManager,
private val transactionManager: TransactionOperations<Session> // (1)
) {
fun save(name: String, manufacturer: Manufacturer): Product {
return transactionManager.executeWrite { // (2)
val product = Product(null, name, manufacturer)
entityManager.persist(product)
product
}
}
fun find(name: String): Product {
return transactionManager.executeRead { status -> // (3)
status.connection.createQuery("from Product p where p.name = :name", Product::class.java)
.setParameter("name", name)
.singleResult
}
}
}
1 | The constructor is injected with the TransactionOperations and a session-aware EntityManager |
2 | The save method uses the executeWrite method to execute a write transaction within the context of the passed lambda. |
3 | The find method uses the executeRead method to execute a read-only transaction within the context of the passed lambda. This example is accessing the session using the status provided by the transaction manager. |
Note that if you are using Micronaut Data JDBC then instead of an EntityManager
you should inject a contextual-connection-aware JDBC Connection
object.
The following presents an example:
package example;
import io.micronaut.transaction.TransactionOperations;
import jakarta.inject.Singleton;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
@Singleton
public class ProductManager {
private final Connection connection;
private final TransactionOperations<Connection> transactionManager;
private final ProductRepository productRepository;
public ProductManager(Connection connection,
TransactionOperations<Connection> transactionManager, // (1)
ProductRepository productRepository) {
this.connection = connection;
this.transactionManager = transactionManager;
this.productRepository = productRepository;
}
Product save(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite(status -> { // (2)
final Product product = new Product(name, manufacturer);
try (PreparedStatement ps = connection.prepareStatement("insert into product (name, manufacturer_id) values (?, ?)")) {
ps.setString(1, name);
ps.setLong(2, manufacturer.getId());
ps.execute();
}
return product;
});
}
Product find(String name) {
return transactionManager.executeRead(status -> { // (3)
try (PreparedStatement ps = status.getConnection().prepareStatement("select * from product p where p.name = ?")) {
ps.setString(1, name);
try (ResultSet rs = ps.executeQuery()) {
if (rs.next()) {
return new Product(rs.getString("name"), null);
}
return null;
}
}
});
}
/**
* Creates new product using transaction operations and product repository.
*
* @param name the product name
* @param manufacturer the manufacturer
* @return the created product instance
*/
Product saveUsingRepo(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite(status -> { // (4)
return productRepository.save(new Product(name, manufacturer));
});
}
/**
* Finds product by name using transaction manager and product repository.
*
* @param name the product name
* @return found product or null if none product found matching by name
*/
Product findUsingRepo(String name) {
return transactionManager.executeRead(status -> { // (5)
return productRepository.findByName(name).orElse(null);
});
}
}
package example
import io.micronaut.transaction.TransactionOperations
import jakarta.inject.Singleton
import java.sql.Connection
import java.sql.PreparedStatement
import java.sql.ResultSet
@Singleton
class ProductManager {
private final Connection connection
private final TransactionOperations<Connection> transactionManager
private final ProductRepository productRepository
ProductManager(Connection connection,
TransactionOperations<Connection> transactionManager, // (1)
ProductRepository productRepository) {
this.connection = connection
this.transactionManager = transactionManager
this.productRepository = productRepository
}
Product save(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite { // (2)
final Product product = new Product(name, manufacturer)
connection.prepareStatement("insert into product (name, manufacturer_id) values (?, ?)")
.withCloseable { PreparedStatement ps ->
ps.setString(1, name)
ps.setLong(2, manufacturer.getId())
ps.execute()
}
return product
}
}
Product find(String name) {
return transactionManager.executeRead { status -> // (3)
status.getConnection().prepareStatement("select * from product p where p.name = ?").withCloseable {
PreparedStatement ps ->
ps.setString(1, name)
ps.executeQuery().withCloseable { ResultSet rs ->
if (rs.next()) {
return new Product(rs.getString("name"), null)
}
return null
}
}
}
}
/**
* Creates new product using transaction operations and product repository.
*
* @param name the product name
* @param manufacturer the manufacturer
* @return the created product instance
*/
Product saveUsingRepo(String name, Manufacturer manufacturer) {
return transactionManager.executeWrite(status -> { // (4)
return productRepository.save(new Product(name, manufacturer));
})
}
/**
* Finds product by name using transaction manager and product repository.
*
* @param name the product name
* @return found product or null if none product found matching by name
*/
Product findUsingRepo(String name) {
return transactionManager.executeRead(status -> { // (5)
return productRepository.findByName(name).orElse(null);
})
}
}
package example
import io.micronaut.data.exceptions.EmptyResultException
import io.micronaut.transaction.TransactionOperations
import jakarta.inject.Singleton
import java.sql.Connection
@Singleton
class ProductManager(
private val connection: Connection,
private val transactionManager: TransactionOperations<Connection>, // (1)
private val productRepository: ProductRepository
) {
fun save(name: String, manufacturer: Manufacturer): Product {
return transactionManager.executeWrite { // (2)
val product = Product(0, name, manufacturer)
connection.prepareStatement("insert into product (name, manufacturer_id) values (?, ?)").use { ps ->
ps.setString(1, name)
ps.setLong(2, manufacturer.id!!)
ps.execute()
}
product
}
}
fun find(name: String): Product {
return transactionManager.executeRead { status -> // (3)
status.connection.prepareStatement("select * from product p where p.name = ?").use { ps ->
ps.setString(1, name)
ps.executeQuery().use { rs ->
if (rs.next()) {
return@executeRead Product(
rs.getLong("id"), rs.getString("name"), null
)
}
throw EmptyResultException()
}
}
}
}
fun saveUsingRepo(name: String, manufacturer: Manufacturer): Product {
return transactionManager.executeWrite { // (4)
productRepository.save(Product(0, name, manufacturer))
}
}
fun findUsingRepo(name: String): Product? {
return transactionManager.executeRead { status -> // (5)
productRepository.findByName(name).orElse(null)
}
}
}
1 | The constructor is injected with the TransactionOperations and a contextual-connection-aware Connection . An additional productRepository parameter demonstrates that a Micronaut Data JDBC repository can also be used with programmatic transactions. |
2 | The save method uses the executeWrite method to execute a write transaction within the context of the passed lambda. |
3 | The find method uses the executeRead method to execute a read-only transaction within the context of the passed lambda. This example is accessing the connection using the status provided by the transaction manager. |
4 | The saveUsingRepo method uses the executeWrite method to execute a write transaction within the context of the passed lambda and saves data using the Micronaut Data JDBC repository. Please note that a Micronaut Data JPA repository can be used the same way. |
5 | The findUsingRepo method uses the executeRead method to execute a read-only transaction within the context of the passed lambda and finds data using the Micronaut Data JDBC repository. |
Note that it is important that you always use the injected connection, as Micronaut Data makes available a transaction-aware implementation that uses the connection associated with the underlying transaction.
If a transaction is not active when using this connection then a NoTransactionException will be thrown, indicating that you should either provide a programmatic transaction or use @Transactional.
For Kotlin suspending functions use CoroutineTransactionOperations |
3.5.2 Transactional Events
You can write event listeners that are transaction aware using the @TransactionalEventListener annotation.
The following demonstrates an example:
package example;
import io.micronaut.context.event.ApplicationEventPublisher;
import io.micronaut.transaction.annotation.TransactionalEventListener;
import jakarta.inject.Singleton;
import jakarta.transaction.Transactional;
@Singleton
public class BookManager {
private final BookRepository bookRepository;
private final ApplicationEventPublisher<NewBookEvent> eventPublisher;
public BookManager(BookRepository bookRepository, ApplicationEventPublisher<NewBookEvent> eventPublisher) { // (1)
this.bookRepository = bookRepository;
this.eventPublisher = eventPublisher;
}
@Transactional
void saveBook(String title, int pages) {
final Book book = new Book(title, pages);
bookRepository.save(book);
eventPublisher.publishEvent(new NewBookEvent(book)); // (2)
}
@TransactionalEventListener
void onNewBook(NewBookEvent event) {
System.out.println("book = " + event.book); // (3)
}
static class NewBookEvent {
final Book book;
public NewBookEvent(Book book) {
this.book = book;
}
}
}
package example
import io.micronaut.context.event.ApplicationEventPublisher
import io.micronaut.transaction.annotation.TransactionalEventListener
import jakarta.inject.Singleton
import jakarta.transaction.Transactional
@Singleton
class BookManager {
private final BookRepository bookRepository
private final ApplicationEventPublisher<NewBookEvent> eventPublisher
BookManager(BookRepository bookRepository, ApplicationEventPublisher<NewBookEvent> eventPublisher) { // (1)
this.bookRepository = bookRepository
this.eventPublisher = eventPublisher
}
@Transactional
void saveBook(String title, int pages) {
final Book book = new Book(title, pages)
bookRepository.save(book)
eventPublisher.publishEvent(new NewBookEvent(book)) // (2)
}
@TransactionalEventListener
void onNewBook(NewBookEvent event) {
println("book = $event.book") // (3)
}
static class NewBookEvent {
final Book book
NewBookEvent(Book book) {
this.book = book
}
}
}
package example
import io.micronaut.context.event.ApplicationEventPublisher
import io.micronaut.transaction.annotation.TransactionalEventListener
import jakarta.inject.Singleton
import jakarta.transaction.Transactional
@Singleton
open class BookManager(
private val bookRepository: BookRepository, private val eventPublisher: ApplicationEventPublisher<NewBookEvent>) { // (1)
@Transactional
open fun saveBook(title: String, pages: Int) {
val book = Book(0, title, pages)
bookRepository.save(book)
eventPublisher.publishEvent(NewBookEvent(book)) // (2)
}
@TransactionalEventListener
open fun onNewBook(event: NewBookEvent) {
println("book = ${event.book}") // (3)
}
class NewBookEvent(val book: Book)
}
1 | The BookManager class receives an instance of ApplicationEventPublisher . |
2 | When the event is published, if there is a running transaction, the listener will only be triggered once the transaction is committed. |
3 | The listener itself is annotated with @TransactionalEventListener |
You can set the value of the @TransactionalEventListener annotation to bind the listener to a particular transaction phase. |
Using @TransactionalEventListener annotations for transaction events is not supported for reactive transactions. |
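As a rough sketch of the phase binding mentioned above (assuming the annotation value accepts a TransactionPhase constant such as AFTER_COMMIT; verify the exact enum against your Micronaut version):
@TransactionalEventListener(TransactionalEventListener.TransactionPhase.AFTER_COMMIT) // assumed constant
void onNewBookCommitted(NewBookEvent event) {
    // invoked only after the surrounding transaction has committed
    System.out.println("committed book = " + event.book);
}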
3.6 Kotlin Criteria API extensions
Micronaut Data includes experimental extensions and query builders for the Jakarta Criteria API which simplify writing queries with Kotlin.
The extensions and builders are located in the io.micronaut.data.runtime.criteria.KCriteriaBuilderExt file.
There are simple extension methods that simplify working with the criteria API:
-
KProperty.asPath(jakarta.persistence.criteria.Root): jakarta.persistence.criteria.Path - extension on KProperty allowing to get a type-safe property path: Person::name.asPath(root)
-
operator Path.get(KProperty1): Path - chain property access: root[Person::parent][Parent::name]
-
From.joinMany(KProperty1, JoinType): Join - join a *-to-many relationship
-
From.joinOne(KProperty1, JoinType): Join - join a *-to-one relationship
Predicate builder
To implement a simple predicate query, the where function can be used:
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
There are contextual extension functions added to jakarta.persistence.criteria.Expression that allow using predicate methods from jakarta.persistence.criteria.CriteriaBuilder directly on an expression instance. Most of them are infix functions, allowing syntax such as: root[Person::name] eq "Xyz".
It’s possible to use and, or for conjunction/disjunction and not for negation:
fun nameOrAgeMatches(age: Int, name: String?) = where<Person> {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
It’s possible to use the where predicate builder with the following methods in JpaSpecificationExecutor:
-
findOne(io.micronaut.data.repository.jpa.criteria.PredicateSpecification)
-
findAll(io.micronaut.data.repository.jpa.criteria.PredicateSpecification)
-
findAll(io.micronaut.data.repository.jpa.criteria.PredicateSpecification, io.micronaut.data.model.Sort)
-
findAll(io.micronaut.data.repository.jpa.criteria.PredicateSpecification, io.micronaut.data.model.Pageable)
-
count(io.micronaut.data.repository.jpa.criteria.PredicateSpecification)
-
deleteAll(io.micronaut.data.repository.jpa.criteria.PredicateSpecification)
personRepository.findOne(where {
val manufacturer = root.joinOne(Product::manufacturer)
manufacturer[Manufacturer::name] eq name
})
val recordsDeleted = personRepository.deleteAll(where {
root[Person::name] eq "Denis"
})
Update builder
To implement an update query, the update function can be used:
val updateQuery = update<Person> {
set(Person::name, "Frank")
where {
root[Person::name] eq "Denis"
}
}
personRepository.updateAll(updateQuery)
3.7 Multi-tenancy
Micronaut Data supports multi-tenancy to allow the use of multiple databases or schemas by a single Micronaut application.
Multitenancy Mode | Description |
---|---|
DATASOURCE | A separate database with a separate connection pool is used to store each tenant's data. Internally, a different repository operations / transaction manager instance is used for each tenant. |
SCHEMA | The same database is used, but a different schema stores each tenant's data. Only supported by JDBC/R2DBC/MongoDB (collections). |
DISCRIMINATOR | A single database/schema stores all tenants' data, but a discriminator column separates the data. |
3.7.1 Discriminator Mode
The DISCRIMINATOR mode uses a single entity’s property to store the tenant id.
micronaut.data.multi-tenancy.mode=DISCRIMINATOR
micronaut.multitenancy.tenantresolver.httpheader.enabled=true
datasources.default.url=jdbc:h2:mem:db
datasources.default.driverClassName=org.h2.Driver
datasources.default.username=sa
datasources.default.password=
datasources.default.dialect=H2
datasources.default.schema-generate=CREATE_DROP
micronaut:
data:
multi-tenancy:
mode: DISCRIMINATOR
multitenancy:
tenantresolver:
httpheader:
enabled: true
datasources:
default:
url: jdbc:h2:mem:db
driverClassName: org.h2.Driver
username: sa
password: ''
dialect: H2
schema-generate: CREATE_DROP
[micronaut]
[micronaut.data]
[micronaut.data.multi-tenancy]
mode="DISCRIMINATOR"
[micronaut.multitenancy]
[micronaut.multitenancy.tenantresolver]
[micronaut.multitenancy.tenantresolver.httpheader]
enabled=true
[datasources]
[datasources.default]
url="jdbc:h2:mem:db"
driverClassName="org.h2.Driver"
username="sa"
password=""
dialect="H2"
schema-generate="CREATE_DROP"
micronaut {
data {
multiTenancy {
mode = "DISCRIMINATOR"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
'default' {
url = "jdbc:h2:mem:db"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
dialect = "H2"
schemaGenerate = "CREATE_DROP"
}
}
{
micronaut {
data {
multi-tenancy {
mode = "DISCRIMINATOR"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
default {
url = "jdbc:h2:mem:db"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
dialect = "H2"
schema-generate = "CREATE_DROP"
}
}
}
{
"micronaut": {
"data": {
"multi-tenancy": {
"mode": "DISCRIMINATOR"
}
},
"multitenancy": {
"tenantresolver": {
"httpheader": {
"enabled": true
}
}
}
},
"datasources": {
"default": {
"url": "jdbc:h2:mem:db",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"dialect": "H2",
"schema-generate": "CREATE_DROP"
}
}
}
The entity with multi-tenancy enabled requires a tenant property annotated with @TenantId:
@MappedEntity
public class Book {
@Id
@GeneratedValue
private Long id;
private String title;
private int pages;
@TenantId
private String tenant;
// ...
}
There are specific annotations that alter the behaviour of repositories (and their methods) that use a @TenantId property:
Annotation | Description |
---|---|
@WithoutTenantId | The method’s query will not have an implicit predicate to include the tenant id |
@WithTenantId | Modify the tenant id of the query |
The tenancy annotations are only supported for discriminator multi-tenancy |
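As an illustrative sketch (assuming the @WithoutTenantId annotation listed above), a repository method can opt out of the implicit tenant predicate:
@Repository
public interface BookRepository extends CrudRepository<Book, Long> {
    @WithoutTenantId // hypothetical usage: query books across all tenants, skipping the implicit tenant predicate
    List<Book> findByTitle(String title);
}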
3.7.2 DataSource Mode
The DATASOURCE mode is used in combination with the micronaut-multitenancy library in order to resolve the tenant name. In the example below, the tenant resolver is set to use an HTTP header. See Micronaut Multitenancy for more information.
micronaut.data.multi-tenancy.mode=DATASOURCE
micronaut.multitenancy.tenantresolver.httpheader.enabled=true
datasources.foo.url=jdbc:h2:mem:dbTenantFoo
datasources.foo.driverClassName=org.h2.Driver
datasources.foo.username=sa
datasources.foo.password=
datasources.foo.schema-generate=CREATE_DROP
datasources.foo.dialect=H2
datasources.bar.url=jdbc:h2:mem:dbTenantBar
datasources.bar.driverClassName=org.h2.Driver
datasources.bar.username=sa
datasources.bar.password=
datasources.bar.schema-generate=CREATE_DROP
datasources.bar.dialect=H2
micronaut:
data:
multi-tenancy:
mode: DATASOURCE
multitenancy:
tenantresolver:
httpheader:
enabled: true
datasources:
foo:
url: jdbc:h2:mem:dbTenantFoo
driverClassName: org.h2.Driver
username: sa
password: ''
schema-generate: CREATE_DROP
dialect: H2
bar:
url: jdbc:h2:mem:dbTenantBar
driverClassName: org.h2.Driver
username: sa
password: ''
schema-generate: CREATE_DROP
dialect: H2
[micronaut]
[micronaut.data]
[micronaut.data.multi-tenancy]
mode="DATASOURCE"
[micronaut.multitenancy]
[micronaut.multitenancy.tenantresolver]
[micronaut.multitenancy.tenantresolver.httpheader]
enabled=true
[datasources]
[datasources.foo]
url="jdbc:h2:mem:dbTenantFoo"
driverClassName="org.h2.Driver"
username="sa"
password=""
schema-generate="CREATE_DROP"
dialect="H2"
[datasources.bar]
url="jdbc:h2:mem:dbTenantBar"
driverClassName="org.h2.Driver"
username="sa"
password=""
schema-generate="CREATE_DROP"
dialect="H2"
micronaut {
data {
multiTenancy {
mode = "DATASOURCE"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
foo {
url = "jdbc:h2:mem:dbTenantFoo"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schemaGenerate = "CREATE_DROP"
dialect = "H2"
}
bar {
url = "jdbc:h2:mem:dbTenantBar"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schemaGenerate = "CREATE_DROP"
dialect = "H2"
}
}
{
micronaut {
data {
multi-tenancy {
mode = "DATASOURCE"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
foo {
url = "jdbc:h2:mem:dbTenantFoo"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schema-generate = "CREATE_DROP"
dialect = "H2"
}
bar {
url = "jdbc:h2:mem:dbTenantBar"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schema-generate = "CREATE_DROP"
dialect = "H2"
}
}
}
{
"micronaut": {
"data": {
"multi-tenancy": {
"mode": "DATASOURCE"
}
},
"multitenancy": {
"tenantresolver": {
"httpheader": {
"enabled": true
}
}
}
},
"datasources": {
"foo": {
"url": "jdbc:h2:mem:dbTenantFoo",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"schema-generate": "CREATE_DROP",
"dialect": "H2"
},
"bar": {
"url": "jdbc:h2:mem:dbTenantBar",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"schema-generate": "CREATE_DROP",
"dialect": "H2"
}
}
}
The following HTTP clients will each access a different tenant datasource:
@Header(name = "tenantId", value = "foo")
@Client("/books")
interface FooBookClient extends BookClient {
}
@Header(name = "tenantId", value = "bar")
@Client("/books")
interface BarBookClient extends BookClient {
}
3.7.3 Schema Mode
The SCHEMA mode uses a single datasource and sets the active schema based on the resolved tenant.
micronaut.data.multi-tenancy.mode=SCHEMA
micronaut.multitenancy.tenantresolver.httpheader.enabled=true
datasources.default.url=jdbc:h2:mem:db
datasources.default.driverClassName=org.h2.Driver
datasources.default.username=sa
datasources.default.password=
datasources.default.dialect=H2
datasources.default.schema-generate=CREATE_DROP
datasources.default.schema-generate-names[0]=foo
datasources.default.schema-generate-names[1]=bar
micronaut:
data:
multi-tenancy:
mode: SCHEMA
multitenancy:
tenantresolver:
httpheader:
enabled: true
datasources:
default:
url: jdbc:h2:mem:db
driverClassName: org.h2.Driver
username: sa
password: ''
dialect: H2
schema-generate: CREATE_DROP
schema-generate-names:
- foo
- bar
[micronaut]
[micronaut.data]
[micronaut.data.multi-tenancy]
mode="SCHEMA"
[micronaut.multitenancy]
[micronaut.multitenancy.tenantresolver]
[micronaut.multitenancy.tenantresolver.httpheader]
enabled=true
[datasources]
[datasources.default]
url="jdbc:h2:mem:db"
driverClassName="org.h2.Driver"
username="sa"
password=""
dialect="H2"
schema-generate="CREATE_DROP"
schema-generate-names=[
"foo",
"bar"
]
micronaut {
data {
multiTenancy {
mode = "SCHEMA"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
'default' {
url = "jdbc:h2:mem:db"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
dialect = "H2"
schemaGenerate = "CREATE_DROP"
schemaGenerateNames = ["foo", "bar"]
}
}
{
micronaut {
data {
multi-tenancy {
mode = "SCHEMA"
}
}
multitenancy {
tenantresolver {
httpheader {
enabled = true
}
}
}
}
datasources {
default {
url = "jdbc:h2:mem:db"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
dialect = "H2"
schema-generate = "CREATE_DROP"
schema-generate-names = ["foo", "bar"]
}
}
}
{
"micronaut": {
"data": {
"multi-tenancy": {
"mode": "SCHEMA"
}
},
"multitenancy": {
"tenantresolver": {
"httpheader": {
"enabled": true
}
}
}
},
"datasources": {
"default": {
"url": "jdbc:h2:mem:db",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"dialect": "H2",
"schema-generate": "CREATE_DROP",
"schema-generate-names": ["foo", "bar"]
}
}
}
You can use the schema-generate-names property to specify multiple schemas to be created and initialized for testing. |
4 Micronaut Data JPA Hibernate
Micronaut Data JPA adds support for repositories with compile-time generated queries, the JPA Criteria API and transaction management.
4.1 JPA Annotations
Micronaut Data JPA supports Hibernate 6 as of version 4.0.0, while earlier versions support Hibernate 5. With Hibernate 5 you can use javax.persistence annotations such as javax.persistence.Entity to map your entities.
For Hibernate 6, Micronaut Data JPA supports jakarta.persistence annotations such as jakarta.persistence.Entity to map your entities.
4.2 Quick Start
The quickest way to get started is to create a new Micronaut application with Micronaut Launch and choose the data-jpa feature, a database driver, a connection pool and a database migration framework.
You can also find a great guide on building Micronaut Data JPA applications including sample code in a variety of languages in the Micronaut Guide: Access a Database with Micronaut Data JPA |
Micronaut Launch can pre-configure the appropriate options for your selected language (Java, Kotlin or Groovy) and build tool (Gradle or Maven):
# For Maven add: --build maven
$ mn create-app --lang java example --features data-jpa,flyway,mysql,jdbc-hikari
Or via curl:
# For Maven add to the URL: &build=maven
$ curl https://launch.micronaut.io/demo.zip?lang=java&features=data-jpa,flyway,mysql,jdbc-hikari -o demo.zip && unzip demo.zip -d demo && cd demo
When working with JDBC drivers it’s required to add a JDBC connection pool module (Hikari, Tomcat JDBC or DBCP) from the Micronaut SQL project. |
See the Micronaut SQL project documentation for more information regarding configuring Hibernate, JDBC and pooling.
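For example, to use the Hikari connection pool add the following dependency (the same coordinates are used later in the JDBC quick start):
runtimeOnly("io.micronaut.sql:micronaut-jdbc-hikari")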
You need to configure the data source in the application configuration file. For example for H2:
datasources.default.url=jdbc:h2:mem:devDb
datasources.default.driverClassName=org.h2.Driver
datasources.default.username=sa
datasources.default.password=
datasources.default.schema-generate=CREATE_DROP
datasources.default.dialect=H2
datasources:
default:
url: jdbc:h2:mem:devDb
driverClassName: org.h2.Driver
username: sa
password: ''
schema-generate: CREATE_DROP
dialect: H2
[datasources]
[datasources.default]
url="jdbc:h2:mem:devDb"
driverClassName="org.h2.Driver"
username="sa"
password=""
schema-generate="CREATE_DROP"
dialect="H2"
datasources {
'default' {
url = "jdbc:h2:mem:devDb"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schemaGenerate = "CREATE_DROP"
dialect = "H2"
}
}
{
datasources {
default {
url = "jdbc:h2:mem:devDb"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schema-generate = "CREATE_DROP"
dialect = "H2"
}
}
}
{
"datasources": {
"default": {
"url": "jdbc:h2:mem:devDb",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"schema-generate": "CREATE_DROP",
"dialect": "H2"
}
}
}
Then add the following configuration to the application configuration file.
jpa.default.entity-scan.packages=example.domain
jpa:
default:
entity-scan:
packages: 'example.domain'
[jpa]
[jpa.default]
[jpa.default.entity-scan]
packages="example.domain"
jpa {
'default' {
entityScan {
packages = "example.domain"
}
}
}
{
jpa {
default {
entity-scan {
packages = "example.domain"
}
}
}
}
{
"jpa": {
"default": {
"entity-scan": {
"packages": "example.domain"
}
}
}
}
Where jpa.default.entity-scan.packages
references the root package where your @Entity
classes are located.
Also ensure that the implementation is configured correctly.
You can then define an @Entity
:
package example;
import io.micronaut.serde.annotation.Serdeable;
import jakarta.persistence.*;
@Serdeable
@Entity
public class Book {
@Id
@GeneratedValue
private Long id;
private String title;
private int pages;
public Book(String title, int pages) {
this.title = title;
this.pages = pages;
}
public Book() {
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public int getPages() {
return pages;
}
public void setPages(int pages) {
this.pages = pages;
}
}
package example
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.Id
@Entity
class Book {
@Id
@GeneratedValue
Long id
String title
int pages
Book(String title, int pages) {
this.title = title
this.pages = pages
}
Book() {
}
}
package example
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.Id
@Entity
data class Book(@Id
@GeneratedValue
var id: Long,
var title: String,
var pages: Int = 0)
Followed by an interface that extends CrudRepository:
package example;
import io.micronaut.context.annotation.Parameter;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.ParameterExpression;
import io.micronaut.data.annotation.Query;
import io.micronaut.data.annotation.QueryHint;
import io.micronaut.data.annotation.Repository;
import io.micronaut.data.model.Page;
import io.micronaut.data.model.Pageable;
import io.micronaut.data.model.Slice;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@Repository // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
Book find(String title);
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.context.annotation.Parameter
import io.micronaut.data.annotation.*
import io.micronaut.data.model.*
import io.micronaut.data.repository.CrudRepository
@Repository // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
@Executable
Book find(String title)
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.context.annotation.Parameter
import io.micronaut.data.annotation.*
import io.micronaut.data.model.*
import io.micronaut.data.repository.CrudRepository
@Repository // (1)
interface BookRepository : CrudRepository<Book, Long> { // (2)
@Executable
fun find(title: String): Book
}
1 | The interface is annotated with @Repository |
2 | The CrudRepository interface takes two generic arguments, the entity type (in this case Book ) and the ID type (in this case Long ) |
You can now perform CRUD (Create, Read, Update, Delete) operations on the entity. The implementation of example.BookRepository
is created at compilation time. To obtain a reference to it simply inject the bean:
@Inject
BookRepository bookRepository;
@Inject BookRepository bookRepository
@Inject
lateinit var bookRepository: BookRepository
Saving an Instance (Create)
To save an instance use the save
method of the CrudRepository
interface:
Book book = new Book();
book.setTitle("The Stand");
book.setPages(1000);
bookRepository.save(book);
Book book = new Book(title:"The Stand", pages:1000)
bookRepository.save(book)
var book = Book(0,"The Stand", 1000)
bookRepository.save(book)
Retrieving an Instance (Read)
To read a book back use findById
:
book = bookRepository.findById(id).orElse(null);
book = bookRepository.findById(id).orElse(null)
book = bookRepository.findById(id).orElse(null)
Updating an Instance (Update)
To update an instance use save
again:
book.setTitle("Changed");
bookRepository.save(book);
book.title = "Changed"
bookRepository.save(book)
book.title = "Changed"
bookRepository.save(book)
For partial entity updates, a custom update method like this can be used:
@QueryHint(name = "jakarta.persistence.FlushModeType", value = "AUTO")
void updatePages(@Id Long id, @Parameter("pages") int pages);
@QueryHint(name = "jakarta.persistence.FlushModeType", value = "AUTO")
void updatePages(@Id Long id, @Parameter("pages") int pages)
@QueryHint(name = "jakarta.persistence.FlushModeType", value = "AUTO")
fun updatePages(@Id id: Long?, @Parameter("pages") pages: Int)
In this example, in order for the update to be propagated in the current session you can add the QueryHint annotation to force a session flush.
For Hibernate 6, use jakarta.persistence.FlushModeType instead of javax.persistence.FlushModeType.
Deleting an Instance (Delete)
To delete an instance use deleteById
:
bookRepository.deleteById(id);
bookRepository.deleteById(id)
bookRepository.deleteById(id)
4.3 Logging SQL with Micronaut Data JPA Hibernate
You can log your queries by setting jpa.default.properties.hibernate.show_sql
and jpa.default.properties.hibernate.format_sql
to true
in your application’s configuration.
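For example, in a properties-style configuration file this corresponds to:
jpa.default.properties.hibernate.show_sql=true
jpa.default.properties.hibernate.format_sql=true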
4.4 Join queries
To optimize your queries you may need to alter joins to fetch exactly the data you need in the result set.
If a LazyInitializationException occurs this is not a bug in Micronaut Data or Hibernate, but instead an indication that you should alter your query joins to fetch the associated data you need to implement your use case.
|
Consider a Product
entity:
import jakarta.persistence.*;
@Entity
class Product {
@Id
@GeneratedValue
private Long id;
private String name;
@ManyToOne(optional = false, fetch = FetchType.LAZY)
private Manufacturer manufacturer;
public Product(String name, Manufacturer manufacturer) {
this.name = name;
this.manufacturer = manufacturer;
}
public Product() {
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Manufacturer getManufacturer() {
return manufacturer;
}
public void setManufacturer(Manufacturer manufacturer) {
this.manufacturer = manufacturer;
}
}
import jakarta.persistence.*
@Entity
class Product {
@Id
@GeneratedValue
Long id
String name
@ManyToOne(optional = false, fetch = FetchType.LAZY)
Manufacturer manufacturer
Product(String name, Manufacturer manufacturer) {
this.name = name
this.manufacturer = manufacturer
}
Product() {
}
}
import jakarta.persistence.*
@Entity
data class Product(
@Id
@GeneratedValue
var id: Long?,
var name: String,
@ManyToOne(optional = false, fetch = FetchType.LAZY)
var manufacturer: Manufacturer
)
That has an association to a Manufacturer
entity:
package example;
import io.micronaut.configuration.hibernate.jpa.proxy.GenerateProxy;
import org.hibernate.annotations.BatchSize;
import jakarta.persistence.*;
@Entity
@GenerateProxy
@BatchSize(size = 10)
public class Manufacturer {
@Id
@GeneratedValue
private Long id;
private String name;
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
package example
import jakarta.persistence.*
@Entity
class Manufacturer {
@Id
@GeneratedValue
Long id
String name
}
package example
import jakarta.persistence.*
@Entity
data class Manufacturer(
@Id
@GeneratedValue
var id: Long?,
var name: String
)
In this case when you read each Product
from the database an additional select is required to retrieve the Manufacturer
for each Product
. This leads to N + 1
queries.
To resolve this you can use the @Join annotation on your repository interface to specify that a JOIN FETCH
should be executed to retrieve the associated Manufacturer
.
@Repository
public interface ProductRepository extends CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
List<Product> list();
}
@Repository
abstract class ProductRepository implements CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
abstract List<Product> list()
}
@Repository
interface ProductRepository : CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
fun list(): List<Product>
}
1 | The @Join is used to indicate a JOIN FETCH clause should be included. |
Note that the @Join annotation is repeatable and hence can be specified multiple times for different associations. In addition, the type member of the annotation can be used to specify the join type, for example LEFT, INNER or RIGHT.
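For illustration, a sketch of a repository method with multiple joins and explicit join types (the publisher association is hypothetical and not part of the entities above):
@Join(value = "manufacturer", type = Join.Type.FETCH)
@Join(value = "publisher", type = Join.Type.LEFT_FETCH) // hypothetical association, shown only to illustrate repeating @Join
List<Product> findByNameContains(String name);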
JPA 2.1 Entity Graphs
A JPA-specific alternative to specifying the joins to a query is to use JPA 2.1 entity graphs. With entity graphs you defer to the JPA implementation to pick the appropriate join type to use:
@Override
@EntityGraph(attributePaths = {"manufacturer", "title"}) // (1)
List<Product> findAll();
@EntityGraph(attributePaths = ["manufacturer", "title"]) // (1)
abstract List<Product> findAll()
@EntityGraph(attributePaths = ["manufacturer", "title"]) // (1)
override fun findAll(): List<Product>
1 | The attributePaths member is used to specify the paths to include in the Entity graph. |
Tests
Please note that in tests using joined collections, to make sure joins are consistently fetched, the test might need to be made non-transactional using @MicronautTest(transactional = false).
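A minimal sketch of such a test class (the test name and assertions are illustrative):
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;
@MicronautTest(transactional = false) // joins are fetched against committed data rather than the test's own transaction
class ProductRepositoryTest {
    @Inject
    ProductRepository productRepository;
    @Test
    void productsAreListedWithTheirManufacturer() {
        productRepository.list().forEach(product ->
                System.out.println(product.getManufacturer().getName()));
    }
}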
4.5 Explicit queries
If you want to have more control over the query generated at the compile-time then you can use the @Query annotation to specify an explicit query:
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
List<Book> listBooks(String t);
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
List<Book> listBooks(String t)
@Query("FROM Book b WHERE b.title = :t ORDER BY b.title")
fun listBooks(t: String): List<Book>
You specify named parameters using a colon (:) followed by the name, and these must match a parameter of the method, otherwise a compilation error will occur. Use a backslash (\:) to escape a colon that is not a parameter specification.
Note that if the method returns a Page for pagination then you must additionally specify a query that performs the equivalent count using the countQuery
member of the @Query annotation.
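For example, a hedged sketch of a paginated explicit query with an equivalent count query (the method name is illustrative):
@Query(value = "FROM Book b WHERE b.title LIKE :title",
       countQuery = "SELECT count(b) FROM Book b WHERE b.title LIKE :title")
Page<Book> findBooksByTitle(String title, Pageable pageable);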
4.6 Native queries
When using Micronaut Data with JPA you can execute native SQL queries by setting nativeQuery
to true in the @Query annotation:
@Query(value = "select * from books b where b.title like :title limit 5",
nativeQuery = true)
List<Book> findNativeBooks(String title);
@Query(value = "select * from books b where b.title like :title limit 5",
nativeQuery = true)
List<Book> findNativeBooks(String title)
@Query(value = "select * from books b where b.title like :title limit 5", nativeQuery = true)
fun findNativeBooks(title: String): List<Book>
The above example will execute the raw SQL against the database.
For pagination queries that return a Page you also need to specify a native countQuery. |
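For example (a sketch; the method name is illustrative):
@Query(value = "select * from books b where b.title like :title",
       countQuery = "select count(*) from books b where b.title like :title",
       nativeQuery = true)
Page<Book> findNativeBooksPage(String title, Pageable pageable);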
4.7 Procedures
It’s possible to execute a stored procedure using the JPA provider:
Repository with procedure methods:
@Repository
public interface ProductRepository extends CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Procedure(named = "calculateSum")
long calculateSum(Long productId); // (1)
@Procedure("calculateSumInternal")
long calculateSumCustom(Long productId); // (2)
}
@Repository
abstract class ProductRepository implements CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Procedure(named = "calculateSum")
abstract long calculateSum(Long productId);
@Procedure("calculateSumInternal")
abstract long calculateSumCustom(Long productId);
}
@Repository
interface ProductRepository : CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Procedure(named = "calculateSum")
fun calculateSum(productId: Long): Long
@Procedure("calculateSumInternal")
fun calculateSumCustom(productId: Long): Long
}
An entity with named procedures defined:
import jakarta.persistence.*;
@NamedStoredProcedureQuery(name = "calculateSum",
procedureName = "calculateSumInternal",
parameters = {
@StoredProcedureParameter(name = "productId", mode = ParameterMode.IN, type = Long.class),
@StoredProcedureParameter(name = "result", mode = ParameterMode.OUT, type = Long.class)
}
)
@Entity
class Product {
import jakarta.persistence.*
@NamedStoredProcedureQuery(name = "calculateSum",
procedureName = "calculateSumInternal",
parameters = [
@StoredProcedureParameter(name = "productId", mode = ParameterMode.IN, type = Long.class),
@StoredProcedureParameter(name = "result", mode = ParameterMode.OUT, type = Long.class)
]
)
@Entity
class Product {
import jakarta.persistence.*
@NamedStoredProcedureQuery(
name = "calculateSum",
procedureName = "calculateSumInternal",
parameters = [StoredProcedureParameter(
name = "productId",
mode = ParameterMode.IN,
type = Long::class
), StoredProcedureParameter(name = "result", mode = ParameterMode.OUT, type = Long::class)]
)
@Entity
data class Product(
1 | The definition references the named stored procedure defined in the Product entity |
2 | The definition references the native database procedure name |
The output parameter, if present, is defined as the last output parameter of the procedure |
4.8 JPA specifications
Based on the same concept as Spring Data, when you need to create queries dynamically by composing JPA criteria, you can implement the JpaSpecificationExecutor interface, which provides multiple methods that receive an instance of Specification and can be used in combination with existing repository interfaces.
The Specification interface represents a simple Criteria-based API entry point:
public interface Specification<T> {
@Nullable
Predicate toPredicate(@NonNull Root<T> root,
@NonNull CriteriaQuery<?> query,
@NonNull CriteriaBuilder criteriaBuilder);
}
The following example implementation demonstrates custom entity filtering using specifications:
class Specifications {
public static Specification<Product> nameEquals(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(root.get("name"), name);
}
public static Specification<Product> nameEqualsCaseInsensitive(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase());
}
}
static class Specifications {
static Specification<Product> nameEquals(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(root.get("name"), name)
}
static Specification<Product> nameEqualsCaseInsensitive(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase());
}
}
object Specifications {
fun nameEquals(name: String) = Specification<Product> { root, _, criteriaBuilder ->
criteriaBuilder.equal(root.get<String>("name"), name)
}
fun nameEqualsCaseInsensitive(name: String) = Specification<Product> { root, _, criteriaBuilder ->
criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase())
}
}
You can create default methods in your repository class and provide a dynamic implementation by combining multiple specifications:
@Repository
public interface ProductRepository extends CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Transactional
default List<Product> findByName(String name, boolean caseInsensitive, boolean includeBlank) {
Specification<Product> specification;
if (caseInsensitive) {
specification = Specifications.nameEqualsCaseInsensitive(name);
} else {
specification = Specifications.nameEquals(name);
}
if (includeBlank) {
specification = specification.or(Specifications.nameEquals(""));
}
return findAll(specification);
}
class Specifications {
public static Specification<Product> nameEquals(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(root.get("name"), name);
}
public static Specification<Product> nameEqualsCaseInsensitive(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase());
}
}
}
@Repository
abstract class ProductRepository implements CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Transactional
List<Product> findByName(String name, boolean caseInsensitive, boolean includeBlank) {
Specification<Product> specification
if (caseInsensitive) {
specification = Specifications.nameEqualsCaseInsensitive(name)
} else {
specification = Specifications.nameEquals(name)
}
if (includeBlank) {
specification = specification | Specifications.nameEquals("")
}
return findAll(specification)
}
static class Specifications {
static Specification<Product> nameEquals(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(root.get("name"), name)
}
static Specification<Product> nameEqualsCaseInsensitive(String name) {
return (root, query, criteriaBuilder)
-> criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase());
}
}
}
@Repository
interface ProductRepository : CrudRepository<Product, Long>, JpaSpecificationExecutor<Product> {
@Transactional
fun findByName(name: String, caseInsensitive: Boolean, includeBlank: Boolean): List<Product> {
var specification = if (caseInsensitive) {
Specifications.nameEqualsCaseInsensitive(name)
} else {
Specifications.nameEquals(name)
}
if (includeBlank) {
specification = specification.or(Specifications.nameEquals(""))
}
return findAll(specification)
}
object Specifications {
fun nameEquals(name: String) = Specification<Product> { root, _, criteriaBuilder ->
check(criteriaBuilder.javaClass.getName().startsWith("org.hibernate"))
criteriaBuilder.equal(root.get<String>("name"), name)
}
fun nameEqualsCaseInsensitive(name: String) = Specification<Product> { root, _, criteriaBuilder ->
criteriaBuilder.equal(criteriaBuilder.lower(root.get("name")), name.toLowerCase())
}
}
}
In Micronaut Data, the preferred way is to have build-time generated queries. It’s recommended to use the Criteria-based API only for queries that need to be generated dynamically at runtime. |
5 Micronaut Data Hibernate Reactive
Hibernate Reactive brings reactive programming to traditional JPA.
By using Hibernate Reactive in combination with Micronaut Data you can use the same features such as repositories, JPA criteria etc., but in a reactive way.
For more information about Hibernate Reactive refer to the official documentation.
Include the Hibernate Reactive Micronaut Data support dependency:
implementation("io.micronaut.data:micronaut-data-hibernate-reactive")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-hibernate-reactive</artifactId>
</dependency>
Hibernate Reactive in Micronaut Data requires Hibernate 6 |
The configuration differs from the ordinary Hibernate quick start since Hibernate Reactive does not use traditional JDBC drivers but instead custom drivers provided by the Vert.x project. You need to select an appropriate driver for your database:
For MySQL:
implementation("io.vertx:vertx-mysql-client")
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-mysql-client</artifactId>
</dependency>
For Postgres:
implementation("io.vertx:vertx-pg-client")
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-pg-client</artifactId>
</dependency>
For Microsoft SQLServer:
implementation("io.vertx:vertx-mssql-client")
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-mssql-client</artifactId>
</dependency>
For Oracle:
implementation("io.vertx:vertx-oracle-client")
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-oracle-client</artifactId>
</dependency>
Then configure it based on the Micronaut SQL Hibernate Reactive support.
jpa.default.reactive=true
jpa.default.properties.hibernate.hbm2ddl.auto=create-drop
jpa.default.properties.hibernate.show_sql=true
jpa.default.properties.hibernate.connection.url=jdbc:mysql://localhost:3307/my_db
jpa.default.properties.hibernate.connection.username=myUser
jpa.default.properties.hibernate.connection.password=myPassword
jpa:
default:
reactive: true
properties:
hibernate:
hbm2ddl:
auto: create-drop
show_sql: true
connection:
url: jdbc:mysql://localhost:3307/my_db
username: myUser
password: myPassword
[jpa]
[jpa.default]
reactive=true
[jpa.default.properties]
[jpa.default.properties.hibernate]
show_sql=true
[jpa.default.properties.hibernate.hbm2ddl]
auto="create-drop"
[jpa.default.properties.hibernate.connection]
url="jdbc:mysql://localhost:3307/my_db"
username="myUser"
password="myPassword"
jpa {
'default' {
reactive = true
properties {
hibernate {
hbm2ddl {
auto = "create-drop"
}
show_sql = true
connection {
url = "jdbc:mysql://localhost:3307/my_db"
username = "myUser"
password = "myPassword"
}
}
}
}
}
{
jpa {
default {
reactive = true
properties {
hibernate {
hbm2ddl {
auto = "create-drop"
}
show_sql = true
connection {
url = "jdbc:mysql://localhost:3307/my_db"
username = "myUser"
password = "myPassword"
}
}
}
}
}
}
{
"jpa": {
"default": {
"reactive": true,
"properties": {
"hibernate": {
"hbm2ddl": {
"auto": "create-drop"
},
"show_sql": true,
"connection": {
"url": "jdbc:mysql://localhost:3307/my_db",
"username": "myUser",
"password": "myPassword"
}
}
}
}
}
}
Hibernate Reactive is non-blocking, so the repository interfaces and classes you define should extend one of the reactive repository interfaces:
Interface | Description |
---|---|
ReactiveStreamsCrudRepository | Extends GenericRepository and adds CRUD methods that return Publisher |
ReactorCrudRepository | Extends ReactiveStreamsCrudRepository and uses Reactor return types |
RxJavaCrudRepository | Extends GenericRepository and adds CRUD methods that return RxJava 2 types |
CoroutineCrudRepository | Extends GenericRepository and uses Kotlin coroutines for reactive CRUD operations |
ReactiveStreamsJpaSpecificationExecutor | Reactive JPA Criteria executor |
ReactorJpaSpecificationExecutor | Reactive JPA Criteria executor that exposes methods using Reactor |
The following is an example Hibernate Reactive repository:
@Repository // (1)
interface BookRepository extends ReactorCrudRepository<Book, Long> { // (2)
Mono<Book> find(String title);
Mono<BookDTO> findOne(String title);
Flux<Book> findByPagesGreaterThan(int pageCount, Pageable pageable);
Mono<Page<Book>> findByTitleLike(String title, Pageable pageable);
Mono<Slice<Book>> list(Pageable pageable);
@Transactional
default Mono<Void> findByIdAndUpdate(Long id, Consumer<Book> bookConsumer) {
return findById(id).map(book -> {
bookConsumer.accept(book);
return book;
}).then();
}
@Override
Mono<Book> save(Book entity);
@Override
Mono<Book> update(Book newBook);
Mono<Void> update(@Id Long id, int pages);
@Override
Mono<Long> deleteAll();
Mono<Void> delete(String title);
}
@Repository // (1)
abstract class BookRepository implements ReactorCrudRepository<Book, Long> { // (2)
abstract Mono<Book> find(String title);
abstract Mono<Page<Book>> findByTitleLike(String title, Pageable pageable);
abstract Mono<BookDTO> findOne(String title);
abstract Flux<Book> findByPagesGreaterThan(int pageCount, Pageable pageable);
abstract Mono<Slice<Book>> list(Pageable pageable);
abstract Mono<Book> save(Book entity);
@Transactional
Mono<Void> findByIdAndUpdate(Long id, Consumer<Book> bookConsumer) {
return findById(id).map(book -> {
bookConsumer.accept(book)
return book
}).then()
}
abstract Mono<Book> update(Book newBook);
abstract Mono<Void> update(@Id Long id, int pages);
@Override
abstract Mono<Long> deleteAll();
abstract Mono<Void> delete(String title);
}
@Repository // (1)
interface BookRepository : CoroutineCrudRepository<Book, Long> { // (2)
suspend fun find(title: String): Book
suspend fun findOne(title: String): BookDTO
suspend fun findByPagesGreaterThan(pageCount: Int, pageable: Pageable): List<Book>
suspend fun findByTitleLike(title: String, pageable: Pageable): Page<Book>
suspend fun list(pageable: Pageable): Slice<Book>
suspend fun save(entity: Book): Book
@Query("INSERT INTO Book(title, pages) VALUES (:title, :pages)")
suspend fun insert(title: String, pages: Int)
@Transactional
suspend fun findByIdAndUpdate(id: Long, bookConsumer: Consumer<Book?>) {
bookConsumer.accept(findById(id))
}
suspend fun update(newBook: Book): Book
suspend fun update(@Id id: Long?, pages: Int)
suspend fun delete(title: String)
}
1 | The interface is annotated with @Repository |
2 | The ReactorCrudRepository interface takes two generic arguments, the entity type (in this case Book ) and the ID type (in this case Long ) |
Saving an Instance (Create)
To save an instance use the save
method of the ReactorCrudRepository
interface:
Book book = new Book();
book.setTitle("The Stand");
book.setPages(1000);
bookRepository.save(book).block();
Book book = new Book(title:"The Stand", pages:1000)
bookRepository.save(book).block()
var book = Book(0, "The Stand", 1000)
bookRepository.save(book)
Retrieving an Instance (Read)
To read a book back use findById
:
book = bookRepository.findById(id).block();
book = bookRepository.findById(id).block()
book = bookRepository.findById(id)!!
Updating an Instance (Update)
To update an instance we use a custom method to do an update in a transaction:
bookRepository.findByIdAndUpdate(id, foundBook -> {
foundBook.setTitle("Changed");
}).block();
bookRepository.findByIdAndUpdate(id) {
it.title = "Changed"
}.block()
bookRepository.findByIdAndUpdate(id) {
it!!.title = "Changed"
}
Deleting an Instance (Delete)
To delete an instance use deleteById
:
bookRepository.deleteById(id).block();
bookRepository.deleteById(id).block()
bookRepository.deleteById(id)
The examples are using block to retrieve the result, in your application you should never block the reactive repository as it can lead to performance problems, and it might not be supported by the backing implementation.
|
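For illustration, a minimal non-blocking sketch (assuming the reactive BookRepository above and that Book exposes a getId() accessor) composes the operations instead of blocking:
Book book = new Book();
book.setTitle("The Stand");
book.setPages(1000);
bookRepository.save(book)
        .flatMap(saved -> bookRepository.findByIdAndUpdate(saved.getId(),
                found -> found.setTitle("Changed"))) // the update runs inside a transaction
        .subscribe(); // in a real application, return the publisher (for example from a controller) instead of subscribing manually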
See the guide for Access a Database with Micronaut Data and Hibernate Reactive to learn more. |
6 Micronaut Data JDBC and R2DBC
Micronaut Data JDBC / R2DBC is an implementation that pre-computes native SQL queries (given a particular database dialect) and provides a repository implementation that is a simple data mapper between a native result set and an entity.
Micronaut Data JDBC / R2DBC supports all the features of Micronaut Data for JPA including dynamic finders, pagination, projections, Data Transfer Objects (DTO), Batch Updates, Optimistic locking and so on.
However, Micronaut Data JDBC / R2DBC is not an Object Relational Mapping (ORM) implementation and does not and will not include any of the following concepts:
-
Lazy Loading or Proxying of Associations
-
Dirty Checking
-
Persistence Contexts / Sessions
-
First Level Caching and Entity Proxies
Micronaut Data JDBC / R2DBC is designed for users who prefer a lower-level experience and working directly with SQL.
Micronaut Data JDBC / R2DBC is useful for implementing the majority of the simple SQL queries that exist in a typical application and does not include any runtime query building DSLs. For more complex queries Micronaut Data JDBC / R2DBC can be paired with one of the many great existing Java SQL DSLs out there like JOOQ, QueryDSL, Requery or even JPA. |
6.1 JDBC
Micronaut Data JDBC is designed for users who prefer a lower-level experience and working directly with SQL.
The following sections contain JDBC-specific configuration and documentation.
6.1.1 Quick Start
The quickest way to get started is to create a new Micronaut application with Micronaut Launch and choose the data-jdbc feature, a database driver and a database migration framework. This can also be done via the CLI.
You can also find a great guide on building Micronaut Data JDBC applications including sample code in a variety of languages in the Micronaut Guide: Access a Database with Micronaut Data JDBC |
Clicking on one of the links in the table below will take you to Micronaut Launch with the appropriate options already pre-configured with your selected language and build tool:
Launch links are available for Java, Kotlin and Groovy, each with either Gradle or Maven.
# For Maven add: --build maven
$ mn create-app --lang java example --features data-jdbc,flyway,mysql,jdbc-hikari
Or via curl:
# For Maven add to the URL: &build=maven
$ curl "https://launch.micronaut.io/demo.zip?lang=java&features=data-jdbc,flyway,mysql,jdbc-hikari" -o demo.zip && unzip demo.zip -d demo && cd demo
The generated application will have a compile-scoped dependency on the micronaut-data-jdbc module and will use MySQL, since we passed the mysql feature, which adds a dependency on the JDBC driver for MySQL:
implementation("io.micronaut.data:micronaut-data-jdbc")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-jdbc</artifactId>
</dependency>
You should also ensure you have the JDBC driver and connection pool dependencies configured:
runtimeOnly("io.micronaut.sql:micronaut-jdbc-hikari")
<dependency>
<groupId>io.micronaut.sql</groupId>
<artifactId>micronaut-jdbc-hikari</artifactId>
<scope>runtime</scope>
</dependency>
The annotation processor needs the Micronaut Data processor dependency properly set up to enable compile-time generation and evaluation:
annotationProcessor("io.micronaut.data:micronaut-data-processor")
<annotationProcessorPaths>
<path>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-processor</artifactId>
</path>
</annotationProcessorPaths>
For Kotlin, add the micronaut-data-processor dependency in kapt or ksp scope, and for Groovy add micronaut-data-processor in compileOnly scope.
|
Next up you need to configure at least one data source. The following snippet from the application configuration file is an example of configuring the default JDBC data source:
datasources.default.url=jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER
datasources.default.driverClassName=org.h2.Driver
datasources.default.username=sa
datasources.default.password=
datasources.default.schema-generate=CREATE_DROP
datasources.default.dialect=H2
datasources:
default:
url: jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER
driverClassName: org.h2.Driver
username: sa
password: ''
schema-generate: CREATE_DROP
dialect: H2
[datasources]
[datasources.default]
url="jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName="org.h2.Driver"
username="sa"
password=""
schema-generate="CREATE_DROP"
dialect="H2"
datasources {
'default' {
url = "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schemaGenerate = "CREATE_DROP"
dialect = "H2"
}
}
{
datasources {
default {
url = "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schema-generate = "CREATE_DROP"
dialect = "H2"
}
}
}
{
"datasources": {
"default": {
"url": "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"schema-generate": "CREATE_DROP",
"dialect": "H2"
}
}
}
The schema-generate setting is only useful for demos and testing trivial examples; for production use it is recommended you pair Micronaut Data with a SQL migration tool such as Flyway or Liquibase.
|
To retrieve objects from the database you need to define a class annotated with @MappedEntity. Note that this is a meta annotation and in fact if you prefer you can use JPA annotations (only a subset are supported, more on that later). If you wish to use JPA annotations include the following compileOnly
scoped dependency:
compileOnly("jakarta.persistence:jakarta.persistence-api")
<dependency>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
<scope>provided</scope>
</dependency>
To use JPA annotations in the javax.persistence
package use:
compileOnly("jakarta.persistence:jakarta.persistence-api")
<dependency>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
<scope>provided</scope>
</dependency>
If you want to use JPA annotations in your entities with Micronaut Data JDBC, we strongly recommend you use jakarta.persistence annotations. Micronaut Data will remove support for javax.persistence annotations in the future.
|
As above, since only the annotations are used, the dependency can be included for compilation only and not at runtime, so you don’t drag along the rest of the API, reducing your JAR file size.
You can then define an @Entity
:
package example;
import jakarta.persistence.*;
@Entity
public class Book {
@Id
@GeneratedValue
private Long id;
private String title;
private int pages;
public Book(String title, int pages) {
this.title = title;
this.pages = pages;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getTitle() {
return title;
}
public int getPages() {
return pages;
}
}
package example
import jakarta.persistence.*
@Entity
class Book {
@Id
@GeneratedValue
Long id
private String title
private int pages
Book(String title, int pages) {
this.title = title
this.pages = pages
}
String getTitle() {
return title
}
int getPages() {
return pages
}
}
package example
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.Id
@Entity
data class Book(@Id
@GeneratedValue
var id: Long,
var title: String,
var pages: Int = 0)
Followed by an interface that extends from CrudRepository
package example;
import io.micronaut.core.annotation.NonNull;
import io.micronaut.data.annotation.*;
import io.micronaut.data.annotation.sql.Procedure;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.*;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@JdbcRepository(dialect = Dialect.H2) // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
Book find(String title);
}
package example
import io.micronaut.core.annotation.NonNull
import io.micronaut.data.annotation.*
import io.micronaut.data.annotation.sql.Procedure
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.*
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
import java.util.List
@JdbcRepository(dialect = Dialect.H2) // (1)
interface BookRepository extends CrudRepository<Book, Long> { // (2)
Book find(String title);
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.core.annotation.NonNull
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.Query
import io.micronaut.data.annotation.sql.Procedure
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.*
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
import jakarta.transaction.Transactional
@JdbcRepository(dialect = Dialect.H2) // (1)
interface BookRepository : CrudRepository<Book, Long> { // (2)
@Executable
fun find(title: String): Book
}
1 | The interface is annotated with @JdbcRepository and specifies a dialect of H2 used to generate queries |
2 | The CrudRepository interface takes two generic arguments, the entity type (in this case Book) and the ID type (in this case Long) |
You can now perform CRUD (Create, Read, Update, Delete) operations on the entity. The implementation of example.BookRepository
is created at compilation time. To obtain a reference to it simply inject the bean:
@Inject BookRepository bookRepository;
@Inject @Shared BookRepository bookRepository
@Inject
lateinit var bookRepository: BookRepository
Saving an Instance (Create)
To save an instance use the save
method of the CrudRepository
interface:
Book book = new Book("The Stand", 1000);
bookRepository.save(book);
Book book = new Book("The Stand", 1000)
bookRepository.save(book)
var book = Book(0,"The Stand", 1000)
bookRepository.save(book)
Unlike the JPA implementation there is no dirty checking, so save always performs a SQL INSERT. For batch updates use an update method (see the following section).
|
Retrieving an Instance (Read)
To read a book back use findById
:
book = bookRepository.findById(id).orElse(null);
book = bookRepository.findById(id).orElse(null)
book = bookRepository.findById(id).orElse(null)
Updating an Instance (Update)
With Micronaut Data JDBC, you must manually implement an update method, since the JDBC implementation has no dirty checking or notion of a persistence session. You therefore have to define explicit update methods in your repository. For example:
void update(@Id Long id, int pages);
void update(@Id Long id, String title);
void update(@Id Long id, int pages);
void update(@Id Long id, String title);
fun update(@Id id: Long?, pages: Int)
fun update(@Id id: Long?, title: String)
Which can then be called like so:
bookRepository.update(book.getId(), "Changed");
bookRepository.update(book.getId(), "Changed")
bookRepository.update(book.id, "Changed")
Deleting an Instance (Delete)
To delete an instance use deleteById
:
bookRepository.deleteById(id);
bookRepository.deleteById(id)
bookRepository.deleteById(id)
Congratulations, you have implemented your first Micronaut Data JDBC repository! Read on to find out more.
6.1.2 Configuration
JDBC driver
Micronaut Data JDBC requires that an appropriate java.sql.DataSource
bean is configured.
You can either do this manually or use the Micronaut JDBC module which provides out-of-the-box support for configuring connection pooling with either Tomcat JDBC, Hikari, Commons DBCP or Oracle UCP.
SQL Logging
You can enable SQL logging by enabling trace logging for the io.micronaut.data.query
logger. For example in logback.xml
:
<logger name="io.micronaut.data.query" level="trace" />
Creating the Schema
To create the database schema it is recommended you pair Micronaut Data with a SQL migration tool such as Flyway or Liquibase.
SQL migration tools provide more complete support for creating and evolving your schema across a range of databases.
If you want to quickly test out Micronaut Data then you can set the schema-generate
option of the data source to create-drop
as well as the appropriate schema name:
Most database migration tools use a JDBC driver to make DB changes. If you use R2DBC you need to configure a JDBC data source separately. |
schema-generate
datasources.default.url=jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER
datasources.default.driverClassName=org.h2.Driver
datasources.default.username=sa
datasources.default.password=
datasources.default.schema-generate=CREATE_DROP
datasources.default.dialect=H2
datasources:
default:
url: jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER
driverClassName: org.h2.Driver
username: sa
password: ''
schema-generate: CREATE_DROP
dialect: H2
[datasources]
[datasources.default]
url="jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName="org.h2.Driver"
username="sa"
password=""
schema-generate="CREATE_DROP"
dialect="H2"
datasources {
'default' {
url = "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schemaGenerate = "CREATE_DROP"
dialect = "H2"
}
}
{
datasources {
default {
url = "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER"
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
schema-generate = "CREATE_DROP"
dialect = "H2"
}
}
}
{
"datasources": {
"default": {
"url": "jdbc:h2:mem:devDb;LOCK_TIMEOUT=10000;DB_CLOSE_ON_EXIT=FALSE;NON_KEYWORDS=USER",
"driverClassName": "org.h2.Driver",
"username": "sa",
"password": "",
"schema-generate": "CREATE_DROP",
"dialect": "H2"
}
}
}
The schema-generate
option is currently only recommended for simple applications, testing and demos and is not considered production-ready. The dialect set in configuration is the dialect that will be used to generate the schema.
Setting the Dialect
As seen in the configuration above you should also configure the dialect. Although queries are precomputed in the repository, some cases (like pagination) still require the dialect to be specified. The following table summarizes the supported dialects:
Dialect | Description
H2 | The H2 database (typically used for in-memory testing)
MYSQL | MySQL 5.5 or above
POSTGRES | Postgres 9.5 or above
SQL_SERVER | SQL Server 2012 or above
ORACLE | Oracle 12c or above
The dialect setting in configuration does not replace the need to ensure the correct dialect is set at the repository. If the dialect is H2 in configuration, the repository should have @JdbcRepository(dialect = Dialect.H2) / @R2dbcRepository(dialect = Dialect.H2) . Because repositories are computed at compile time, the configuration value is not known at that time.
|
See the guide for Access a Database with Micronaut Data JDBC to learn more. |
6.2 R2DBC
Micronaut Data R2DBC is designed for users who prefer a lower-level experience and working directly with SQL and wish to build non-blocking, reactive applications.
The following sections contain R2DBC-specific configuration and documentation.
6.2.1 Quick Start
The quickest way to get started is to create a new Micronaut application with Micronaut Launch and choose the data-r2dbc feature, a database driver and a database migration framework. This can also be done via the CLI.
Clicking on one of the links in the table below will take you to Micronaut Launch with the appropriate options already pre-configured with your selected language and build tool:
Launch links are available for Java, Kotlin and Groovy, each with either Gradle or Maven.
# For Maven add: --build maven
$ mn create-app --lang java example --features data-r2dbc,flyway,mysql
Or via curl:
# For Maven add to the URL: &build=maven
$ curl "https://launch.micronaut.io/demo.zip?lang=java&features=data-r2dbc,flyway,mysql" -o demo.zip && unzip demo.zip -d demo && cd demo
The generated application will use MySQL, since we passed the mysql feature, which adds a dependency on the R2DBC driver for MySQL:
runtimeOnly("dev.miku:r2dbc-mysql")
<dependency>
<groupId>dev.miku</groupId>
<artifactId>r2dbc-mysql</artifactId>
<scope>runtime</scope>
</dependency>
And for Flyway, the JDBC driver:
runtimeOnly("mysql:mysql-connector-java")
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<scope>runtime</scope>
</dependency>
To create configurations for other drivers you can select the appropriate feature: oracle , postgres , sqlserver , h2 or mariadb .
|
Now define a SQL script that creates your initial schema in src/main/resources/db/migration
. For example:
V1__create-schema.sql
CREATE TABLE book(id SERIAL NOT NULL PRIMARY KEY, title VARCHAR(255), pages INT, author_id BIGINT NOT NULL);
CREATE TABLE author(id SERIAL NOT NULL PRIMARY KEY, name VARCHAR(255));
You can now configure your application to connect to the database using the application configuration file under src/main/resources
:
flyway.datasources.default.enabled=true
datasources.default.url=jdbc:mysql://localhost:3306/mydatabase
r2dbc.datasources.default.url=r2dbc:mysql:///mydatabase
flyway:
datasources:
default:
enabled: true
datasources:
default:
url: jdbc:mysql://localhost:3306/mydatabase
r2dbc:
datasources:
default: # (3)
url: r2dbc:mysql:///mydatabase
[flyway]
[flyway.datasources]
[flyway.datasources.default]
enabled=true
[datasources]
[datasources.default]
url="jdbc:mysql://localhost:3306/mydatabase"
[r2dbc]
[r2dbc.datasources]
[r2dbc.datasources.default]
url="r2dbc:mysql:///mydatabase"
flyway {
datasources {
'default' {
enabled = true
}
}
}
datasources {
'default' {
url = "jdbc:mysql://localhost:3306/mydatabase"
}
}
r2dbc {
datasources {
'default' {
url = "r2dbc:mysql:///mydatabase"
}
}
}
{
flyway {
datasources {
default {
enabled = true
}
}
}
datasources {
default {
url = "jdbc:mysql://localhost:3306/mydatabase"
}
}
r2dbc {
datasources {
default {
url = "r2dbc:mysql:///mydatabase"
}
}
}
}
{
"flyway": {
"datasources": {
"default": {
"enabled": true
}
}
},
"datasources": {
"default": {
"url": "jdbc:mysql://localhost:3306/mydatabase"
}
},
"r2dbc": {
"datasources": {
"default": {
"url": "r2dbc:mysql:///mydatabase"
}
}
}
}
-
The enabled setting ensures the Flyway schema migration is applied. See Micronaut Flyway for more information.
-
The Flyway configuration needs a JDBC datasource. datasources.default.url configures one. See datasource configuration for more information.
-
r2dbc.datasources.default.url is used to configure the default R2DBC ConnectionFactory.
The R2DBC ConnectionFactory object can be injected anywhere in your code with dependency injection.
|
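As a minimal sketch (the holder class is hypothetical), the configured ConnectionFactory can be injected like any other bean:
package example;
import io.r2dbc.spi.ConnectionFactory;
import jakarta.inject.Singleton;
@Singleton
public class ConnectionFactoryHolder {
    private final ConnectionFactory connectionFactory; // the R2DBC connection factory configured above
    public ConnectionFactoryHolder(ConnectionFactory connectionFactory) { // injected by Micronaut
        this.connectionFactory = connectionFactory;
    }
    public ConnectionFactory getConnectionFactory() {
        return connectionFactory;
    }
}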
Now define a @MappedEntity
that maps to the author
table defined in the schema:
package example;
import io.micronaut.data.annotation.*;
import io.micronaut.serde.annotation.Serdeable;
@Serdeable
@MappedEntity
public class Author {
@GeneratedValue
@Id
private Long id;
private final String name;
public Author(String name) {
this.name = name;
}
public String getName() {
return name;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
}
package example
import io.micronaut.data.annotation.*
import io.micronaut.serde.annotation.Serdeable
@Serdeable
@MappedEntity
class Author {
@GeneratedValue
@Id
Long id
final String name
Author(String name) {
this.name = name
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.serde.annotation.Serdeable
@Serdeable
@MappedEntity
data class Author(val name: String) {
@GeneratedValue
@Id
var id: Long? = null
}
And a repository interface to access the database that extends from ReactiveStreamsCrudRepository
:
package example;
import io.micronaut.core.annotation.NonNull;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.r2dbc.annotation.R2dbcRepository;
import io.micronaut.data.repository.reactive.ReactiveStreamsCrudRepository;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import jakarta.validation.constraints.NotNull;
@R2dbcRepository(dialect = Dialect.POSTGRES) // (1)
public interface AuthorRepository extends ReactiveStreamsCrudRepository<Author, Long> {
@NonNull
@Override
Mono<Author> findById(@NonNull @NotNull Long aLong); // (2)
@NonNull
@Override
Flux<Author> findAll();
}
package example
import io.micronaut.core.annotation.NonNull
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.r2dbc.annotation.R2dbcRepository
import io.micronaut.data.repository.reactive.ReactiveStreamsCrudRepository
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import jakarta.validation.constraints.NotNull
@R2dbcRepository(dialect = Dialect.POSTGRES) // (1)
interface AuthorRepository extends ReactiveStreamsCrudRepository<Author, Long> {
@NonNull
@Override
Mono<Author> findById(@NonNull @NotNull Long aLong) // (2)
@NonNull
@Override
Flux<Author> findAll()
}
package example
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.r2dbc.annotation.R2dbcRepository
import io.micronaut.data.repository.reactive.ReactiveStreamsCrudRepository
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import jakarta.validation.constraints.NotNull
@R2dbcRepository(dialect = Dialect.MYSQL) // (1)
interface AuthorRepository : ReactiveStreamsCrudRepository<Author, Long> {
override fun findById(id: @NotNull Long): Mono<Author> // (2)
override fun findAll(): Flux<Author>
}
1 | The @R2dbcRepository annotation can be used to specify the datasource and dialect |
2 | You can override methods from the super interface to specialize the default Publisher return type with a concrete implementation |
You can now inject this interface into controllers and use it to perform R2DBC queries:
package example;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
@Controller("/authors")
public class AuthorController {
private final AuthorRepository repository;
public AuthorController(AuthorRepository repository) {
this.repository = repository;
}
@Get
Flux<Author> all() { // (1)
return repository.findAll();
}
@Get("/id")
Mono<Author> get(Long id) { // (2)
return repository.findById(id);
}
}
package example
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
@Controller("/authors")
class AuthorController {
private final AuthorRepository repository
AuthorController(AuthorRepository repository) {
this.repository = repository
}
@Get
Flux<Author> all() { // (1)
return repository.findAll()
}
@Get("/id")
Mono<Author> get(Long id) { // (2)
return repository.findById(id)
}
}
package example
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
@Controller("/authors")
class AuthorController(private val repository: AuthorRepository) {
@Get
fun all(): Flux<Author> { // (1)
return repository.findAll()
}
@Get("/id")
fun get(id: Long): Mono<Author> { // (2)
return repository.findById(id)
}
}
1 | By returning a reactive type that emits many items you can stream data (either Flowable or Flux ) |
2 | By returning a reactive type that emits a single item you return the entire response (either Single or Mono ) |
6.2.2 Configuration
R2DBC driver
Micronaut Data R2DBC requires driver configuration using Micronaut R2DBC
Drivers are available for the H2 database, MySQL, MariaDB, Postgres, SQL Server and Oracle; see the Micronaut R2DBC documentation for the corresponding driver dependencies.
SQL Logging
You can enable SQL logging by enabling trace logging for the io.micronaut.data.query
logger. For example in logback.xml
:
<logger name="io.micronaut.data.query" level="trace" />
Creating the Schema
To create the database schema it is recommended you pair Micronaut Data with a SQL migration tool such as Flyway or Liquibase.
SQL migration tools provide more complete support for creating and evolving your schema across a range of databases.
Most database migration tools use a JDBC driver to make DB changes. Therefore, in addition to the R2DBC driver, you will likely need to include a JDBC driver module for the schema migration to work. |
If you want to quickly test out Micronaut Data R2DBC then you can set the schema-generate
option of the data source to create-drop
as well as the appropriate schema name:
schema-generate
micronaut.application.name=example
r2dbc.datasources.default.db-type=postgresql
r2dbc.datasources.default.schema-generate=CREATE_DROP
r2dbc.datasources.default.dialect=POSTGRES
datasources.default.db-type=postgresql
datasources.default.schema-generate=CREATE_DROP
datasources.default.dialect=POSTGRES
micronaut:
application:
name: example
r2dbc:
datasources:
default:
db-type: postgresql
schema-generate: CREATE_DROP
dialect: POSTGRES
datasources:
default:
db-type: postgresql
schema-generate: CREATE_DROP
dialect: POSTGRES
[micronaut]
[micronaut.application]
name="example"
[r2dbc]
[r2dbc.datasources]
[r2dbc.datasources.default]
db-type="postgresql"
schema-generate="CREATE_DROP"
dialect="POSTGRES"
[datasources]
[datasources.default]
db-type="postgresql"
schema-generate="CREATE_DROP"
dialect="POSTGRES"
micronaut {
application {
name = "example"
}
}
r2dbc {
datasources {
'default' {
dbType = "postgresql"
schemaGenerate = "CREATE_DROP"
dialect = "POSTGRES"
}
}
}
datasources {
'default' {
dbType = "postgresql"
schemaGenerate = "CREATE_DROP"
dialect = "POSTGRES"
}
}
{
micronaut {
application {
name = "example"
}
}
r2dbc {
datasources {
default {
db-type = "postgresql"
schema-generate = "CREATE_DROP"
dialect = "POSTGRES"
}
}
}
datasources {
default {
db-type = "postgresql"
schema-generate = "CREATE_DROP"
dialect = "POSTGRES"
}
}
}
{
"micronaut": {
"application": {
"name": "example"
}
},
"r2dbc": {
"datasources": {
"default": {
"db-type": "postgresql",
"schema-generate": "CREATE_DROP",
"dialect": "POSTGRES"
}
}
},
"datasources": {
"default": {
"db-type": "postgresql",
"schema-generate": "CREATE_DROP",
"dialect": "POSTGRES"
}
}
}
The schema-generate
option is currently only recommended for simple applications, testing and demos and is not considered production-ready. The dialect set in configuration is the dialect that will be used to generate the schema.
Setting the Dialect
As seen in the configuration above you should also configure the dialect. Although queries are precomputed in the repository, some cases (like pagination) still require the dialect to be specified. The following table summarizes the supported dialects:
Dialect | Description
H2 | The H2 database (typically used for in-memory testing)
MYSQL | MySQL 5.5 or above
POSTGRES | Postgres 9.5 or above
SQL_SERVER | SQL Server 2012 or above
ORACLE | Oracle 12c or above
The dialect setting in configuration does not replace the need to ensure the correct dialect is set at the repository. If the dialect is H2 in configuration, the repository should have @R2dbcRepository(dialect = Dialect.H2) . Because repositories are computed at compile time, the configuration value is not known at that time.
|
6.2.3 Reactive repositories
The following table summarizes the reactive repository interfaces that come with Micronaut Data and are recommended to be used with R2DBC:
Interface | Description
ReactiveStreamsCrudRepository | Extends GenericRepository and adds CRUD methods that return Publisher
ReactorCrudRepository | Extends ReactiveStreamsCrudRepository and uses Reactor return types
RxJavaCrudRepository | Extends GenericRepository and adds CRUD methods that return RxJava 2 types
CoroutineCrudRepository | Extends GenericRepository and uses Kotlin coroutines for reactive CRUD operations
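For example, a minimal sketch (the ReactiveBookRepository name is illustrative and reuses the Book entity from the Quick Start) of an R2DBC repository built on one of the reactive interfaces above:
package example;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.r2dbc.annotation.R2dbcRepository;
import io.micronaut.data.repository.reactive.ReactorCrudRepository;
import reactor.core.publisher.Flux;
@R2dbcRepository(dialect = Dialect.POSTGRES)
public interface ReactiveBookRepository extends ReactorCrudRepository<Book, Long> {
    Flux<Book> findByPagesGreaterThan(int pages); // query generated at compile time
}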
6.2.4 Transactions
Micronaut Data R2DBC features Reactive transaction management support whereby you can declare jakarta.transaction.Transactional
on your methods and a reactive transaction will be initiated, for example:
package example;
import reactor.core.publisher.Mono;
import jakarta.inject.Singleton;
import jakarta.transaction.Transactional;
import java.util.Arrays;
@Singleton
public class AuthorService {
private final AuthorRepository authorRepository;
private final BookRepository bookRepository;
public AuthorService(AuthorRepository authorRepository, BookRepository bookRepository) { // (1)
this.authorRepository = authorRepository;
this.bookRepository = bookRepository;
}
@Transactional // (2)
Mono<Void> setupData() {
return Mono.from(authorRepository.save(new Author("Stephen King")))
.flatMapMany((author -> bookRepository.saveAll(Arrays.asList(
new Book("The Stand", 1000, author),
new Book("The Shining", 400, author)
))))
.then(Mono.from(authorRepository.save(new Author("James Patterson"))))
.flatMapMany((author ->
bookRepository.save(new Book("Along Came a Spider", 300, author))
)).then();
}
}
package example
import reactor.core.publisher.Mono
import jakarta.inject.Singleton
import jakarta.transaction.Transactional
@Singleton
class AuthorService {
private final AuthorRepository authorRepository
private final BookRepository bookRepository
AuthorService(AuthorRepository authorRepository, BookRepository bookRepository) { // (1)
this.authorRepository = authorRepository
this.bookRepository = bookRepository
}
@Transactional // (2)
Mono<Void> setupData() {
return Mono.from(authorRepository.save(new Author("Stephen King")))
.flatMapMany((author -> bookRepository.saveAll([
new Book("The Stand", 1000, author),
new Book("The Shining", 400, author)
])))
.then(Mono.from(authorRepository.save(new Author("James Patterson"))))
.flatMapMany((author ->
bookRepository.save(new Book("Along Came a Spider", 300, author))
)).then()
}
}
package example
import reactor.core.publisher.Mono
import jakarta.inject.Singleton
import jakarta.transaction.Transactional
@Singleton
open class AuthorService(
private val authorRepository: AuthorRepository,
private val bookRepository: BookReactiveRepository) { // (1)
@Transactional // (2)
open fun setupData(): Mono<Void> {
return Mono.from(authorRepository.save(Author("Stephen King")))
.flatMapMany { author: Author ->
bookRepository.saveAll(listOf(
Book("The Stand", 1000, author),
Book("The Shining", 400, author)
))
}
.then(Mono.from(authorRepository.save(Author("James Patterson"))))
.flatMapMany { author: Author ->
bookRepository.save(Book("Along Came a Spider", 300, author))
}.then()
}
}
1 | Supporting repositories are injected |
2 | @Transactional is used to declare a transaction |
This same declarative logic can be done programmatically as well by injecting the R2dbcOperations interface:
Flux.from(operations.withTransaction(status ->
Flux.from(authorRepository.save(new Author("Stephen King")))
.flatMap((author -> bookRepository.saveAll(Arrays.asList(
new Book("The Stand", 1000, author),
new Book("The Shining", 400, author)
))))
.thenMany(Flux.from(authorRepository.save(new Author("James Patterson"))))
.flatMap((author ->
bookRepository.save(new Book("Along Came a Spider", 300, author))
)).then()
)).collectList().block();
Flux.from(operations.withTransaction(status ->
Flux.from(authorRepository.save(new Author("Stephen King")))
.flatMap((author -> bookRepository.saveAll([
new Book("The Stand", 1000, author),
new Book("The Shining", 400, author)
])))
.thenMany(Flux.from(authorRepository.save(new Author("James Patterson"))))
.flatMap((author ->
bookRepository.save(new Book("Along Came a Spider", 300, author))
)).then()
)).collectList().block()
Flux.from(operations.withTransaction {
Flux.from(authorRepository.save(Author("Stephen King")))
.flatMap { author: Author ->
bookRepository.saveAll(listOf(
Book("The Stand", 1000, author),
Book("The Shining", 400, author)
))
}
.thenMany(Flux.from(authorRepository.save(Author("James Patterson"))))
.flatMap { author: Author -> bookRepository.save(Book("Along Came a Spider", 300, author)) }.then()
}).collectList().block()
In the above case the withTransaction
method is used to initiate a transaction.
Note however, that transaction management is possibly one of the most challenging areas to get right in reactive programming since you need to propagate the transaction across the reactive flow.
Most R2DBC drivers are implemented in Project Reactor which has the ability to propagate a context across reactive operators and Micronaut Data R2DBC will populate this context and ensure the transaction is re-used if it is found within it.
However, it is still easy for the context to be lost, since different libraries that implement Reactive Streams don’t propagate contexts between each other; if you include RxJava or any other reactive operator library, the context is likely to be lost.
To ensure this doesn’t happen, it is recommended that you annotate write operations that participate in a transaction as MANDATORY. This makes it impossible to run these methods without a surrounding transaction, so that if the transaction is somehow lost within the reactive flow the operations are not silently run in separate transactions:
@NonNull
@Override
@Transactional(Transactional.TxType.MANDATORY)
<S extends Book> Publisher<S> save(@NonNull @Valid @NotNull S entity);
@NonNull
@Override
@Transactional(Transactional.TxType.MANDATORY)
<S extends Book> Publisher<S> saveAll(@NonNull @Valid @NotNull Iterable<S> entities);
@NonNull
@Override
@Transactional(Transactional.TxType.MANDATORY)
<S extends Book> Publisher<S> save(@NonNull @Valid @NotNull S entity);
@NonNull
@Override
@Transactional(Transactional.TxType.MANDATORY)
<S extends Book> Publisher<S> saveAll(@NonNull @Valid @NotNull Iterable<S> entities);
@Transactional(Transactional.TxType.MANDATORY)
override suspend fun <S : Book> save(entity: S): S
@Transactional(Transactional.TxType.MANDATORY)
override fun <S : Book> saveAll(entities: Iterable<S>): Flow<S>
If the transaction is somehow lost during the reactive flow there are a couple of ways you can solve the problem. One way is to use the withTransaction method of the R2dbcOperations interface to obtain the current ReactiveTransactionStatus; you can then pass this instance into another execution of the withTransaction method, or pass it directly as the last argument to any method declared on the repository itself.
An example of the former approach, where the current transaction status is explicitly propagated, is presented below:
Flux.from(operations.withTransaction(status -> // (1)
Flux.from(authorRepository.save(new Author("Michael Crichton")))
.flatMap((author -> operations.withTransaction(status, (s) -> // (2)
bookRepository.saveAll(Arrays.asList(
new Book("Jurassic Park", 300, author),
new Book("Disclosure", 400, author)
)))))
)).collectList().block();
Flux.from(operations.withTransaction(status -> // (1)
Flux.from(authorRepository.save(new Author("Michael Crichton")))
.flatMap((author -> operations.withTransaction(status, (s) -> // (2)
bookRepository.saveAll([
new Book("Jurassic Park", 300, author),
new Book("Disclosure", 400, author)
]))))
)).collectList().block()
Flux.from(operations.withTransaction { status: ReactiveTransactionStatus<Connection> -> // (1)
Flux.from(authorRepository.save(Author("Michael Crichton")))
.flatMap { author: Author ->
operations.withTransaction(status) { // (2)
bookRepository.saveAll(listOf(
Book("Jurassic Park", 300, author),
Book("Disclosure", 400, author)
))
}
}
}).collectList().block()
1 | An outer withTransaction call starts the transaction |
2 | An inner call ensures the existing transaction is propagated |
6.2.5 Reactive Entity Events
Micronaut Data R2DBC supports the persistence events introduced in Micronaut Data 2.3 and above. Note, however, that event listeners should not block and should only perform operations that don’t incur any network I/O; if they do, a new thread should execute that logic.
Persistence events are most commonly used to pre-populate database properties prior to performing an insert (for example, encoding a password); these types of operations typically don’t involve blocking I/O and are safe to perform.
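For example, a minimal sketch (the Account entity and its secret property are hypothetical) of a persistence event that pre-populates a property before the insert without blocking I/O:
package example;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.annotation.event.PrePersist;
import java.util.Base64;
@MappedEntity
public class Account {
    @Id
    @GeneratedValue
    private Long id;
    private String secret;
    @PrePersist
    void encodeSecret() { // runs before the INSERT; purely CPU-bound, so safe in a reactive flow
        this.secret = Base64.getEncoder().encodeToString(secret.getBytes());
    }
    // getters and setters omitted for brevity
}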
See the guide for Access a Database with Micronaut Data R2DBC to learn more. |
6.3 Repositories
As seen in the Quick Start, JDBC / R2DBC repositories in Micronaut Data are defined as interfaces annotated with the @JdbcRepository or @R2dbcRepository annotation respectively.
In a multiple datasource scenario, the @Repository and @Transactional annotations can be used to specify the datasource configuration to use. By default, Micronaut Data will look for the default datasource.
For example:
@JdbcRepository(dialect = Dialect.ORACLE, dataSource = "inventoryDataSource") (1)
@io.micronaut.transaction.annotation.Transactional("inventoryDataSource") (2)
public interface PhoneRepository extends CrudRepository<Phone, Integer> {
Optional<Phone> findByAssetId(@NotNull Integer assetId);
}
1 | @JdbcRepository with a specific dialect and data source configuration 'inventoryDataSource' |
2 | @Transactional annotation, pointing to the data source configuration 'inventoryDataSource' |
The entity to treat as the root entity for the purposes of querying is established either from the method signature or from the generic type parameter specified to the GenericRepository interface.
If no root entity can be established then a compilation error will occur.
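As an illustration, a minimal sketch (the BookTitleRepository name is hypothetical; it reuses the Book entity from the Quick Start) where the root entity is established from the generic type parameters of GenericRepository:
package example;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.GenericRepository;
import java.util.List;
@JdbcRepository(dialect = Dialect.H2)
public interface BookTitleRepository extends GenericRepository<Book, Long> { // Book is the root entity
    List<Book> findByTitle(String title); // validated against the Book entity at compile time
}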
The same interfaces supported by the JPA implementation are supported by JDBC.
Note that because queries are computed at compilation time the dialect
you use must be specified on the repository.
It is recommended you test against your target dialect. The Testcontainers project is a great solution for this. If you must test against another dialect (like H2) then you can define a subinterface that @Replaces the repository with a different dialect for the scope of testing, as shown in the sketch after this note.
|
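A minimal sketch of that approach (the TestBookRepository name and the use of the test environment are assumptions made for illustration):
package example;
import io.micronaut.context.annotation.Replaces;
import io.micronaut.context.annotation.Requires;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
@Requires(env = "test") // only active when running tests
@Replaces(BookRepository.class) // replaces the production repository bean
@JdbcRepository(dialect = Dialect.H2)
public interface TestBookRepository extends BookRepository {
}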
Note that in addition to interfaces you can also define repositories as abstract classes:
package example;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.jdbc.runtime.JdbcOperations;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;
import jakarta.transaction.Transactional;
import java.sql.ResultSet;
import java.util.List;
import java.util.stream.Collectors;
@JdbcRepository(dialect = Dialect.H2)
public abstract class AbstractBookRepository implements CrudRepository<Book, Long> {
private final JdbcOperations jdbcOperations;
public AbstractBookRepository(JdbcOperations jdbcOperations) {
this.jdbcOperations = jdbcOperations;
}
@Transactional
public List<Book> findByTitle(String title) {
String sql = "SELECT * FROM Book AS book WHERE book.title = ?";
return jdbcOperations.prepareStatement(sql, statement -> {
statement.setString(1, title);
ResultSet resultSet = statement.executeQuery();
return jdbcOperations.entityStream(resultSet, Book.class).toList();
});
}
}
package example
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.jdbc.runtime.JdbcOperations
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository
import jakarta.transaction.Transactional
import java.sql.ResultSet
import java.util.stream.Collectors
@JdbcRepository(dialect = Dialect.H2)
abstract class AbstractBookRepository implements CrudRepository<Book, Long> {
private final JdbcOperations jdbcOperations
AbstractBookRepository(JdbcOperations jdbcOperations) {
this.jdbcOperations = jdbcOperations
}
@Transactional
List<Book> findByTitle(String title) {
String sql = "SELECT * FROM Book AS book WHERE book.title = ?"
return jdbcOperations.prepareStatement(sql, { statement ->
statement.setString(1, title)
ResultSet resultSet = statement.executeQuery()
return jdbcOperations.entityStream(resultSet, Book.class)
.toList()
})
}
}
package example
import io.micronaut.data.annotation.Repository
import io.micronaut.data.jdbc.runtime.JdbcOperations
import io.micronaut.data.repository.CrudRepository
import jakarta.transaction.Transactional
import kotlin.streams.toList
@Repository
abstract class AbstractBookRepository(private val jdbcOperations: JdbcOperations) : CrudRepository<Book, Long> {
@Transactional
open fun findByTitle(title: String): List<Book> {
val sql = "SELECT * FROM Book AS book WHERE book.title = ?"
return jdbcOperations.prepareStatement(sql) { statement ->
statement.setString(1, title)
val resultSet = statement.executeQuery()
jdbcOperations.entityStream(resultSet, Book::class.java)
.toList()
}
}
}
As you can see from the above example, using abstract classes can be useful as it allows you to combine generated repository methods with custom code that performs your own SQL queries.
The example above uses the JdbcOperations interface which simplifies executing JDBC queries within the context of transactions.
You can also integrate whichever other tool you wish to use to handle more complex queries, such as QueryDSL, JOOQ, Spring JdbcTemplate etc.
For example, to use Spring JdbcTemplate, add the following dependencies:
implementation("io.micronaut.data:micronaut-data-jdbc")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-jdbc</artifactId>
</dependency>
implementation("io.micronaut.data:micronaut-data-spring-jdbc")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-spring-jdbc</artifactId>
</dependency>
The following code illustrates an example that integrates a JdbcTemplate
instance as part of a JdbcRepository.
@JdbcRepository(dialect = Dialect.H2)
public abstract class AbstractBookRepository implements CrudRepository<@Valid Book, @NotNull Long> {
private final JdbcTemplate jdbcTemplate;
public AbstractBookRepository(DataSource dataSource) { // (1)
this.jdbcTemplate = new JdbcTemplate(DelegatingDataSource.unwrapDataSource(dataSource)); //(2)
}
@Transactional
public List<Book> findByTitle(@NonNull @NotNull String title) {
return jdbcTemplate.queryForList("SELECT * FROM Book AS book WHERE book.title = ?", title) // (3)
.stream()
.map(m -> new Book((Long) m.get("id"), (String) m.get("title"), (Integer) m.get("pages")))
.toList();
}
}
@JdbcRepository(dialect = Dialect.H2)
abstract class AbstractBookRepository implements CrudRepository<@Valid Book, @NotNull Long> {
private final JdbcTemplate jdbcTemplate;
AbstractBookRepository(DataSource dataSource) { // (1)
this.jdbcTemplate = new JdbcTemplate(DelegatingDataSource.unwrapDataSource(dataSource)); //(2)
}
@Transactional
List<Book> findByTitle(@NonNull @NotNull String title) {
return jdbcTemplate.queryForList('SELECT * FROM Book AS book WHERE book.title = ?', title) // (3)
.collect(m -> new Book(m.id as Long, m.title as String, m.pages as Integer))
}
}
@JdbcRepository(dialect = Dialect.H2)
abstract class AbstractBookRepository(dataSource: DataSource) : CrudRepository<@Valid Book, Long> { // (1)
private val jdbcTemplate: JdbcTemplate = JdbcTemplate(DelegatingDataSource.unwrapDataSource(dataSource)) //(2)
@Transactional
open fun findByTitle(title: String) = jdbcTemplate
.queryForList("SELECT * FROM Book AS book WHERE book.title = ?", title) // (3)
.map { m -> Book(m["id"] as Long, m["title"] as String, m["pages"] as Int) }
}
1 | Inject the java.sql.DataSource configured by the application. |
2 | Instantiate a JdbcTemplate object using the injected DataSource . |
3 | Now the JdbcTemplate API can be used to implement repository methods. |
In addition, the transaction manager for Spring JDBC needs to be set in application configuration.
datasources.default.transaction-manager=springJdbc
datasources:
default:
transaction-manager: springJdbc
[datasources]
[datasources.default]
transaction-manager="springJdbc"
datasources {
'default' {
transactionManager = "springJdbc"
}
}
{
datasources {
default {
transaction-manager = "springJdbc"
}
}
}
{
"datasources": {
"default": {
"transaction-manager": "springJdbc"
}
}
}
6.3.1 Accessing data
Unlike JPA/Hibernate, Micronaut Data JDBC / R2DBC is stateless and has no notion of a persistence session that requires state management.
Since there is no session, features like dirty checking are not supported. This has implications when defining repository methods for inserts and updates.
By default, when saving an entity with a method like save(MyEntity)
a SQL INSERT
is always performed since Micronaut Data has no way to know whether the entity is associated to a particular session.
If you wish to update an entity you should instead either use update(MyEntity)
or even better define an appropriate update
method to update only the data you want to update, for example:
void update(@Id Long id, int pages);
void update(@Id Long id, String title);
void update(@Id Long id, int pages);
void update(@Id Long id, String title);
fun update(@Id id: Long?, pages: Int)
fun update(@Id id: Long?, title: String)
By being explicit in defining the method as an update method Micronaut Data knows to execute an UPDATE
.
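A minimal usage sketch (assuming the BookRepository and Book entity from the Quick Start, with the update(@Id Long id, int pages) method above added to the repository):
Book book = bookRepository.save(new Book("The Stand", 1000)); // always executes a SQL INSERT
bookRepository.update(book.getId(), 1200); // the explicit update method executes a SQL UPDATE of the pages column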
6.3.2 Optimistic locking
Optimistic locking is a strategy where you note the current version of the record’s state and modify the record only when the version is still the same.
To enable optimistic locking for your entity add @Version annotated field with one of the types:
-
java.lang.Integer
-
java.lang.Long
-
java.lang.Short
-
Date-time type extending
java.time.Temporal
The field is going to be incremented (for number types) or replaced (for date types) on an update operation.
Micronaut Data will generate UPDATE
/DELETE
SQL queries with a version match: … WHERE rec.version = :currentVersion …
and if the update/delete doesn’t produce any result OptimisticLockException will be thrown.
@Entity
public class Student {
@Id
@GeneratedValue
private Long id;
@Version
private Long version;
@Entity
class Student {
@Id
@GeneratedValue
Long id
@Version
Long version
@Entity
data class Student(
@Id @GeneratedValue
var id: Long?,
@Version
val version: Long,
It’s possible to use @Version in a partial update or a delete method; in this case the version needs to match the version of the stored record.
@Repository
public interface StudentRepository extends CrudRepository<Student, Long> {
void update(@Id Long id, @Version Long version, String name);
void delete(@Id Long id, @Version Long version);
}
@Repository
interface StudentRepository extends CrudRepository<Student, Long> {
void update(@Id Long id, @Version Long version, String name)
void delete(@Id Long id, @Version Long version)
}
@Repository
interface StudentRepository : CrudRepository<Student, Long> {
fun update(@Id id: Long, @Version version: Long, name: String)
fun delete(@Id id: Long, @Version version: Long)
}
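A minimal usage sketch (assuming the StudentRepository above and that Student exposes getId() and getVersion() accessors):
import io.micronaut.data.exceptions.OptimisticLockException;
Student student = studentRepository.findById(id).orElseThrow();
try {
    studentRepository.update(student.getId(), student.getVersion(), "Denis");
} catch (OptimisticLockException e) {
    // another transaction changed the record after it was read, so the stored
    // version no longer matched and the update affected no rows
}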
6.3.3 Pessimistic Locking
Pessimistic locking is supported through the use of find*ForUpdate
methods.
@JdbcRepository(dialect = Dialect.POSTGRES)
public interface AccountBalanceRepository extends CrudRepository<AccountBalance, Long> {
AccountBalance findByIdForUpdate(Long id); (1)
@Transactional (2)
default void addToBalance(Long id, BigInteger amount) {
AccountBalance accountBalance = findByIdForUpdate(id); (3)
accountBalance.addAmount(amount);
update(accountBalance); (4)
}
}
1 | The ForUpdate suffix indicates that the selected record should be locked. |
2 | Both read and write operations are wrapped in a single transaction. |
3 | A locking read is performed, preventing other queries from accessing the record. |
4 | The record is updated safely. |
All find
methods can be declared as ForUpdate
:
@JdbcRepository(dialect = Dialect.POSTGRES)
public interface BookRepository extends CrudRepository<Book, Long> {
@Join("author")
Optional<Book> findByIdForUpdate(Long id);
List<Book> findAllOrderByTotalPagesForUpdate();
List<Book> findByTitleForUpdate(String title);
}
The queries generated for these methods make use of the FOR UPDATE
SQL clause or the UPDLOCK
and ROWLOCK
query hints in the case of SQL Server.
The semantics of the FOR UPDATE clause may vary depending on the database. Make sure to check the relevant documentation for your engine.
|
6.4 Repositories with Criteria API
In some cases, you need to build a query programmatically at runtime; for that, Micronaut Data implements a subset of the Jakarta Persistence Criteria API 3.0, which can be used with the Micronaut Data JDBC and R2DBC features. To utilize this feature add the following dependency:
implementation("jakarta.persistence:jakarta.persistence-api")
<dependency>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
</dependency>
To implement queries that cannot be defined at compile time, Micronaut Data introduces the JpaSpecificationExecutor repository interface, which can be used to extend your repository interface:
@JdbcRepository(dialect = Dialect.H2)
public interface PersonRepository extends CrudRepository<Person, Long>, JpaSpecificationExecutor<Person> {
}
@JdbcRepository(dialect = Dialect.H2)
interface PersonRepository extends CrudRepository<Person, Long>, JpaSpecificationExecutor<Person> {
}
@JdbcRepository(dialect = Dialect.H2)
interface PersonRepository : CrudRepository<Person, Long>, JpaSpecificationExecutor<Person> {
}
Each method expects a "specification" which is a functional interface with a set of Criteria API objects intended to build a query programmatically.
Micronaut Criteria API currently implements only a subset of the API. Most of it is internally used to create queries with predicates and projections.
The following JPA Criteria API features are currently not supported:
-
Joins with custom ON expressions and typed join methods like joinSet etc.
-
Sub-queries
-
Collection operations: isMember etc.
-
Custom or tuple result types
-
Transformation expressions like concat, substring etc.
-
Cases and functions
You can find more information about the Jakarta Persistence Criteria API 3.0 in the official API specification.
6.4.1 Querying
To find an entity or multiple entities you can use one of the following methods from JpaSpecificationExecutor interface:
Optional<Person> findOne(PredicateSpecification<Person> spec);
Optional<Person> findOne(QuerySpecification<Person> spec);
List<Person> findAll(PredicateSpecification<Person> spec);
List<Person> findAll(QuerySpecification<Person> spec);
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort);
List<Person> findAll(QuerySpecification<Person> spec, Sort sort);
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable);
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable);
Optional<Person> findOne(PredicateSpecification<Person> spec)
Optional<Person> findOne(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec)
List<Person> findAll(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort)
List<Person> findAll(QuerySpecification<Person> spec, Sort sort)
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable)
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable)
fun findOne(spec: PredicateSpecification<Person>?): Optional<Person>
fun findOne(spec: QuerySpecification<Person>?): Optional<Person>
fun findAll(spec: PredicateSpecification<Person>?): List<Person>
fun findAll(spec: QuerySpecification<Person>?): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: QuerySpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, pageable: Pageable): Page<Person>
fun findAll(spec: QuerySpecification<Person>?, pageable: Pageable): Page<Person>
As you can see, there are two variations of the findOne/findAll methods.
The first method expects a PredicateSpecification, which is a simple specification interface that can be implemented to return a predicate:
import static jakarta.persistence.criteria.*;
public interface PredicateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaBuilder criteriaBuilder (3)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria builder |
This interface can also be used for update and delete methods, and it provides or
and and
methods for combining multiple predicates.
The second interface is intended only for query criteria because it includes jakarta.persistence.criteria.CriteriaQuery
as a parameter.
import static jakarta.persistence.criteria.*;
public interface QuerySpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaQuery<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria query instance |
4 | The criteria builder |
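For example, a minimal sketch (a helper method that could sit alongside the Specifications class shown below, reusing the Person entity from those examples) that uses the CriteriaQuery to add an ORDER BY clause in addition to the predicate:
import io.micronaut.data.repository.jpa.criteria.QuerySpecification;
static QuerySpecification<Person> nameEqualsOrderedByAge(String name) {
    return (root, query, criteriaBuilder) -> {
        query.orderBy(criteriaBuilder.asc(root.get("age"))); // customise the query itself
        return criteriaBuilder.equal(root.get("name"), name); // the limiting predicate
    };
}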
For implementing counting queries the following methods can be used:
long count(PredicateSpecification<Person> spec);
long count(QuerySpecification<Person> spec);
long count(PredicateSpecification<Person> spec)
long count(QuerySpecification<Person> spec)
fun count(spec: PredicateSpecification<Person>?): Long
fun count(spec: QuerySpecification<Person>?): Long
You can define criteria specification methods that will help you to create a query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.name), name);
}
static PredicateSpecification<Person> longNameEquals(String longName) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.longName), longName);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get(Person_.age), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.equal(root.get<Any>("name"), name)
}
fun ageIsLessThan(age: Int) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.lessThan(root.get("age"), age)
}
}
Then you can combine them for find
or count
queries:
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null);
Person josh = personRepository.findOne(longNameEquals("Josh PM")).orElse(null);
long countAgeLess30 = personRepository.count(ageIsLessThan(30));
long countAgeLess20 = personRepository.count(ageIsLessThan(20));
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))));
List<Person> people = personRepository.findAll(where(nameEquals("Denis").or(nameEquals("Josh"))));
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null)
long countAgeLess30 = personRepository.count(ageIsLessThan(30))
long countAgeLess20 = personRepository.count(ageIsLessThan(20))
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30) & not(nameEquals("Denis")))
List<Person> people = personRepository.findAll(where(nameEquals("Denis") | nameEquals("Josh")))
val denis: Person? = personRepository.findOne(nameEquals("Denis")).orElse(null)
val countAgeLess30: Long = personRepository.count(ageIsLessThan(30))
val countAgeLess20: Long = personRepository.count(ageIsLessThan(20))
val countAgeLess30NotDenis: Long = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))))
val people = personRepository.findAll(PredicateSpecification.where(nameEquals("Denis").or(nameEquals("Josh"))))
The examples use values known at compile time, and in this case it would be better to create custom repository methods, which come with compile-time generated queries and eliminate runtime overhead. It’s recommended to use criteria only for dynamic queries where the query structure is not known at build time. |
6.4.2 Updating
To implement an update you can use the following method from the JpaSpecificationExecutor interface:
long updateAll(UpdateSpecification<Person> spec);
long updateAll(UpdateSpecification<Person> spec)
fun updateAll(spec: UpdateSpecification<Person>?): Long
This method expects an UpdateSpecification, which is a variation of the specification interface that includes access to jakarta.persistence.criteria.CriteriaUpdate:
import static jakarta.persistence.criteria.*;
public interface UpdateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaUpdate<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria update instance |
4 | The criteria builder |
Updating specific properties can be done using the jakarta.persistence.criteria.CriteriaUpdate interface:
query.set(root.get(Person_.name), newName);
query.set(root.get("name"), newName)
query.set(root.get("name"), newName)
You can define criteria specification methods, including update specifications, that will help you create an update query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.name), name);
}
static PredicateSpecification<Person> longNameEquals(String longName) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.longName), longName);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get(Person_.age), age);
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get(Person_.name), newName);
return null;
};
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get("name"), newName)
null
}
}
}
object Specifications {
fun nameEquals(name: String?) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.equal(root.get<Any>("name"), name)
}
fun ageIsLessThan(age: Int) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.lessThan(root.get("age"), age)
}
fun setNewName(newName: String) = UpdateSpecification<Person> { root, query, criteriaBuilder ->
query.set(root.get("name"), newName)
null
}
fun nameInList(names: List<String>) = where<Person> {
root[Person::name] inList names
}
}
Then you can use the update specification combined with predicate specifications:
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")));
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")))
val recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")))
6.4.3 Deleting
To delete an entity or multiple entities you can use one of the following methods from the JpaSpecificationExecutor interface:
long deleteAll(PredicateSpecification<Person> spec);
long deleteAll(DeleteSpecification<Person> spec);
long deleteAll(PredicateSpecification<Person> spec)
long deleteAll(DeleteSpecification<Person> spec)
fun deleteAll(spec: PredicateSpecification<Person>?): Long
fun deleteAll(spec: DeleteSpecification<Person>?): Long
As with querying, the deleteAll methods come in two variations.
The first method expects a PredicateSpecification, which is the same interface described in the Querying section.
The second method takes a DeleteSpecification and is intended only for delete criteria because it includes access to jakarta.persistence.criteria.CriteriaDelete.
import jakarta.persistence.criteria.*;
public interface DeleteSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaDelete<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria delete instance |
4 | The criteria builder |
For deleting you can reuse the same predicates as for querying and updating:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.name), name);
}
static PredicateSpecification<Person> longNameEquals(String longName) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.longName), longName);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get(Person_.age), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.equal(root.get<Any>("name"), name)
}
fun ageIsLessThan(age: Int) = PredicateSpecification<Person> { root, criteriaBuilder ->
criteriaBuilder.lessThan(root.get("age"), age)
}
}
Simply pass the predicate specification to the deleteAll method:
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")));
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")))
val recordsDeleted = personRepository.deleteAll(PredicateSpecification.where(nameEquals("Denis")))
6.4.4 Other repository variations
Micronaut Data includes different variations of the specification executor interface intended to be used with async or reactive repositories.
Interface | Description |
JpaSpecificationExecutor | The default interface for querying, deleting and updating data |
AsyncJpaSpecificationExecutor | The async version of the specifications repository |
ReactiveStreamsJpaSpecificationExecutor | The Reactive Streams version of the specifications repository |
ReactorJpaSpecificationExecutor | The Reactor version of the specifications repository |
CoroutineJpaSpecificationExecutor | The Kotlin version of the interface that is using coroutines |
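For example, a reactive repository might combine one of the reactive CRUD interfaces with the Reactor variation of the executor. The following is a minimal sketch only; the repository and entity names are illustrative, and which executor interface you extend depends on the variation (async, Reactive Streams, Reactor or coroutine) you pick:
@JdbcRepository(dialect = Dialect.H2)
public interface ReactivePersonRepository
        extends ReactorCrudRepository<Person, Long>,          // reactive CRUD operations
                ReactorJpaSpecificationExecutor<Person> {     // findOne/findAll/count/etc. returning Mono or Flux
}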
6.4.5 Type-Safe Java queries
The Jakarta Persistence Criteria API supports type-safe queries using a static metamodel generated at compilation time.
The metamodel generator produces a corresponding metamodel class with an underscore suffix: an entity MyEntity will have a corresponding metamodel class named MyEntity_ generated in the same package as the original entity. Every field in the generated class corresponds to a property of the entity and can be used as a property reference.
Example from the official API specification:
CriteriaBuilder cb = ...
CriteriaQuery<String> q = cb.createQuery(String.class);
Root<Customer> customer = q.from(Customer.class);
Join<Customer, Order> order = customer.join(Customer_.orders);
Join<Order, Item> item = order.join(Order_.lineItems);
q.select(customer.get(Customer_.name))
.where(cb.equal(item.get(Item_.product).get(Product_.productType), "printer"));
Note that as of this writing you cannot use Micronaut Data annotations (those found in the io.micronaut.data.annotation package) to generate the static JPA metamodel; the only supported way is to use Jakarta Persistence annotations (located in the jakarta.persistence package) in combination with the Hibernate JPA Static Metamodel Generator, which will generate the metamodel even if at runtime you do not actually use Hibernate but instead Micronaut Data JDBC.
To configure the metamodel generator simply add the following dependency to the annotation processor classpath:
annotationProcessor("org.hibernate.orm:hibernate-jpamodelgen")
<annotationProcessorPaths>
<path>
<groupId>org.hibernate.orm</groupId>
<artifactId>hibernate-jpamodelgen</artifactId>
</path>
</annotationProcessorPaths>
The Hibernate 6 version of hibernate-jpamodelgen is required because prior versions of Hibernate still use the javax.persistence package.
For Kotlin, add the dependency in kapt or ksp scope, and for Groovy add it in compileOnly scope.
|
And we need to include the generated classes on the Java classpath to have them accessible:
Example for Gradle builds:
sourceSets {
generated {
java {
srcDirs = ["$build/generated/java"]
}
}
}
If everything is correctly set up you should be able to see the generated metamodel classes in the IDE code completion and be able to use them:
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.name), name);
}
static PredicateSpecification<Person> longNameEquals(String longName) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get(Person_.longName), longName);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get(Person_.age), age);
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get(Person_.name), newName);
return null;
};
}
static PredicateSpecification<Product> manufacturerNameEquals(String name) {
return (root, cb) -> cb.equal(root.join(Product_.manufacturer).get(Manufacturer_.name), name);
}
static PredicateSpecification<Product> joined() {
return (root, cb) -> {
root.join("manufacturer");
return null;
};
}
More information about the static metamodel can be found in the official specification |
6.5 Mapping Entities
As mentioned in the Quick Start section, if you need to customize how entities map to the table and column names of the database you can use JPA annotations to do so or Micronaut Data’s own annotations in the io.micronaut.data.annotation package.
An important aspect of Micronaut Data JDBC / R2DBC is that, regardless of whether you use JPA annotations or Micronaut Data annotations, the entity classes must be compiled with Micronaut Data.
This is because Micronaut Data pre-computes the persistence model (the relationships between entities, the class/property name to table/column name mappings) at compilation time, which is one of the reasons Micronaut Data JDBC can start up so fast.
An example of mapping with Micronaut Data annotations can be seen below:
package io.micronaut.data.tck.entities;
import io.micronaut.data.annotation.AutoPopulated;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.annotation.Relation;
import java.util.Set;
import java.util.UUID;
@MappedEntity
public class Country {
@Id
@AutoPopulated
private UUID uuid;
private String name;
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "country")
private Set<CountryRegion> regions;
public Country(String name) {
this.name = name;
}
public String getName() {
return name;
}
public UUID getUuid() {
return uuid;
}
public void setUuid(UUID uuid) {
this.uuid = uuid;
}
public Set<CountryRegion> getRegions() {
return regions;
}
public void setRegions(Set<CountryRegion> regions) {
this.regions = regions;
}
}
6.5.1 SQL Annotations
The following table summarizes the different annotations and what they enable. If you are familiar with and prefer the JPA annotations then feel free to skip to the next section:
Annotation | Description |
@AutoPopulated | Meta annotation for a value that should be auto-populated by Micronaut Data (such as time stamps and UUIDs) |
@DateCreated | Allows assigning a date created value (such as a java.time.LocalDateTime) |
@DateUpdated | Allows assigning a last updated value (such as a java.time.LocalDateTime) |
@Embeddable | Specifies that the bean is embeddable |
@EmbeddedId | Specifies an embedded ID of an entity |
@GeneratedValue | Specifies that the property value is generated by the database and not included in inserts |
@JoinTable | Specifies a join table association |
@JoinColumn | Specifies a join column mapping |
@Id | Specifies the ID of an entity |
@MappedEntity | Specifies the entity is mapped to the database. If your table name differs from the entity name, pass the name as the annotation value |
@MappedProperty | Used to customize the column name, definition and data type |
@Relation | Used to specify a relationship (one-to-one, one-to-many, etc.) |
@Transient | Used to specify a property is transient |
@TypeDef | Used to specify the property’s data type and custom converter |
@Version | Specifies the version field of an entity, enables optimistic locking |
When using JPA, only a subset of annotations is supported, including the following:
-
Basic:
@Table
@Id
@Version
@Column
@Transient
@Enumerated
-
Embedded definition:
@Embedded
@EmbeddedId
@Embeddable
-
Relationship mapping:
@OneToMany
@OneToOne
@ManyToOne
@ManyToMany
-
Join specification:
@JoinTable
@JoinColumn
-
Type converters:
@Convert
@Converter
and the AttributeConverter interface
Micronaut Data supports both javax.persistence and jakarta.persistence packages.
|
Again, Micronaut Data JDBC / R2DBC is not an ORM but a simple data mapper, so many JPA concepts simply don’t apply; however, for users familiar with these annotations it is handy to be able to use them.
6.5.2 Expandable queries
In some cases the query needs to be expanded to accommodate all the parameter’s values. A query with a parameter that is a collection or an array, such as WHERE value IN (?), would be expanded to WHERE value IN (?, ?, ?, ?). If one of the parameters is expandable, Micronaut Data stores additional information about the query at build time, which eliminates the need to parse the query at runtime.
By default, all parameters of a type that extends java.lang.Iterable are automatically expandable. You can also mark a parameter as expandable by annotating it with @Expandable; for example, you might want to do this if the parameter is an array. A sketch follows the note below.
It’s better to use the array type if your targeted database supports it. For example, in Postgres you can use WHERE value = ANY (:myValues) where myValues is of type @TypeDef(type = DataType.STRING_ARRAY).
|
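The following is a minimal sketch of both cases, assuming a JDBC repository; the repository name, entity and query are illustrative:
import io.micronaut.data.annotation.Expandable;
import io.micronaut.data.annotation.Query;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@JdbcRepository(dialect = Dialect.H2)
interface BookSearchRepository extends CrudRepository<Book, Long> {
    // An Iterable parameter is expandable by default: IN (?) is expanded to one placeholder per value
    List<Book> findByTitleIn(List<String> titles);
    // An array parameter can be marked explicitly with @Expandable
    @Query("SELECT * FROM book WHERE title IN (:titles)")
    List<Book> findByTitles(@Expandable String[] titles);
}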
6.5.3 ID Generation
The default ID generation expects the database to populate a value for the ID, such as an IDENTITY column.
You can remove the @GeneratedValue annotation, in which case the expectation is that you will assign an ID before calling save() (see the sketch below).
If you wish to use sequences for the ID you should invoke the SQL that generates the sequence value and assign it prior to calling save().
Automatically assigned UUIDs are also supported by adding a property annotated with @Id and @AutoPopulated.
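For example, the following is a minimal sketch of an entity with an assigned (non-generated) ID; the entity and property names are illustrative:
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
@MappedEntity
public class CountryCode {
    @Id
    private String iso; // no @GeneratedValue: assigned by the application before save()
    private String name;
    public CountryCode(String iso, String name) {
        this.iso = iso;
        this.name = name;
    }
    public String getIso() {
        return iso;
    }
    public String getName() {
        return name;
    }
}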
6.5.4 Composite Primary Keys
Composite primary keys can be defined using either JPA or Micronaut Data annotations. A composite ID requires an additional class to represent the key. The class should define fields that correspond to the columns making up the composite key. For example:
package example;
import jakarta.persistence.Embeddable;
import java.util.Objects;
@Embeddable
public class ProjectId {
private final int departmentId;
private final int projectId;
public ProjectId(int departmentId, int projectId) {
this.departmentId = departmentId;
this.projectId = projectId;
}
public int getDepartmentId() {
return departmentId;
}
public int getProjectId() {
return projectId;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ProjectId projectId1 = (ProjectId) o;
return departmentId == projectId1.departmentId &&
projectId == projectId1.projectId;
}
@Override
public int hashCode() {
return Objects.hash(departmentId, projectId);
}
}
package example
import groovy.transform.EqualsAndHashCode
import jakarta.persistence.Embeddable
@EqualsAndHashCode
@Embeddable
class ProjectId {
final int departmentId
final int projectId
ProjectId(int departmentId, int projectId) {
this.departmentId = departmentId
this.projectId = projectId
}
}
package example
import jakarta.persistence.Embeddable
@Embeddable
data class ProjectId(val departmentId: Int, val projectId: Int)
It is recommended that the ID class be immutable and implement equals/hashCode.
TIP: When using Java, be sure to define getters for the fields making up your composite key.
|
You can then declare the id property of the entity using either JPA’s @EmbeddedId or Micronaut Data’s @EmbeddedId annotation:
package example;
import jakarta.persistence.EmbeddedId;
import jakarta.persistence.Entity;
@Entity
public class Project {
@EmbeddedId
private ProjectId projectId;
private String name;
public Project(ProjectId projectId, String name) {
this.projectId = projectId;
this.name = name;
}
public ProjectId getProjectId() {
return projectId;
}
public String getName() {
return name;
}
}
package example
import jakarta.persistence.EmbeddedId
import jakarta.persistence.Entity
@Entity
class Project {
@EmbeddedId
private ProjectId projectId
private String name
Project(ProjectId projectId, String name) {
this.projectId = projectId
this.name = name
}
ProjectId getProjectId() {
return projectId
}
String getName() {
return name
}
}
package example
import jakarta.persistence.EmbeddedId
import jakarta.persistence.Entity
@Entity
class Project(
@EmbeddedId val projectId: ProjectId,
val name: String
)
To alter the column mappings for the ID, you may use the @Column annotation on the fields within the ProjectId class
|
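A repository for the composite-key entity uses the ID class as the ID type parameter. The following is a minimal sketch; the repository interface and the values are illustrative:
@JdbcRepository(dialect = Dialect.H2)
interface ProjectRepository extends CrudRepository<Project, ProjectId> {
}
// Looking up an entity by its composite key:
Optional<Project> project = projectRepository.findById(new ProjectId(10, 1));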
6.5.5 Constructor Arguments
Micronaut Data JDBC / R2DBC also allows the definition of immutable objects using constructor arguments instead of getters/setters. If you define multiple constructors then the one used to create the object from the database should be annotated with io.micronaut.core.annotation.Creator.
For example:
package example;
import io.micronaut.core.annotation.Creator;
import jakarta.persistence.*;
@Entity
public class Manufacturer {
@Id
@GeneratedValue
private Long id;
private String name;
@Creator
public Manufacturer(String name) {
this.name = name;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
}
package example
import io.micronaut.core.annotation.Creator
import jakarta.persistence.*
@Entity
class Manufacturer {
@Id
@GeneratedValue
Long id
final String name
@Creator
Manufacturer(String name) {
this.name = name
}
}
package example
import jakarta.persistence.*
@Entity
data class Manufacturer(
@Id
@GeneratedValue
var id: Long?,
val name: String
)
As you can see from the example above, the ID of the object should however include a setter since it has to be assigned from the database-generated value.
6.5.6 SQL Naming Strategies
The default naming strategy when converting camel case class and property names to database tables and columns is to use underscore-separated lower case. In other words FooBar becomes foo_bar.
If this is not satisfactory then you can customize this by setting the namingStrategy member of the @MappedEntity annotation on the entity:
@MappedEntity(namingStrategy = NamingStrategies.Raw.class)
public class CountryRegion {
...
}
A few important things to note: since Micronaut Data pre-computes the table and column name mappings at compilation time, the specified NamingStrategy implementation must be on the annotation processor classpath (annotationProcessor scope for Java or kapt for Kotlin).
If running the project as a native image, a custom naming strategy needs the io.micronaut.core.annotation.TypeHint(CustomNamingStrategy.class) annotation, where CustomNamingStrategy is the custom naming strategy class.
In addition, if you don’t want to repeat the above annotation definition on every entity, it is handy to define a meta-annotation: apply the above annotation definition to another annotation that you then add to your entity classes, as sketched below.
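A minimal sketch of such a meta-annotation follows; the annotation name is illustrative:
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.model.naming.NamingStrategies;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
@MappedEntity(namingStrategy = NamingStrategies.Raw.class) // the stereotype carries the naming strategy
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface RawMappedEntity {
}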
Escaping Table/Column Name Identifiers
In some cases it may be necessary to escape table and/or column names if characters are used within the names that are invalid without the presence of escaping.
In this case you should set the escape member of the @MappedEntity annotation to true:
@MappedEntity(escape=true)
Micronaut Data will generate SQL statements that escape table and column names within queries using the escape character that is appropriate for the configured SQL dialect.
Overriding default query alias
The default query alias is the table name followed by an underscore. If you want to change it, specify it in the @MappedEntity annotation:
@MappedEntity(alias="my_table_")
6.5.7 Association Mapping
To specify a relation between two entities you need to use the @Relation annotation. The relation kind is specified using the Relation.Kind enum in the value attribute, which is similar to the JPA relation annotation names (@OneToMany, @OneToOne, etc.).
Kind | Description |
Kind.ONE_TO_MANY | One-To-Many association |
Kind.ONE_TO_ONE | One-To-One association |
Kind.MANY_TO_MANY | Many-To-Many association |
Kind.MANY_TO_ONE | Many-To-One association |
Kind.EMBEDDED | Embedded association |
Use 'mappedBy' to specify the inverse property that this relation is mapped by.
The cascade attribute specifies which operations should cascade to the associated entity or entities:
Type | Description |
Cascade.PERSIST | Associated entity or entities are going to be persisted when owning entity is saved |
Cascade.UPDATE | Associated entity or entities are going to be updated when owning entity is updated |
Cascade.NONE | (Default) No operation is cascaded |
Cascade.ALL | All (PERSIST and UPDATE) operations are cascaded |
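A minimal sketch of a one-to-many relation with a cascade setting follows; the Author and Book entities and their property names are illustrative:
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.annotation.Relation;
import java.util.List;
@MappedEntity
public class Author {
    @Id
    @GeneratedValue
    private Long id;
    private String name;
    // Books are persisted together with the author; "author" is the inverse property on Book
    @Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "author", cascade = Relation.Cascade.PERSIST)
    private List<Book> books;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<Book> getBooks() { return books; }
    public void setBooks(List<Book> books) { this.books = books; }
}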
You can use JPA’s equivalent annotations @JoinTable and @JoinColumn to specify more complex mapping definition. |
6.5.8 Association Fetching
Micronaut Data is a simple data mapper, hence it will not fetch any associations for you using techniques like lazy loading of entity proxies for single-ended associations.
You must instead specify ahead of time what data you want to fetch. You cannot map an association as being eager or lazy. The reason for this design choice is simple, even in the JPA world accessing lazy associations or lazy initialization collections is considered bad practice due to the N+1 query issue and the recommendation is always to write an optimized join query.
Micronaut Data JDBC / R2DBC takes this a step further by simply not supporting those features considered bad practice anyway. However, it does impact how you may model an association. For example, if you define an association in a constructor argument such as the following entity:
package example;
import jakarta.persistence.*;
@Entity
public class Product {
@Id
@GeneratedValue
private Long id;
private String name;
@ManyToOne
private Manufacturer manufacturer;
public Product(String name, Manufacturer manufacturer) {
this.name = name;
this.manufacturer = manufacturer;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public Manufacturer getManufacturer() {
return manufacturer;
}
}
package example
import jakarta.persistence.*
@Entity
class Product {
@Id
@GeneratedValue
Long id
private String name
@ManyToOne
private Manufacturer manufacturer
Product(String name, Manufacturer manufacturer) {
this.name = name
this.manufacturer = manufacturer
}
String getName() {
return name
}
Manufacturer getManufacturer() {
return manufacturer
}
}
package example
import jakarta.persistence.*
@Entity
data class Product(
@Id
@GeneratedValue
var id: Long?,
var name: String,
@ManyToOne
var manufacturer: Manufacturer?
)
If you then attempt to read the Product entity back without specifying a join, an exception will occur since the manufacturer association is not Nullable.
There are a few ways around this: one is to declare at the repository level that manufacturer should always be fetched; another is to declare the @Nullable annotation on the manufacturer constructor argument to allow it to be null (or in Kotlin add ? to the constructor argument’s type). Which approach you choose depends on the design of the application.
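A minimal sketch of the @Nullable approach (only the relevant constructor is shown; the rest of the Product class is unchanged):
import io.micronaut.core.annotation.Nullable;
public Product(String name, @Nullable Manufacturer manufacturer) {
    this.name = name;
    this.manufacturer = manufacturer; // may be null when no join is specified
}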
The following section provides more coverage on handling joins.
6.5.9 Using @ColumnTransformer
Inspired by the similar annotation in Hibernate, you can apply a transformation when either reading or writing a column from or to the database using the @ColumnTransformer annotation.
This feature can be used to encrypt/decrypt values or invoke any arbitrary database function. To define a read transformation use the read member. For example:
@ColumnTransformer(read = "UPPER(@.name)")
private String name;
@ is a query alias placeholder and will be replaced with the alias if the query specifies one. Example: "UPPER(@.name)" is going to become "UPPER(project_.name)".
|
To apply a write transformation you should use the write member and include exactly one ? placeholder:
@ColumnTransformer(write = "UPPER(?)")
private String name;
With this in place, any INSERT or UPDATE statement generated will include the above write transformation.
6.5.10 Using @MappedProperty alias
If there is a need to return a column in the result set under a custom name, the @MappedProperty annotation has an alias property.
This can be useful, for example, for legacy column names that might be too long for the query result (combined with table aliases they can exceed the maximum column name length).
package example;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedProperty;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
@Entity
public class Person {
@Id
@GeneratedValue
private Long id;
private String name;
private int age;
@MappedProperty(value = "long_name_column_legacy_system", alias = "long_name")
private String longName;
public Person() {
}
public Person(String name, int age, String longName) {
this(null, name, age, longName);
}
public Person(Long id, String name, int age, String longName) {
this.id = id;
this.name = name;
this.age = age;
this.longName = longName;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public String getLongName() {
return longName;
}
public void setLongName(String longName) {
this.longName = longName;
}
}
package example
import io.micronaut.data.annotation.Id
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
@Entity
class Person {
@Id
@GeneratedValue
private Long id
private String name
private int age
Person() {
}
Person(String name, int age) {
this(null, name, age)
}
Person(Long id, String name, int age) {
this.id = id
this.name = name
this.age = age
}
Long getId() {
return id
}
void setId(Long id) {
this.id = id
}
String getName() {
return name
}
void setName(String name) {
this.name = name
}
int getAge() {
return age
}
void setAge(int age) {
this.age = age
}
}
package example
import io.micronaut.data.annotation.Id
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
@Entity
class Person {
@Id
@GeneratedValue
var id: Long? = null
var name: String? = null
var age = 0
constructor(name: String?, age: Int) : this(null, name, age) {}
constructor(id: Long?, name: String?, age: Int) {
this.id = id
this.name = name
this.age = age
}
}
In this example, the original column long_name_column_legacy_system will be returned in the database result as long_name.
When the alias property is set, be careful when writing custom or native queries to return the field under the name indicated by the alias value.
Setting alias in @MappedProperty on associations has no effect, as it only makes sense for field/column mappings.
6.5.11 JSON Column Support
You can declare a field of a class as a JSON type using the @TypeDef annotation as follows:
@TypeDef(type = DataType.JSON)
private Map<String, String> data;
The above will map to a column called data. Depending on the underlying database the column type will be adjusted. For example, for Postgres, which features native JSON support, the column type will be JSONB.
To allow JSON to be serialized and deserialized in entity properties you must have Jackson and the micronaut-runtime module on your classpath.
|
6.5.12 JSON View
Since Micronaut Data 4.0 and the Oracle 23c database, an entity can be mapped to a JSON VIEW as follows:
@JsonView("CONTACT_VIEW")
public class ContactView
where "CONTACT_VIEW" is the actual name of the JSON duality view object in the database. This is currently supported only by the Oracle database, since version 23c. More about Oracle JSON duality views can be read here: https://docs.oracle.com/en/database/oracle/oracle-database/23/jsnvu/overview-json-relational-duality-views.html.
Essentially, a JSON view is treated like a mapped entity: the database returns a JSON structure that is mapped to the Java entity. All CRUD operations can be performed against JSON view mapped entities.
Limitations
-
During schema creation, JSON view mapped entities are skipped; users are expected to create them manually or via migration scripts.
6.5.13 Support for Java 16 Records
Since 2.3.0, Micronaut Data JDBC / R2DBC has support for using Java 16 records to model entities.
The following record class demonstrates this capability:
package example;
import io.micronaut.core.annotation.Nullable;
import io.micronaut.data.annotation.*;
import java.util.Date;
@MappedEntity // (1)
record Book(
@Id @GeneratedValue @Nullable Long id, // (2)
@DateCreated @Nullable Date dateCreated,
String title,
int pages) {
}
1 | The @MappedEntity annotation is used on the record |
2 | The database identifier is annotated with @Id and @GeneratedValue plus marked as @Nullable |
Since records are immutable, constructor arguments that are generated values need to be marked as @Nullable and you should pass null for those arguments. The following presents an example:
Book book = new Book(null,null, "The Stand", 1000);
book = bookRepository.save(book);
It is important to note that the returned instance is not the same as the instance passed to the save method. When a write operation is performed, Micronaut Data will use a copy-constructor approach to populate the database identifier and return a new instance from the save method.
6.5.14 Support for Kotlin immutable data classes
Micronaut Data JDBC / R2DBC supports using immutable Kotlin data classes as model entities. The implementation is the same as for Java 16 records: to modify an entity a copy-constructor will be used and every modification means a new entity instance.
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Relation
@MappedEntity
data class Student(
@field:Id @GeneratedValue
val id: Long?,
val name: String,
@Relation(value = Relation.Kind.MANY_TO_MANY, cascade = [Relation.Cascade.PERSIST])
val courses: List<Course>,
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "student")
val ratings: List<CourseRating>
) {
constructor(name: String, items: List<Course>) : this(null, name, items, emptyList())
}
Generated values and relations that cannot be created during the entity initialization should be declared as nullable. |
6.6 Data Types
Micronaut Data JDBC / R2DBC supports most common Java data types. The following property types are supported by default:
-
All primitive types and their wrappers (int, java.lang.Integer etc.)
-
CharSequence, String etc.
-
Date types like java.util.Date, java.time.LocalDate etc.
-
Enum types (by name only)
-
Entity references. In the case of @ManyToOne the foreign key column name is computed to be the name of the association plus a suffix of _id. You can alter this with either @Column(name="..") or by providing a NamingStrategy.mappedName(..) implementation.
-
Collections of entities. In the case of @OneToMany, if mappedBy is specified then it is expected that the inverse property exists defining the column, otherwise a join table mapping is created.
If you wish to define a custom data type then you can do so by defining a class that is annotated with @TypeDef.
6.7 Using Attribute Converter
There are cases where you would like to represent the attribute differently in the database than in the entity.
Consider the following example entity:
package example;
import jakarta.persistence.*;
@Entity
public class Sale {
@ManyToOne
private final Product product;
private final Quantity quantity;
@Id
@GeneratedValue
private Long id;
public Sale(Product product, Quantity quantity) {
this.product = product;
this.quantity = quantity;
}
public Product getProduct() {
return product;
}
public Quantity getQuantity() {
return quantity;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
}
package example
import jakarta.persistence.Id
import jakarta.persistence.Entity
import jakarta.persistence.ManyToOne
import jakarta.persistence.GeneratedValue
@Entity
class Sale {
@ManyToOne
final Product product
final Quantity quantity
@Id
@GeneratedValue
Long id
Sale(Product product, Quantity quantity) {
this.product = product
this.quantity = quantity
}
}
package example
import jakarta.persistence.*
@Entity
data class Sale(
@Id
@GeneratedValue
var id: Long?,
@ManyToOne
val product: Product,
val quantity: Quantity
)
The Sale
class has a reference to a type Quantity
. The Quantity
type is defined as:
package example;
import io.micronaut.data.annotation.TypeDef;
import io.micronaut.data.model.DataType;
@TypeDef(type = DataType.INTEGER, converter = QuantityAttributeConverter.class)
public class Quantity {
private final int amount;
private Quantity(int amount) {
this.amount = amount;
}
public int getAmount() {
return amount;
}
public static Quantity valueOf(int amount) {
return new Quantity(amount);
}
}
package example
import groovy.transform.Immutable
import io.micronaut.data.annotation.TypeDef
import io.micronaut.data.model.DataType
@TypeDef(type = DataType.INTEGER, converter = QuantityAttributeConverter.class)
@Immutable
class Quantity {
int amount
}
package example
import io.micronaut.data.annotation.TypeDef
import io.micronaut.data.model.DataType
@TypeDef(type = DataType.INTEGER, converter = QuantityAttributeConverter::class)
data class Quantity(val amount: Int)
As you can see @TypeDef
is used to define the Quantity
type as an INTEGER
using the DataType enum.
If you cannot declare @TypeDef directly on the type then you can declare it on the field where the type is used.
|
The last step is to add custom attribute conversion so that Micronaut Data knows how to read and write the type from an Integer
:
package example;
import io.micronaut.core.convert.ConversionContext;
import io.micronaut.data.model.runtime.convert.AttributeConverter;
import jakarta.inject.Singleton;
@Singleton // (1)
public class QuantityAttributeConverter implements AttributeConverter<Quantity, Integer> {
@Override // (2)
public Integer convertToPersistedValue(Quantity quantity, ConversionContext context) {
return quantity == null ? null : quantity.getAmount();
}
@Override // (3)
public Quantity convertToEntityValue(Integer value, ConversionContext context) {
return value == null ? null : Quantity.valueOf(value);
}
}
package example
import groovy.transform.CompileStatic
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
@CompileStatic
class QuantityAttributeConverter implements AttributeConverter<Quantity, Integer> {
@Override // (2)
Integer convertToPersistedValue(Quantity quantity, ConversionContext context) {
return quantity == null ? null : quantity.getAmount()
}
@Override // (3)
Quantity convertToEntityValue(Integer value, ConversionContext context) {
return value == null ? null : new Quantity(value)
}
}
package example
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
class QuantityAttributeConverter : AttributeConverter<Quantity?, Int?> {
// (2)
override fun convertToPersistedValue(quantity: Quantity?, context: ConversionContext): Int? {
return quantity?.amount
}
// (3)
override fun convertToEntityValue(value: Int?, context: ConversionContext): Quantity? {
return if (value == null) null else Quantity(value)
}
}
1 | The attribute converter implements @AttributeConverter and must be a bean |
2 | A converter from Quantity to Integer |
3 | A converter from Integer to Quantity |
It’s possible to define the converter using @MappedProperty: @MappedProperty(converter = QuantityAttributeConverter.class); in this case the data type will be detected automatically.
|
6.8 Join Queries
As discussed in the previous section, Micronaut Data JDBC doesn’t support associations in the traditional ORM sense. There is no lazy loading or support for proxies.
Consider a Product entity from the previous section that has an association to a Manufacturer entity:
package example;
import io.micronaut.core.annotation.Creator;
import jakarta.persistence.*;
@Entity
public class Manufacturer {
@Id
@GeneratedValue
private Long id;
private String name;
@Creator
public Manufacturer(String name) {
this.name = name;
}
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getName() {
return name;
}
}
package example
import io.micronaut.core.annotation.Creator
import jakarta.persistence.*
@Entity
class Manufacturer {
@Id
@GeneratedValue
Long id
final String name
@Creator
Manufacturer(String name) {
this.name = name
}
}
package example
import jakarta.persistence.*
@Entity
data class Manufacturer(
@Id
@GeneratedValue
var id: Long?,
val name: String
)
Say you query for Product instances: by default Micronaut Data JDBC will only query for and fetch the simple properties. For single-ended associations like the above, Micronaut Data will only retrieve the ID and assign it if possible (for entities that require constructor arguments this is not even possible).
If you need to fetch the association too, you can use the @Join annotation on your repository interface to specify that an INNER JOIN (or whichever join type is more appropriate) should be executed to retrieve the associated Manufacturer.
@JdbcRepository(dialect = Dialect.H2)
public interface ProductRepository extends CrudRepository<Product, Long> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
List<Product> list();
}
@JdbcRepository(dialect = Dialect.H2)
public interface ProductRepository extends CrudRepository<Product, Long> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
List<Product> list();
}
@JdbcRepository(dialect = Dialect.H2)
interface ProductRepository : CrudRepository<Product, Long> {
@Join(value = "manufacturer", type = Join.Type.FETCH) // (1)
fun list(): List<Product>
}
1 | The @Join is used to indicate an INNER JOIN clause should be included. |
Note that the @Join annotation is repeatable and hence can be specified multiple times for different associations. In addition, the type member of the annotation can be used to specify the join type, for example LEFT, INNER or RIGHT.
Finally, by default Micronaut Data will generate aliases to use for selecting columns in joins and querying. However, if at any point you experience a conflict you can specify an alias for a particular join using the alias member of the @Join annotation. You can override the default entity alias using the alias member of the @MappedEntity annotation.
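For example, the following is a minimal sketch combining a custom join type with an explicit alias; the method and alias names are illustrative:
@JdbcRepository(dialect = Dialect.H2)
public interface ProductRepository extends CrudRepository<Product, Long> {
    // LEFT_FETCH issues a LEFT JOIN and materializes the association from the "mf_" alias
    @Join(value = "manufacturer", type = Join.Type.LEFT_FETCH, alias = "mf_")
    Optional<Product> findByName(String name);
}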
Some databases like Oracle limit the length of alias names in SQL queries so another reason you may want to set custom aliases is to avoid exceeding the alias name length restriction in Oracle. |
If you need to do anything more complex than the join options Micronaut Data has to offer then you may need a native query.
6.9 Explicit Queries
When using Micronaut Data with JDBC you can execute native SQL queries using the @Query annotation:
@Query("select * from book b where b.title like :title limit 5")
List<Book> findBooks(String title);
@Query("select * from book b where b.title like :title limit 5")
List<Book> findBooks(String title);
@Query("select * from book b where b.title like :title limit 5")
fun findBooks(title: String): List<Book>
The above example will execute the raw SQL against the database.
For pagination queries that return a Page you also need to specify a native countQuery, as sketched after this note.
|
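A minimal sketch of such a paginated query, assuming the same book table; the query strings are illustrative:
@Query(value = "SELECT * FROM book b WHERE b.title LIKE :title",
       countQuery = "SELECT count(*) FROM book b WHERE b.title LIKE :title")
Page<Book> findBooks(String title, Pageable pageable);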
Explicit Queries and Joins
When writing an explicit SQL query, if you specify any joins within the query you may want the resulting data bound to the returned entity. Micronaut Data will not do this automatically; instead you need to specify the associated @Join annotation.
For example:
@Query("""
SELECT *, m_.name as m_name, m_.id as m_id
FROM product p
INNER JOIN manufacturer m_ ON p.manufacturer_id = m_.id
WHERE p.name like :name limit 5""")
@Join(value = "manufacturer", alias = "m_")
List<Product> searchProducts(String name);
@Query("""SELECT *, m_.name as m_name, m_.id as m_id
FROM product p
INNER JOIN manufacturer m_ ON p.manufacturer_id = m_.id
WHERE p.name like :name limit 5""")
@Join(value = "manufacturer", alias = "m_")
List<Product> searchProducts(String name);
@Query("""SELECT *, m_.name as m_name, m_.id as m_id
FROM product p
INNER JOIN manufacturer m_ ON p.manufacturer_id = m_.id
WHERE p.name like :name limit 5""")
@Join(value = "manufacturer", alias = "m_")
fun searchProducts(name: String): List<Product>
In the above example the query uses an alias called m_ to query the manufacturer table via an INNER JOIN. Since the returned Product entity features a manufacturer association it may be nice to materialize this object as well. The alias member of the @Join annotation is used to specify which alias to materialize the Manufacturer instance from.
It is necessary to use the "logical name" of the field in the @Join (the name used in the @Entity class) and not the name used in the native query itself. In the previous example, if the name in the class were myManufacturer, then you would need to use @Join(value = "myManufacturer", alias = "m_") without modifying anything in the native SQL query.
|
6.10 Procedures
Micronaut Data supports executing simple SQL procedures. Simply annotate a repository method with @Procedure.
All the method parameters will be used as incoming parameters of the procedure, and a non-void result will be bound as an out parameter.
By default the method name will be used as the procedure name; to customize the name it’s possible to use the value attribute, as sketched after the examples below:
@Procedure
Long calculateSum(@NonNull Long bookId);
@Procedure
Long calculateSum(@NonNull Long bookId);
@Procedure
fun calculateSum(bookId: @NonNull Long): Long
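For example, a minimal sketch of mapping a method to a differently named procedure via the value attribute, assuming a stored procedure called calculate_sum exists in the database; the method name is illustrative:
@Procedure("calculate_sum")
Long sumForBook(@NonNull Long bookId);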
7 Micronaut Data MongoDB
Micronaut Data MongoDB supports most of the functionality available in the JPA and JDBC/R2DBC implementations.
The interaction between the object layer and MongoDB’s driver serialization/deserialization is implemented using Micronaut Serialization and BSON support.
7.1 Quick Start
The quickest way to get started is to create a new Micronaut application with Micronaut Launch and choose the data-mongodb or data-mongodb-async feature.
You can also find great guides on building Micronaut Data MongoDB applications, including sample code in a variety of languages, in the Micronaut Guides: Access a MongoDB Database with Micronaut Data MongoDB and Access a MongoDB Database Asynchronously with Micronaut Data MongoDB and Reactive Streams |
Clicking on one of the links in the table below will take you to Micronaut Launch with the appropriate options already pre-configured with your selected language and build tool:
(Launch links for Java, Kotlin and Groovy, each with either Gradle or Maven.)
# For Maven add: --build maven
$ mn create-app --lang java example --features data-mongodb
Or via curl:
# For Maven add to the URL: &build=maven
$ curl https://launch.micronaut.io/demo.zip?lang=java&features=data-mongodb -o demo.zip && unzip demo.zip -d demo && cd demo
Pre-generated applications should have everything properly set up. You can follow the manual configuration instructions for a proper understanding of the dependency setup.
To get started with Micronaut Data MongoDB add the following dependency to your annotation processor path:
annotationProcessor("io.micronaut.data:micronaut-data-document-processor")
<annotationProcessorPaths>
<path>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-document-processor</artifactId>
</path>
</annotationProcessorPaths>
For Kotlin, add the micronaut-data-document-processor dependency in kapt or ksp scope, and for Groovy add micronaut-data-document-processor in compileOnly scope.
|
You should then configure a compile-scoped dependency on the micronaut-data-mongodb
module:
implementation("io.micronaut.data:micronaut-data-mongodb")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-mongodb</artifactId>
</dependency>
And include MongoDB Sync driver:
runtimeOnly("org.mongodb:mongodb-driver-sync")
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-sync</artifactId>
<scope>runtime</scope>
</dependency>
Or reactive MongoDB driver:
runtimeOnly("org.mongodb:mongodb-driver-reactivestreams")
<dependency>
<groupId>org.mongodb</groupId>
<artifactId>mongodb-driver-reactivestreams</artifactId>
<scope>runtime</scope>
</dependency>
It’s not possible to use both drivers at the same time. If you have both drivers on the classpath you can use the property micronaut.data.mongodb.driver-type with the value sync or reactive to select the proper driver.
|
Next up you need to configure at least one data source. The following snippet from the application configuration file is an example of configuring the default MongoDB data source:
mongodb.uri=mongodb://username:password@localhost:27017/databaseName
mongodb:
uri: mongodb://username:password@localhost:27017/databaseName
[mongodb]
uri="mongodb://username:password@localhost:27017/databaseName"
mongodb {
uri = "mongodb://username:password@localhost:27017/databaseName"
}
{
mongodb {
uri = "mongodb://username:password@localhost:27017/databaseName"
}
}
{
"mongodb": {
"uri": "mongodb://username:password@localhost:27017/databaseName"
}
}
If entity classes are located outside the default package then packages containing entity classes need to be configured in MongoDB Configuration:
mongodb.uri=mongodb://username:password@localhost:27017/databaseName
mongodb.package-names[0]=com.example.domain
mongodb.package-names[1]=com.example.other
mongodb:
uri: mongodb://username:password@localhost:27017/databaseName
package-names:
- com.example.domain
- com.example.other
[mongodb]
uri="mongodb://username:password@localhost:27017/databaseName"
package-names=[
"com.example.domain",
"com.example.other"
]
mongodb {
uri = "mongodb://username:password@localhost:27017/databaseName"
packageNames = ["com.example.domain", "com.example.other"]
}
{
mongodb {
uri = "mongodb://username:password@localhost:27017/databaseName"
package-names = ["com.example.domain", "com.example.other"]
}
}
{
"mongodb": {
"uri": "mongodb://username:password@localhost:27017/databaseName",
"package-names": ["com.example.domain", "com.example.other"]
}
}
To retrieve objects from the database you need to define a class annotated with @MappedEntity:
@MappedEntity
public class Book {
@Id
@GeneratedValue
private ObjectId id;
private String title;
private int pages;
public Book(String title, int pages) {
this.title = title;
this.pages = pages;
}
// ...
}
@MappedEntity
class Book {
@Id
@GeneratedValue
private ObjectId id
private String title
private int pages
Book(String title, int pages) {
this.title = title
this.pages = pages
}
//...
}
@MappedEntity
data class Book(@field:Id
@GeneratedValue
var id: ObjectId,
var title: String,
var pages: Int = 0)
Followed by an interface that extends from CrudRepository
package example;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.model.Page;
import io.micronaut.data.model.Pageable;
import io.micronaut.data.model.Slice;
import io.micronaut.data.mongodb.annotation.MongoAggregateQuery;
import io.micronaut.data.mongodb.annotation.MongoDeleteQuery;
import io.micronaut.data.mongodb.annotation.MongoFindQuery;
import io.micronaut.data.mongodb.annotation.MongoRepository;
import io.micronaut.data.mongodb.annotation.MongoUpdateQuery;
import io.micronaut.data.repository.CrudRepository;
import org.bson.types.ObjectId;
import java.util.List;
@MongoRepository // (1)
interface BookRepository extends CrudRepository<Book, ObjectId> { // (2)
Book find(String title);
}
package example
import io.micronaut.data.annotation.Id
import io.micronaut.data.model.Page
import io.micronaut.data.model.Pageable
import io.micronaut.data.model.Slice
import io.micronaut.data.mongodb.annotation.MongoAggregateQuery
import io.micronaut.data.mongodb.annotation.MongoDeleteQuery
import io.micronaut.data.mongodb.annotation.MongoFindQuery
import io.micronaut.data.mongodb.annotation.MongoRepository
import io.micronaut.data.mongodb.annotation.MongoUpdateQuery
import io.micronaut.data.repository.CrudRepository
import org.bson.types.ObjectId
@MongoRepository // (1)
interface BookRepository extends CrudRepository<Book, ObjectId> { // (2)
Book find(String title);
}
package example
import io.micronaut.context.annotation.Executable
import io.micronaut.data.annotation.Id
import io.micronaut.data.model.Page
import io.micronaut.data.model.Pageable
import io.micronaut.data.model.Slice
import io.micronaut.data.mongodb.annotation.*
import io.micronaut.data.repository.CrudRepository
import org.bson.types.ObjectId
@MongoRepository // (1)
interface BookRepository : CrudRepository<Book, ObjectId> { // (2)
@Executable
fun find(title: String): Book
}
1 | The interface is annotated with @MongoRepository |
2 | The CrudRepository interface takes 2 generic arguments, the entity type (in this case Book) and the ID type (in this case ObjectId) |
You can now perform CRUD (Create, Read, Update, Delete) operations on the entity. The implementation of example.BookRepository is created at compilation time. To obtain a reference to it simply inject the bean:
@Inject BookRepository bookRepository;
@Inject @Shared BookRepository bookRepository
@Inject
lateinit var bookRepository: BookRepository
Saving an Instance (Create)
To save an instance use the save method of the CrudRepository interface:
Book book = new Book("The Stand", 1000);
bookRepository.save(book);
Book book = new Book("The Stand", 1000)
bookRepository.save(book)
var book = Book(ObjectId(),"The Stand", 1000)
bookRepository.save(book)
Retrieving an Instance (Read)
To read a book back use findById:
book = bookRepository.findById(id).orElse(null);
book = bookRepository.findById(id).orElse(null)
book = bookRepository.findById(id).orElse(null)
Updating an Instance (Update)
With Micronaut Data MongoDB, you must manually implement an update method since the MongoDB implementation doesn’t include any dirty checking or notion of a persistence session. So you have to define explicit update methods for updates in your repository. For example:
void update(@Id ObjectId id, int pages);
void update(@Id ObjectId id, String title);
void update(@Id ObjectId id, int pages);
void update(@Id ObjectId id, String title);
fun update(@Id id: ObjectId, pages: Int)
fun update(@Id id: ObjectId, title: String)
Which can then be called like so:
bookRepository.update(book.getId(), "Changed");
bookRepository.update(book.getId(), "Changed")
bookRepository.update(book.id, "Changed")
Deleting an Instance (Delete)
To delete an instance use deleteById:
bookRepository.deleteById(id);
bookRepository.deleteById(id)
bookRepository.deleteById(id)
Congratulations you have implemented your first Micronaut Data MongoDB repository! Read on to find out more.
Micronaut Data MongoDB supports creating collections by setting the property micronaut.data.mongodb.create-collections to true. MongoDB would otherwise create them automatically, except in a few cases such as a transactional context, where the collection needs to already exist.
|
7.2 Repositories
As seen in the Quick Start MongoDB repositories in Micronaut Data are defined as interfaces that are annotated with the @MongoRepository.
In a multiple-server scenario, the serverName annotation property can be used to specify the datasource configuration to use. By default, Micronaut Data will look for the default server.
For example:
@MongoRepository(serverName = "inventoryServer") (1)
public interface PhoneRepository extends CrudRepository<Phone, Integer> {
Optional<Phone> findByAssetId(@NotNull Integer assetId);
}
1 | @MongoRepository marking the interface to access MongoDB and pointing to the server configuration 'inventoryServer' |
The entity to treat as the root entity for the purposes of querying is established either from the method signature or from the generic type parameter specified to the GenericRepository interface.
If no root entity can be established then a compilation error will occur.
The same interfaces supported by the JPA implementation are supported by MongoDB.
Note that in addition to interfaces you can also define repositories as abstract classes:
package example;
import io.micronaut.data.mongodb.annotation.MongoRepository;
import io.micronaut.data.repository.CrudRepository;
import org.bson.types.ObjectId;
import java.util.List;
@MongoRepository
public abstract class AbstractBookRepository implements CrudRepository<Book, ObjectId> {
public abstract List<Book> findByTitle(String title);
}
package example
import io.micronaut.data.mongodb.annotation.MongoRepository
import io.micronaut.data.repository.CrudRepository
import org.bson.types.ObjectId
@MongoRepository
abstract class AbstractBookRepository implements CrudRepository<Book, ObjectId> {
abstract List<Book> findByTitle(String title);
}
package example
import io.micronaut.data.mongodb.annotation.MongoRepository
import io.micronaut.data.repository.CrudRepository
import org.bson.types.ObjectId
@MongoRepository
abstract class AbstractBookRepository : CrudRepository<Book, ObjectId> {
abstract fun findByTitle(title: String): List<Book>
}
You can specify MongoDB’s database name using the repository annotation: @MongoRepository(databaseName = "mydb") or in the connection url: mongodb://username:password@localhost:27017/mydb
|
Micronaut Data MongoDB introduces one special repository interface, MongoQueryExecutor (and a corresponding reactive interface, MongoReactiveQueryExecutor), which accepts Bson/List<Bson> filter/pipeline/update parameters intended to be used in combination with the MongoDB DSL API (a usage sketch follows the list below):
-
com.mongodb.client.model.Filters
-
com.mongodb.client.model.Aggregates
-
com.mongodb.client.model.Updates
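A minimal sketch of the DSL usage, assuming the repository also extends MongoQueryExecutor<Person>; the field names and values are illustrative:
List<Person> adults = personRepository.findAll(Filters.gte("age", 18));
long denisCount = personRepository.count(Filters.eq("name", "Denis"));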
Micronaut Data MongoDB supports specific criteria that allow filtering documents by checking for occurrences in a list or array of strings in a given field, using the ArrayContains or CollectionContains criteria. Here is an example of a repository method declaration that will search for people whose interests field (a list of strings) contains the given value(s):
List<Person> findByInterestsCollectionContains(String interest);
List<Person> findByInterestsCollectionContains(String interest)
fun findByInterestsCollectionContains(interest: String): List<Person>
Micronaut Data MongoDB supports array or list containment checks for single or multiple values using the ArrayContains or CollectionContains criteria.
7.2.1 Accessing data
Unlike JPA/Hibernate, Micronaut Data MongoDB is stateless and has no notion of a persistence session that requires state management.
Since there is no session, features like dirty checking are not supported. This has implications when defining repository methods for inserts and updates.
By default, when saving an entity with a method like save(MyEntity), an insert is always performed since Micronaut Data has no way to know whether the entity is associated with a particular session.
If you wish to update an entity you should instead either use update(MyEntity) or, even better, define an appropriate update method to update only the data you want to update, for example:
void update(@Id ObjectId id, int pages);
void update(@Id ObjectId id, String title);
void update(@Id ObjectId id, int pages);
void update(@Id ObjectId id, String title);
fun update(@Id id: ObjectId, pages: Int)
fun update(@Id id: ObjectId, title: String)
7.2.2 Custom Queries and Options
Micronaut Data MongoDB introduces a few annotations that can be used to define custom queries and modify default options:
Annotation | Description |
@MongoFindQuery | Allows defining a custom find method execution with values for filtering, sorting, projection and collation. |
@MongoAggregateQuery | Allows defining a custom aggregate method execution with a value for the pipeline. |
@MongoUpdateQuery | Allows defining a custom update method execution with values for filter, update and collation. |
@MongoDeleteQuery | Allows defining a custom delete method execution with values for filter and collation. |
@MongoFilter | Allows defining a custom filter value for operations that support it. Can be used on an annotation to create a predefined filter annotation. |
@MongoSort | Allows defining a custom sort value for operations that support it. Can be used on a repository class to define a default sort or to create a predefined sort annotation. |
@MongoProjection | Allows defining a custom projection value for operations that support it. Can be used on a repository class to define a default projection or to create a predefined projection annotation. |
@MongoCollation | Allows defining a custom collation value for operations that support it. Can be used on a repository class to define a default collation or to create a predefined collation annotation. |
@MongoAggregateOptions | The aggregation operation options. |
@MongoFindOptions | The find operation options. |
@MongoUpdateOptions | The update operation options. |
@MongoDeleteOptions | The delete operation options. |
Custom queries for MongoDB are defined in JSON, and method parameters can be referenced as variables prefixed with :
.
@MongoFindQuery(filter = "{title:{$regex: :t}}", sort = "{title: 1}")
List<Book> customFind(String t);
@MongoAggregateQuery("[{$match: {name:{$regex: :t}}}, {$sort: {name: 1}}, {$project: {name: 1}}]")
List<Person> customAggregate(String t);
@MongoUpdateQuery(filter = "{title:{$regex: :t}}", update = "{$set:{name: 'tom'}}")
void customUpdate(String t);
@MongoDeleteQuery(filter = "{title:{$regex: :t}}", collation = "{locale:'en_US', numericOrdering:true}")
void customDelete(String t);
@MongoFindQuery(filter = '{title:{$regex: :t}}', sort = '{title: 1}')
List<Book> customFind(String t);
@MongoAggregateQuery('[{$match: {name:{$regex: :t}}}, {$sort: {name: 1}}, {$project: {name: 1}}]')
List<Person> customAggregate(String t)
@MongoUpdateQuery(filter = '{title:{$regex: :t}}', update = '{$set:{name: "tom"}}')
void customUpdate(String t);
@MongoDeleteQuery(filter = '{title:{$regex: :t}}', collation = "{locale:'en_US', numericOrdering:true}")
void customDelete(String t);
@MongoFindQuery(filter = "{title:{\$regex: :t}}", sort = "{title: 1}")
fun customFind(t: String): List<Book>
@MongoAggregateQuery("[{\$match: {name:{\$regex: :t}}}, {\$sort: {name: 1}}, {\$project: {name: 1}}]")
fun customAggregate(t: String): List<Person>
@MongoUpdateQuery(filter = "{title:{\$regex: :t}}", update = "{\$set:{name: 'tom'}}")
fun customUpdate(t: String)
@MongoDeleteQuery(filter = "{title:{\$regex: :t}}", collation = "{locale:'en_US', numericOrdering:true}")
fun customDelete(t: String)
Only the filter, pipeline and update queries can reference method parameters. |
Some of the annotations can also be defined on the repository itself to provide defaults for all operations that support them:
@MongoFindOptions(allowDiskUse = true, maxTimeMS = 1000)
@MongoAggregateOptions(allowDiskUse = true, maxTimeMS = 100)
@MongoCollation("{ locale: 'en_US', numericOrdering: true}")
@MongoRepository
public interface SaleRepository extends CrudRepository<Sale, ObjectId> {
@MongoFindOptions(allowDiskUse = true, maxTimeMS = 1000L)
@MongoAggregateOptions(allowDiskUse = true, maxTimeMS = 100L)
@MongoCollation("{ locale: 'en_US', numericOrdering: true}")
@MongoRepository
interface SaleRepository extends CrudRepository<Sale, ObjectId> {
@MongoFindOptions(allowDiskUse = true, maxTimeMS = 1000)
@MongoAggregateOptions(allowDiskUse = true, maxTimeMS = 100)
@MongoCollation("{ locale: 'en_US', numericOrdering: true}")
@MongoRepository
interface SaleRepository : CrudRepository<Sale, ObjectId> {
7.3 Mapping Entities
As mentioned in the Quick Start section, if you need to customize how entities map to the collection and attribute names you need to use Micronaut Data’s own annotations in the io.micronaut.data.annotation
package.
An important aspect of Micronaut Data MongoDB is that the entity classes must be compiled with Micronaut Data. This is because Micronaut Data pre-computes the persistence model (the relationships between entities, the class/property name to collection/attribute name mappings) at compilation time, which is one of the reasons Micronaut Data MongoDB can start up so fast.
An example of mapping with Micronaut Data annotations can be seen below:
@MappedEntity // (1)
public class Country {
@Id
private ObjectId id; // (2)
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "country")
private Set<CountryRegion> regions; // (3)
private String name; // (4)
// ...
}
@MappedEntity // (1)
class Country {
@Id
private ObjectId id // (2)
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "country")
private Set<CountryRegion> regions // (3)
private String name // (4)
// ...
}
@MappedEntity // (1)
data class Country(
@field:Id
val id: ObjectId, // (2)
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "country")
val regions: Set<CountryRegion>, // (3)
val name: String // (4)
)
1 | The class is marked as a mapped entity that should be persisted in the country collection |
2 | The id is defined as MongoDB’s ObjectId |
3 | The regions are stored in a separate collection represented by CountryRegion |
4 | The name field that should be persisted in a collection |
7.3.1 Mapping Annotations
The following table summarizes the different annotations and what they enable. If you are familiar with and prefer the JPA annotations then feel free to skip to the next section:
Annotation |
Description |
Meta annotation for a value that should be auto-populated by Micronaut Data (such as time stamps and UUIDs) |
|
Allows assigning a date created value (such as a |
|
Allows assigning a last updated value (such as a |
|
Specifies that the bean is embeddable |
|
Specifies an embedded ID of an entity |
|
Specifies that the property value is generated by the database and not included in inserts |
|
Specifies a join collection association |
|
Specifies a join attribute mapping |
|
Specifies the ID of an entity |
|
Specifies the entity is mapped to the collection. If your collection name differs from the entity name, pass the name as the |
|
Used to customize the attribute name |
|
Used to specify a relationship (one-to-one, one-to-many, etc.) |
|
Used to specify a property is transient |
|
Specifies the version field of an entity, enables optimistic locking |
Again, Micronaut Data MongoDB is not an ORM but a simple data mapper, so many JPA concepts simply don’t apply; however, for users familiar with these annotations it is handy to be able to use them.
Micronaut Data MongoDB doesn’t support JPA annotations |
7.3.2 ID Generation
The default ID generation for MongoDB uses ObjectId
as the ID. Only two ID types are supported: the default ObjectId
, and a simple Java String that will hold the hex value of the ObjectId
.
You can remove the @GeneratedValue
annotation, in which case the expectation is that you will assign an ID before calling save()
.
Automatically assigned UUIDs are also supported by adding a property annotated with @Id
and @AutoPopulated
.
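For illustration, here is a minimal sketch of the two alternatives described above (the entity names are hypothetical):
@MappedEntity
public class Receipt {
    @Id
    @GeneratedValue
    private String id; // holds the hex value of the generated ObjectId
    // ...
}

@MappedEntity
public class Token {
    @Id
    @AutoPopulated
    private java.util.UUID id; // automatically assigned UUID
    // ...
}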
7.3.3 Composite Primary Keys
Composite primary keys can be defined using the @EmbeddedId annotation. A composite ID requires an additional class to represent the key. The class should define fields that correspond to the collection’s attributes making up the composite key. For example:
package example;
import io.micronaut.data.annotation.Embeddable;
import java.util.Objects;
@Embeddable
public class ProjectId {
private final int departmentId;
private final int projectId;
public ProjectId(int departmentId, int projectId) {
this.departmentId = departmentId;
this.projectId = projectId;
}
public int getDepartmentId() {
return departmentId;
}
public int getProjectId() {
return projectId;
}
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ProjectId projectId1 = (ProjectId) o;
return departmentId == projectId1.departmentId &&
projectId == projectId1.projectId;
}
@Override
public int hashCode() {
return Objects.hash(departmentId, projectId);
}
}
package example
import groovy.transform.EqualsAndHashCode
import io.micronaut.data.annotation.Embeddable
@EqualsAndHashCode
@Embeddable
class ProjectId {
final int departmentId
final int projectId
ProjectId(int departmentId, int projectId) {
this.departmentId = departmentId
this.projectId = projectId
}
}
package example
import io.micronaut.data.annotation.Embeddable
@Embeddable
data class ProjectId(val departmentId: Int, val projectId: Int)
It is recommended that the ID class be immutable and implement equals /hashCode .
TIP: When using Java, be sure to define getters for the fields making up your composite key.
|
package example;
import io.micronaut.data.annotation.EmbeddedId;
import io.micronaut.data.annotation.MappedEntity;
@MappedEntity
public class Project {
@EmbeddedId
private ProjectId projectId;
private String name;
public Project(ProjectId projectId, String name) {
this.projectId = projectId;
this.name = name;
}
public ProjectId getProjectId() {
return projectId;
}
public String getName() {
return name;
}
}
package example
import io.micronaut.data.annotation.EmbeddedId
import io.micronaut.data.annotation.MappedEntity
@MappedEntity
class Project {
@EmbeddedId
private ProjectId projectId
private String name
Project(ProjectId projectId, String name) {
this.projectId = projectId
this.name = name
}
ProjectId getProjectId() {
return projectId
}
String getName() {
return name
}
}
package example
import io.micronaut.data.annotation.EmbeddedId
import io.micronaut.data.annotation.MappedEntity
@MappedEntity
class Project(@EmbeddedId val projectId: ProjectId, val name: String)
To alter the collection’s attribute mappings for the ID, you may use the @MappedProperty annotation on the fields within the ProjectId class.
|
7.3.4 Constructor Arguments
Micronaut Data MongoDB also allows the definition of immutable objects using constructor arguments instead of getters/setters. If you define multiple constructors then the one used to create the object from the database should be annotated with io.micronaut.core.annotation.Creator
.
For example:
package example;
import io.micronaut.core.annotation.Creator;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import org.bson.types.ObjectId;
@MappedEntity
public class Manufacturer {
@Id
@GeneratedValue
private ObjectId id;
private String name;
@Creator
public Manufacturer(String name) {
this.name = name;
}
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
public String getName() {
return name;
}
}
package example
import io.micronaut.core.annotation.Creator
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
@MappedEntity
class Manufacturer {
@Id
@GeneratedValue
Long id
final String name
@Creator
Manufacturer(String name) {
this.name = name
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import org.bson.types.ObjectId
@MappedEntity
data class Manufacturer(
@field:Id
@GeneratedValue
var id: ObjectId?,
val name: String
)
As you can see from the example above, the ID
of the object must however include a setter, since it has to be assigned from the database-generated value.
7.3.5 Naming Strategies
The default naming strategy when converting camel case class and property names to collection and attribute names is to use underscore separated lower case. In other words FooBar
becomes foo_bar
.
If this is not satisfactory then you can customize this by setting the namingStrategy
member of the @MappedEntity annotation on the entity:
@MappedEntity(namingStrategy = NamingStrategies.Raw.class)
public class CountryRegion {
...
}
A few important things to note: since Micronaut Data pre-computes the collection and attribute name mappings at compilation time, the specified NamingStrategy implementation must be on the annotation processor classpath (annotationProcessor
scope for Java or kapt
for Kotlin).
In addition, if you don’t want to repeat the above annotation definition on every entity, it is handy to define a meta-annotation where the above definition is applied to another annotation that you then add to your classes.
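A minimal sketch of such a meta-annotation, assuming a hypothetical @RawEntity annotation name:
@MappedEntity(namingStrategy = NamingStrategies.Raw.class)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface RawEntity {
}

// Usage: the entity picks up the mapping and naming strategy from the meta-annotation
@RawEntity
public class CountryRegion {
    // ...
}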
7.3.6 Association Mapping
To specify a relation between two entities you need to use the @Relation annotation. The relation kind is specified using the Relation.Kind enum in the value
attribute, whose names are similar to the JPA relation annotation names (@OneToMany
, @OneToOne
etc.):
Kind |
Description |
ONE_TO_MANY |
One-To-Many association |
ONE_TO_ONE |
One-To-One association |
MANY_TO_MANY |
Many-To-Many association |
MANY_TO_ONE |
Many-To-One association |
EMBEDDED |
Embedded association |
Use mappedBy to specify the inverse property that this relation is mapped by.
The cascade member of @Relation controls which operations are cascaded to the associated entity or entities (a sketch follows the table):
Type |
Description |
PERSIST |
Associated entity or entities are going to be persisted when the owning entity is saved |
UPDATE |
Associated entity or entities are going to be updated when the owning entity is updated |
NONE |
(Default) No operation is cascaded |
ALL |
All (PERSIST and UPDATE) operations are cascaded |
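A minimal sketch, assuming the cascade member of @Relation accepts the values listed above:
@Relation(value = Relation.Kind.ONE_TO_MANY, mappedBy = "country", cascade = Relation.Cascade.ALL)
private Set<CountryRegion> regions; // regions are persisted and updated together with the owning Country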
7.3.7 Association Fetching
Micronaut Data is a simple data mapper, hence it will not fetch any associations for you using techniques like lazy loading of entity proxies for single-ended associations.
You must instead specify ahead of time what data you want to fetch. You cannot map an association as being eager or lazy. The reason for this design choice is simple: even in the JPA world, accessing lazy associations or lazily initialized collections is considered bad practice due to the N+1 query issue, and the recommendation is always to write an optimized join query.
Micronaut Data MongoDB takes this a step further by simply not supporting those features considered bad practice anyway. However, it does impact how you may model an association. For example, if you define an association in a constructor argument such as the following entity:
package example;
import io.micronaut.core.annotation.Nullable;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.annotation.Relation;
import org.bson.types.ObjectId;
@MappedEntity
public class Product {
@Id
@GeneratedValue
private ObjectId id;
private String name;
@Nullable
@Relation(Relation.Kind.MANY_TO_ONE)
private Manufacturer manufacturer;
public Product(String name, @Nullable Manufacturer manufacturer) {
this.name = name;
this.manufacturer = manufacturer;
}
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
public String getName() {
return name;
}
public Manufacturer getManufacturer() {
return manufacturer;
}
}
package example
import io.micronaut.core.annotation.Nullable
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Relation
import org.bson.types.ObjectId
@MappedEntity
class Product {
@Id
@GeneratedValue
ObjectId id
private String name
@Nullable
@Relation(Relation.Kind.MANY_TO_ONE)
private Manufacturer manufacturer
Product(String name, Manufacturer manufacturer) {
this.name = name
this.manufacturer = manufacturer
}
String getName() {
return name
}
Manufacturer getManufacturer() {
return manufacturer
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Relation
import org.bson.types.ObjectId
@MappedEntity
data class Product(@field:Id @GeneratedValue
var id: ObjectId?,
var name: String,
@Relation(Relation.Kind.MANY_TO_ONE)
var manufacturer: Manufacturer?) {
constructor(name: String, manufacturer: Manufacturer?) : this(null, name, manufacturer)
}
If you then attempt to read the Product
entity back without specifying a join, an exception will occur, since the manufacturer
association is not Nullable
.
There are a few ways around this: one is to declare at the repository level that manufacturer
should always be fetched; another is to declare the @Nullable
annotation on the manufacturer
argument so that it can be null
(or, in Kotlin, add ?
to the end of the constructor argument's type). Which approach you choose depends on the design of the application.
The following section provides more coverage on handling joins.
7.4 Join Queries
As discussed in the previous section, Micronaut Data MongoDB doesn’t support associations in the traditional ORM sense. There is no lazy loading or support for proxies.
Consider a Product
entity from the previous section that has an association to a Manufacturer
entity:
package example;
import io.micronaut.core.annotation.Creator;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import org.bson.types.ObjectId;
@MappedEntity
public class Manufacturer {
@Id
@GeneratedValue
private ObjectId id;
private String name;
@Creator
public Manufacturer(String name) {
this.name = name;
}
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
public String getName() {
return name;
}
}
package example
import io.micronaut.core.annotation.Creator
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
@MappedEntity
class Manufacturer {
@Id
@GeneratedValue
Long id
final String name
@Creator
Manufacturer(String name) {
this.name = name
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import org.bson.types.ObjectId
@MappedEntity
data class Manufacturer(
@field:Id
@GeneratedValue
var id: ObjectId?,
val name: String
)
Say you query for Product
instances: by default Micronaut Data MongoDB will only query for and fetch the simple properties. In the case of single-ended associations like the above, Micronaut Data will only retrieve the ID and assign it if possible (in the case of entities that require constructor arguments this is not even possible).
If you need to fetch the association too, you can use the @Join annotation on your repository interface to specify that the aggregation should be executed with a lookup of the associated Manufacturer
.
@MongoRepository
public interface ProductRepository extends CrudRepository<Product, ObjectId> {
@Join("manufacturer") // (1)
List<Product> list();
}
@MongoRepository
public interface ProductRepository extends CrudRepository<Product, ObjectId> {
@Join("manufacturer") // (1)
List<Product> list();
}
@MongoRepository
interface ProductRepository : CrudRepository<Product, ObjectId>, JpaSpecificationExecutor<Product> {
@Join("manufacturer") // (1)
fun list(): List<Product>
}
1 | List query should include joined relation manufacturer from a different collection |
Micronaut Data MongoDB will generate the following aggregation JSON query at compile time and only bind the required parameters at runtime:
[
{
"$lookup":{
"from":"cart_item",
"localField":"_id",
"foreignField":"cart._id",
"as":"items"
}
},
{
"$match":{
"_id":{
"$eq":{
"$oid":"61d69d67e8cb2c06b66d2e67"
}
}
}
}
]
Note that the @Join annotation is repeatable and hence can be specified multiple times for different associations.
Micronaut Data MongoDB doesn’t support different join types or a custom alias defined in @Join. |
7.5 Using Attribute Converter
There are cases where you would like to represent the attribute differently in the database than in the entity.
Consider the following example entity:
package example;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;
import io.micronaut.data.annotation.MappedProperty;
import io.micronaut.data.annotation.Relation;
import org.bson.types.ObjectId;
@MappedEntity
public class Sale {
@Relation(Relation.Kind.MANY_TO_ONE)
private final Product product;
@MappedProperty(converter = QuantityAttributeConverter.class)
private final Quantity quantity;
@Id
@GeneratedValue
private ObjectId id;
public Sale(Product product, Quantity quantity) {
this.product = product;
this.quantity = quantity;
}
public Product getProduct() {
return product;
}
public Quantity getQuantity() {
return quantity;
}
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
}
package example
import io.micronaut.data.annotation.GeneratedValue
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Relation
import org.bson.types.ObjectId
@MappedEntity
class Sale {
@Id
@GeneratedValue
ObjectId id
@Relation(Relation.Kind.MANY_TO_ONE)
final Product product
final Quantity quantity
Sale(Product product, Quantity quantity) {
this.product = product
this.quantity = quantity
}
}
package example
import io.micronaut.data.annotation.*
import org.bson.types.ObjectId
@MappedEntity
data class Sale(
@field:Id
@GeneratedValue
var id: ObjectId?,
@Relation(Relation.Kind.MANY_TO_ONE)
val product: Product,
@MappedProperty(converter = QuantityAttributeConverter::class)
val quantity: Quantity
)
The Sale
class has a reference to a type Quantity
. The Quantity
type is defined as:
package example;
public class Quantity {
private final int amount;
private Quantity(int amount) {
this.amount = amount;
}
public int getAmount() {
return amount;
}
public static Quantity valueOf(int amount) {
return new Quantity(amount);
}
}
package example
import groovy.transform.Immutable
@Immutable
class Quantity {
int amount
}
package example
data class Quantity(val amount: Int)
As you can see @MappedProperty(converter = QuantityAttributeConverter.class)
is used to define the Quantity
converter.
Micronaut Data MongoDB doesn’t support defining the converter using @TypeDef .
|
The last step is to add custom attribute conversion so that Micronaut Data knows how to read and write the type from an Integer
:
package example;
import io.micronaut.core.convert.ConversionContext;
import io.micronaut.data.model.runtime.convert.AttributeConverter;
import jakarta.inject.Singleton;
@Singleton // (1)
public class QuantityAttributeConverter implements AttributeConverter<Quantity, Integer> {
@Override // (2)
public Integer convertToPersistedValue(Quantity quantity, ConversionContext context) {
return quantity == null ? null : quantity.getAmount();
}
@Override // (3)
public Quantity convertToEntityValue(Integer value, ConversionContext context) {
return value == null ? null : Quantity.valueOf(value);
}
}
package example
import groovy.transform.CompileStatic
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
@CompileStatic
class QuantityAttributeConverter implements AttributeConverter<Quantity, Integer> {
@Override // (2)
Integer convertToPersistedValue(Quantity quantity, ConversionContext context) {
return quantity == null ? null : quantity.getAmount()
}
@Override // (3)
Quantity convertToEntityValue(Integer value, ConversionContext context) {
return value == null ? null : new Quantity(value)
}
}
package example
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
class QuantityAttributeConverter : AttributeConverter<Quantity?, Int?> {
// (2)
override fun convertToPersistedValue(quantity: Quantity?, context: ConversionContext): Int? {
return quantity?.amount
}
// (3)
override fun convertToEntityValue(value: Int?, context: ConversionContext): Quantity? {
return if (value == null) null else Quantity(value)
}
}
1 | The attribute converter implements @AttributeConverter and must be a bean |
2 | A converter from Quantity to Integer |
3 | A converter from Integer to Quantity |
It’s possible to define the converter’s persisted type using @MappedProperty: @MappedProperty(converterPersistedType = Integer.class) ; in this case the data type will be detected automatically.
|
7.6 Repositories with Criteria API
In some cases you need to build a query programmatically at runtime; for that, Micronaut Data implements a subset of the Jakarta Persistence Criteria API 3.0, which can be used with Micronaut Data MongoDB. To utilize this feature, add the following dependency:
implementation("jakarta.persistence:jakarta.persistence-api")
<dependency>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
</dependency>
To implement queries that cannot be defined at compile time, Micronaut Data introduces the JpaSpecificationExecutor repository interface, which you can use to extend your repository interface:
@MongoRepository
public interface PersonRepository extends CrudRepository<Person, ObjectId>, JpaSpecificationExecutor<Person> {
}
@MongoRepository
interface PersonRepository extends CrudRepository<Person, ObjectId>, JpaSpecificationExecutor<Person> {
}
@MongoRepository
interface PersonRepository : CrudRepository<Person, ObjectId>, JpaSpecificationExecutor<Person> {
}
Each method expects a "specification", which is a functional interface with a set of Criteria API objects intended to build a query programmatically.
The Micronaut Criteria API currently implements only a subset of the API; most of it is used internally to create queries with predicates and projections.
The following JPA Criteria API features are not currently supported:
-
Joins with custom ON expressions and typed join methods like joinSet etc.
-
Sub-queries
-
Collection operations: isMember etc.
-
Custom or tuple result types
-
Transformation expressions like concat, substring etc.
-
Cases and functions
More information about the Jakarta Persistence Criteria API 3.0 can be found in the official API specification.
7.6.1 Querying
To find an entity or multiple entities you can use one of the following methods from the JpaSpecificationExecutor interface:
@Override
Optional<Person> findOne(PredicateSpecification<Person> spec);
@Override
Optional<Person> findOne(QuerySpecification<Person> spec);
@Override
List<Person> findAll(PredicateSpecification<Person> spec);
@Override
List<Person> findAll(QuerySpecification<Person> spec);
@Override
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort);
@Override
List<Person> findAll(QuerySpecification<Person> spec, Sort sort);
@Override
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable);
@Override
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable);
Optional<Person> findOne(PredicateSpecification<Person> spec)
Optional<Person> findOne(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec)
List<Person> findAll(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort)
List<Person> findAll(QuerySpecification<Person> spec, Sort sort)
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable)
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable)
fun findOne(spec: PredicateSpecification<Person>?): Optional<Person>
fun findOne(spec: QuerySpecification<Person>?): Optional<Person>
fun findAll(spec: PredicateSpecification<Person>?): List<Person>
fun findAll(spec: QuerySpecification<Person>?): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: QuerySpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, pageable: Pageable): Page<Person>
fun findAll(spec: QuerySpecification<Person>?, pageable: Pageable): Page<Person>
As you can see, there are two variations of the findOne
/findAll
methods.
The first method expects a PredicateSpecification, which is a simple specification interface that can be implemented to return a predicate:
import static jakarta.persistence.criteria.*;
public interface PredicateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaBuilder criteriaBuilder (3)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria builder |
This interface can also be used for update and delete methods, and it provides or
and and
methods for combining multiple predicates.
The second interface is intended only for query criteria because it includes jakarta.persistence.criteria.CriteriaQuery
as a parameter.
import static jakarta.persistence.criteria.*;
public interface QuerySpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaQuery<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria query instance |
4 | The criteria builder |
The following methods can be used to implement counting queries:
@Override
long count(PredicateSpecification<Person> spec);
@Override
long count(QuerySpecification<Person> spec);
long count(PredicateSpecification<Person> spec)
long count(QuerySpecification<Person> spec)
fun count(spec: PredicateSpecification<Person>?): Long
fun count(spec: QuerySpecification<Person>?): Long
You can define criteria specification methods that will help you to create a query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String) = query<Person> {
where {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
}
}
Then you can combine them for find
or count
queries:
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null);
long countAgeLess30 = personRepository.count(ageIsLessThan(30));
long countAgeLess20 = personRepository.count(ageIsLessThan(20));
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))));
List<Person> people = personRepository.findAll(where(nameEquals("Denis").or(nameEquals("Josh"))));
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null)
long countAgeLess30 = personRepository.count(ageIsLessThan(30))
long countAgeLess20 = personRepository.count(ageIsLessThan(20))
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30) & not(nameEquals("Denis")))
List<Person> people = personRepository.findAll(where(nameEquals("Denis") | nameEquals("Josh")))
val denis: Person? = personRepository.findOne(nameEquals("Denis")).orElse(null)
val countAgeLess30: Long = personRepository.count(ageIsLessThan(30))
val countAgeLess20: Long = personRepository.count(ageIsLessThan(20))
val countAgeLess30NotDenis: Long = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))))
val people = personRepository.findAll(PredicateSpecification.where(nameEquals("Denis").or(nameEquals("Josh"))))
The examples use values known at compile time; in such cases it would be better to create custom repository methods, which come with compile-time generated queries and eliminate runtime overhead. It’s recommended to use criteria only for dynamic queries where the query structure is not known at build time. |
7.6.2 Updating
To implement an update you can use the following method from the JpaSpecificationExecutor interface:
@Override
long updateAll(UpdateSpecification<Person> spec);
long updateAll(UpdateSpecification<Person> spec)
fun updateAll(spec: UpdateSpecification<Person>?): Long
This method expects an UpdateSpecification, which is a variation of the specification interface that includes access to jakarta.persistence.criteria.CriteriaUpdate
:
import static jakarta.persistence.criteria.*;
public interface UpdateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaUpdate<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria update instance |
4 | The criteria builder |
Specific properties can be updated using the jakarta.persistence.criteria.CriteriaUpdate
interface:
query.set(root.get("name"), newName);
query.set(root.get("name"), newName)
fun updateName(newName: String, existingName: String) = update<Person> {
set(Person::name, newName)
where {
root[Person::name] eq existingName
}
}
query.set(root[Person::name], newName)
You can define criteria specification methods, including an update specification, that will help you to create an update query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get("name"), newName);
return null;
};
}
static PredicateSpecification<Person> interestsContains(String interest) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder) criteriaBuilder).arrayContains(root.get("interests"), criteriaBuilder.literal(interest));
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get("name"), newName)
null
}
}
static PredicateSpecification<Person> interestsContains(String interest) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder) criteriaBuilder).arrayContains(root.get("interests"), criteriaBuilder.literal(interest))
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String) = query<Person> {
where {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
}
fun updateName(newName: String, existingName: String) = update<Person> {
set(Person::name, newName)
where {
root[Person::name] eq existingName
}
}
fun interestsContains(interest: String): PredicateSpecification<Person>? {
return PredicateSpecification { root: Root<Person>, criteriaBuilder: CriteriaBuilder ->
(criteriaBuilder as PersistentEntityCriteriaBuilder).arrayContains(
root.get<Any>("interests"),
criteriaBuilder.literal(interest)
)
}
}
// Different style using the criteria builder
fun nameEquals2(name: String?) = PredicateSpecification { root, criteriaBuilder ->
criteriaBuilder.equal(root[Person::name], name)
}
fun ageIsLessThan2(age: Int) = PredicateSpecification { root, criteriaBuilder ->
criteriaBuilder.lessThan(root[Person::age], age)
}
fun setNewName2(newName: String) = UpdateSpecification { root, query, criteriaBuilder ->
query.set(root[Person::name], newName)
null
}
}
Then you can use the update specification combined with predicate specifications:
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")));
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")))
val recordsUpdated = personRepository.updateAll(updateName("Steven", "Denis"))
7.6.3 Deleting
To delete an entity or multiple entities you can use one of the following methods from the JpaSpecificationExecutor interface:
@Override
long deleteAll(PredicateSpecification<Person> spec);
@Override
long deleteAll(DeleteSpecification<Person> spec);
long deleteAll(PredicateSpecification<Person> spec)
long deleteAll(DeleteSpecification<Person> spec)
fun deleteAll(spec: PredicateSpecification<Person>?): Long
fun deleteAll(spec: DeleteSpecification<Person>?): Long
As with querying, the deleteAll
methods come in two variations.
The first method expects a PredicateSpecification, which is the same interface described in the Querying section.
The second method comes with DeleteSpecification and is intended only for delete criteria because it includes access to jakarta.persistence.criteria.CriteriaDelete
.
import static jakarta.persistence.criteria.*;
public interface DeleteSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaDelete<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria delete instance |
4 | The criteria builder |
For deleting you can reuse the same predicates as for querying and updating:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String) = query<Person> {
where {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
}
}
Simply pass the predicate specification to the deleteAll
method:
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")));
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")))
val recordsDeleted = personRepository.deleteAll(PredicateSpecification.where(nameEquals("Denis")))
val recordsDeleted = personRepository.deleteAll(where {
root[Person::name] eq "Denis"
})
7.6.4 Other repository variations
Micronaut Data includes different variations of the specification executor interface intended to be used with async or reactive repositories:
Interface |
Description |
The default interface for querying, deleting and updating data |
|
The async version of the specifications repository |
|
The reactive streams - |
|
The Reactor version of the specifications repository |
|
The Kotlin version of the interface that is using coroutines |
7.7 Optimistic locking
Optimistic locking is a strategy where you note the current version of the record and modify the record only when the version is still the same.
To enable optimistic locking for your entity, add a @Version-annotated field with one of the following types:
-
java.lang.Integer
-
java.lang.Long
-
java.lang.Short
-
Date-time type extending
java.time.Temporal
The field is going to be incremented (for number types) or replaced (for date types) on an update operation.
Micronaut Data will generate update/delete filter queries with a version match, and if the update/delete doesn’t produce any result, an OptimisticLockException will be thrown.
@MappedEntity
public class Student {
@Id
@GeneratedValue
private ObjectId id;
@Version
private Long version;
@MappedEntity
class Student {
@Id
@GeneratedValue
ObjectId id
@Version
Long version
@MappedEntity
data class Student(
@field:Id @GeneratedValue
val id: ObjectId?,
@field:Version
val version: Long?,
It’s possible to use @Version in a partial update or a delete method; in this case the version needs to match the version of the stored record:
@MongoRepository
public interface StudentRepository extends CrudRepository<Student, ObjectId> {
void update(@Id ObjectId id, @Version Long version, String name);
void delete(@Id ObjectId id, @Version Long version);
}
@MongoRepository
interface StudentRepository extends CrudRepository<Student, ObjectId> {
void update(@Id ObjectId id, @Version Long version, String name)
void delete(@Id ObjectId id, @Version Long version)
}
@MongoRepository
interface StudentRepository : CrudRepository<Student, ObjectId> {
fun update(@Id id: ObjectId, @Version version: Long, name: String)
fun delete(@Id id: ObjectId, @Version version: Long)
}
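For illustration, a brief usage sketch based on the repository above (the Student getters are assumed): if the stored version no longer matches, the operation affects no record and an io.micronaut.data.exceptions.OptimisticLockException is thrown.
Student student = studentRepository.findById(id).orElseThrow();
try {
    studentRepository.update(student.getId(), student.getVersion(), "New name");
} catch (OptimisticLockException e) {
    // another process modified the record in the meantime; reload and retry if appropriate
}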
See the guides for Access a MongoDB Database with Micronaut Data MongoDB and Access a MongoDB Database Asynchronously with Micronaut Data MongoDB and Reactive Streams to learn more. |
8 Micronaut Data Azure Cosmos
Micronaut Data Azure Cosmos supports some of the features of JPA implementations.
Cascading and joins are not supported as they are in the rest of the data modules. More about the specifics can be seen here.
The interaction between the object layer and Azure Cosmos Db serialization/deserialization is implemented using Micronaut Serialization.
8.1 Quick Start
At this point it is not yet possible to create a Micronaut project with Azure Cosmos Db support via Micronaut Launch. Our team will be working on it in the near future.
To get started with Micronaut Data Azure Cosmos add the following dependency to your annotation processor path:
annotationProcessor("io.micronaut.data:micronaut-data-document-processor")
<annotationProcessorPaths>
<path>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-document-processor</artifactId>
</path>
</annotationProcessorPaths>
For Kotlin, add the micronaut-data-document-processor dependency in kapt or ksp scope, and for Groovy add micronaut-data-document-processor in compileOnly scope.
|
You should then configure a compile-scoped dependency on the micronaut-data-azure-cosmos
module:
implementation("io.micronaut.data:micronaut-data-azure-cosmos")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-azure-cosmos</artifactId>
</dependency>
Next up you need to configure at least one data source. The following snippet from the application configuration file is an example of configuring the default Azure Cosmos Db data source:
micronaut.application.name=example
azure.cosmos.default-gateway-mode=true
azure.cosmos.endpoint-discovery-enabled=false
azure.cosmos.endpoint=https://localhost:8081
azure.cosmos.key=
azure.cosmos.database.throughput-settings.request-units=1000
azure.cosmos.database.throughput-settings.auto-scale=false
azure.cosmos.database.database-name=testDb
micronaut:
application:
name: example
azure:
cosmos:
default-gateway-mode: true
endpoint-discovery-enabled: false
endpoint: https://localhost:8081
key: ''
database:
throughput-settings:
request-units: 1000
auto-scale: false
database-name: testDb
[micronaut]
[micronaut.application]
name="example"
[azure]
[azure.cosmos]
default-gateway-mode=true
endpoint-discovery-enabled=false
endpoint="https://localhost:8081"
key=""
[azure.cosmos.database]
database-name="testDb"
[azure.cosmos.database.throughput-settings]
request-units=1000
auto-scale=false
micronaut {
application {
name = "example"
}
}
azure {
cosmos {
defaultGatewayMode = true
endpointDiscoveryEnabled = false
endpoint = "https://localhost:8081"
key = ""
database {
throughputSettings {
requestUnits = 1000
autoScale = false
}
databaseName = "testDb"
}
}
}
{
micronaut {
application {
name = "example"
}
}
azure {
cosmos {
default-gateway-mode = true
endpoint-discovery-enabled = false
endpoint = "https://localhost:8081"
key = ""
database {
throughput-settings {
request-units = 1000
auto-scale = false
}
database-name = "testDb"
}
}
}
}
{
"micronaut": {
"application": {
"name": "example"
}
},
"azure": {
"cosmos": {
"default-gateway-mode": true,
"endpoint-discovery-enabled": false,
"endpoint": "https://localhost:8081",
"key": "",
"database": {
"throughput-settings": {
"request-units": 1000,
"auto-scale": false
},
"database-name": "testDb"
}
}
}
}
You can find more details about configuration here.
To retrieve objects from the database you need to define a class annotated with @MappedEntity:
@MappedEntity
public class Book {
@Id
@GeneratedValue
@PartitionKey
private String id;
private String title;
private int pages;
@MappedProperty(converter = ItemPriceAttributeConverter.class)
@Nullable
private ItemPrice itemPrice;
@DateCreated
private Date createdDate;
@DateUpdated
private Date updatedDate;
public Book(String title, int pages) {
this.title = title;
this.pages = pages;
}
// ...
}
@MappedEntity
class Book {
@Id
@GeneratedValue
private String id
private String title
private int pages
@MappedProperty(converter = ItemPriceAttributeConverter)
@Nullable
private ItemPrice itemPrice
Book(String title, int pages) {
this.title = title
this.pages = pages
}
//...
}
@MappedEntity
data class Book(@field:Id
@GeneratedValue
var id: String?,
var title: String,
var pages: Int = 0,
@MappedProperty(converter = ItemPriceAttributeConverter::class)
var itemPrice: ItemPrice? = null,
@DateCreated
var createdDate: Date? = null,
@DateUpdated
var updatedDate: Date? = null)
This is followed by an interface that extends from CrudRepository:
package example;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.cosmos.annotation.CosmosRepository;
import io.micronaut.data.model.Page;
import io.micronaut.data.model.Pageable;
import io.micronaut.data.model.Slice;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@CosmosRepository // (1)
interface BookRepository extends CrudRepository<Book, String> { // (2)
Book find(String title);
}
package example
import io.micronaut.data.annotation.Id
import io.micronaut.data.cosmos.annotation.CosmosRepository
import io.micronaut.data.model.Pageable
import io.micronaut.data.model.Slice
import io.micronaut.data.repository.CrudRepository
@CosmosRepository // (1)
interface BookRepository extends CrudRepository<Book, String> { // (2)
Book find(String title)
}
package example
import io.micronaut.data.annotation.Id
import io.micronaut.data.cosmos.annotation.CosmosRepository
import io.micronaut.data.model.Pageable
import io.micronaut.data.model.Slice
import io.micronaut.data.repository.CrudRepository
@CosmosRepository // (1)
interface BookRepository : CrudRepository<Book, String> { // (2)
fun find(title: String): Book
}
1 | The interface is annotated with @CosmosRepository |
2 | The CrudRepository interface take 2 generic arguments, the entity type (in this case Book ) and the ID type (in this case String ) |
You can now perform CRUD (Create, Read, Update, Delete) operations on the entity. The implementation of example.BookRepository
is created at compilation time. To obtain a reference to it simply inject the bean:
@Inject
BookRepository bookRepository;
@Inject @Shared BookRepository bookRepository
@Inject
lateinit var bookRepository: BookRepository
When using Micronaut Data Azure Cosmos, every MappedEntity
corresponds to a container. One container can hold only one entity or document type.
The simple name of the class annotated with @MappedEntity is used as the container name by default. If the entity class is CosmosBook
, the expected container name will be cosmos_book
unless it is overridden in the MappedEntity
annotation value. The default naming strategy for entity fields is the Raw
strategy and users should not usually need to override it.
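A minimal sketch of overriding the default container name via the @MappedEntity value (the container name "books" is arbitrary):
@MappedEntity("books")
public class CosmosBook {
    // ...
}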
Saving an Instance (Create)
To save an instance use the save
method of the CrudRepository
interface:
Book book = new Book("The Stand", 1000);
book.setItemPrice(new ItemPrice(200));
bookRepository.save(book);
def book = new Book("The Stand", 1000)
book.itemPrice = new ItemPrice(99.5)
bookRepository.save(book)
def id = book.id
var book = Book(null,"The Stand", 1000, ItemPrice(199.99))
bookRepository.save(book)
Retrieving an Instance (Read)
To read a book back use findById
:
book = bookRepository.findById(id).orElse(null);
book = bookRepository.findById(id).orElse(null)
book = bookRepository.findById(id).orElse(null)
Updating an Instance (Update)
With Micronaut Data Azure Cosmos, you can use the save
method of the CrudRepository
or manually implement an update
method. You can define explicit update methods in your repository, for example:
void update(@Id String id, int pages);
void update(@Id String id, String title);
void update(@Id String id, int pages)
void update(@Id String id, String title)
fun update(@Id id: String, pages: Int)
fun update(@Id id: String, title: String)
Which can then be called like so:
bookRepository.update(book.getId(), "Changed");
bookRepository.update(book.getId(), "Changed")
bookRepository.update(book.id.orEmpty(), "Changed")
Deleting an Instance (Delete)
To delete an instance use deleteById
:
bookRepository.deleteById(id);
bookRepository.deleteById(id)
bookRepository.deleteById(id)
Congratulations you have implemented your first Micronaut Data Azure Cosmos repository! Read on to find out more.
8.2 Configuration
When using an existing Azure Cosmos database with existing containers, no special configuration is needed beyond the endpoint, key and database name. However, for test purposes, or when database containers need to be created during application startup, there are additional options to configure containers.
As mentioned in the Quick Start, every class annotated with @MappedEntity corresponds to one container in Azure Cosmos Db. If the property azure.cosmos.database.update-policy is set to NONE then no attempt to create the container will be made. If the value is set to CREATE_IF_NOT_EXISTS then the application will attempt to create the container if it does not already exist, while if the value is UPDATE the application will try to replace any existing containers and their properties.
Currently, only a small subset of properties can be configured for the database and containers: for the database the throughput properties can be configured, while for containers the throughput properties and the partition key path are configurable.
An additional way to configure the partition key is to add the @PartitionKey annotation to an entity field, as shown in the sketch below. |
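A minimal sketch, mirroring the family container and /lastname partition key path from the configuration example below (the Family entity is hypothetical):
@MappedEntity
public class Family {
    @Id
    private String id;
    @PartitionKey
    private String lastname; // used as the container's partition key
    // ...
}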
Below is an example application configuration showing container and database properties used to create new containers if they don’t already exist:
micronaut.application.name=example
azure.cosmos.default-gateway-mode=true
azure.cosmos.endpoint-discovery-enabled=false
azure.cosmos.endpoint=https://localhost:8081
azure.cosmos.key=
azure.cosmos.database.throughput-settings.request-units=1000
azure.cosmos.database.throughput-settings.auto-scale=false
azure.cosmos.database.database-name=testDb
azure.cosmos.database.packages=io.micronaut.data.azure.entities
azure.cosmos.database.update-policy=CREATE_IF_NOT_EXISTS
azure.cosmos.database.container-settings[0].container-name=family
azure.cosmos.database.container-settings[0].partition-key-path=/lastname
azure.cosmos.database.container-settings[0].throughput-settings.request-units=1000
azure.cosmos.database.container-settings[0].throughput-settings.auto-scale=false
azure.cosmos.database.container-settings[1].container-name=book
azure.cosmos.database.container-settings[1].partition-key-path=/id
azure.cosmos.database.container-settings[1].throughput-settings.request-units=1200
azure.cosmos.database.container-settings[1].throughput-settings.auto-scale=false
micronaut:
application:
name: example
azure:
cosmos:
default-gateway-mode: true
endpoint-discovery-enabled: false
endpoint: https://localhost:8081
key: ''
database:
throughput-settings:
request-units: 1000
auto-scale: false
database-name: testDb
packages: io.micronaut.data.azure.entities
update-policy: CREATE_IF_NOT_EXISTS
container-settings:
- container-name: family
partition-key-path: /lastname
throughput-settings:
request-units: 1000
auto-scale: false
- container-name: book
partition-key-path: /id
throughput-settings:
request-units: 1200
auto-scale: false
[micronaut]
[micronaut.application]
name="example"
[azure]
[azure.cosmos]
default-gateway-mode=true
endpoint-discovery-enabled=false
endpoint="https://localhost:8081"
key=""
[azure.cosmos.database]
database-name="testDb"
packages="io.micronaut.data.azure.entities"
update-policy="CREATE_IF_NOT_EXISTS"
[azure.cosmos.database.throughput-settings]
request-units=1000
auto-scale=false
[[azure.cosmos.database.container-settings]]
container-name="family"
partition-key-path="/lastname"
[azure.cosmos.database.container-settings.throughput-settings]
request-units=1000
auto-scale=false
[[azure.cosmos.database.container-settings]]
container-name="book"
partition-key-path="/id"
[azure.cosmos.database.container-settings.throughput-settings]
request-units=1200
auto-scale=false
micronaut {
application {
name = "example"
}
}
azure {
cosmos {
defaultGatewayMode = true
endpointDiscoveryEnabled = false
endpoint = "https://localhost:8081"
key = ""
database {
throughputSettings {
requestUnits = 1000
autoScale = false
}
databaseName = "testDb"
packages = "io.micronaut.data.azure.entities"
updatePolicy = "CREATE_IF_NOT_EXISTS"
containerSettings = [{
containerName = "family"
partitionKeyPath = "/lastname"
throughputSettings {
requestUnits = 1000
autoScale = false
}
}, {
containerName = "book"
partitionKeyPath = "/id"
throughputSettings {
requestUnits = 1200
autoScale = false
}
}]
}
}
}
{
micronaut {
application {
name = "example"
}
}
azure {
cosmos {
default-gateway-mode = true
endpoint-discovery-enabled = false
endpoint = "https://localhost:8081"
key = ""
database {
throughput-settings {
request-units = 1000
auto-scale = false
}
database-name = "testDb"
packages = "io.micronaut.data.azure.entities"
update-policy = "CREATE_IF_NOT_EXISTS"
container-settings = [{
container-name = "family"
partition-key-path = "/lastname"
throughput-settings {
request-units = 1000
auto-scale = false
}
}, {
container-name = "book"
partition-key-path = "/id"
throughput-settings {
request-units = 1200
auto-scale = false
}
}]
}
}
}
}
{
"micronaut": {
"application": {
"name": "example"
}
},
"azure": {
"cosmos": {
"default-gateway-mode": true,
"endpoint-discovery-enabled": false,
"endpoint": "https://localhost:8081",
"key": "",
"database": {
"throughput-settings": {
"request-units": 1000,
"auto-scale": false
},
"database-name": "testDb",
"packages": "io.micronaut.data.azure.entities",
"update-policy": "CREATE_IF_NOT_EXISTS",
"container-settings": [{
"container-name": "family",
"partition-key-path": "/lastname",
"throughput-settings": {
"request-units": 1000,
"auto-scale": false
}
}, {
"container-name": "book",
"partition-key-path": "/id",
"throughput-settings": {
"request-units": 1200,
"auto-scale": false
}
}]
}
}
}
}
8.3 Repositories
As seen in the Quick Start, Azure Cosmos repositories in Micronaut Data are defined as interfaces annotated with @CosmosRepository.
For example:
@CosmosRepository (1)
public interface BookRepository extends CrudRepository<Book, String> {
Optional<Book> findByAuthorId(@NotNull String authorId);
}
1 | @CosmosRepository marking the interface to access Azure Cosmos Db |
The entity to treat as the root entity for the purposes of querying is established either from the method signature or from the generic type parameter specified to the GenericRepository interface.
If no root entity can be established then a compilation error will occur.
The same interfaces supported by the JPA implementation are supported by Azure Cosmos Data.
Note that in addition to interfaces you can also define repositories as abstract classes:
package example;
import io.micronaut.data.cosmos.annotation.CosmosRepository;
import io.micronaut.data.repository.CrudRepository;
import java.util.List;
@CosmosRepository
public abstract class AbstractBookRepository implements CrudRepository<Book, String> {
public abstract List<Book> findByTitle(String title);
}
package example
import io.micronaut.data.cosmos.annotation.CosmosRepository
import io.micronaut.data.repository.CrudRepository
@CosmosRepository
abstract class AbstractBookRepository implements CrudRepository<Book, String> {
abstract List<Book> findByTitle(String title)
}
package example
import io.micronaut.data.cosmos.annotation.CosmosRepository
import io.micronaut.data.repository.CrudRepository
@CosmosRepository
abstract class AbstractBookRepository : CrudRepository<Book, String> {
abstract fun findByTitle(title: String): List<Book>
}
8.4 Repositories with Criteria API
In some cases you need to build a query programmatically at runtime; for that, Micronaut Data implements a subset of the Jakarta Persistence Criteria API 3.0, which can be used with Micronaut Data Azure Cosmos. To utilize this feature, add the following dependency:
implementation("jakarta.persistence:jakarta.persistence-api")
<dependency>
<groupId>jakarta.persistence</groupId>
<artifactId>jakarta.persistence-api</artifactId>
</dependency>
To implement queries that cannot be defined at compile time, Micronaut Data introduces the JpaSpecificationExecutor repository interface, which your repository interface can extend:
@CosmosRepository
public interface PersonRepository extends CrudRepository<Person, String>, JpaSpecificationExecutor<Person> {
}
@CosmosRepository
interface PersonRepository extends CrudRepository<Person, String>, JpaSpecificationExecutor<Person> {
}
@CosmosRepository
interface PersonRepository : CrudRepository<Person, String>, JpaSpecificationExecutor<Person> {
}
Each method expects a "specification" which is a functional interface with a set of Criteria API objects intended to build a query programmatically.
Micronaut Criteria API currently implements only a subset of the API. Most of it is internally used to create queries with predicates and projections.
Currently unsupported JPA Criteria API features:
-
Joins with custom ON expressions and typed join methods like joinSet etc.
-
Sub-queries
-
Collection operations: isMember etc.
-
Custom or tuple result types
-
Transformation expressions like concat, substring etc.
-
Cases and functions
You can find more information about the Jakarta Persistence Criteria API 3.0 in the official API specification.
8.4.1 Querying
To find an entity or multiple entities, you can use one of the following methods from the JpaSpecificationExecutor interface:
@Override
Optional<Person> findOne(PredicateSpecification<Person> spec);
@Override
Optional<Person> findOne(QuerySpecification<Person> spec);
@Override
List<Person> findAll(PredicateSpecification<Person> spec);
@Override
List<Person> findAll(QuerySpecification<Person> spec);
@Override
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort);
@Override
List<Person> findAll(QuerySpecification<Person> spec, Sort sort);
@Override
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable);
@Override
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable);
Optional<Person> findOne(PredicateSpecification<Person> spec)
Optional<Person> findOne(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec)
List<Person> findAll(QuerySpecification<Person> spec)
List<Person> findAll(PredicateSpecification<Person> spec, Sort sort)
List<Person> findAll(QuerySpecification<Person> spec, Sort sort)
Page<Person> findAll(PredicateSpecification<Person> spec, Pageable pageable)
Page<Person> findAll(QuerySpecification<Person> spec, Pageable pageable)
fun findOne(spec: PredicateSpecification<Person>?): Optional<Person>
fun findOne(spec: QuerySpecification<Person>?): Optional<Person>
fun findAll(spec: PredicateSpecification<Person>?): List<Person>
fun findAll(spec: QuerySpecification<Person>?): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: QuerySpecification<Person>?, sort: Sort): List<Person>
fun findAll(spec: PredicateSpecification<Person>?, pageable: Pageable): Page<Person>
fun findAll(spec: QuerySpecification<Person>?, pageable: Pageable): Page<Person>
As you can see, there are two variations of the findOne/findAll methods.
The first method expects a PredicateSpecification, a simple specification interface that can be implemented to return a predicate:
import static jakarta.persistence.criteria.*;
public interface PredicateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaBuilder criteriaBuilder (3)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria builder |
This interface can also be used for update and delete methods, and it provides or and and methods for combining multiple predicates.
The second interface is intended only for query criteria because it includes jakarta.persistence.criteria.CriteriaQuery as a parameter.
import static jakarta.persistence.criteria.*;
public interface QuerySpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaQuery<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria query instance |
4 | The criteria builder |
The following methods can be used to implement counting queries:
@Override
long count(PredicateSpecification<Person> spec);
@Override
long count(QuerySpecification<Person> spec);
long count(PredicateSpecification<Person> spec)
long count(QuerySpecification<Person> spec)
fun count(spec: PredicateSpecification<Person>?): Long
fun count(spec: QuerySpecification<Person>?): Long
You can define criteria specification methods that help you create a query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String?) = where<Person> {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
fun nameAndAgeMatch(age: Int, name: String) = query<Person> {
where {
root[Person::name] eq name
root[Person::age] lt age
}
}
}
Then you can combine them for find or count queries:
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null);
long countAgeLess30 = personRepository.count(ageIsLessThan(30));
long countAgeLess20 = personRepository.count(ageIsLessThan(20));
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))));
List<Person> people = personRepository.findAll(where(nameEquals("Denis").or(nameEquals("Josh"))));
Person denis = personRepository.findOne(nameEquals("Denis")).orElse(null)
long countAgeLess30 = personRepository.count(ageIsLessThan(30))
long countAgeLess20 = personRepository.count(ageIsLessThan(20))
long countAgeLess30NotDenis = personRepository.count(ageIsLessThan(30) & not(nameEquals("Denis")))
List<Person> people = personRepository.findAll(where(nameEquals("Denis") | nameEquals("Josh")))
val denis: Person? = personRepository.findOne(nameEquals("Denis")).orElse(null)
val countAgeLess30: Long = personRepository.count(ageIsLessThan(30))
val countAgeLess20: Long = personRepository.count(ageIsLessThan(20))
val countAgeLess30NotDenis: Long = personRepository.count(ageIsLessThan(30).and(not(nameEquals("Denis"))))
val people = personRepository.findAll(PredicateSpecification.where(nameEquals("Denis").or(nameEquals("Josh"))))
Criteria specific to Micronaut Data Azure Cosmos are ArrayContains and CollectionContains. For a class with an array or list-of-strings field named tags, they can be used either via a custom repository method like this:
public abstract List<Family> findByTagsArrayContains(String tag);
abstract List<Family> findByTagsArrayContains(String tag)
abstract fun findByTagsArrayContains(tag: String): List<Family>
or via a predicate specification:
public static PredicateSpecification<Family> tagsContain(String tag) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder) criteriaBuilder).arrayContains(root.get("tags"), criteriaBuilder.literal(tag));
}
static PredicateSpecification<Family> tagsContain(String tag) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder)criteriaBuilder).arrayContains(root.get("tags"), criteriaBuilder.literal(tag))
}
fun tagsContain(tag: String): PredicateSpecification<Family?>? {
return PredicateSpecification { root: Root<Family?>, criteriaBuilder: CriteriaBuilder ->
(criteriaBuilder as PersistentEntityCriteriaBuilder).arrayContains(
root.get<Any>("tags"),
criteriaBuilder.literal(tag)
)
}
}
Please note that Azure Cosmos Db supports searching a list or array only against a single element.
For partial searches using ArrayContains, generic repository methods cannot be used; instead, define custom methods with a raw query like this:
@Query("SELECT DISTINCT VALUE f FROM family f WHERE ARRAY_CONTAINS(f.children, :gender, true)")
public abstract List<Family> childrenArrayContainsGender(Map.Entry<String, Object> gender);
@Query("SELECT DISTINCT VALUE f FROM family f WHERE ARRAY_CONTAINS(f.children, :gender, true)")
abstract List<Family> childrenArrayContainsGender(Map.Entry<String, Object> gender)
@Query("SELECT DISTINCT VALUE f FROM family f WHERE ARRAY_CONTAINS(f.children, :gender, true)")
abstract fun childrenArrayContainsGender(gender: Map.Entry<String, Any>): List<Family>
and then pass a map entry with "gender" as the key and the gender as the value, basically any object that serializes to {"gender": "<gender_value>"} for this example.
This performs the search against the children array in the Family class using just the gender field.
It can also be achieved by using a predicate specification:
public static PredicateSpecification<Family> childrenArrayContainsGender(GenderAware gender) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder) criteriaBuilder).arrayContains(root.join("children"), criteriaBuilder.literal(gender));
}
static PredicateSpecification<Family> childrenArrayContainsGender(GenderAware gender) {
return (root, criteriaBuilder) -> ((PersistentEntityCriteriaBuilder) criteriaBuilder).arrayContains(root.join("children"), criteriaBuilder.literal(gender))
}
fun childrenArrayContainsGender(gender: IGenderAware): PredicateSpecification<Family?>? {
return PredicateSpecification { root: Root<Family?>, criteriaBuilder: CriteriaBuilder ->
(criteriaBuilder as PersistentEntityCriteriaBuilder).arrayContains(
root.join<Any, Any>("children"),
criteriaBuilder.literal(gender)
)
}
}
The examples use values known at compile time, and in this case it would be better to create custom repository methods, which come with compile-time generated queries and eliminate runtime overhead. It is recommended to use criteria only for dynamic queries where the query structure is not known at build time. |
8.4.2 Updating
To implement an update, you can use the following method from the JpaSpecificationExecutor interface:
@Override
long updateAll(UpdateSpecification<Person> spec);
long updateAll(UpdateSpecification<Person> spec)
fun updateAll(spec: UpdateSpecification<Person>?): Long
This method expects an UpdateSpecification, a variation of the specification interface that includes access to jakarta.persistence.criteria.CriteriaUpdate:
import static jakarta.persistence.criteria.*;
public interface UpdateSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaUpdate<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria update instance |
4 | The criteria builder |
Updating specific properties can be done using the jakarta.persistence.criteria.CriteriaUpdate interface:
query.set(root.get("name"), newName);
query.set(root.get("name"), newName)
fun updateName(newName: String, existingName: String) = update<Person> {
set(Person::name, newName)
where {
root[Person::name] eq existingName
}
}
query.set(root[Person::name], newName)
You can define criteria specification methods, including update specifications, that help you create an update query:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get("name"), newName);
return null;
};
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
static UpdateSpecification<Person> setNewName(String newName) {
return (root, query, criteriaBuilder) -> {
query.set(root.get("name"), newName)
null
}
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String?) = where<Person> {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
fun nameAndAgeMatch(age: Int, name: String) = query<Person> {
where {
root[Person::name] eq name
root[Person::age] lt age
}
}
fun updateName(newName: String, existingName: String) = update<Person> {
set(Person::name, newName)
where {
root[Person::name] eq existingName
}
}
// Different style using the criteria builder
fun nameEquals2(name: String?) = PredicateSpecification { root, criteriaBuilder ->
criteriaBuilder.equal(root[Person::name], name)
}
fun ageIsLessThan2(age: Int) = PredicateSpecification { root, criteriaBuilder ->
criteriaBuilder.lessThan(root[Person::age], age)
}
fun setNewName2(newName: String) = UpdateSpecification { root, query, _ ->
query.set(root[Person::name], newName)
null
}
}
Then you can use the update specification combined with predicate specifications:
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")));
long recordsUpdated = personRepository.updateAll(setNewName("Steven").where(nameEquals("Denis")))
val recordsUpdated = personRepository.updateAll(updateName("Steven", "Denis"))
8.4.3 Deleting
To delete an entity or multiple entities, you can use one of the following methods from the JpaSpecificationExecutor interface:
@Override
long deleteAll(PredicateSpecification<Person> spec);
@Override
long deleteAll(DeleteSpecification<Person> spec);
long deleteAll(PredicateSpecification<Person> spec)
long deleteAll(DeleteSpecification<Person> spec)
fun deleteAll(spec: PredicateSpecification<Person>?): Long
fun deleteAll(spec: DeleteSpecification<Person>?): Long
As with querying, the deleteAll methods come in two variations.
The first method expects a PredicateSpecification, the same interface described in the Querying section.
The second method takes a DeleteSpecification and is intended only for delete criteria because it includes access to jakarta.persistence.criteria.CriteriaDelete.
import static jakarta.persistence.criteria.*;
public interface DeleteSpecification<T> {
(1)
@Nullable
Predicate toPredicate(@NonNull Root<T> root, (2)
@NonNull CriteriaDelete<?> query, (3)
@NonNull CriteriaBuilder criteriaBuilder (4)
);
}
1 | The specification is producing a query limiting predicate |
2 | The entity root |
3 | The criteria delete instance |
4 | The criteria builder |
For deleting you can reuse the same predicates as for querying and updating:
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name);
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age);
}
}
class Specifications {
static PredicateSpecification<Person> nameEquals(String name) {
return (root, criteriaBuilder) -> criteriaBuilder.equal(root.get("name"), name)
}
static PredicateSpecification<Person> ageIsLessThan(int age) {
return (root, criteriaBuilder) -> criteriaBuilder.lessThan(root.get("age"), age)
}
}
object Specifications {
fun nameEquals(name: String?) = where<Person> { root[Person::name] eq name }
fun ageIsLessThan(age: Int) = where<Person> { root[Person::age] lt age }
fun nameInList(names: List<String>) = where<Person> { root[Person::name] inList names }
fun nameOrAgeMatches(age: Int, name: String?) = where<Person> {
or {
root[Person::name] eq name
root[Person::age] lt age
}
}
fun nameAndAgeMatch(age: Int, name: String) = query<Person> {
where {
root[Person::name] eq name
root[Person::age] lt age
}
}
}
Simply pass the predicate specification to the deleteAll method:
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")));
long recordsDeleted = personRepository.deleteAll(where(nameEquals("Denis")))
val recordsDeleted = personRepository.deleteAll(PredicateSpecification.where(nameEquals("Denis")))
val recordsDeleted = personRepository.deleteAll(where {
root[Person::name] eq "Denis"
})
8.5 Azure Cosmos Specifics
Since the Azure Cosmos database is not relational, unlike most of the databases Micronaut Data supports, some features are implemented differently.
Relation Mapping
Since this database is not relational and cross-container and cross-document joins are not supported, relations between entities/containers cannot be mapped. The only supported relation types are @Relation(value = Relation.Kind.EMBEDDED)
and @Relation(value = Relation.Kind.ONE_TO_MANY)
, which are actually relations between a document and its embedded objects or arrays. Here is an example of such a mapping:
@MappedEntity
public class Family {
@Id
private String id;
@PartitionKey
private String lastName;
@Relation(value = Relation.Kind.EMBEDDED)
private Address address;
@Relation(value = Relation.Kind.ONE_TO_MANY)
private List<Child> children = new ArrayList<>();
@MappedEntity
class Family {
@Id
private String id
@PartitionKey
private String lastName
@Relation(value = Relation.Kind.EMBEDDED)
private Address address
@Relation(value = Relation.Kind.ONE_TO_MANY)
private List<Child> children = new ArrayList<>()
@MappedEntity
data class Family(
@field:Id
val id: String,
@PartitionKey
var lastName: String,
@Relation(value = Relation.Kind.EMBEDDED)
var address: Address,
@Relation(value = Relation.Kind.ONE_TO_MANY)
var children: List<Child> = ArrayList(),
Here the Relation mapping is needed for the query builder to generate projections, ordering and filtering by fields in the embedded objects or arrays, as can be seen in the methods declared in FamilyRepository:
public abstract List<Family> findByAddressStateAndAddressCityOrderByAddressCity(String state, String city);
public abstract void updateByAddressCounty(String county, boolean registered, @Nullable Date registeredDate);
@Join(value = "children.pets", alias = "pets")
public abstract List<Family> findByChildrenPetsType(PetType type);
public abstract List<Child> findChildrenByChildrenPetsGivenName(String name);
abstract List<Family> findByAddressStateAndAddressCityOrderByAddressCity(String state, String city)
abstract void updateByAddressCounty(String county, boolean registered, @Nullable Date registeredDate)
@Join(value = "children.pets", alias = "pets")
abstract List<Family> findByChildrenPetsType(PetType type)
abstract List<Child> findChildrenByChildrenPetsGivenName(String name)
abstract fun findByAddressStateAndAddressCityOrderByAddressCity(state: String, city: String): List<Family>
abstract fun updateByAddressCounty(county: String, registered: Boolean, @Nullable registeredDate: Date?)
@Join(value = "children.pets", alias = "pets")
abstract fun findByChildrenPetsType(type: PetType): List<Family>
abstract fun findChildrenByChildrenPetsGivenName(name: String): List<Child>
Due to the nature of the database and the implementation of relations, cascading does not make much sense either. Embedded objects and arrays in the documents are automatically saved when the document is saved.
Identity
With Azure Cosmos Db, every document has an internal id property of String type. Micronaut Data Cosmos expects @Id to be one of the types Short, Integer, Long, String or UUID. When saving and reading, the value is serialized to and deserialized from the String stored in the id property. Declaring a property annotated with @Id with an unsupported type results in an exception. Generation of ids works only for String and UUID, where UUID can be generated using either the @GeneratedValue or @AutoPopulated annotation. A String id can be generated only with the @GeneratedValue annotation. Numerical ids cannot be auto-generated, and it is up to the user to set the id value before saving. Composite identities are not supported.
Partition Key
In Azure Cosmos Db, partition keys are the core element in distributing data efficiently into different logical and physical sets so that queries performed against the database complete as quickly as possible. Every mapped entity should have a partition key defined. As explained above, it can be defined using the @PartitionKey annotation on the appropriate entity field or via configuration, as explained in the configuration section. Using a well-defined partition key efficiently improves operation performance and reduces request unit costs. Micronaut Data Cosmos tries to use a partition key whenever possible. Here are some repository method examples that make use of a partition key in read, update or delete operations:
public abstract Optional<Family> queryById(String id, PartitionKey partitionKey);
public abstract void updateRegistered(@Id String id, boolean registered, PartitionKey partitionKey);
public abstract void deleteByLastName(String lastName, PartitionKey partitionKey);
public abstract void deleteById(String id, PartitionKey partitionKey);
abstract Optional<Family> queryById(String id, PartitionKey partitionKey)
abstract void updateRegistered(@Id String id, boolean registered, PartitionKey partitionKey)
abstract void deleteByLastName(String lastName, PartitionKey partitionKey)
abstract void deleteById(String id, PartitionKey partitionKey)
abstract fun queryById(id: String?, partitionKey: PartitionKey?): Optional<Family?>?
abstract fun deleteByLastName(lastName: String, partitionKey: PartitionKey)
abstract fun deleteById(id: String, partitionKey: PartitionKey)
abstract fun updateRegistered(@Id id: String, registered: Boolean, partitionKey: PartitionKey)
Diagnostics
Azure Cosmos Db provides operation diagnostics so users can obtain that information and, for example, integrate it with their logging or metrics system. In Micronaut Data Azure we expose the CosmosDiagnosticsProcessor interface. Users need to implement this interface and add it to the context so it is available to our operation classes. It has only one method
void processDiagnostics(String operationName, @Nullable CosmosDiagnostics cosmosDiagnostics, @Nullable String activityId, double requestCharge);
which is called after each operation against Azure Cosmos Db. The operationName parameter is the internal operation name in Micronaut Data Azure, and it has these known values:
String CREATE_DATABASE_IF_NOT_EXISTS = "CreateDatabaseIfNotExists";
String REPLACE_DATABASE_THROUGHPUT = "ReplaceDatabaseThroughput";
String CREATE_CONTAINER_IF_NOT_EXISTS = "CreateContainerIfNotExists";
String REPLACE_CONTAINER_THROUGHPUT = "ReplaceContainerThroughput";
String REPLACE_CONTAINER = "ReplaceContainer";
String QUERY_ITEMS = "QueryItems";
String EXECUTE_BULK = "ExecuteBulk";
String CREATE_ITEM = "CreateItem";
String REPLACE_ITEM = "ReplaceItem";
String DELETE_ITEM = "DeleteItem";
so the user knows for which operation the diagnostics are being processed.
8.6 Using Attribute Converter
There are cases where you would like to represent an attribute differently in the database than in the entity.
Consider the following example entity:
@MappedEntity
public class Book {
@Id
@GeneratedValue
@PartitionKey
private String id;
private String title;
private int pages;
@MappedProperty(converter = ItemPriceAttributeConverter.class)
@Nullable
private ItemPrice itemPrice;
@DateCreated
private Date createdDate;
@DateUpdated
private Date updatedDate;
public Book(String title, int pages) {
this.title = title;
this.pages = pages;
}
// ...
}
@MappedEntity
class Book {
@Id
@GeneratedValue
private String id
private String title
private int pages
@MappedProperty(converter = ItemPriceAttributeConverter)
@Nullable
private ItemPrice itemPrice
Book(String title, int pages) {
this.title = title
this.pages = pages
}
//...
}
@MappedEntity
data class Book(@field:Id
@GeneratedValue
var id: String?,
var title: String,
var pages: Int = 0,
@MappedProperty(converter = ItemPriceAttributeConverter::class)
var itemPrice: ItemPrice? = null,
@DateCreated
var createdDate: Date? = null,
@DateUpdated
var updatedDate: Date? = null)
The Book class has a reference to a type ItemPrice. The ItemPrice type is defined as:
package example;
import io.micronaut.core.annotation.Introspected;
@Introspected
public class ItemPrice {
private double price;
public ItemPrice(double price) {
this.price = price;
}
public double getPrice() {
return price;
}
public static ItemPrice valueOf(double price) {
return new ItemPrice(price);
}
}
package example
import groovy.transform.Immutable
@Immutable
class ItemPrice {
double price
}
package example
data class ItemPrice(val price: Double)
As you can see, @MappedProperty(converter = ItemPriceAttributeConverter.class) is used to define the ItemPrice converter.
The last step is to add a custom attribute converter so that Micronaut Data knows how to read and write the type from a Double:
package example;
import io.micronaut.core.convert.ConversionContext;
import io.micronaut.data.model.runtime.convert.AttributeConverter;
import jakarta.inject.Singleton;
@Singleton
public class ItemPriceAttributeConverter implements AttributeConverter<ItemPrice, Double> {
@Override
public Double convertToPersistedValue(ItemPrice bookPrice, ConversionContext context) {
return bookPrice == null ? null : bookPrice.getPrice();
}
@Override
public ItemPrice convertToEntityValue(Double value, ConversionContext context) {
return value == null ? null : ItemPrice.valueOf(value);
}
}
package example
import groovy.transform.CompileStatic
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
@CompileStatic
class ItemPriceAttributeConverter implements AttributeConverter<ItemPrice, Double> {
@Override // (2)
Double convertToPersistedValue(ItemPrice itemPrice, ConversionContext context) {
return itemPrice == null ? null : itemPrice.getPrice()
}
@Override // (3)
ItemPrice convertToEntityValue(Double value, ConversionContext context) {
return value == null ? null : new ItemPrice(value)
}
}
package example
import io.micronaut.core.convert.ConversionContext
import io.micronaut.data.model.runtime.convert.AttributeConverter
import jakarta.inject.Singleton
@Singleton // (1)
class ItemPriceAttributeConverter : AttributeConverter<ItemPrice?, Double?> {
// (2)
override fun convertToPersistedValue(itemPrice: ItemPrice?, context: ConversionContext): Double? {
return itemPrice?.price
}
// (3)
override fun convertToEntityValue(value: Double?, context: ConversionContext): ItemPrice? {
return if (value == null) null else ItemPrice(value)
}
}
1 | The attribute converter implements AttributeConverter and must be a bean |
2 | A converter from ItemPrice to Double |
3 | A converter from Double to ItemPrice |
It’s possible to define the converter result type using @MappedProperty: @MappedProperty(converterPersistedType = Double.class); in this case, the data type will be detected automatically. |
8.7 Optimistic locking
Optimistic locking is a strategy where you note the actual record state’s version and modify the record only when the version is the same.
Unlike some other database implementations in Micronaut Data, for Azure Cosmos Db we rely on the existence of the _etag field in every document. We don't use @Version because the _etag field is of type String, and for that purpose we introduce the @ETag annotation.
The field is updated each time the document is updated in Azure Cosmos Db, and before the next update Micronaut Data checks whether the current value in the document being updated matches the current value in the database. If the values don't match, Micronaut Data throws an OptimisticLockException.
@ETag
private String documentVersion;
@ETag
private String documentVersion
@ETag
var documentVersion: String? = null
9 How Micronaut Data Works
Micronaut Data uses two key features of Micronaut: The TypeElementVisitor API and Introduction Advice.
Micronaut Data defines a RepositoryTypeElementVisitor that at compilation time visits all interfaces in the source tree that are annotated with the @Repository annotation.
The RepositoryTypeElementVisitor uses the service loader to load all available MethodCandidate implementations and iterates over them.
You can add additional method candidates by creating a library that depends on micronaut-data-processor and defining the META-INF/services definition for the method candidate. The new library should be added to your annotation processor path.
|
The MethodCandidate interface features an isMethodMatch method which allows matching a MethodElement. Once a MethodElement has been matched, the buildMatchInfo method of the MethodCandidate is invoked, which returns an instance of MethodMatchInfo.
The constructor for MethodMatchInfo allows specifying the runtime DataInterceptor to execute, which typically differs based on the return type and behaviour required, and an optional Query instance which represents the query model of the query to be executed.
The RepositoryTypeElementVisitor takes the MethodMatchInfo and converts the Query instance into the equivalent String-based query (such as JPA-QL) using the QueryBuilder that is configured by the @Repository annotation.
A binding between runtime method parameters and named query parameters is also created.
The visited MethodElement is then dynamically annotated with the following information:
-
The constructed string-based query (for example JPA-QL)
-
The parameter binding (A map containing the named parameter in the query as key and the name of the method argument as a value)
-
The runtime DataInterceptor to execute.
At runtime all the DataInterceptor has to do is retrieve the query, read the method parameter values using the parameter binding and execute the query.
10 Going Native with GraalVM
Micronaut Data supports GraalVM native images for both the JPA and JDBC implementations.
The currently supported databases are:
-
H2
-
Postgres
-
Oracle
-
MariaDB
-
MySQL
-
MS SQLServer
Micronaut Data automatically detects and configures the driver correctly for each database as appropriate.
11 Spring Data Support
Micronaut Data features general Spring support that is provided through the micronaut-data-spring dependency:
implementation("io.micronaut.data:micronaut-data-spring")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-spring</artifactId>
</dependency>
In addition to this dependency, you will need either spring-orm (for Hibernate) or spring-jdbc (for JDBC) on your classpath to enable support for Spring-based transaction management:
implementation("org.springframework:spring-orm:5.2.0.RELEASE")
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-orm</artifactId>
<version>5.2.0.RELEASE</version>
</dependency>
You can then compile existing Spring Data repository interfaces and use Spring annotations such as org.springframework.transaction.annotation.Transactional in your application.
You can extend from existing Spring Data interfaces such as CrudRepository, PagingAndSortingRepository and so on.
The following Spring Data types are also supported:
Spring Data JPA Specification Support
To obtain additional support for Spring Data JPA Specifications when using Hibernate and JPA you should add the following dependency to your classpath:
implementation("io.micronaut.data:micronaut-data-spring")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-spring</artifactId>
</dependency>
You can then implement the JpaSpecificationExecutor interface (the generic argument to the interface should be a domain class) as per the Spring Data JPA documentation.
Spring TX manager
To replace the internal data-source TX manager with the Spring JDBC alternative include:
implementation("io.micronaut.data:micronaut-data-spring-jdbc")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-spring-jdbc</artifactId>
</dependency>
And to replace the internal Hibernate TX manager with the Spring Hibernate alternative include:
implementation("io.micronaut.data:micronaut-data-spring-jpa")
<dependency>
<groupId>io.micronaut.data</groupId>
<artifactId>micronaut-data-spring-jpa</artifactId>
</dependency>
12 Guides
See the following guides to learn more about working with Micronaut Data in the Micronaut Framework: