$ mvn package
Packaging an application
This plugin supports different <packaging> types:
- jar (default): produces a runnable fat JAR.
- native-image: generates a GraalVM native image.
- docker: builds a Docker image with the application artifacts (compiled classes, resources, dependencies, etc.) inside.
- docker-native: builds a Docker image with a GraalVM native image inside.
- docker-crac: builds a Docker image containing a CRaC checkpointed application.
- k8s: delegates packaging to Eclipse JKube’s Kubernetes Maven Plugin (k8s:build).
- openshift: delegates packaging to Eclipse JKube’s OpenShift Maven Plugin (oc:build).
To package an application, mvn package is the one-stop shop for producing the desired artifact. By default, applications
generated from Micronaut Launch have the packaging defined as <packaging>${packaging}</packaging>, so that you can run,
for example, mvn package -Dpackaging=native-image.
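As an illustrative sketch of that Launch convention (the jar default shown here is an assumption, not something this plugin mandates), the POM resolves the packaging type from a property so the command line can override it:

```xml
<!-- Sketch of a Launch-style POM: the packaging type comes from a property,
     so -Dpackaging=native-image on the command line overrides the default. -->
<project>
  <properties>
    <!-- assumed default used when -Dpackaging is not passed -->
    <packaging>jar</packaging>
  </properties>
  <packaging>${packaging}</packaging>
</project>
```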
Packaging the application in a fat JAR
If the <packaging> is set to jar, this plugin will delegate to the maven-shade-plugin to produce a JAR file. Its
configuration is defined in the io.micronaut:micronaut-parent POM, and the defaults should be enough. Should you want
to customize how to produce the JAR file, refer to the
Maven Shade Plugin documentation.
Classpath-sensitive libraries and shaded JARs
Shaded JARs are convenient for single-file deployments, but they are not a good fit for every runtime. Libraries that expect to discover resources, languages, or services from separate classpath entries can fail once everything is merged into one archive. GraalVM Truffle or polyglot language deployments are a common example of this limitation.
For those applications, disable the Shade execution and deploy the application as separate artifacts instead of as a single fat JAR:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
When shading is disabled, prefer one of these deployment models:
- mvn package -Dpackaging=docker to build a JVM Docker image with classes, resources, and dependencies kept as separate artifacts instead of as a shaded JAR.
- A custom Dockerfile or launcher script that starts the application from an explicit classpath.
If you provide your own Dockerfile, Micronaut Maven Plugin prepares the runtime dependencies under
target/dependency, so you can launch the application with a regular classpath instead of java -jar.
For compatibility with existing custom Dockerfiles, the flat target/dependency/* layout is still produced even
though the plugin-managed Dockerfile templates now also stage target/dependency/release/ and
target/dependency/snapshot/ as separate layers:
FROM ...
...
COPY classes /home/app/classes
COPY dependency/* /home/app/libs/
...
ENTRYPOINT ["java", "-cp", "/home/app/libs/*:/home/app/classes/", "com.example.app.Application"]
The plugin does not currently provide a dedicated non-shaded launcher for plain jar packaging, so if your runtime
depends on separate classpath entries, prefer docker packaging or your own classpath-based launch script over the
default shaded JAR.
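A classpath-based launcher can be sketched as follows; the main class com.example.app.Application and the APP_HOME default are placeholders, not values the plugin provides:

```shell
#!/bin/sh
# Hypothetical launcher: start the app from the exploded target/ layout
# (compiled classes plus the dependency JARs the plugin stages under
# target/dependency) instead of a shaded JAR.
APP_HOME="${APP_HOME:-target}"
exec java -cp "$APP_HOME/classes:$APP_HOME/dependency/*" com.example.app.Application "$@"
```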
Generating GraalVM native images
$ mvn package -Dpackaging=native-image
If the <packaging> is set to native-image, this plugin will delegate to the official Maven plugin for GraalVM Native Image (org.graalvm.buildtools:native-maven-plugin) to generate a native image. Note that for this packaging to work,
you need a GraalVM JDK installed locally.
Refer to the Native Maven Plugin documentation for more information about how to customize the generated native image.
If you want to collect native-image metadata before packaging, you can run the application with
mvn mn:run -Dagent=true -Dmn.watch=false. That generates tracing output in target/native/agent-output/main, which
you can then review or process with the upstream native-build-tools workflow before running
mvn package -Dpackaging=native-image.
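Put together, the collection workflow described above looks like this; the endpoints you exercise in between are your own:

```shell
# 1. Run with the agent attached; exercise the application, then stop it.
#    Tracing output lands in target/native/agent-output/main for review.
mvn mn:run -Dagent=true -Dmn.watch=false

# 2. Build the native image with the collected metadata available.
mvn package -Dpackaging=native-image
```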
For example, to add --verbose to the native image args, you should define:
<plugin>
<groupId>org.graalvm.buildtools</groupId>
<artifactId>native-maven-plugin</artifactId>
<configuration>
<buildArgs combine.children="append">
<buildArg>--verbose</buildArg>
</buildArgs>
</configuration>
</plugin>
Windows users
Sometimes, depending on how many classes are on the classpath, you may see a The command line is too long error when
building a native image. This is a known issue
with Windows. To work around it, configure the Shade plugin in your pom.xml, and then configure the native image
plugin to use the shaded JAR for building the native image:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<shadedArtifactAttached>true</shadedArtifactAttached>
<shadedClassifierName>shaded</shadedClassifierName>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.graalvm.buildtools</groupId>
<artifactId>native-maven-plugin</artifactId>
<configuration combine.self="override">
<mainClass>${exec.mainClass}</mainClass>
<classpath>
<param>${project.build.directory}/${project.artifactId}-${project.version}-shaded.jar</param>
</classpath>
</configuration>
</plugin>
Then run:
mvn package (1)
mvn package -Dpackaging=native-image (2)
| 1 | Create the shaded runnable jar for the application. |
| 2 | Use the shaded runnable jar to build the native image. |
Building JVM-based Docker images
$ mvn package -Dpackaging=docker
If the <packaging> is set to docker, this plugin will use com.google.cloud.tools:jib-maven-plugin to produce a
Docker image with the application artifacts (compiled classes, resources, dependencies, etc) inside.
The Docker image is built to a local Docker daemon (the equivalent of executing the jib::dockerBuild goal).
If you want daemonless packaging, set -Djib.buildGoal=buildTar to produce the standard Jib tarball at
target/jib-image.tar, or -Djib.buildGoal=build to publish directly to the configured registry during the package
phase.
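For example, a daemonless build can hand the tarball to a Docker daemon on another machine later; docker load is the standard Docker CLI command for that:

```shell
# Produce the image tarball without a local Docker daemon...
mvn package -Dpackaging=docker -Djib.buildGoal=buildTar
# ...then, wherever a daemon is available, load it:
docker load --input target/jib-image.tar
```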
Depending on the micronaut.runtime property, the image built might be different. Options are:
- Default runtime: mvn package -Dpackaging=docker.
- JDK HTTP server runtime (Starter compatibility alias; same Docker image as the default runtime): mvn package -Dpackaging=docker -Dmicronaut.runtime=http_server_jdk.
- Daemonless tarball: mvn package -Dpackaging=docker -Djib.buildGoal=buildTar.
- Oracle Cloud Function: mvn package -Dpackaging=docker -Dmicronaut.runtime=oracle_function.
- AWS Lambda (Java runtimes): mvn package -Dpackaging=docker -Dmicronaut.runtime=lambda.
You can use the mn:dockerfile goal to generate the equivalent Dockerfile. Generated
Dockerfiles copy dependency/release/ and dependency/snapshot/ separately so Docker layer caching can reuse
non-SNAPSHOT dependencies more effectively. For example, to generate the Dockerfile for AWS Lambda, run
mvn mn:dockerfile -Dpackaging=docker -Dmicronaut.runtime=lambda.
Refer to the Jib Maven Plugin documentation for the available configuration options.
For example, you can define the jib-maven-plugin in your POM as follows to pass additional JVM and application args:
<plugin>
<groupId>com.google.cloud.tools</groupId>
<artifactId>jib-maven-plugin</artifactId>
<configuration>
<container>
<jvmFlags>
<jvmFlag>-Dmy.property=example.value</jvmFlag>
<jvmFlag>-Xms512m</jvmFlag>
<jvmFlag>-Xdebug</jvmFlag>
</jvmFlags>
<args>
<arg>some</arg>
<arg>args</arg>
</args>
</container>
</configuration>
</plugin>
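Jib also documents system-property overrides for these settings, so the same flags can be passed without touching the POM (the values shown are the illustrative ones from the snippet above):

```shell
# Comma-separated equivalents of the <jvmFlags> and <args> configuration.
mvn package -Dpackaging=docker \
  "-Djib.container.jvmFlags=-Dmy.property=example.value,-Xms512m,-Xdebug" \
  "-Djib.container.args=some,args"
```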
Delegating packaging to Eclipse JKube
$ mvn package -Dpackaging=k8s
The k8s and openshift packaging types let Micronaut’s lifecycle bindings delegate to Eclipse JKube instead of
the built-in Docker/Jib packaging.
- k8s maps package to org.eclipse.jkube:kubernetes-maven-plugin:build.
- openshift maps package to org.eclipse.jkube:openshift-maven-plugin:build.
These packaging types are additive. They only activate when you explicitly set the packaging, and you must still declare and configure the corresponding JKube plugin in your build. Micronaut does not manage JKube plugin versions or configuration for you.
For example, to use the Kubernetes integration:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.19.0</version>
</plugin>
Then package with:
mvn package -Dpackaging=k8s
For OpenShift, declare the OpenShift plugin instead:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>openshift-maven-plugin</artifactId>
<version>1.19.0</version>
</plugin>
and package with:
mvn package -Dpackaging=openshift
Bringing your own Dockerfile
$ mvn package -Dpackaging=docker
If there is a Dockerfile in the project’s root directory, it will be used to build the image. The image will be built
using the target folder as the context directory. This plugin will also prepare all the compile and runtime
dependency JARs in the target/dependency folder. Custom Dockerfiles can continue to use the flat
target/dependency/* layout, while generated Dockerfiles use split dependency/release/ and
dependency/snapshot/ directories for finer-grained image layers. For a custom Dockerfile, you can still do:
FROM ...
...
COPY classes /home/app/classes
COPY dependency/* /home/app/libs/
...
ENTRYPOINT ["java", "-cp", "/home/app/libs/*:/home/app/classes/", "com.example.app.Application"]
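If you want a custom Dockerfile to benefit from the same layer caching as the generated ones, you can copy the split directories yourself. A sketch, in which the base image and main class are placeholders:

```dockerfile
FROM eclipse-temurin:21-jre
WORKDIR /home/app
# Release dependencies change rarely, so they cache as their own layer...
COPY dependency/release/ /home/app/libs/
# ...while SNAPSHOT dependencies and application classes churn more often.
COPY dependency/snapshot/ /home/app/libs/
COPY classes /home/app/classes
ENTRYPOINT ["java", "-cp", "/home/app/libs/*:/home/app/classes/", "com.example.app.Application"]
```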
Building GraalVM-based Docker images
$ mvn package -Dpackaging=docker-native
If the <packaging> is set to docker-native, this plugin will use a Docker client to build and tag custom Docker
images. In this case, the micronaut.runtime property will also determine how the image is prepared.
- Default runtime:
  - Default image (pinned immutable base): mvn package -Dpackaging=docker-native.
  - JDK HTTP server runtime (Starter compatibility alias for the default runtime; same behavior as the default image): mvn package -Dpackaging=docker-native -Dmicronaut.runtime=http_server_jdk.
  - Static image: mvn package -Dpackaging=docker-native -Dmicronaut.native-image.static=true. This uses GraalVM’s --static --libc=musl flags and then puts the binary in a scratch image.
  - Mostly static image: mvn package -Dpackaging=docker-native -Dmicronaut.native-image.base-image-run=gcr.io/distroless/cc-debian12 -Pgraalvm. This creates a "mostly" static native image and automatically adds the -H:+StaticExecutableWithDynamicLibC flag.
  - Custom runtime image: mvn package -Dpackaging=docker-native -Dmicronaut.native-image.base-image-run=your-own-image-for-run-the-native-image.
- Oracle Cloud Function: mvn package -Dpackaging=docker-native -Dmicronaut.runtime=oracle_function.
- AWS Lambda (custom runtime): mvn package -Dpackaging=docker-native -Dmicronaut.runtime=lambda.
For builds behind an HTTP or HTTPS proxy, docker-native automatically forwards the standard JVM proxy
properties as Docker build arguments. The plugin maps http.proxyHost, http.proxyPort, https.proxyHost,
https.proxyPort, and http.nonProxyHosts to the corresponding HTTP_PROXY, HTTPS_PROXY, and NO_PROXY
build args, including their lowercase variants.
$ mvn package -Dpackaging=docker-native \
-Dhttp.proxyHost=proxy.example.com \
-Dhttp.proxyPort=8080 \
-Dhttps.proxyHost=proxy.example.com \
-Dhttps.proxyPort=8080 \
'-Dhttp.nonProxyHosts=localhost|127.0.0.1|*.example.com'
Prefer passing credential-bearing proxy values from the command line or CI secrets rather than committing them to the project POM.
Bringing your own native Dockerfile
$ mvn package -Dpackaging=docker-native
If there is a Dockerfile in the project’s root directory and the selected docker-native path produces a Docker
container image, the file will be copied into target/ and used to build that image with target/ as the Docker
build context.
AWS Lambda and other docker-native modes that only produce ZIP artifacts such as function.zip do not consult the
project-root Dockerfile and continue to use the plugin’s bundled native Dockerfiles for the native-image build step.
For image-building docker-native modes, the plugin still prepares the native-image inputs under target/ and passes
the same build arguments as the bundled native Dockerfiles where applicable. Depending on the runtime and image mode,
it may provide:
- BASE_IMAGE
- BASE_IMAGE_RUN, when micronaut.native-image.static is not true
- CLASS_NAME, when the runtime still needs an application entrypoint
- PORTS
For example, Oracle Cloud Function images omit CLASS_NAME, and static native images do not pass BASE_IMAGE_RUN.
If your custom Dockerfile needs those values, declare the matching ARG instructions only for the arguments used by
your build and copy the generated native build inputs from the target/ context. For example:
ARG BASE_IMAGE
ARG BASE_IMAGE_RUN
FROM ${BASE_IMAGE} AS builder
WORKDIR /home/app
COPY classes /home/app/classes
COPY dependency/* /home/app/libs/
COPY graalvm-reachability-metadata /home/app/graalvm-reachability-metadata
COPY native/generated /home/app/
COPY *.args /home/app/graalvm-native-image.args
ARG CLASS_NAME
RUN native-image @/home/app/graalvm-native-image.args -H:Class=${CLASS_NAME} -H:Name=application -cp "/home/app/libs/*:/home/app/classes/"
FROM ${BASE_IMAGE_RUN}
COPY --from=builder /home/app/application /app/application
ARG PORTS=8080
EXPOSE ${PORTS}
ENTRYPOINT ["/app/application"]
The image built can be customised using Jib. In particular, you can set:
- The builder image, using <from><image>. If the builder image comes from a registry that requires authentication, you can use <from><auth><username> and <from><auth><password>. Check the Jib documentation for more details.
- The image name/tags that will be used for building, using either <to><image> and/or <to><tags>.
You can also override the native build-stage image with the Micronaut-specific micronaut.native-image.base-image
property. This is the preferred override when you want to stay within Micronaut Maven Plugin configuration instead of
using Jib-specific <from><image> or jib.from.image.
$ mvn package -Dpackaging=docker-native \
-Dmicronaut.native-image.base-image=container-registry.oracle.com/graalvm/native-image:21-ol8
Use micronaut.native-image.base-image for the native builder stage and micronaut.native-image.base-image-run for
the final runtime image. The native builder-stage image is resolved in this order:
1. jib.from.image.
2. micronaut.native-image.base-image.
3. The Jib <from><image> configuration.
4. The plugin-computed default Oracle GraalVM GFTC builder image.
That keeps jib.from.image as the highest-precedence override so existing automation keeps working unchanged, while
still letting Micronaut-specific configuration win over the Jib POM builder image.
You can also use some system properties from the command line:
- jib.from.image
- jib.from.auth.username
- jib.from.auth.password
- jib.to.image
- jib.to.tags
- jib.to.auth.username
- jib.to.auth.password
- jib.to.credHelper
When using jib.buildGoal=buildTar or jib.buildGoal=build, only the package phase becomes daemonless. deploy for
docker packaging still goes through mn:docker-push, which expects an image to exist in the local Docker daemon. As a
result, mvn deploy -Dpackaging=docker -Djib.buildGoal=buildTar and … -Djib.buildGoal=build are not supported; for
deployments, either omit jib.buildGoal or set it to dockerBuild.
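In other words, for deployments keep the default daemon-backed goal:

```shell
# Supported: deploy through the local Docker daemon (default goal).
mvn deploy -Dpackaging=docker
# Equivalent, with the goal spelled out explicitly:
mvn deploy -Dpackaging=docker -Djib.buildGoal=dockerBuild
```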
Note that changing the builder image to a totally different one than the default might break image building, since the rest
of the build steps expect a certain builder-stage base image. By default, that builder-stage base image comes from
container-registry.oracle.com/graalvm/native-image.
If you need a stricter reproducibility guarantee than a moving image tag provides, override the builder-stage image with a
digest-pinned reference via micronaut.native-image.base-image or jib.from.image.
If the selected builder image requires authentication, continue using Jib auth configuration such as
jib.from.auth.username and jib.from.auth.password.
In the case of the AWS custom runtime, the build starts from amazonlinux:2023, and this cannot be changed. Also, in this case the
result is not a tagged Docker image but a function.zip archive containing the bootstrap launch script and the native binary:
essentially, what you need to upload to AWS Lambda. The Micronaut Maven Plugin will additionally detect the host
operating system architecture (based on the os.arch Java system property) and install the corresponding GraalVM
binary distribution inside the Docker image. This means that when packaging from an x86_64 (Intel/AMD) machine,
the produced native image will be an amd64 binary, whilst on an ARM host (such as an Apple Silicon Mac) it will be an
aarch64 binary.
For AWS Lambda custom runtimes, use lambdaBootstrapArguments to append extra native bootstrap flags after the plugin’s
default flags. This is different from appArguments, which apply to the Oracle Cloud Function Dockerfile path.
<plugin>
<groupId>io.micronaut.maven</groupId>
<artifactId>micronaut-maven-plugin</artifactId>
<configuration>
<lambdaBootstrapArguments>
<arg>-Dio.netty.noUnsafe=true</arg>
<arg>-Dio.netty.noPreferDirect=false</arg>
</lambdaBootstrapArguments>
</configuration>
</plugin>
Or from the command line:
$ mvn package -Dpackaging=docker-native -Dmicronaut.runtime=lambda \
-Dmicronaut.lambda.bootstrap.args="-Dio.netty.noUnsafe=true,-Dio.netty.noPreferDirect=false"
Also, to pass additional arguments to the native-image process:
<plugin>
<groupId>io.micronaut.maven</groupId>
<artifactId>micronaut-maven-plugin</artifactId>
<configuration>
<nativeImageBuildArgs>
<nativeImageBuildArg>--verbose</nativeImageBuildArg>
</nativeImageBuildArgs>
</configuration>
</plugin>
Or from the command line:
$ mvn package -Dpackaging=docker-native -Dmicronaut.native-image.args="--verbose"
Building CRaC-based Docker images
Warning: The Micronaut CRaC module is in experimental stages. Use at your own risk!
The CRaC (Coordinated Restore at Checkpoint) Project researches coordination of Java programs with mechanisms to checkpoint (make an image of, snapshot) a Java instance while it is executing. Restoring from the image could be a solution to some problems with the start-up and warm-up times.
Creation of a pre-warmed, checkpointed Docker image can be done with the following command:
$ mvn package -Dpackaging=docker-crac
This will first create an intermediate image that contains the application and all of its dependencies.
This image is then executed and warmed up via a warmup.sh command, and a checkpoint is taken.
A final image is then built which contains this checkpoint.
You will then be able to run your image via:
docker run --cap-add=cap_sys_ptrace -p 8080:8080 <image-name>
The image built can be customised using Jib. In particular, you can set:
- The base image, using <from><image>.
- The image name/tags that will be used for building, using either <to><image> and/or <to><tags>.
Checking an application is ready
As part of the checkpointing process, the application will be tested until it is ready to receive requests.
By default, this is done by executing the command curl --output /dev/null --silent --head http://localhost:8080, but you can override it by setting the crac.readiness property in your build:
<properties>
<crac.readiness>curl --output /dev/null --silent --head https://localhost</crac.readiness>
</properties>
Customizing warmup
The default warmup script simply makes a request to port 8080 of the application.
However, you can specify your own by placing a Bash script named warmup.sh in the root project folder.
For example, to hit the root endpoint 10 times, you could create a file with the following contents:
#!/bin/bash
for run in {1..10}; do
curl --output /dev/null --silent http://localhost:8080
done
Customizing the JDK
By default, the CRaC JDK used to build the image will be for the current system architecture and Java 25. These can be overridden by passing properties to the build:
<properties>
<crac.java.version>25</crac.java.version>
<crac.arch>amd64</crac.arch>
<crac.os>linux-glibc</crac.os>
</properties>
Currently, only Java 25 and the amd64 and aarch64 architectures are supported.
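The same properties can also be passed on the command line, assuming property overrides are honored as for the other packaging options; for example, to force an amd64 image from an ARM host:

```shell
mvn package -Dpackaging=docker-crac -Dcrac.arch=amd64 -Dcrac.java.version=25
```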