Case Study: Mill vs Maven
Compared to Maven:

- Mill follows Maven’s innovation of good built-in defaults: Mill’s built-in JavaModules
follow Maven’s "convention over configuration" style, so small Mill projects require minimal
effort to get started, and larger Mill projects have a consistent structure building on these
defaults (see the minimal sketch after this list).
- Mill automatically caches and parallelizes your build, offering 4-10x speedups: not just
the built-in tasks that Mill ships with, but also any custom tasks or modules. This maximizes
the snappiness of your command-line build workflows to keep you productive, which especially
matters in larger codebases where builds tend to get slow: a Maven "clean install" workflow
taking over a minute might take just a few seconds in Mill.
- Mill makes customizing the build tool much easier than Maven. Projects usually grow beyond
just compiling a single language: needing custom code generation, linting workflows, tool
integrations, output artifacts, or support for additional languages. Mill’s extensibility and
IDE experience make doing this yourself easy and safe, with type-checked code and sandboxed
tasks.
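To make the first two points concrete, below is a minimal build.mill sketch in the same
style as the Netty examples later on this page. It is not taken from the Netty build: the
foo module, its Guava dependency, and the lineCount task are purely illustrative:

import mill._, scalalib._

object foo extends JavaModule {
  // convention over configuration: sources are compiled from foo/src by default
  def ivyDeps = Agg(ivy"com.google.guava:guava:33.0.0-jre")

  // a custom task: cached and parallelized just like the built-in tasks
  def lineCount = Task {
    allSourceFiles().map(ref => os.read.lines(ref.path).size).sum
  }
}

Running ./mill foo.lineCount a second time returns the cached result immediately, and editing
a source file invalidates and re-computes it automatically.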
This page compares using Mill to Maven, using the Netty Network Server codebase as the
example. Netty is a large, old codebase: 500,000 lines of Java, written by over 100
contributors across 15 years, split across 47 subprojects, with over 10,000 lines of Maven
pom.xml configuration alone. By porting it to Mill, this case study should give you an idea
of how Mill compares to Maven in larger, real-world projects.
To do this, we have written a Mill build.mill file for the Netty project. This can be used
with Mill to build and test the various submodules of the Netty project without needing to
change any other files in the repository.
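For orientation, the overall shape of that build.mill is a tree of module objects extending
a shared base trait. The sketch below is heavily simplified: the real NettyModule trait
configures far more than this, and the javacOptions shown are purely illustrative:

import mill._, scalalib._

// shared defaults that every Netty subproject builds on (simplified sketch)
trait NettyModule extends JavaModule {
  def javacOptions = Seq("-Xlint:none") // illustrative only
}

object common extends NettyModule
object buffer extends NettyModule
// ...one module object per Maven subproject, 47 in total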
Completeness
The Mill build for Netty is not 100% complete, but it covers most of the major parts of Netty: compiling Java, compiling and linking C code via JNI, running JUnit tests and some integration tests using H2Spec. All 47 Maven subprojects are modelled using Mill, with the entire Netty codebase being approximately 500,000 lines of code.
$ git ls-files | grep \\.java | xargs wc -l
...
513805 total
The goal of this exercise is not to be 100% feature complete enough to replace the Maven build today. It is instead meant to provide a realistic comparison of how using Mill in a large, complex project compares to using Maven.
Both Mill and Maven builds end up compiling the same set of files, although the number being
reported by the command line is slightly higher for Mill (2915 files) than Maven (2822) due
to differences in the reporting (e.g. Maven does not report package-info.java
files as part
of the compiled file count).
Performance
The Mill build for Netty is much more performant than the default Maven build. This applies to most workflows.
For the benchmarks below, each number given is the median wall time of three consecutive runs on my M1 Macbook Pro. While ad-hoc, these benchmarks are enough to give you a flavor of how Mill’s performance compares to Maven:
Benchmark | Maven | Mill | Speedup |
---|---|---|---|
Sequential Clean Compile All | 1m 38.80s | 0m 23.14s | 4.3x |
Parallel Clean Compile All | 0m 48.92s | 0m 08.79s | 5.6x |
Clean Compile Single-Module | 0m 08.46s | 0m 01.94s | 4.4x |
Incremental Compile Single-Module | 0m 06.82s | 0m 00.54s | 12.6x |
No-Op Compile Single-Module | 0m 05.25s | 0m 00.47s | 11.2x |
The rightmost column shows how much faster Mill is than the equivalent Maven workflow. In most cases, Mill is 4-10x faster than Maven. Below, we go into more detail on each benchmark: how it was run, what it means, and what explains the performance difference between the two build tools doing the same task.
Sequential Clean Compile All
$ ./mvnw clean; time ./mvnw -Pfast -Dcheckstyle.skip -Denforcer.skip=true -DskipTests install
1m 38.80s
1m 36.14s
1m 39.95s
$ ./mill clean; time ./mill -j1 __.compile
0m 23.99s
0m 23.14s
0m 22.68s
This benchmark exercises the simple "build everything from scratch" workflow, with all remote
artifacts already in the local cache. The actual files
being compiled are the same in either case (as mentioned in the Completeness section).
I have explicitly disabled the various linters and tests for the Maven build, to focus purely
on the compilation of Java source code and make it an apples-to-apples comparison. As Mill
runs tasks in parallel by default, I have explicitly disabled parallelism via -j1.
As a point of reference, Java typically compiles at 10,000-50,000 lines per second on a single thread, and the Netty codebase is ~500,000 lines of code, so we would expect compilation to take 10-50 seconds without parallelism. The 20-30s taken by Mill is about what you would expect for a codebase of this size, whereas the ~100s taken by Maven is far beyond what you would expect from simple Java compilation.
Maven Compile vs Install
In general, the reason we have to use ./mvnw install
rather than ./mvnw compile
is that
Maven’s main mechanism for managing inter-module dependencies is via the local artifact cache
at ~/.m2/repository
. Although many workflows work with compile
, some don’t, and
./mvnw clean compile
on the Netty repository fails with:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:2.10:unpack-dependencies
(unpack) on project netty-resolver-dns-native-macos: Artifact has not been packaged yet.
When used on reactor artifact, unpack should be executed after packaging: see MDEP-98. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <args> -rf :netty-resolver-dns-native-macos
In contrast, Mill builds do not rely on the local artifact cache, even though Mill is able
to publish to it. That means Mill builds are able to work directly with classfiles on disk,
simply referencing them and using them as-is without spending time packing and unpacking them
into .jar
files. Furthermore, even if we did want Mill to generate the .jar
s, the
overhead of doing so is just a few seconds, far less than the minute-plus of overhead that
Maven adds to the clean build:
$ ./mill clean; time ./mill -j1 __.jar
0m 32.58s
0m 24.90s
0m 23.35s
From this benchmark, we can see that although both Mill and Maven are doing the same work, Mill takes about as long as it should for this task of compiling 500,000 lines of Java source code, while Maven takes considerably longer. And much of this overhead comes from Maven doing unnecessary work packing/unpacking jar files and publishing to a local repository, whereas Mill directly uses the classfiles generated on disk to bypass all that work.
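To make the classfile wiring concrete, inter-module dependencies in Mill are declared via
moduleDeps. In the sketch below (simplified, though buffer really does depend on common in
Netty), Mill puts common’s compiled classfile directory directly on buffer’s compile
classpath, with no jar packing or unpacking in between:

object common extends NettyModule

object buffer extends NettyModule {
  // uses common's compiled classfiles directly from out/, no intermediate .jar
  def moduleDeps = Seq(common)
}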
Parallel Clean Compile All
$ ./mvnw clean; time ./mvnw -T 10 -Pfast -DskipTests -Dcheckstyle.skip -Denforcer.skip=true install
0m 48.92s
0m 48.41s
0m 49.50s
$ ./mill clean; time ./mill __.compile
0m 09.07s
0m 08.79s
0m 07.93s
This example compares Maven vs. Mill when performing the clean build on 10 threads.
Both build tools support parallelism (-T 10 in Maven, enabled by default in Mill), and both
tools see a similar ~2x speedup for building the Netty project using 10 threads. Again,
this tests a clean build using ./mvnw clean
or ./mill clean
.
This comparison shows that much of Mill’s speedup over Maven is unrelated to parallelism. Whether sequential or parallel, Mill has roughly the same 4-6x speedup over Maven when performing a clean build of the Netty repository.
Clean Compile Single-Module
$ ./mvnw clean; time ./mvnw -pl common -Pfast -DskipTests -Dcheckstyle.skip -Denforcer.skip=true install
0m 08.46s
0m 08.90s
0m 08.30s
$ ./mill clean common; time ./mill common.test.compile
0m 01.99s
0m 01.81s
0m 01.94s
This exercise limits the comparison to compiling a single module, in this case common/
.
./mvnw -pl common install
compiles both the main/
and test/
sources, whereas
./mill common.compile
would only compile the main/
sources, and we need to explicitly
reference common.test.compile
to compile both (because common.test.compile depends on common.compile, the latter gets run automatically).
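For context, a test suite in Mill is declared as a module nested inside the module it tests,
and its compile task depends on the enclosing module’s compile task. A rough sketch
(simplified; the exact trait names vary between Mill versions):

object common extends NettyModule {
  // common.test.compile depends on common.compile, so requesting the former
  // compiles both main/ and test/ sources, mirroring `./mvnw -pl common install`
  object test extends JavaTests with TestModule.Junit5
}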
Again, we can see that a significant speedup of Mill vs. Maven remains even when compiling a
single module: a clean compile of common/ is about 4x faster with Mill than with Maven!
common/ is about 40,000 lines of Java source code, so at 10,000-50,000 lines per
second we would expect it to compile in about 1-4s. That puts Mill’s compile times right
at what you would expect, whereas Maven’s times carry significant overhead.
Incremental Compile Single-Module
$ echo "" >> common/src/main/java/io/netty/util/AbstractConstant.java
$ time ./mvnw -pl common -Pfast -DskipTests -Dcheckstyle.skip -Denforcer.skip=true install
Compiling 174 source files to /Users/lihaoyi/Github/netty/common/target/classes
Compiling 60 source files to /Users/lihaoyi/Github/netty/common/target/test-classes
0m 06.89s
0m 06.34s
0m 06.82s
$ echo "" >> common/src/main/java/io/netty/util/AbstractConstant.java
$ time ./mill common.test.compile
compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
0m 00.78s
0m 00.54s
0m 00.51s
This benchmark explores editing a single file and re-compiling common/.

Maven by default takes almost as long to re-compile common/’s main/ and test/ sources
after a single-line edit as it does from scratch: around 7 seconds, versus the ~8.5s
clean compile. However, Mill takes just about 0.5s to compile and be done! Looking at the
logs, we can see that this is because Mill only compiles the single file we changed, and
not the others.
For this incremental compilation, Mill uses the Zinc Incremental Compiler. Zinc analyzes the dependencies between files to figure out what needs to be re-compiled: for an internal change that doesn’t affect downstream compilation (e.g. changing a string literal), Zinc only needs to compile the file that changed, taking barely half a second:
$ git diff
diff --git a/common/src/main/java/io/netty/util/AbstractConstant.java b/common/src/main/java/io/netty/util/AbstractConstant.java
index de16653cee..9818f6b3ce 100644
--- a/common/src/main/java/io/netty/util/AbstractConstant.java
+++ b/common/src/main/java/io/netty/util/AbstractConstant.java
@@ -83,7 +83,7 @@ public abstract class AbstractConstant<T extends AbstractConstant<T>> implements
return 1;
}
- throw new Error("failed to compare two different constants");
+ throw new Error("failed to compare two different CONSTANTS!!");
}
}
$ time ./mill common.test.compile
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
0m 00.55s
In contrast, a change to a class or function public signature (e.g. adding a method) may require downstream code to re-compile, and we can see that below:
$ git diff
diff --git a/common/src/main/java/io/netty/util/AbstractConstant.java b/common/src/main/java/io/netty/util/AbstractConstant.java
index de16653cee..f5f5a93e0d 100644
--- a/common/src/main/java/io/netty/util/AbstractConstant.java
+++ b/common/src/main/java/io/netty/util/AbstractConstant.java
@@ -41,6 +41,10 @@ public abstract class AbstractConstant<T extends AbstractConstant<T>> implements
return name;
}
+ public final String name2() {
+ return name;
+ }
+
@Override
public final int id() {
return id;
$ time ./mill common.test.compile
[25/48] common.compile
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 2 Java sources to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 4 Java sources to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 3 Java sources to /Users/lihaoyi/Github/netty/out/common/test/compile.super/mill/scalalib/JavaModule/compile.dest/classes ...
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/test/compile.super/mill/scalalib/JavaModule/compile.dest/classes ...
0m 00.81s
Here, we can see that Zinc ended up re-compiling 7 files in common/src/main/ and 4 files
in common/src/test/ as a result of adding a method to AbstractConstant.java.
In general, Zinc is conservative, and does not always select the minimal set of
files that need re-compiling: e.g. in the above example, the new method name2 does not
interfere with any existing method, so the ~10 downstream files did not actually need to
be re-compiled. However, even conservatively re-compiling ~10 files is much faster than
Maven blindly re-compiling all 234 files, and as a result the iteration loop of
editing-compiling-testing your Java projects in Mill can be much faster than doing
the same thing in Maven.
No-Op Compile Single-Module
$ time ./mvnw -pl common -Pfast -DskipTests -Dcheckstyle.skip -Denforcer.skip=true install
0m 05.08s
0m 05.25s
0m 05.26s
$ time ./mill common.test.compile
0m 00.49s
0m 00.47s
0m 00.45s
This last benchmark explores the boundaries of Maven and Mill: what happens if we ask to compile a single module that has already been compiled? In this case, there is literally nothing to do. For Maven, "doing nothing" takes ~5 seconds, whereas Mill completes and returns in less than 0.5 seconds.

Grepping the logs, we can confirm that both build tools skip re-compilation of the common/ source code. In Maven, skipping compilation only saves us ~3 seconds, bringing the ~8.5s we saw in Clean Compile Single-Module down to ~5s here. This roughly matches what we expect about Java compilation speed, with the ~3s savings on 40,000 lines of code suggesting Java compiles at around 13,000 lines per second. However, we still see Maven taking 5 entire seconds before it can decide to do nothing!

In contrast, doing the same no-op compile using Mill, the timing drops from ~1.9s in Clean Compile Single-Module to ~0.5 seconds here. That is a similar absolute reduction to Maven’s, but thanks to Mill’s minimal overhead, the command finishes in less than half a second.
Extensibility & IDE Experience
Even though Maven is designed to be declarative, in many real-world codebases you end up needing to run ad-hoc scripts and logic. This section will explore two such scenarios, so you can see how Mill differs from Maven in the handling of these requirements.
JVM Libraries: Groovy
The Maven build for the common/
subproject
uses a Groovy script for code generation. This is configured via:
<properties>
<collection.template.dir>${project.basedir}/src/main/templates</collection.template.dir>
<collection.template.test.dir>${project.basedir}/src/test/templates</collection.template.test.dir>
<collection.src.dir>${project.build.directory}/generated-sources/collections/java</collection.src.dir>
<collection.testsrc.dir>${project.build.directory}/generated-test-sources/collections/java</collection.testsrc.dir>
</properties>
<plugin>
<groupId>org.codehaus.gmaven</groupId>
<artifactId>groovy-maven-plugin</artifactId>
<version>2.1.1</version>
<dependencies>
<dependency>
<groupId>org.codehaus.groovy</groupId>
<artifactId>groovy</artifactId>
<version>3.0.9</version>
</dependency>
<dependency>
<groupId>ant</groupId>
<artifactId>ant-optional</artifactId>
<version>1.5.3-1</version>
</dependency>
</dependencies>
<executions>
<execution>
<id>generate-collections</id>
<phase>generate-sources</phase>
<goals>
<goal>execute</goal>
</goals>
<configuration>
<source>${project.basedir}/src/main/script/codegen.groovy</source>
</configuration>
</execution>
</executions>
</plugin>
In contrast, the Mill build configures the code generation as follows:
import $ivy.`org.codehaus.groovy:groovy:3.0.9`
import $ivy.`org.codehaus.groovy:groovy-ant:3.0.9`
import $ivy.`ant:ant-optional:1.5.3-1`
object common extends NettyModule {
...
def script = Task.Source(millSourcePath / "src" / "main" / "script")
def generatedSources0 = Task {
val shell = new groovy.lang.GroovyShell()
val context = new java.util.HashMap[String, Object]
context.put("collection.template.dir", "common/src/main/templates")
context.put("collection.template.test.dir", "common/src/test/templates")
context.put("collection.src.dir", (T.dest / "src").toString)
context.put("collection.testsrc.dir", (T.dest / "testsrc").toString)
shell.setProperty("properties", context)
shell.setProperty("ant", new groovy.ant.AntBuilder())
shell.evaluate((script().path / "codegen.groovy").toIO)
(PathRef(T.dest / "src"), PathRef(T.dest / "testsrc"))
}
def generatedSources = Task { Seq(generatedSources0()._1) }
}
While the number of lines of code written is not that different, the Mill configuration
is a lot more direct: rather than writing 35 lines of XML to configure an opaque third-party
plugin, we instead write 25 lines of code to directly do what we want: import groovy
,
configure a GroovyShell
, and use it to evaluate our codegen.groovy
script. Although
you may not be familiar with the Scala language that Mill builds are written in, you could
probably skim the snippet above and guess what it is doing, and guess correctly.
This direct control means you are not beholden to third-party plugins: rather than being limited to what an existing plugin allows you to do, Mill lets you directly write the code necessary to do what you need. In this case, since we need to invoke Groovy and Groovy-Ant, Mill allows us to directly import $ivy the relevant JVM artifacts from Maven Central and begin using them in our build code in a safe, strongly-typed fashion, with full autocomplete and code assistance.
Mill gives you the full power of the JVM ecosystem to use in your build: any Java library
on Maven central is just an import $ivy
away, and can be used with the full IDE support
and tooling experience you are used to in the JVM ecosystem.
Subprocesses: Make
The Maven build for the transport-native-unix-common/
subproject needs to call
make
in order to compile its C code to modules that can be loaded into Java applications
via JNI. Maven does this via the maven-dependency-plugin
and maven-antrun-plugin
which are
approximately configured as below:
<properties>
<exe.make>make</exe.make>
<exe.compiler>gcc</exe.compiler>
<exe.archiver>ar</exe.archiver>
<nativeLibName>libnetty-unix-common</nativeLibName>
<nativeIncludeDir>${project.basedir}/src/main/c</nativeIncludeDir>
<jniUtilIncludeDir>${project.build.directory}/netty-jni-util/</jniUtilIncludeDir>
<nativeJarWorkdir>${project.build.directory}/native-jar-work</nativeJarWorkdir>
<nativeObjsOnlyDir>${project.build.directory}/native-objs-only</nativeObjsOnlyDir>
<nativeLibOnlyDir>${project.build.directory}/native-lib-only</nativeLibOnlyDir>
</properties>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<!-- unpack netty-jni-util files -->
<execution>
<id>unpack</id>
<phase>generate-sources</phase>
<goals>
<goal>unpack-dependencies</goal>
</goals>
<configuration>
<includeGroupIds>io.netty</includeGroupIds>
<includeArtifactIds>netty-jni-util</includeArtifactIds>
<classifier>sources</classifier>
<outputDirectory>${jniUtilIncludeDir}</outputDirectory>
<includes>**.h,**.c</includes>
<overWriteReleases>false</overWriteReleases>
<overWriteSnapshots>true</overWriteSnapshots>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<!-- invoke the make file to build a static library -->
<execution>
<id>build-native-lib</id>
<phase>generate-sources</phase>
<goals>
<goal>run</goal>
</goals>
<configuration>
<target>
<exec executable="${exe.make}" failonerror="true" resolveexecutable="true">
<env key="CC" value="${exe.compiler}" />
<env key="AR" value="${exe.archiver}" />
<env key="LIB_DIR" value="${nativeLibOnlyDir}" />
<env key="OBJ_DIR" value="${nativeObjsOnlyDir}" />
<env key="JNI_PLATFORM" value="${jni.platform}" />
<env key="CFLAGS" value="-O3 -Werror -Wno-attributes -fPIC -fno-omit-frame-pointer -Wunused-variable -fvisibility=hidden" />
<env key="LDFLAGS" value="-Wl,--no-as-needed -lrt -Wl,-platform_version,macos,10.9,10.9" />
<env key="LIB_NAME" value="${nativeLibName}" />
<!-- support for __attribute__((weak_import)) by the linker was added in 10.2 so ensure we
explicitly set the target platform. Otherwise we may get fatal link errors due to weakly linked
methods which are not expected to be present on MacOS (e.g. accept4). -->
<env key="MACOSX_DEPLOYMENT_TARGET" value="10.9" />
</exec>
</target>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
The maven-dependency-plugin
is used to download and unpack a single jar
file,
while maven-antrun-plugin
is used to call make
. Both are configured via XML,
with the make
command essentially being a bash script wrapped in layers of XML.
In contrast, the Mill configuration for this logic is as follows:
def makefile = Task.Source(millSourcePath / "Makefile")
def cSources = Task.Source(millSourcePath / "src" / "main" / "c")
def cHeaders = Task {
for(p <- os.walk(cSources().path) if p.ext == "h"){
os.copy(p, T.dest / p.relativeTo(cSources().path), createFolders = true)
}
PathRef(T.dest)
}
def make = Task {
os.copy(makefile().path, T.dest / "Makefile")
os.copy(cSources().path, T.dest / "src" / "main" / "c", createFolders = true)
val Seq(sourceJar) = resolveDeps(
deps = Task.Anon(Agg(ivy"io.netty:netty-jni-util:0.0.9.Final").map(bindDependency())),
sources = true
)().toSeq
os.proc("jar", "xf", sourceJar.path).call(cwd = T.dest / "src" / "main" / "c")
os.proc("make").call(
cwd = T.dest,
env = Map(
"CC" -> "clang",
"AR" -> "ar",
"JNI_PLATFORM" -> "darwin",
"LIB_DIR" -> "lib-out",
"OBJ_DIR" -> "obj-out",
"MACOSX_DEPLOYMENT_TARGET" -> "10.9",
"CFLAGS" -> Seq(
"-mmacosx-version-min=10.9", "-O3", "-Werror", "-Wno-attributes", "-fPIC",
"-fno-omit-frame-pointer", "-Wunused-variable", "-fvisibility=hidden",
"-I" + sys.props("java.home") + "/include/",
"-I" + sys.props("java.home") + "/include/darwin",
"-I" + sys.props("java.home") + "/include/linux",
).mkString(" "),
"LD_FLAGS" -> "-Wl,--no-as-needed -lrt -Wl,-platform_version,macos,10.9,10.9",
"LIB_NAME" -> "libnetty-unix-common"
)
)
(PathRef(T.dest / "lib-out"), PathRef(T.dest / "obj-out"))
}
In Mill, we define the makefile
, cSources
, cHeaders
, and make
tasks. The bulk
of the logic is in def make
, which prepares the makefile
and C sources,
resolves the netty-jni-util
source jar and unpacks it with jar xf
, and calls make
with the given environment variables. Both cHeaders
and the output of make
are used
in downstream modules. In this case, make
is a command-line utility rather than a JVM
library, so rather than importing it from Maven Central we use os.proc.call
to invoke it.
Again, the Maven XML and Mill code contain exactly the same logic, and neither is
much more concise or verbose than the other. Rather, what is interesting is that
it is much easier to work with this kind of build logic via concise type-checked code,
rather than configuring a bunch of third-party plugins to try and achieve what you want.
With Mill, you get your full IDE experience working with your build: autocomplete, code
assistance, navigation, and so on. Although working with the os.proc.call subprocess API
is not as rich as working with the JVM libraries we saw earlier, it is still a much
richer experience than you typically get configuring XML files.
Debugging Tooling
Another area where Mill does better than Maven is providing built-in tools for you to understand
what your build is doing. For example, the Netty project build discussed has 47 submodules
and associated test suites, but how do these different modules depend on each other? With
Mill, you can run ./mill visualize __.compile
, and it will show you how the
compile
task of each module depends on the others.
Apart from the static dependency graph, another thing of interest may be the performance
profile and timeline: where the time is spent when you actually compile everything. With
Mill, when you run a compilation using ./mill __.compile
, you automatically get a
out/mill-chrome-profile.json
file that you can load into your chrome://tracing
page and
visualize where your build is spending time and where the performance bottlenecks are:
If you want to inspect the tree of third-party dependencies used by any module, the
built-in ivyDepsTree
command lets you do that easily:
$ ./mill handler.ivyDepsTree
├─ org.jctools:jctools-core:4.0.5
├─ org.junit.jupiter:junit-jupiter-api:5.9.0
│ ├─ org.apiguardian:apiguardian-api:1.1.2
│ ├─ org.junit.platform:junit-platform-commons:1.9.0
│ │ └─ org.apiguardian:apiguardian-api:1.1.2
│ └─ org.opentest4j:opentest4j:1.2.0
└─ com.google.protobuf:protobuf-java:2.6.1
None of these tools are rocket science, but Mill provides all of them out of the box in a convenient package for you to use. Whether you want a visual graph layout, a parallel performance profile, or a third-party dependency tree of your project, Mill makes it easy and convenient without needing to fiddle with custom configuration or third party plugins. This helps make it easy for you to explore, understand, and take ownership of the build tool.
Conclusion
Both the Mill and Maven builds we discussed in this case study do the same thing: they
compile Java code, package it into jar files, and run tests. Sometimes they also compile and
link C code, call make, or run Groovy scripts.
Mill doesn’t try to do more than Maven does, but it
tries to do it better: faster compiles, shorter and easier to read configs, easier
extensibility via libraries (e.g. org.codehaus.groovy:groovy
) and subprocesses
(e.g. make
), better IDE support for working with your build.
Again, the Mill build used in this comparison is for demonstration purposes, and more work would be necessary to make the Mill build production ready: compatibility with different operating system architectures, Java versions, and so on. However, hopefully it demonstrates the potential value: greatly improved performance, easy extensibility, and a much better IDE experience for working with your build. Mill provides builtin tools to help you navigate, visualize, and understand your build, turning a normally opaque "build config" into something that’s transparent and easily understandable.