Java Case Study: Netty

This page compares Mill to Maven, using the Netty network server codebase as the example. Netty is a large, old codebase: roughly 500,000 lines of Java, written by over 100 contributors across 15 years, split over 47 subprojects, with over 10,000 lines of Maven pom.xml configuration alone. By porting it to Mill, this case study should give you an idea of how Mill compares to Maven in larger, real-world projects.

To do this, we have written a Mill build.sc file for the Netty project. It can be used to build and test the various submodules of the Netty project without changing any other files in the repository.
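At a high level, the build.sc follows the shape sketched below: a couple of shared module traits, plus one object per Maven subproject extending them. This outline just mirrors the excerpts shown later on this page; the bodies are elided here.

import mill._, scalalib._

// Shared javac settings for every Netty module (shown in full later on this page)
trait NettyBaseModule extends MavenModule { /*...*/ }
// Adds the shared test-suite wiring on top of NettyBaseModule
trait NettyModule extends NettyBaseModule { /*...*/ }

// One object per Maven subproject, 47 in total
object common extends NettyModule { /*...*/ }
object resolver extends NettyModule { def moduleDeps = Seq(common) }
// ... 45 more modules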

Completeness

The Mill build for Netty is not 100% complete, but it covers most of the major parts of Netty: compiling Java, compiling and linking C code via JNI, running JUnit tests and some integration tests using H2Spec. All 47 Maven subprojects are modelled using Mill, with the entire Netty codebase being approximately 500,000 lines of code.

$ git ls-files | grep \\.java | xargs wc -l
...
513805 total

The goal of this exercise is not to be 100% feature-complete or to replace the Maven build today. It is instead meant to provide a realistic comparison of how using Mill in a large, complex project compares to using Maven.

Both Mill and Maven builds end up compiling the same set of files, although the number being reported by the command line is slightly higher for Mill (2915 files) than Maven (2822) due to differences in the reporting (e.g. Maven does not report package-info.java files as part of the compiled file count).

Performance

The Mill build for Netty is significantly faster than the default Maven build across most workflows.

For the benchmarks below, each reported number is the median wall time of three consecutive runs on my M1 Macbook Pro. While ad-hoc, these benchmarks are enough to give you a flavor of how Mill’s performance compares to Maven:

Benchmark                          | Maven      | Mill      | Speedup
Sequential Clean Compile All       | 2m 31.12s  | 0m 22.19s | 6.8x
Parallel Clean Compile All         | 1m 16.45s  | 0m 09.95s | 7.7x
Clean Compile Single-Module        | 0m 19.62s  | 0m 02.17s | 9.0x
Incremental Compile Single-Module  | 0m 21.10s  | 0m 00.54s | 39.1x
No-Op Compile Single-Module        | 0m 17.34s  | 0m 00.47s | 36.9x

The Speedup column shows how much faster Mill is than the equivalent Maven workflow. In most cases, Mill is 5-10x faster than Maven. Below, we go into more detail on each benchmark: how it was run, what it means, and what explains the difference in performance between the two build tools when performing the same task.

Sequential Clean Compile All

$ time ./mvnw -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true clean install
2m 42.96s
2m 27.58s
2m 31.12s

$ ./mill clean; time ./mill __.compile
0m 29.14s
0m 22.19s
0m 20.79s

This benchmark exercises the simple "build everything from scratch" workflow, with all remote artifacts already in the local cache. The actual files being compiled are the same in either case (as mentioned in the Completeness section). I have explicitly disabled the various linters and tests for the Maven build, so that we focus purely on the compilation of Java source code, making it an apples-to-apples comparison.

As a point of reference, Java typically compiles at 10,000-50,000 lines per second on a single thread, and the Netty codebase is ~500,000 lines of code, so we would expect compile to take 10-50 seconds without parallelism. The 20-30s taken by Mill seems about what you would expect for a codebase of this size, and the ~150s taken by Maven is far beyond what you would expect from simple Java compilation.

Where is Maven spending its time?

From eyeballing the logs, the added overhead comes from things like:

Downloading Metadata from Maven Central

Downloading from sonatype-nexus-snapshots: https://oss.sonatype.org/content/repositories/snapshots/io/netty/netty-transport-native-unix-common/maven-metadata.xml
Downloading from central: https://repo.maven.apache.org/maven2/io/netty/netty-transport-native-unix-common/maven-metadata.xml
Downloaded from central: https://repo.maven.apache.org/maven2/io/netty/netty-transport-native-unix-common/maven-metadata.xml (4.3 kB at 391 kB/s)
Downloaded from sonatype-nexus-snapshots: https://oss.sonatype.org/content/repositories/snapshots/io/netty/netty-transport-native-unix-common/maven-metadata.xml (2.7 kB at 7.4 kB/s)

Comparing Jars

Comparing [io.netty:netty-transport-sctp:jar:4.1.112.Final] against [io.netty:netty-transport-sctp:jar:4.1.113.Final-SNAPSHOT] (including their transitive dependencies).

In general, Maven spends much of its time working with jar files: packing them, unpacking them, comparing them, and so on. None of this is strictly necessary for compiling Java source files to classfiles! But if it is not necessary, why is Maven doing it? It turns out the reason comes down to the difference between mvn compile and mvn install.

Maven Compile vs Install

In general, the reason we have to use ./mvnw install rather than ./mvnw compile is that Maven’s main mechanism for managing inter-module dependencies is the local artifact cache at ~/.m2/repository. Although many workflows work with compile, some don’t, and ./mvnw clean compile on the Netty repository fails with:

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:2.10:unpack-dependencies
(unpack) on project netty-resolver-dns-native-macos: Artifact has not been packaged yet.
When used on reactor artifact, unpack should be executed after packaging: see MDEP-98. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :netty-resolver-dns-native-macos

In contrast, Mill builds do not rely on the local artifact cache, even though Mill is able to publish to it. That means Mill builds are able to work directly with classfiles on disk, simply referencing them and using them as-is without spending time packing and unpacking them into .jar files. Furthermore, even if we did want Mill to generate the .jars, the overhead of doing so is just a few seconds, far less than the two entire minutes that Maven’s overhead adds to the clean build:

$ time ./mvnw -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true clean install
2m 42.96s
2m 27.58s
2m 31.12s

$ ./mill clean; time ./mill __.compile
0m 29.14s
0m 22.19s
0m 20.79s

$ ./mill clean; time ./mill __.jar
0m 32.58s
0m 24.90s
0m 23.35s

From this benchmark, we can see that although both Mill and Maven are doing the same work, Mill takes about as long as it should for this task of compiling 500,000 lines of Java source code, while Maven takes considerably longer. And much of this overhead comes from Maven doing unnecessary work packing/unpacking jar files and publishing to a local repository, whereas Mill directly uses the classfiles generated on disk to bypass all that work.

Parallel Clean Compile All

$ time ./mvnw -T 4 -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true clean install
1m 19.58s
1m 16.34s
1m 16.45s

$ ./mill clean; time ./mill -j 4  __.compile
0m 14.80s
0m 09.95s
0m 08.83s

This benchmark compares Maven vs. Mill when performing a clean build on 4 threads. Both build tools support parallelism (-T 4 in Maven, -j 4 in Mill), and both see a similar ~2x speedup when building the Netty project with 4 threads. Again, this tests a clean build, using ./mvnw clean or ./mill clean.

This comparison shows that much of Mill’s speedup over Maven is unrelated to parallelism. Whether sequential or parallel, Mill has approximately the same ~7x speedup over Maven when performing a clean build of the Netty repository.

Clean Compile Single-Module

$ time ./mvnw -pl common -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true clean install
0m 19.62s
0m 20.52s
0m 19.50s

$ ./mill clean common; time ./mill common.test.compile
0m 04.94s
0m 02.17s
0m 01.95s

This exercise limits the comparison to compiling a single module, in this case common/. ./mvnw -pl common install compiles both the main/ and test/ sources, whereas ./mill common.compile would only compile the main/ sources, so we explicitly reference common.test.compile to compile both (since common.test.compile depends on common.compile, the latter is run automatically).

Again, a significant speedup of Mill vs. Maven remains even when compiling a single module: a clean compile of common/ is about 9x faster with Mill than with Maven! common/ is about 40,000 lines of Java source code, so at 10,000-50,000 lines per second we would expect it to compile in about 1-4s. That puts Mill’s compile times right where you would expect, whereas Maven’s carry significant overhead.

Incremental Compile Single-Module

$ echo "" >> common/src/main/java/io/netty/util/AbstractConstant.java
$ time ./mvnw -pl common -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true install
Compiling 174 source files to /Users/lihaoyi/Github/netty/common/target/classes
Compiling 60 source files to /Users/lihaoyi/Github/netty/common/target/test-classes

0m 21.10s
0m 19.64s
0m 21.29s


$ echo "" >> common/src/main/java/io/netty/util/AbstractConstant.java
$ time ./mill common.test.compile
compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...

0m 00.78s
0m 00.54s
0m 00.51s

This benchmark explores editing a single file and re-compiling common/.

Maven by default takes about as long to re-compile the main/ and test/ sources of common/ after a single-line edit as it does from scratch: about 20 seconds. However, Mill takes only about 0.5s to compile and be done! Looking at the logs, we can see why: Mill only compiles the single file we changed, and not the others.

For this incremental compilation, Mill uses the Zinc Incremental Compiler. Zinc analyzes the dependencies between files to figure out what needs to be re-compiled: for an internal change that doesn’t affect downstream compilation (e.g. changing a string literal), Zinc only needs to compile the file that changed, taking barely half a second:

$ git diff
diff --git a/common/src/main/java/io/netty/util/AbstractConstant.java b/common/src/main/java/io/netty/util/AbstractConstant.java
index de16653cee..9818f6b3ce 100644
--- a/common/src/main/java/io/netty/util/AbstractConstant.java
+++ b/common/src/main/java/io/netty/util/AbstractConstant.java
@@ -83,7 +83,7 @@ public abstract class AbstractConstant<T extends AbstractConstant<T>> implements
             return 1;
         }

-        throw new Error("failed to compare two different constants");
+        throw new Error("failed to compare two different CONSTANTS!!");
     }

 }
$ time ./mill common.test.compile
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
0m 00.55s

In contrast, a change to a class or function public signature (e.g. adding a method) may require downstream code to re-compile, and we can see that below:

$ git diff
diff --git a/common/src/main/java/io/netty/util/AbstractConstant.java b/common/src/main/java/io/netty/util/AbstractConstant.java
index de16653cee..f5f5a93e0d 100644
--- a/common/src/main/java/io/netty/util/AbstractConstant.java
+++ b/common/src/main/java/io/netty/util/AbstractConstant.java
@@ -41,6 +41,10 @@ public abstract class AbstractConstant<T extends AbstractConstant<T>> implements
         return name;
     }

+    public final String name2() {
+        return name;
+    }
+
     @Override
     public final int id() {
         return id;
$ time ./mill common.test.compile
[25/48] common.compile
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 2 Java sources to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 4 Java sources to /Users/lihaoyi/Github/netty/out/common/compile.dest/classes ...
[info] compiling 3 Java sources to /Users/lihaoyi/Github/netty/out/common/test/compile.super/mill/scalalib/JavaModule/compile.dest/classes ...
[info] compiling 1 Java source to /Users/lihaoyi/Github/netty/out/common/test/compile.super/mill/scalalib/JavaModule/compile.dest/classes ...
0m 00.81s

Here, we can see that Zinc ended up re-compiling 7 files in common/src/main/ and 3 files in common/src/test/ as a result of adding a method to AbstractConstant.java.

In general, Zinc is conservative and does not always select the minimal set of files that need re-compiling: in the above example, the new method name2 does not interfere with any existing method, and the ~9 downstream files did not actually need to be re-compiled! However, even conservatively re-compiling 9 files is much faster than Maven blindly re-compiling all 234 files, and as a result the edit-compile-test loop for your Java projects in Mill can be much faster than doing the same thing in Maven.

No-Op Compile Single-Module

$ time ./mvnw -pl common -DskipTests  -Dcheckstyle.skip -Denforcer.skip=true install
0m 16.34s
0m 17.34s
0m 18.28s

$ time ./mill common.test.compile
0m 00.49s
0m 00.47s
0m 00.45s

This last benchmark explores the boundaries of Maven and Mill: what happens if we ask them to compile a single module that has already been compiled? In this case, there is literally nothing to do. For Maven, "doing nothing" takes ~17 seconds, whereas Mill completes and returns in less than 0.5 seconds.

Grepping the logs, we can confirm that both build tools skip re-compilation of the common/ source code. In Maven, skipping compilation only saves us ~2 seconds, bringing down the 19s we saw in Clean Compile Single-Module to 17s here. This matches what we expect about Java compilation speed, with the 2s savings on 40,000 lines of code telling us Java compiles at ~20,000 lines per second. However, we still see Maven taking 17 entire seconds before it can decide to do nothing!

In contrast, doing the same no-op compile using Mill, we see the timing drop from 2.2s in Clean Compile Single-Module to 0.5 seconds here. This is the same ~2s reduction we saw with Maven, but due to Mill’s minimal overhead, the command finishes in less than half a second.

Conciseness

A common misconception is that conciseness makes code easier to write but harder to read; in fact, the opposite is true: copy-pasting thousands of lines of boilerplate is easy! It is refactoring those thousands of lines, maintaining them, and debugging them when a bug slips in that is actually difficult.

The Mill build.sc file is approximately 600 lines of code, an order of magnitude more concise than the Maven pom.xml files which add up to over 10,000 lines. That’s ~9,000 fewer lines of config you have to read, maintain, refactor, and debug. Mill builds are concise not because they’re awkwardly compressed, but because they allow you to use standard software engineering techniques to structure the complexities of your project’s build pipelines.

Simple Modules

This can be seen in some of the simplest of the submodules, e.g. resolver, where the Mill config is just 3 lines:

object resolver extends NettyModule{
  def moduleDeps = Seq(common)
}

And the equivalent pom.xml is 30 lines:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>io.netty</groupId>
    <artifactId>netty-parent</artifactId>
    <version>4.1.113.Final-SNAPSHOT</version>
  </parent>

  <artifactId>netty-resolver</artifactId>
  <packaging>jar</packaging>

  <name>Netty/Resolver</name>

  <properties>
    <javaModuleName>io.netty.resolver</javaModuleName>
  </properties>

  <dependencies>
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>netty-common</artifactId>
      <version>${project.version}</version>
    </dependency>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-core</artifactId>
    </dependency>
  </dependencies>
</project>

In general, the Mill snippet contains all the same information as the Maven snippet: the name of the module and its dependency on common. Much of the other information in the Maven XML is inherited from the trait NettyModule we defined earlier in the file, where it can be shared with the rest of the modules rather than being duplicated for each one.

The benefit of short module definitions is not just that they’re easier to write, but that they are also easier to read. In the example above, object resolver specifies exactly what is unique to it: it is a NettyModule with a module dependency on common. In contrast, the XML blob above contains a lot of repetitive boilerplate: this makes it difficult to see at a glance where netty-resolver differs from the other modules in the Netty codebase, and the boilerplate provides space for bugs to hide, where config that should be identical accidentally falls out of sync.

The concise object resolver example above makes use of the NettyModule trait to provide the "default" configuration for a module in the Netty codebase. This is known as a "Module Trait", which we will explore below.

Module Traits

"Module Traits" are groups of definitions that modules can inherit. For example, the NettyModule above is defined as follows:

trait NettyModule extends NettyBaseModule{
  def testModuleDeps: Seq[MavenModule] = Nil
  def testIvyDeps: T[Agg[mill.scalalib.Dep]] = T{ Agg() }

  object test extends NettyTestSuiteModule with MavenTests{
    def moduleDeps = super.moduleDeps ++ testModuleDeps
    def ivyDeps = super.ivyDeps() ++ testIvyDeps()
    def forkWorkingDir = NettyModule.this.millSourcePath
    def forkArgs = super.forkArgs() ++ Seq(
      "-Dnativeimage.handlerMetadataArtifactId=netty-" + NettyModule.this.millModuleSegments.parts.last,
    )
  }
}

A NettyModule is a NettyBaseModule with some testModuleDeps and testIvyDeps that can be overridden, plus a nested test module that makes use of them along with some standard configuration. NettyBaseModule is shown below, and is just a builtin MavenModule with javacOptions set:

trait NettyBaseModule extends MavenModule{
  def javacOptions = Seq("-source", "1.8", "-target", "1.8")
}

NettyTestSuiteModule is another module trait; for conciseness I won’t reproduce its full definition here.
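To give a rough idea of its role, a minimal sketch might look something like the following. This is an illustrative approximation rather than the actual definition, and the dependency versions are placeholders:

trait NettyTestSuiteModule extends NettyBaseModule with TestModule.Junit5 {
  // Test framework plus the test dependencies shared by every test suite
  def ivyDeps = Agg(
    ivy"org.junit.jupiter:junit-jupiter-api:5.9.0",
    ivy"org.junit.jupiter:junit-jupiter-params:5.9.0",
    ivy"org.mockito:mockito-core:4.11.0",
  )
}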

Now that trait NettyModule is defined, you can re-use it across many different modules:

object `codec-dns` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec)
  def testModuleDeps = Seq(transport.test)
}

object `codec-haproxy` extends NettyModule{
  def moduleDeps = Seq(buffer, transport, codec)
  def testModuleDeps = Seq(transport.test)
}

object `codec-http` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec, handler)
  def testModuleDeps = Seq(transport.test)
  def compileIvyDeps = Agg(
    ivy"com.jcraft:jzlib:1.1.3",
    ivy"com.aayushatharva.brotli4j:brotli4j:1.16.0",
  )
}

object `codec-http2` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec, handler, `codec-http`)
  def testModuleDeps = Seq(transport.test)
  def compileIvyDeps = Agg(
    ivy"com.aayushatharva.brotli4j:brotli4j:1.16.0",
  )
}

object `codec-memcache` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec)
  def testModuleDeps = Seq(transport.test)
}

object `codec-mqtt` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec)
  def testModuleDeps = Seq(transport.test)
}

object `codec-redis` extends NettyModule{
  def moduleDeps = Seq(common, buffer, transport, codec)
  def testModuleDeps = Seq(transport.test)
}

Shared module traits make it very easy to skim over a bunch of different definitions and see what is important: how those modules are uniquely configured. I can glance over the handful of modules above and see exactly what differs between them, which is much easier than digging through the equivalent group of Maven pom.xml files and trying to spot the differences.

Software build pipelines tend to be very repetitive. Mill’s module traits allow you to template out common parts of your build: not just the configuration flags for a single module, but also common multi-step workflows or pipelines ("these modules also contain C code which is compiled and linked for use from Java"), and even entire groups of modules (e.g. "every NettyModule should have a test module"). This helps you structure your project’s build pipelines and keep them manageable, while still accommodating the repetitiveness inherent in any software project’s build.
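For example, the C-compilation steps shown later in this page (see Calling Make below) could themselves be factored into a module trait and inherited by each native module. A hypothetical sketch, with the trait name, trimmed-down environment, and module dependency list invented for illustration:

trait NettyJniModule extends NettyModule {
  // Each native module supplies its own Makefile and C sources...
  def makefile = T.source(millSourcePath / "Makefile")
  def cSources = T.source(millSourcePath / "src" / "main" / "c")

  // ...while the make invocation is written once here and shared by all of them
  // (environment variables abbreviated compared to the real task shown below)
  def make = T {
    os.copy(makefile().path, T.dest / "Makefile")
    os.copy(cSources().path, T.dest / "src" / "main" / "c", createFolders = true)
    os.proc("make").call(
      cwd = T.dest,
      env = Map("CC" -> "clang", "AR" -> "ar", "LIB_DIR" -> "lib-out", "OBJ_DIR" -> "obj-out")
    )
    PathRef(T.dest / "lib-out")
  }
}

object `transport-native-unix-common` extends NettyJniModule {
  def moduleDeps = Seq(common, buffer, transport) // dependency list illustrative
}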

Extensibility

Even though Maven is designed to be declarative, in many real-world codebases you end up needing to run ad-hoc scripts and logic. This section explores two such scenarios, so you can see how Mill differs from Maven in how it handles these requirements.

Groovy

The Maven build for the common/ subproject uses a Groovy script for code generation. This is configured via:

<properties>
  <collection.template.dir>${project.basedir}/src/main/templates</collection.template.dir>
  <collection.template.test.dir>${project.basedir}/src/test/templates</collection.template.test.dir>
  <collection.src.dir>${project.build.directory}/generated-sources/collections/java</collection.src.dir>
  <collection.testsrc.dir>${project.build.directory}/generated-test-sources/collections/java</collection.testsrc.dir>
</properties>
<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>groovy-maven-plugin</artifactId>
  <version>2.1.1</version>
  <dependencies>
    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy</artifactId>
      <version>3.0.9</version>
    </dependency>
    <dependency>
      <groupId>ant</groupId>
      <artifactId>ant-optional</artifactId>
      <version>1.5.3-1</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>generate-collections</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source>${project.basedir}/src/main/script/codegen.groovy</source>
      </configuration>
    </execution>
  </executions>
</plugin>

In contrast, the Mill build configures the code generation as follows:

import $ivy.`org.codehaus.groovy:groovy:3.0.9`
import $ivy.`org.codehaus.groovy:groovy-ant:3.0.9`
import $ivy.`ant:ant-optional:1.5.3-1`

object common extends NettyModule{
  ...
  def script = T.source(millSourcePath / "src" / "main" / "script")
  def generatedSources0 = T{
    val shell = new groovy.lang.GroovyShell()
    val context = new java.util.HashMap[String, Object]

    context.put("collection.template.dir", "common/src/main/templates")
    context.put("collection.template.test.dir", "common/src/test/templates")
    context.put("collection.src.dir", (T.dest / "src").toString)
    context.put("collection.testsrc.dir", (T.dest / "testsrc").toString)

    shell.setProperty("properties", context)
    shell.setProperty("ant", new groovy.ant.AntBuilder())

    shell.evaluate((script().path / "codegen.groovy").toIO)

    (PathRef(T.dest / "src"), PathRef(T.dest / "testsrc"))
  }

  def generatedSources = T{ Seq(generatedSources0()._1)}
}

While the number of lines of code written is not that different, the Mill configuration is a lot more direct: rather than writing 35 lines of XML to configure an opaque third-party plugin, we instead write 25 lines of code to directly do what we want: import groovy, configure a GroovyShell, and use it to evaluate our codegen.groovy script.

This direct control means you are not beholden to third party plugins: rather than being limited to what an existing plugin allows you to do, Mill allows you to directly write the code necessary to do what you need to do.
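The same pattern works for any JVM library, not just Groovy: import it with $ivy and call it directly from a task. Below is a small self-contained illustration; the library choice, file name, and task names are hypothetical and not part of the Netty build:

import $ivy.`org.yaml:snakeyaml:2.2`
import org.yaml.snakeyaml.Yaml

// Hypothetical: parse a YAML file checked into the repo and expose its contents
// (as a string, so the result is easily cached) to downstream tasks
def configFile = T.source(millSourcePath / "config.yml")
def parsedConfig = T {
  new Yaml().load[java.util.Map[String, Object]](os.read(configFile().path)).toString
}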

Calling Make

The Maven build for the transport-native-unix-common/ subproject needs to call make in order to compile its C code into modules that can be loaded into Java applications via JNI. Maven does this via the maven-dependency-plugin and maven-antrun-plugin, which are configured approximately as below:

<properties>
  <exe.make>make</exe.make>
  <exe.compiler>gcc</exe.compiler>
  <exe.archiver>ar</exe.archiver>
  <nativeLibName>libnetty-unix-common</nativeLibName>
  <nativeIncludeDir>${project.basedir}/src/main/c</nativeIncludeDir>
  <jniUtilIncludeDir>${project.build.directory}/netty-jni-util/</jniUtilIncludeDir>
  <nativeJarWorkdir>${project.build.directory}/native-jar-work</nativeJarWorkdir>
  <nativeObjsOnlyDir>${project.build.directory}/native-objs-only</nativeObjsOnlyDir>
  <nativeLibOnlyDir>${project.build.directory}/native-lib-only</nativeLibOnlyDir>
</properties>

<plugins>
  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <!-- unpack netty-jni-util files -->
      <execution>
        <id>unpack</id>
        <phase>generate-sources</phase>
        <goals>
          <goal>unpack-dependencies</goal>
        </goals>
        <configuration>
          <includeGroupIds>io.netty</includeGroupIds>
          <includeArtifactIds>netty-jni-util</includeArtifactIds>
          <classifier>sources</classifier>
          <outputDirectory>${jniUtilIncludeDir}</outputDirectory>
          <includes>**.h,**.c</includes>
          <overWriteReleases>false</overWriteReleases>
          <overWriteSnapshots>true</overWriteSnapshots>
        </configuration>
      </execution>
    </executions>
  </plugin>
  <plugin>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
      <!-- invoke the make file to build a static library -->
      <execution>
        <id>build-native-lib</id>
        <phase>generate-sources</phase>
        <goals>
          <goal>run</goal>
        </goals>
        <configuration>
          <target>
            <exec executable="${exe.make}" failonerror="true" resolveexecutable="true">
              <env key="CC" value="${exe.compiler}" />
              <env key="AR" value="${exe.archiver}" />
              <env key="LIB_DIR" value="${nativeLibOnlyDir}" />
              <env key="OBJ_DIR" value="${nativeObjsOnlyDir}" />
              <env key="JNI_PLATFORM" value="${jni.platform}" />
              <env key="CFLAGS" value="-O3 -Werror -Wno-attributes -fPIC -fno-omit-frame-pointer -Wunused-variable -fvisibility=hidden" />
              <env key="LDFLAGS" value="-Wl,--no-as-needed -lrt -Wl,-platform_version,macos,10.9,10.9" />
              <env key="LIB_NAME" value="${nativeLibName}" />
              <!-- support for __attribute__((weak_import)) by the linker was added in 10.2 so ensure we
                   explicitly set the target platform. Otherwise we may get fatal link errors due to weakly linked
                   methods which are not expected to be present on MacOS (e.g. accept4). -->
              <env key="MACOSX_DEPLOYMENT_TARGET" value="10.9" />
            </exec>
          </target>
        </configuration>
      </execution>
    </executions>
  </plugin>
</plugins>

The maven-dependency-plugin is used to download and unpack a single jar file, while maven-antrun-plugin is used to call make. Both are configured via XML, with the make command essentially being a bash script wrapped in layers of XML.

In contrast, the Mill configuration for this logic is as follows:

def makefile = T.source(millSourcePath / "Makefile")
def cSources = T.source(millSourcePath / "src" / "main" / "c")
def cHeaders = T{
  for(p <- os.walk(cSources().path) if p.ext == "h"){
    os.copy(p, T.dest / p.relativeTo(cSources().path), createFolders = true)
  }
  PathRef(T.dest)
}

def make = T{
  os.copy(makefile().path, T.dest / "Makefile")
  os.copy(cSources().path, T.dest / "src" / "main" / "c", createFolders = true)

  val Seq(sourceJar) = resolveDeps(
    deps = T.task(Agg(ivy"io.netty:netty-jni-util:0.0.9.Final").map(bindDependency())),
    sources = true
  )().toSeq

  os.proc("jar", "xf", sourceJar.path).call(cwd = T.dest  / "src" / "main" / "c")

  os.proc("make").call(
    cwd = T.dest,
    env = Map(
      "CC" -> "clang",
      "AR" -> "ar",
      "JNI_PLATFORM" -> "darwin",
      "LIB_DIR" -> "lib-out",
      "OBJ_DIR" -> "obj-out",
      "MACOSX_DEPLOYMENT_TARGET" -> "10.9",
      "CFLAGS" -> Seq(
        "-mmacosx-version-min=10.9", "-O3", "-Werror", "-Wno-attributes", "-fPIC",
        "-fno-omit-frame-pointer", "-Wunused-variable", "-fvisibility=hidden",
        "-I" + sys.props("java.home") + "/include/",
        "-I" + sys.props("java.home") + "/include/darwin",
        "-I" + sys.props("java.home") + "/include/linux",
      ).mkString(" "),
      "LD_FLAGS" -> "-Wl,--no-as-needed -lrt -Wl,-platform_version,macos,10.9,10.9",
      "LIB_NAME" -> "libnetty-unix-common"
    )
  )

  (PathRef(T.dest / "lib-out"), PathRef(T.dest / "obj-out"))
}

In Mill, we define the makefile, cSources, cHeaders, and make tasks. The bulk of the logic is in def make, which prepares the makefile and C sources, resolves the netty-jni-util source jar and unpacks it with jar xf, and calls make with the given environment variables. Both cHeaders and the output of make are used in downstream modules.
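As a hedged illustration of what that downstream use could look like, a module that needs the resulting static library might copy the make output into its resources so it ends up inside the jar. The path layout here is invented for illustration and is not the actual Netty packaging logic:

// Hypothetical downstream wiring, inside a module that depends on the make task above
def resources = T {
  os.copy(make()._1.path, T.dest / "META-INF" / "native", createFolders = true)
  super.resources() ++ Seq(PathRef(T.dest))
}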

Again, the Maven XML and the Mill code contain exactly the same logic, and neither is much more concise or verbose than the other. Rather, what is interesting is that it is much easier to work with this kind of build logic via direct code than by configuring a bunch of third-party plugins to try and achieve what you want.

Conclusion

Both the Mill and Maven builds we discussed in this case study do the same thing: they compile Java code, package it into jar files, and run tests. Sometimes they also compile and link C code, call make, or run Groovy scripts. Mill doesn’t try to do more than Maven does, but it tries to do it better: faster compiles, shorter and easier-to-read configs, and easier extensibility via libraries (e.g. org.codehaus.groovy:groovy) and subprocesses (e.g. make).

This case study demonstrates that Mill can handle a large, real-world Java codebase like Netty: clean builds 5-10x faster than Maven, a build config an order of magnitude more concise, and extensibility through direct code and ordinary libraries rather than opaque third-party plugins.