Scala Module Configuration
This page goes into more detail about the various configuration options for ScalaModule. Many of the APIs covered here are listed in the Scaladoc.
Compilation & Execution Flags
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.13.8"
def scalacOptions = Seq("-Ydelambdafy:inline")
def forkArgs = Seq("-Xmx4g", "-Dmy.jvm.property=hello")
def forkEnv = Map("MY_ENV_VAR" -> "WORLD")
}
You can pass flags to the Scala compiler via scalacOptions. By default, run runs the compiled code in a subprocess, and you can pass in JVM flags via forkArgs or environment variables via forkEnv.
You can also run your code via
mill foo.runLocal
which runs it in-process within an isolated classloader. This may be faster since you avoid the JVM startup, but it does not support forkArgs or forkEnv.
If you want to pass main-method arguments to run or runLocal, simply pass them after foo.run/foo.runLocal:
mill foo.run arg1 arg2 arg3
mill foo.runLocal arg1 arg2 arg3
> ./mill run
hello WORLD
Adding Ivy Dependencies
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.12.17"
def ivyDeps = Agg(
ivy"com.lihaoyi::upickle:3.1.0",
ivy"com.lihaoyi::pprint:0.8.1",
ivy"${scalaOrganization()}:scala-reflect:${scalaVersion()}"
)
}
You can define the ivyDeps
field to add ivy dependencies to your module.
- Single : syntax (e.g. ivy"org.testng:testng:6.11") defines Java dependencies
- Double :: syntax (e.g. ivy"com.lihaoyi::upickle:0.5.1") defines Scala dependencies
- Triple ::: syntax (e.g. ivy"org.scalamacros:::paradise:2.1.1") defines dependencies cross-published against the full Scala version, e.g. 2.12.4 instead of just 2.12. These are typically Scala compiler plugins or similar.
To select the test-jars from a dependency, use the following syntax:
- ivy"org.apache.spark::spark-sql:2.4.0;classifier=tests"
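For example, here is a minimal sketch of how that classifier syntax could appear in a module (the module name spark is hypothetical and the versions are purely illustrative):
import mill._, scalalib._

object spark extends ScalaModule {
  def scalaVersion = "2.12.17"
  def ivyDeps = Agg(
    // the regular artifact
    ivy"org.apache.spark::spark-sql:2.4.0",
    // the test-jar of the same artifact, selected via the classifier attribute
    ivy"org.apache.spark::spark-sql:2.4.0;classifier=tests"
  )
}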
Please consult the Library Dependencies in Mill section for even more details.
> ./mill run i am cow
pretty-printed using PPrint: Array("i", "am", "cow")
serialized using uPickle: ["i","am","cow"]
Runtime and Compile-time Dependencies
If you want to use additional dependencies at runtime or override dependencies and their versions at runtime, you can do so with runIvyDeps.
import mill._, scalalib._
object foo extends ScalaModule {
def scalaVersion = "2.13.8"
def moduleDeps = Seq(bar)
def runIvyDeps = Agg(
ivy"javax.servlet:servlet-api:2.5",
ivy"org.eclipse.jetty:jetty-server:9.4.42.v20210604",
ivy"org.eclipse.jetty:jetty-servlet:9.4.42.v20210604"
)
def mainClass = Some("bar.Bar")
}
You can also declare compile-time-only dependencies with compileIvyDeps. These are present in the compile classpath, but will not be propagated to the transitive dependencies.
object bar extends ScalaModule {
def scalaVersion = "2.13.8"
def compileIvyDeps = Agg(
ivy"javax.servlet:servlet-api:2.5",
ivy"org.eclipse.jetty:jetty-server:9.4.42.v20210604",
ivy"org.eclipse.jetty:jetty-servlet:9.4.42.v20210604"
)
}
Typically, Mill assumes that a module with compile-time dependencies will only be run after someone includes the equivalent run-time dependencies in a later build step. E.g. in the case above, bar defines the compile-time dependencies, and foo then depends on bar and includes the runtime dependencies. That is why we can run foo as shown below:
> ./mill foo.runBackground
> curl http://localhost:8079
<html><body>Hello World!</body></html>
Compile-time dependencies are translated to provided-scoped dependencies when published to Maven or Ivy repositories.
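As a rough sketch of what that means in practice, a published module that declares compileIvyDeps would emit those dependencies with provided scope in its generated POM. The module name, version, and POM metadata below are purely illustrative assumptions:
import mill._, scalalib._
import mill.scalalib.publish._

object baz extends ScalaModule with PublishModule {
  def scalaVersion = "2.13.8"
  def publishVersion = "0.0.1" // illustrative version
  def pomSettings = PomSettings(
    description = "Example module",      // all metadata here is placeholder
    organization = "com.example",
    url = "https://example.com/baz",
    licenses = Seq(License.MIT),
    versionControl = VersionControl.github("example", "baz"),
    developers = Seq(Developer("example", "Example Dev", "https://github.com/example"))
  )
  // ends up as a provided-scoped dependency in the published POM
  def compileIvyDeps = Agg(ivy"javax.servlet:servlet-api:2.5")
}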
Keeping up-to-date with Scala Steward
It’s always a good idea to keep your dependencies up-to-date.
If your project is hosted on GitHub, GitLab, or Bitbucket, you can use Scala Steward to automatically open a pull request to update your dependencies whenever there is a newer version available.
Scala Steward can also keep your Mill version up-to-date.
Test Dependencies
Mill has no dedicated test scope for dependencies.
You might be used to test-scoped dependencies from other build tools like Maven, Gradle or sbt. As test modules in Mill are just regular modules, there is no special need for a dedicated test scope. You can use ivyDeps and runIvyDeps to declare dependencies in test modules, and test modules can use their moduleDeps to also depend on each other.
import mill._, scalalib._
object qux extends ScalaModule {
def scalaVersion = "2.13.8"
def moduleDeps = Seq(baz)
object test extends ScalaTests {
def ivyDeps = Agg(ivy"com.lihaoyi::utest:0.7.11")
def testFramework = "utest.runner.Framework"
def moduleDeps = super.moduleDeps ++ Seq(baz.test)
}
}
object baz extends ScalaModule {
def scalaVersion = "2.13.8"
object test extends ScalaTests {
def ivyDeps = Agg(ivy"com.lihaoyi::utest:0.7.11")
def testFramework = "utest.runner.Framework"
}
}
In this example, not only does qux depend on baz, but we also make qux.test depend on baz.test. That lets qux.test make use of the BazTestUtils class that baz.test defines, allowing us to re-use this test helper throughout multiple modules' test suites.
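The helper itself is not shown in the build file above. Here is a hypothetical sketch of what baz.test's BazTestUtils might look like; the method body is an assumption, and only the printed message is taken from the output below:
// baz/test/src/BazTestUtils.scala (hypothetical sketch)
package baz

object BazTestUtils {
  // a shared assertion helper that other modules' test suites can re-use
  def bazAssertEquals[T](expected: T, actual: T): Unit = {
    println("Using BazTestUtils.bazAssertEquals")
    Predef.assert(expected == actual, s"expected $expected but got $actual")
  }
}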
> ./mill qux.test
Using BazTestUtils.bazAssertEquals
... qux.QuxTests.simple ...
...
> ./mill baz.test
Using BazTestUtils.bazAssertEquals
... baz.BazTests.simple ...
...
Scala Compiler Plugins
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.13.8"
def compileIvyDeps = Agg(ivy"com.lihaoyi:::acyclic:0.3.6")
def scalacOptions = Seq("-P:acyclic:force")
def scalacPluginIvyDeps = Agg(ivy"com.lihaoyi:::acyclic:0.3.6")
}
You can use Scala compiler plugins by setting scalacPluginIvyDeps
. The above
example also adds the plugin to compileIvyDeps
, since that plugin’s artifact
is needed on the compilation classpath (though not at runtime).
Remember that compiler plugins are published against the full Scala version (e.g. 2.13.8 instead of just 2.13), so when including them make sure to use the ::: syntax shown in the example above.
> ./mill compile
...
error: Unwanted cyclic dependency
error: ...src/Foo.scala...
error: def y = Bar.z
error: ...src/Bar.scala...
error: def x = Foo.y
Scaladoc Config
To generate API documentation you can use the docJar task on the module you’d like to create the documentation for, configured via scalaDocOptions or javadocOptions:
import mill._, scalalib._
object foo extends ScalaModule {
def scalaVersion = "3.1.3"
def scalaDocOptions = Seq("-siteroot", "mydocs", "-no-link-warnings")
}
> ./mill show foo.docJar
> unzip -p out/foo/docJar.dest/out.jar foo/Foo.html
...
...My Awesome Docs for class Foo...
When using Scala 3 you’re also able to use Scaladoc to generate a full static site next to your API documentation. This can include general documentation for your project and even a blog. While you can find the full documentation for this in the Scala 3 docs, below you’ll find some useful information to help you generate this with Mill.
By default, Mill will consider the site root, as it’s called in the Scala 3 docs, to be the value of docResources(). It will look there for your _docs/ and your _blog/ directories, if any exist. Given a project called bar:
object bar extends ScalaModule {
def scalaVersion = "3.1.3"
}
Your project structure for this would look something like this:
.
├── build.sc
├── bar
│   ├── docs
│   │   ├── _blog
│   │   │   ├── _posts
│   │   │   │   └── 2022-08-14-hello-world.md
│   │   │   └── index.md
│   │   └── _docs
│   │       ├── getting-started.md
│   │       ├── index.html
│   │       └── index.md
│   └── src
│       └── example
│           └── Hello.scala
After generating your docs with mill bar.docJar, you’ll find that opening out/bar/docJar.dest/javadoc/index.html locally in your browser gives you a full static site including your API docs, your blog, and your documentation.
> ./mill show bar.docJar
> unzip -p out/bar/docJar.dest/out.jar bar/Bar.html
...
...<p>My Awesome Docs for class Bar</p>...
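If your documentation sources do not live in the default docs/ folder, you should be able to point the site root elsewhere by overriding docResources. A minimal sketch, assuming a hypothetical site/ folder:
import mill._, scalalib._

object bar extends ScalaModule {
  def scalaVersion = "3.1.3"
  // assumption: the _docs/ and _blog/ folders live under site/ instead of docs/
  def docResources = T.sources(millSourcePath / "site")
}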
Unmanaged Jars
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.13.8"
def unmanagedClasspath = T {
if (!os.exists(millSourcePath / "lib")) Agg()
else Agg.from(os.list(millSourcePath / "lib").map(PathRef(_)))
}
}
You can override unmanagedClasspath
to point it at any jars you place on the
filesystem, e.g. in the above snippet any jars that happen to live in the
lib/
folder.
> ./mill run '{"name":"John","age":30}' # mac/linux
Key: name, Value: John
Key: age, Value: 30
Specifying the Main Class
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.13.8"
def mainClass = Some("foo.Qux")
}
Mill’s foo.run
by default will discover which main class to run from your
compilation output, but if there is more than one or the main class comes from
some library you can explicitly specify which one to use. This also adds the
main class to your foo.jar
and foo.assembly
jars.
> ./mill run
Hello Qux
Downloading Non-Maven Jars
import mill._, scalalib._
object foo extends RootModule with ScalaModule {
def scalaVersion = "2.13.8"
def unmanagedClasspath = T {
os.write(
T.dest / "fastjavaio.jar",
requests.get.stream(
"https://github.com/williamfiset/FastJavaIO/releases/download/1.1/fastjavaio.jar"
)
)
Agg(PathRef(T.dest / "fastjavaio.jar"))
}
}
You can also override unmanagedClasspath
to point it at jars that you want to
download from arbitrary URLs. Note that targets like unmanagedClasspath
are
cached, so your jar is downloaded only once and re-used indefinitely after that.
> ./mill run "textfile.txt" # mac/linux
I am cow
hear me moo
I weigh twice as much as you
Customizing the Assembly
import mill._, scalalib._
import mill.scalalib.Assembly._
object foo extends ScalaModule {
def moduleDeps = Seq(bar)
def scalaVersion = "2.13.8"
def ivyDeps = Agg(ivy"com.lihaoyi::os-lib:0.9.1")
def assemblyRules = Seq(
// all application.conf files will be concatenated into single file
Rule.Append("application.conf"),
// all *.conf files will be concatenated into single file
Rule.AppendPattern(".*\\.conf"),
// all *.temp files will be excluded from a final jar
Rule.ExcludePattern(".*\\.temp"),
// the `shapeless` package will be shaded under the `shade` package
Rule.Relocate("shapeless.**", "shade.shapeless.@1")
)
}
object bar extends ScalaModule {
def scalaVersion = "2.13.8"
}
When you make a runnable jar of your project with the assembly command, you may want to exclude some files from the final jar (like signature files and manifest files from library jars) and merge duplicated files (for instance reference.conf files from library dependencies).
By default, Mill excludes all *.sf, *.dsa, *.rsa, and META-INF/MANIFEST.MF files from the assembly, and concatenates all reference.conf files. You can also define your own merge/exclude rules.
> ./mill foo.assembly
> unzip -p ./out/foo/assembly.dest/out.jar application.conf
Bar Application Conf
Foo Application Conf
> java -jar ./out/foo/assembly.dest/out.jar
Loaded application.conf from resources:...
...Foo Application Conf
...Bar Application Conf
Repository Config
By default, dependencies are resolved from Maven Central, but you can add your own resolvers by overriding the repositoriesTask task in the module:
import mill._, scalalib._
import mill.define.ModuleRef
import coursier.maven.MavenRepository
val sonatypeReleases = Seq(
MavenRepository("https://oss.sonatype.org/content/repositories/releases")
)
object foo extends ScalaModule {
def scalaVersion = "2.13.8"
def ivyDeps = Agg(
ivy"com.lihaoyi::scalatags:0.12.0",
ivy"com.lihaoyi::mainargs:0.6.2"
)
def repositoriesTask = T.task {
super.repositoriesTask() ++ sonatypeReleases
}
}
Mill reads coursier config files automatically.
It is possible to set up a mirror with mirror.properties:
central.from=https://repo1.maven.org/maven2
central.to=http://example.com:8080/nexus/content/groups/public
Note these default config file locations:
- Linux: ~/.config/coursier/mirror.properties
- MacOS: ~/Library/Preferences/Coursier/mirror.properties
- Windows: C:\Users\<user_name>\AppData\Roaming\Coursier\config\mirror.properties
You can also set the environment variable COURSIER_MIRRORS or the JVM property coursier.mirrors to specify the config file location.
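For example, here is a sketch of running Mill with a mirror file in a non-default location (the path is only an illustration):
COURSIER_MIRRORS=/etc/coursier/mirror.properties ./mill resolve _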
To add custom resolvers to the initial bootstrap of the build, you can create a
custom ZincWorkerModule
, and override the zincWorker
method in your
ScalaModule
by pointing it to that custom object:
object CustomZincWorkerModule extends ZincWorkerModule with CoursierModule {
def repositoriesTask = T.task { super.repositoriesTask() ++ sonatypeReleases }
}
object bar extends ScalaModule {
def scalaVersion = "2.13.8"
def zincWorker = ModuleRef(CustomZincWorkerModule)
// ... rest of your build definitions
def repositoriesTask = T.task {super.repositoriesTask() ++ sonatypeReleases}
}
> ./mill foo.run --text hello
> ./mill bar.compile
Maven Central: Blocked!
Under some circumstances (e.g. corporate firewalls), you may not have access to Maven Central. The typical symptom will be error messages which look like this:
1 targets failed
mill.scalalib.ZincWorkerModule.classpath
Resolution failed for 1 modules:
--------------------------------------------
  com.lihaoyi:mill-scalalib-worker_2.13:0.11.1
    not found: C:\Users\partens\.ivy2\local\com.lihaoyi\mill-scalalib-worker_2.13\0.11.1\ivys\ivy.xml
    download error: Caught java.io.IOException (Server returned HTTP response code: 503 for URL: https://repo1.maven.org/maven2/com/lihaoyi/mill-scalalib-worker_2.13/0.11.1/mill-scalalib-worker_2.13-0.11.1.pom) while downloading https://repo1.maven.org/maven2/com/lihaoyi/mill-scalalib-worker_2.13/0.11.1/mill-scalalib-worker_2.13-0.11.1.pom
It is expected that basic commands (e.g. clean) will not work, as Mill is saying it is unable to resolve its own fundamental dependencies. Under such circumstances, you will normally have access to some proxy or other corporate repository which resolves Maven artefacts. The strategy is simply to tell Mill to use that instead.
The idea is to set the environment variable COURSIER_REPOSITORIES (see the coursier docs). The command below sets the environment variable for the current shell and then runs the mill command.
COURSIER_REPOSITORIES=https://packages.corp.com/artifactory/maven/ mill resolve _
If you are using millw, a more permanent solution could be to set the environment variable at the top of the millw script, or as a user environment variable etc.
Scoverage
import mill._, scalalib._
import $ivy.`com.lihaoyi::mill-contrib-scoverage:`
import mill.contrib.scoverage._
object foo extends RootModule with ScoverageModule {
def scoverageVersion = "2.1.0"
def scalaVersion = "2.13.11"
def ivyDeps = Agg(
ivy"com.lihaoyi::scalatags:0.12.0",
ivy"com.lihaoyi::mainargs:0.6.2"
)
object test extends ScoverageTests /*with TestModule.Utest */{
def ivyDeps = Agg(ivy"com.lihaoyi::utest:0.7.11")
def testFramework = "utest.runner.Framework"
}
}
This is a basic Mill build for a single ScalaModule, enhanced with the Scoverage plugin. The root module extends ScoverageModule and specifies the Scoverage version to use: 2.1.0. This version can be changed if there is a newer one. Now you can call the scoverage targets to produce coverage reports.
The sub test module extends ScoverageTests to transform the execution of the various testXXX targets to use scoverage and produce coverage data.
This lets us perform the coverage operations, but you must first run the tests: run ./mill test, then ./mill scoverage.consoleReport to get your coverage report in your console output.
You can download this example project using the download link above
if you want to try out the commands below yourself. The only requirement is
that you have some version of the JVM installed; the ./mill
script takes
care of any further dependencies that need to be downloaded.
> ./mill test # Run the tests and produce the coverage data
...
+ foo.FooTests.simple ... <h1>hello</h1>
+ foo.FooTests.escaping ... <h1><hello></h1>
> ./mill resolve scoverage._ # List what tasks are available to run from scoverage
...
scoverage.consoleReport
...
scoverage.htmlReport
...
scoverage.xmlCoberturaReport
...
scoverage.xmlReport
...
> ./mill scoverage.consoleReport
...
Statement coverage.: 16.67%
Branch coverage....: 100.00%
Unidoc
import mill._, scalalib._
object foo extends ScalaModule with UnidocModule {
def scalaVersion = "2.13.8"
def moduleDeps = Seq(bar, qux)
object bar extends ScalaModule{
def scalaVersion = "2.13.8"
}
object qux extends ScalaModule {
def scalaVersion = "2.13.8"
def moduleDeps = Seq(bar)
}
def unidocVersion = Some("0.1.0")
def unidocSourceUrl = Some("https://github.com/lihaoyi/test/blob/master")
}
This example demonstrates the use of mill.scalalib.UnidocModule. This can be mixed into any ScalaModule, and generates a combined Scaladoc for the module and all its transitive dependencies. Two targets are provided:
- .unidocLocal: this generates a site suitable for local browsing. If unidocSourceUrl is provided, the scaladoc provides links back to the local sources.
- .unidocSite: this generates a site suitable for publishing on a static web host. If unidocSourceUrl is provided, the scaladoc provides links back to the sources as browsable from the unidocSourceUrl base (e.g. on Github).
> ./mill show foo.unidocLocal
".../out/foo/unidocLocal.dest"
> cat out/foo/unidocLocal.dest/foo/Foo.html
...
...My Eloquent Scaladoc for Foo...
> cat out/foo/unidocLocal.dest/foo/qux/Qux.html
...
...My Excellent Scaladoc for Qux...
> cat out/foo/unidocLocal.dest/foo/bar/Bar.html
...
...My Lucid Scaladoc for Bar...
> ./mill show foo.unidocSite
Reformatting your code
Mill supports code formatting via scalafmt out of the box.
To enable formatting per-module, make your module extend mill.scalalib.scalafmt.ScalafmtModule:
build.sc
import mill._, scalalib._, scalafmt._
object foo extends ScalaModule with ScalafmtModule {
def scalaVersion = "2.13.14"
}
Now you can reformat code with the mill foo.reformat command, or only check for misformatted files with mill foo.checkFormat.
You can also reformat your project’s code globally with the mill mill.scalalib.scalafmt.ScalafmtModule/reformatAll __.sources command, or only check the code’s format with mill mill.scalalib.scalafmt.ScalafmtModule/checkFormatAll __.sources. These commands process all sources that match the __.sources query.
If you add a .scalafmt.conf file at the root of your project, it will be used to configure formatting. It can contain a version key to specify the scalafmt version used to format your code. See the scalafmt configuration documentation for details.
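For instance, a minimal .scalafmt.conf might look like the following. The version number and column width here are only illustrations; pick the scalafmt release and settings you want:
version = "3.7.15"
maxColumn = 100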
Using the Ammonite Repl / Scala console
All ScalaModules have a console and a repl target, to start a Scala console or an Ammonite Repl.
When using the console, you can configure its scalac options using the consoleScalacOptions target.
For example, you may want to inherit all of your regular scalacOptions but disable -Xfatal-warnings:
consoleScalacOptions to disable fatal warnings
import mill._, scalalib._
object foo extends ScalaModule {
def consoleScalacOptions = scalacOptions().filterNot(o => o == "-Xfatal-warnings")
}
To use the repl, you can (and sometimes need to) customize the Ammonite version to work with your selected Scala version. Mill provides a default Ammonite version, but depending on the Scala version you are using, there may be no matching Ammonite release available. In order to start the repl, you may have to specify a different available Ammonite version.
ammoniteVersion to select a release compatible with the scalaVersion
import mill._, scalalib._
object foo extends ScalaModule {
def scalaVersion = "2.12.6"
def ammoniteVersion = "2.4.0"
}
Why is Ammonite tied to the exact Scala version? This is because Ammonite depends on the Scala compiler. In contrast to the Scala library, compiler releases do not guarantee any binary compatibility between releases. As a consequence, Ammonite needs full Scala-version-specific releases. The older your Mill version or the newer the Scala version you want to use, the higher the risk that the default Ammonite version will not match.
Disabling incremental compilation with Zinc
By default all `ScalaModule`s use incremental compilation via Zinc to only recompile sources that have changed since the last compile, or ones that have been invalidated by changes to upstream sources.
If for any reason you want to disable incremental compilation for a module, you can override and set zincIncrementalCompilation to false:
build.sc
import mill._, scalalib._
object foo extends ScalaModule {
def zincIncrementalCompilation = false
}