
Blazing Fast Android Builds

At Google I/O today, the Android Studio team released the first preview version of the Android Gradle plugin 3.0, based on Gradle 4.0 M2. It brings major performance improvements, especially for builds with many subprojects. In this blog post, we will explain what you can expect from this preview version and how the Android Studio and Gradle teams achieved these improvements. Before diving in, let’s look back at the goals that led to the creation of the current Android build system.

The Complexity of Mobile Development

Developing mobile applications is inherently more complex than building traditional web or server applications of similar size. An app needs to support a wide array of devices with different peripherals, different screen sizes, and comparatively slow hardware. The popular freemium model adds another layer of variety, requiring different code paths for free and paid versions of the app. In order to provide a fast, slim app for every device and target audience, the build system needs to do a lot of the heavy lifting up front.

To improve developer productivity and reduce runtime overhead, the Android build tools support several languages and source generators, e.g. Java, RenderScript, AIDL, and native code. Packaging an app together with its libraries involves highly customizable merging and shrinking steps. The Android Studio team was faced with the challenge of automating all of this without exposing the underlying complexity, so that developers can focus on writing their production code.

Last but not least, developers expect a build tool to manage their dependencies, be extensible and provide deep IDE integration.

Gradle is ideally suited for those challenges and the Android Studio team created a fantastic Android build tool on top of the Gradle platform.

The performance challenge

No matter how elegant and extensible the plugin, and no matter how seamless the IDE integration, when things take too long, developers become unproductive and frustrated. The Android Studio team has made steady progress on performance over the past few years. The emulators became much faster, and the time to deploy an app decreased by orders of magnitude with Instant Run and other improvements. These steps have now exposed the build itself as the final bottleneck. The Android Studio team and the Gradle team have continuously improved the performance of the plugin and the platform, but so far this has not been enough: fundamental design issues were preventing great performance.

So Gradle Inc. and Google teamed up in late 2016 to get this situation under control. The work was split up into three areas:

  • General improvements to Gradle and its Java support: Faster up-to-date checking, compile avoidance, stable incremental compilation and parallel dependency downloads.
  • General improvements to the Android tools, like dex and code shrinking, including incremental dexing.
  • New APIs for variant aware dependency management in Gradle and an Android plugin that uses these new APIs.

The latter allowed the Android Studio team to finally get rid of a lot of inefficient workarounds that they had to build because of these missing APIs.

To understand why variant aware dependency management is so important, imagine you have an app which depends on a single library. Both of them support ARM and x86 architectures, both have a free and a paid version and both of them can be built for debug and production. This creates a total of 8 variants. But at any given point in time, a developer is only working on exactly one variant, e.g. the “free x86 debug” variant.
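
As a sketch of how such a matrix is typically declared in the Android plugin’s DSL (the flavor and dimension names here are illustrative):

android {
    // two flavor dimensions combined with the default debug and release
    // build types yield 2 x 2 x 2 = 8 variants, e.g. freeX86Debug
    flavorDimensions "pricing", "abi"
    productFlavors {
        free { dimension "pricing" }
        paid { dimension "pricing" }
        x86  { dimension "abi" }
        arm  { dimension "abi" }
    }
}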

Up until now, the Android plugin had to inspect the app’s dependencies very early in the build lifecycle to select the right variant of the library to build. This early phase is called configuration time, during which Gradle determines what tasks it needs to run in what order. More work at configuration time means slower builds, no matter which tasks the user selected. It also affects how long it takes to synchronize the build with the IDE. The Android plugin’s eager dependency inspection led to a combinatorial explosion of configuration time as more subprojects were added to a build.

This completely changes with Gradle’s new variant aware dependency management. The Android plugin can now provide matching strategies for the different variant dimensions (like product flavor and build type), which Gradle uses during dependency resolution to select the correct variant of the upstream library. This completely removes the need to resolve dependencies at configuration time and also allows the Android plugin to only build the parts of the library that the app needs.

In a particularly large app with 130 subprojects, the time it took to configure the project dropped from 3 minutes to 10 seconds with the Android 2.3 tools, and to under 2 seconds with Android 3.0. The clean build time dropped from over 5 minutes to about 1 minute. The effect on incremental builds is dramatic when combined with the new compile avoidance functionality: making a single-line change and assembling the project is down to about 9 seconds. For monolithic projects these numbers won’t be as impressive, but they show that the build system now works very efficiently with modularized apps.

Android performance comparison

Last but not least, the Android Studio team is going to make the Android plugin 3.0 compatible with the Gradle build cache. The build cache allows build outputs to be reused across clean builds and across machine boundaries. This means that developers can reuse build outputs generated by CI, and build pipelines can reuse results from earlier stages. It also speeds up switching between feature branches on developer machines. Preliminary tests are promising: the clean build for the large Android app mentioned above dropped from 60 seconds to about 20 seconds when using the cache.

Give it a try

The Android Studio team has written up a comprehensive migration guide. There may be compatibility issues with community plugins, as many of them depended on internals that work differently now.

If you are developing Android projects, give the preview a try and tell us how much your build times improved out of the box. Try modularizing your app a bit more and splitting api and implementation dependencies for even bigger performance gains. You can use build scans and their timeline view to get deep insight into the performance of your build: which tasks were executed and how long they took.

If you are an Android plugin author, the new version might require some changes for your plugin to stay compatible. Please file an issue if you encounter any problems while migrating.

What’s next?

You can expect more improvements on the Gradle side. For instance, we are currently working on parallel task execution by default.

You can also expect more performance improvements from the Android Studio team, including optimizations that make Android Studio do as little work as possible when syncing a project. The Gradle and Android Studio teams are collaborating on this as well.

Support for community plugins will improve as the alpha versions mature and plugin authors adjust to the new APIs. The more people provide feedback, the faster these great improvements can be released as stable.

Introducing Gradle Build Cache Beta

Introduced in Gradle 3.5, the build cache is a new feature that reduces build time by reusing outputs produced by other builds.

What does it do?

The build cache reuses the outputs of Gradle tasks locally and shares task outputs between machines. In many cases, this will accelerate the average build time.

The build cache is complementary to Gradle’s incremental build features, which optimize build performance for consecutive local builds. Many Gradle tasks are designed to be incremental, so that if the inputs and outputs of the task do not change, Gradle can skip the task. Even when the task’s inputs have changed, some tasks can rebuild only the parts that have changed. Of course, these techniques only work if there are already outputs from previous local builds. In the past, building on fresh checkouts or executing “clean” builds required building everything from scratch again, even if the result of those builds had already been created locally or on another machine (such as the continuous integration server).

Now, Gradle uses the inputs of a task as a key to uniquely identify the outputs for a task. With the build cache feature enabled, if Gradle can find that key in a build cache, Gradle will skip task execution and directly copy the outputs from the cache into the build directory. This can be much faster than executing the task again.

In particular, if you’re using a continuous integration server, you can configure Gradle to push task outputs to a shared build cache. When a developer builds, task outputs already built on CI are copied to the developer’s machine. This can greatly improve the developer’s local build experience.

When using the local build cache, instead of rebuilding large parts of the project whenever you switch branches, Gradle can skip task execution and pull the previous outputs from the local cache.

How does it work?

A cacheable Gradle task is designed to declare everything that can affect the output of the task as an input. Gradle calculates a build cache key by hashing over all of the inputs to a task. That build cache key uniquely identifies the outputs of the task. This is an opt-in feature for each task implementation, so not every task is cacheable or needs to be. Several built-in Gradle tasks (JavaCompile, Test, Checkstyle) have caching enabled to speed up the typical Java project.
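
To illustrate, a custom task can opt in by declaring its inputs and outputs and marking itself cacheable. This is a minimal sketch; the task and property names are made up:

@CacheableTask
class GenerateVersionFile extends DefaultTask {
    @Input
    String version

    @OutputFile
    File outputFile

    @TaskAction
    void generate() {
        // identical inputs produce the same build cache key,
        // so the output can be reused instead of regenerated
        outputFile.text = "version=$version\n"
    }
}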

The build cache key for a task takes into account:

  • The values of all inputs defined by the task via annotations (e.g. @InputFiles) or the runtime TaskInputs API.
  • The contents (and relative paths) of any file inputs.
  • The classpath of the task, which includes any plugins and Gradle version used.
  • The classpath of any task actions, which can include the build script.

When the build cache feature is enabled, Gradle will check if any of the configured build caches contain a match for the task’s build cache key when a task is not up-to-date. If Gradle does not find a match, the task will be executed as normal. After execution, Gradle will gather all of the task’s outputs and push them to the build caches, if configured to do so. If Gradle does find a match, the task’s outputs are deleted and the previous outputs are copied into the output directories.

Does it help?

We have been using the build cache for the Gradle CI builds since November 2016. We also have some partners who have been trying the build cache in their builds. We can’t share their data directly, but they’ve seen similar improvements in CI and developer builds as we have. On average, we see a 25% reduction in total time spent building each commit, but some commits are even better (80% reduction) and the median build saw a 15% reduction.

Stage 3 %-Improved

Here’s another look at the number of minutes spent between the cached and non-cached builds for Gradle. You can see how the reduction translates into about 90 minutes saved in a 360-minute build for us.

Stage 3 comparison

The build cache is a generic feature that avoids re-executing a task when it can, so builds large and small can benefit in some way. The structure of your project will influence how much you can gain overall. If your project consists of a single monolithic module, Gradle has other features that may also help, such as incremental compilation or composite builds. We’ll provide more information about how to get the most out of the build cache in a future blog post and at the Gradle Summit.

Make your build faster today

The Gradle 3.5 release is the first release to include the build cache feature.

We expect the build cache feature to reach general availability in the next release, but we would like every project to give the build cache beta a try. To do that, we’d like you to try three things for us.

1) Try it on a simple project

After upgrading to 3.5, pick a simple Java project and run:

gradle --build-cache clean assemble
gradle --build-cache clean assemble

The second build should be faster because some task outputs are reused from the first build. These outputs will be pulled from your local build cache, which is located in a directory in your GRADLE_USER_HOME.

2) Try to share outputs between machines

To use a shared, remote cache, we provide a recommended configuration that uses your continuous integration builds to populate a shared build cache and allows all developers to pull from that build cache.

You’ll need a remote build cache backend to share between developers. We have put together an example nginx configuration you can use. We will also soon provide an integrated build cache backend in Gradle Enterprise.
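
For reference, here is a minimal sketch of such a setup in settings.gradle; the cache URL is a placeholder and the CI environment variable is an assumption about your CI tool:

// settings.gradle
buildCache {
    local {
        enabled = true
    }
    remote(HttpBuildCache) {
        url = 'https://ci.example.com/cache/'
        // only CI builds populate the shared cache; developers pull from it
        push = System.getenv('CI') != null
    }
}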

3) Give us feedback

If you have feedback, we’d love to hear it. If you have a build scan you can share, that’s even better.

We’re excited to get the Gradle Build Cache feature out for feedback in Gradle 3.5, but we know there’s more we need to do to make the build cache stable and performant. We have some known issues that you should check before raising new issues on GitHub.

At this time, we don’t recommend that you leave the build cache enabled for production builds without understanding the risks. There are known issues that can cause your builds to fail or produce incorrect output, but your feedback on the kinds of problems or successes you encounter is very valuable for maturing the build cache feature. You can configure the build cache in your build and enable it on a trial basis by setting org.gradle.caching=true or running with --build-cache, without impacting all builds.
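
For example, to enable it on a trial basis for a single project without changing the build scripts, one line in that project’s gradle.properties is enough:

# gradle.properties
org.gradle.caching=true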

For dogfooding the build cache for Gradle, we used a separate CI job to run a build with the build cache enabled. This allowed us to compare the build times with and without the build cache for the same set of changes.

Thanks and roadmap

After trying the build cache, you’ll probably have some questions about why more parts of your build are not cacheable. Regardless of the build cache backend you are using, Gradle Enterprise 2017.2 comes with features to help you understand build cache usage and behavior by collecting data, whether the build cache is enabled or not. Build scans keep track of the reason a task was not cached, for example because it declares no outputs or because caching is not enabled for it. You can search the build scan timeline for each of these reasons.

In future versions of Gradle and Gradle Enterprise, we’ll collect more information related to the build cache and task cacheability, to make it easier to diagnose build failures or poor build cache performance.

For the next release, Gradle 4.0, we intend to focus on making the build cache safe to enable for all well-behaved builds and on providing feedback for situations where Gradle cannot safely cache the outputs of a task. This also means we’ll be providing a well-behaved local build cache and several validation checks.

For the releases following that, we intend to spend time on expanding our documentation and Gradle guides to make it easier for you to cache more tasks and develop cache-friendly tasks.

Thanks for your continued help and support. Please consider making your build faster with the build cache with the three steps we outline above.

Announcing Gradle Enterprise 2017.1

We are excited to announce the release of Gradle Enterprise 2017.1. This release includes many new features and bug fixes, further expanding the build insights that build scans provide you and your team. Here are some of the highlights of this release. Contact us if you’re interested in a demo or a trial.

Easily find changes to dependencies between two builds

Dependency changes between builds can be a common source of problems. For example, upgrading a version of one library can unintentionally bring in different versions of transitive dependencies into your project. In turn, these newer versions can cause you all kinds of frustration by breaking compatibility with other libraries that your project uses.

The new build comparison feature allows you to quickly find dependency changes between builds, including differences in transitive dependencies.

You can easily select two builds to compare:

Select builds for dependency comparison

And quickly see the dependency differences between the two builds:

Dependency comparison

Visualize your build’s task execution with the timeline

When trying to make your build faster, it can be really helpful to know whether all available processors are utilized efficiently. Are there optimization opportunities, such as long-running tasks that could be split into smaller tasks and run in parallel? To find these optimization opportunities you first need to identify where the bottlenecks are in your build.

The new timeline feature gives you a visual representation of the tasks executed during your build. Using this visualization you can quickly identify bottleneck tasks in your build, places in your build where you could speed up execution by running more tasks in parallel, and other optimization opportunities.

Timeline

You can also filter tasks by name/path, type and more, making it easy to inspect and highlight particular tasks.

Try out the timeline with this example scan.

View dependency downloads

Time spent downloading dependencies can have a significant impact on your build time. The new “Network Activity” tab in the “Performance” section shows all the downloads triggered by dependency resolution in your build, including the size of each download and how long it took.

You can identify big or slow downloads that are dragging down your build speed. Are there downloads from slow remote repositories that you could cache on-site? Or large downloads that are no longer needed in your build and could be removed entirely?

Also, you can see the overall number of downloads in your build, total download size, and average download speed across the downloads to quickly gauge overall network performance during your build.

Network activity

This feature requires the upcoming Gradle version 3.5 and build scan plugin 1.6 or later.

See network activity on this example scan.

Integrate your build data with other systems

The new Export API provides a mechanism for consuming the raw build data that powers build scans. It is an HTTP interface based on Server-Sent Events (SSE) that supports real-time data integration. Libraries for consuming SSE streams are available for most programming languages.
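
As a quick way to inspect the stream, you can connect with any SSE-capable client. The sketch below uses curl; the server address is a placeholder, and the endpoint path is an assumption based on the samples repository:

$ curl -N -H 'Accept: text/event-stream' \
    'https://your-gradle-enterprise-server/build-export/v1/builds/since/0'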

The video below demonstrates a real-time build duration dashboard built on the Export API. The code for this is available as part of the gradle-enterprise-export-api-samples repository on GitHub.

See why a task wasn’t cacheable

Gradle 3.3 introduced the build cache feature, which saves you time by reusing task outputs from other builds without needing to execute the task on your machine. For a given task to use the build cache, certain conditions must be met. Gradle Enterprise now indicates which tasks are cacheable and not cacheable.

To give you the opportunity to make more tasks cacheable and improve your build performance, you can see the reasons why tasks were not cacheable. The “Settings and Suggestions” tab of the “Performance” section now indicates if there were tasks that were not cacheable.

Not-cacheable tasks suggestion

And in the new timeline view you can search for cacheable and non-cacheable tasks as well as see why individual tasks were not cacheable.

Not-cacheable task

This feature requires Gradle version 3.4 and build scan plugin 1.6 or later.

Try it out with this example scan.

Better understand task performance

Gradle can save you build time by not re-executing tasks that don’t need to be executed again. For example, tasks that are already up-to-date, or where the outputs can be pulled from the build cache.

The “Task Execution” tab of the “Performance” section summarizes which tasks were executed and which were avoided. The summary gives you an understanding of how cacheable your build currently is, making it easier for you to find optimization opportunities by tuning tasks to make them cacheable. You can also click from the summary into the timeline to see all tasks in a particular category.

Task execution breakdown

Try it out with this example scan.

Find builds by absence of a tag

You can annotate a build scan with one or more tags to easily categorize the build. For example, to indicate which builds were executed on your continuous integration server.

Previously you could find scans that had one or more specific tags, and now you can also do the inverse - find scans that don’t have a specific tag. To do that, use the not: prefix when searching tags. For example, if you tag all of your continuous integration builds with the “CI” tag, you can find all non-CI builds by searching not:CI.

Negative tag filtering

Please see this Custom Data in Build Scans post for more about how and when to use tags.

Find builds faster

Gradle Enterprise gives you the ability to find exactly the builds you need by filtering builds by project name, start time, outcome, and more. With the latest release, searching for build scans is now much faster, especially when you are searching through a large number of builds.

Try it today!

We hope you are as excited as we are about these great new features. Contact us today for a trial! You can also check out the release notes to see what else is new.

Incremental Compilation, the Java Library Plugin, and other performance features in Gradle 3.4

We are very proud to announce that the newly released Gradle 3.4 has significantly improved support for building Java applications, for all kinds of users. This post explains in detail what we fixed, improved, and added. We will in particular focus on:

  • Extremely fast incremental builds
  • The end of the dreaded compile classpath leakage

The improvements we made can dramatically improve your build times. Here’s what we measured:

The benchmarks are public, so you can try them out yourself; they are synthetic projects representing real-world issues reported by our users. In particular, what matters in a continuous development process is being incremental (making a small change should never result in a long build):

For those who work on a single project with lots of sources:

  • changing a single file, in a big monolithic project and recompiling
  • changing a single file, in a medium-sized monolithic project and recompiling

For multi-project builds:

  • making a change in an ABI-compatible way (change the body of a method, for example, but not method signatures) in a subproject, and recompiling
  • making a change in an ABI-incompatible way (change a public method signature, for example) in a subproject, and recompiling

For all those scenarios, Gradle 3.4 is much faster. Let’s see how we did this.

Compile avoidance for all

One of the greatest changes in Gradle 3.4 regarding Java support just comes for free: upgrade to Gradle 3.4 and benefit from compile avoidance. Compile avoidance is different from incremental compilation, which we will cover later. So what does it mean? It’s actually very simple. Imagine that your project app depends on project core, which itself depends on project utils:

In app:

public class Main {
   public static void main(String... args) {
        WordCount wc = new WordCount();
        wc.collect(new File(args[0]));
        System.out.println("Word count: " + wc.wordCount());
   }
}

In core:

public class WordCount {  // WordCount lives in project `core`
   // ...
   void collect(File source) {
       IOUtils.eachLine(source, WordCount::collectLine);
   }
}

In utils:

public class IOUtils { // IOUtils lives in project `utils`
    void eachLine(File file, Callable<String> action) {
        try {
            try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
                // ...
            }
        } catch (IOException e) {
            // ...
        }
    }
}

Then, change the implementation of IOUtils. For example, change the body of eachLine to introduce the expected charset:

public class IOUtils { // IOUtils lives in project `utils`
    void eachLine(File file, Callable<String> action) {
        try {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), "utf-8") )) {
                // ...
            }
        } catch (IOException e) {
            // ...
        }
    }
}

Now rebuild app. What happens? Until now, utils had to be recompiled, but then it also triggered the recompilation of core and eventually app, because of the dependency chain. It sounds reasonable at first glance, but is it really?

What changed in IOUtils is purely an internal detail. The implementation of eachLine changed, but its public API didn’t. Any class file previously compiled against IOUtils is still valid. Gradle is now smart enough to realize that. This means that if you make such a change, Gradle will only recompile utils, and nothing else! And while this example may sound simple, it’s actually a very common pattern: typically, a core project is shared by many subprojects, and each subproject has dependencies on different subprojects. A change to core would trigger a recompilation of all projects. With Gradle 3.4 this will no longer be the case, meaning that it recognizes ABI (Application Binary Interface) breaking changes, and will trigger recompilation only in that case.

This is what we call compile avoidance. But even in cases where compilation cannot be avoided, Gradle 3.4 makes things much faster with the help of incremental compilation.

Improved incremental compilation

For years, Gradle has supported an experimental incremental compiler for Java. In Gradle 3.4, not only is this compiler stable, but we also have significantly improved both its robustness and performance! Use it now: we’re going to make it the default soon! To enable Java incremental compilation, all you need to do is to set it on the compile options:

tasks.withType(JavaCompile) {
   options.incremental = true // one flag, and things will get MUCH faster
}

If we add the following class in project core:

public class NGrams {  // NGrams lives in project `core`
   // ...
   void collect(String source, int ngramLength) {
       collectInternal(StringUtils.sanitize(source), ngramLength);
   }
   // ...
}

and this class in project utils:

public class StringUtils {
   static String sanitize(String dirtyString) { ... }
}

Imagine that we change the class StringUtils and recompile our project. We can easily see that we only need to recompile StringUtils and NGrams, but not WordCount. NGrams is a dependent class of StringUtils, whereas WordCount doesn’t use StringUtils, so why would it need to be recompiled? This is what the incremental compiler does: it analyzes the dependencies between classes, and only recompiles a class when it has changed, or when one of the classes it depends on has changed.

Those of you who have already tried the incremental Java compiler before may have seen that it wasn’t very smart when a changed class contained a constant. For example, this class contains a constant:

public class SomeClass {
    public static final int MAGIC_NUMBER = 123;
}

If this class was changed, then Gradle gave up and recompiled not just all the classes of that project, but also all the classes in projects that depend on that project. If you wonder why, you have to understand that the Java compiler inlines constants like this one. So when we analyze the result of compilation and the bytecode of a class contains the literal 123, we have no idea where the literal was defined. It could be in the class itself, or in a constant of any dependency found anywhere on its classpath. In Gradle 3.4, we made that behavior much smarter: we only recompile classes which could potentially be affected by the change. In other words, if the class is changed but the constant is not, we don’t need to recompile. Similarly, if the constant is changed but a dependent class didn’t have a literal of the old value in its bytecode, we don’t need to recompile it: we only recompile the classes that have candidate literals. This also means that not all constants are born equal: a constant value of 0 is much more likely to trigger a full recompilation when changed than a constant value of 188847774.

Our incremental compiler is also now backed with in-memory caches that live in the Gradle daemon across builds, and thus make it significantly faster than it used to be: extracting the ABI of a Java class is an expensive operation that used to be cached, but on disk only.

If you combine all those incremental compilation improvements with the compile avoidance that we described earlier in this post, Gradle is now really fast when recompiling Java code. Even better, it also works for external dependencies. Imagine that you upgrade from foo-1.0.0 to foo-1.0.1. If the only difference between the two versions of the library is, for example, a bugfix, and the API hasn’t changed, compile avoidance will kick in and this change in an external dependency will not trigger a recompile of your code. If the new version of the external dependency has a modified public API, Gradle’s incremental compiler will analyze the dependencies of your project on individual classes of the external dependency, and only recompile where necessary.

About annotation processors

Annotation processors are a very powerful mechanism that allows generation of code just by annotating sources. Typical use cases include dependency injection (Dagger) or boilerplate code reduction (Lombok, Autovalue, Butterknife, …). However, using annotation processors can have a very negative impact on the performance of your builds.

What does an annotation processor do?

Basically, an annotation processor is a Java compiler plugin. It is triggered whenever the Java compiler recognizes an annotation that is handled by a processor. From the build tool point of view, it’s a black box: we don’t know what it’s going to do, in particular what files it’s going to generate, and where.

Therefore, whenever the annotation processor implementation changes, Gradle needs to recompile everything. That is not so bad by itself, as this probably doesn’t happen very often. But for reasons explained below, things are much worse: Gradle has to disable compile avoidance when annotation processors are not declared explicitly. But first, let’s understand what’s going on. Today, annotation processors are typically added to the compile classpath.

While Gradle can detect which jar contains annotation processors, what it cannot detect is which other jars on the compile classpath are used by the annotation processor implementation, since processors have dependencies of their own. That means potentially any change in the compile classpath may affect the behavior of the annotation processor in a way Gradle cannot understand. Therefore any change in the compile classpath will trigger a full recompile, and we are back to square one.

But there is a solution to this.

Explicitly declaring the annotation processor classpath

Should an annotation processor, which is a compiler plugin with its own external dependencies, influence your compile classpath? No: the dependencies of the annotation processor should never leak into your compile classpath. That’s why javac has a specific -processorpath option, which is distinct from -classpath. Here is how you can declare this with Gradle:

configurations {
    apt
}
dependencies {
    // The dagger compiler and its transitive dependencies will only be found on annotation processing classpath
    apt 'com.google.dagger:dagger-compiler:2.8'

    // And we still need the Dagger annotations on the compile classpath itself
    compileOnly 'com.google.dagger:dagger:2.8'
}

compileJava {
    options.annotationProcessorPath = configurations.apt
}

Here, we’re creating a configuration, apt, that will contain all the annotation processors we use, and therefore also their specific transitive dependencies. Then we set the annotationProcessorPath to this configuration. What this enables is threefold:

  • it disables automatic annotation processor detection on the compile classpath, making the task start faster (faster up-to-date checks)
  • it makes use of the -processorpath option of the Java compiler, properly separating compile dependencies from the annotation processing path
  • it enables compile avoidance: by explicitly saying that you use annotation processors, we can now make sure that everything found on the compile classpath contributes only binary interfaces

In particular, you will notice how Dagger cleanly separates its compiler from its annotations: we have dagger-compiler as an annotation processing dependency, and dagger (the annotations themselves) as a compile-only dependency.

However, some annotation processors, like Lombok, do not separate these concerns properly and ship their annotations and implementation in a single jar, which would leak implementation classes onto your classpath. Compile avoidance still works in this scenario: you just need to put the same jar on both the apt and compileOnly configurations, as shown below.
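
With Lombok, for example, that could look like this (the version number is only illustrative):

dependencies {
    // Lombok ships annotations and processor in a single jar,
    // so the same dependency goes on both paths
    apt 'org.projectlombok:lombok:1.16.16'
    compileOnly 'org.projectlombok:lombok:1.16.16'
}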

Incremental compile with annotation processors

As said above, with annotation processors, Gradle does not know which files they are going to generate, nor where and based on what conditions. Therefore Gradle disables the Java incremental compiler if annotation processors are in use, even if you declare them explicitly as we just did. It is however possible to limit the impact of this to the set of classes that really use annotation processors. In short, you can declare a separate source set, with its own compile task, that uses the annotation processor, and leave the other compile tasks without any kind of annotation processing: any change to a class that doesn’t use annotation processors will then benefit from incremental compilation, whereas any change to the sources that use annotations will trigger a full recompilation, but of that source set only. Here’s an example of how to do it:

configurations {
    apt
    aptCompile
}
dependencies {
    apt 'com.google.dagger:dagger-compiler:2.8'
    aptCompile 'com.google.dagger:dagger:2.8'
}

sourceSets {
   processed {
       compileClasspath += configurations.aptCompile
   }
   main {
       compileClasspath += sourceSets.processed.output
   }
}

compileProcessedJava {
    options.annotationProcessorPath = configurations.apt
}

In practice this may not be an easy split to perform, depending on how much the main sources depend on classes found in the processed source set. We are, however, exploring options to enable incremental compilation when annotation processors are present, which means that this shouldn’t be an issue in the future.

Java libraries

We at Gradle have been explaining for a long time why the Maven dependency model is broken, but it’s often hard to realize without a concrete example, because users just get used to the defect and deal with it as if it was natural. In particular, the pom.xml file is used both for building a component and for its publication metadata. Gradle has always worked differently, by having build scripts which are the “recipe” to build a component, and publications, which can be done to Maven, Ivy, or whatever other repositories you need to support. The publication contains metadata about how to consume the project, meaning that we clearly separate what you need to build a component from what you need as its consumer. Separating the two roles is extremely important, and it allows Gradle 3.4 to add a fundamental improvement to Java dependency management. There are multiple benefits you get with this new feature. One is better performance, as it complements the other performance features we have described above, but there are more.

We’ve all been doing it wrong

When building a Java project, there are two things being considered:

  • what do I need to compile the project itself?
  • what do I need at runtime to execute the project?

Which drives us naturally to declaring dependencies in two distinct scopes:

  • compile : the dependencies I need to compile the project
  • runtime : the dependencies I need to run the project

Maven and Gradle have both been using this model for years. But since the beginning, we knew it was wrong. This view is overly simplistic, because it doesn’t consider the consumers of your project. In particular, there are (at least) two kinds of projects in the Java world:

  • applications, which are standalone, executable, and don’t expose any API
  • libraries, which are used by other libraries, or other applications, as bricks to build software, and therefore expose an API

The problem with the simplistic approach of having two configurations (Gradle) or scopes (Maven) is that you don’t consider what is required in your API versus what is required by your implementation. In other words, you are leaking the compile dependencies of your component to downstream consumers.

Imagine that we are building an IoT application home-automation which depends on a heat-sensor library that has commons-math3.jar and guava.jar on its compile classpath. Then the compile classpath of home-automation will include commons-math3.jar and guava.jar. There are several consequences to this:

  • home-automation may start using classes from commons-math3.jar or guava.jar without really realizing they are transitive dependencies of heat-sensor (transitive dependency leakage).
  • the compile classpath of home-automation is bigger:
    • this increases the time spent on dependency resolution, up-to-date checking, classpath analysis and javac.
    • the new Gradle compile avoidance will be less efficient, because changes in the classpath are more likely to happen and compile avoidance will not kick in. Especially when you are using annotation processors, where Gradle’s incremental compilation is disabled, this comes at a high cost.
  • you are increasing the chances of dependency hell (different versions of the same dependency on the classpath)

But the worst issue is that if the usage of guava.jar is a purely internal detail of heat-sensor, and home-automation starts using it because it was found on the classpath, then it becomes very hard to evolve heat-sensor, because doing so would break consumers. The leakage of dependencies is a dreaded issue that leads to slowly evolving software and feature freeze, for the sake of backwards compatibility.

We know we’ve been doing this wrong, it’s time to fix it, and introduce the new Java Library plugin!

Introducing the Java Library plugin

Starting from Gradle 3.4, if you build a Java library, that is to say a component aimed at being consumed by other components (a component that is a dependency of another), then you should use the new Java Library plugin. Instead of writing:

apply plugin: 'java'

use:

apply plugin: 'java-library'

They both share a common infrastructure, but the java-library plugin exposes the concept of an API. Let’s migrate our heat-sensor library, which itself has 2 dependencies:

dependencies {
   compile 'org.apache.commons:commons-math3:3.6.1'
   compile 'com.google.guava:guava:21.0'
}

When you study the code in heat-sensor, you understand that commons-math3 is exposed in the public API, while guava is purely internal:

import com.google.common.collect.Lists;
import org.apache.commons.math3.stat.descriptive.SummaryStatistics;

public class HeatSensor {
    public SummaryStatistics getMeasures(int lastHours) {
         List<Measurement> measures = Lists.newArrayList(); // Google Guava is used internally, but doesn't leak into the public API
         // ...
         return stats;
    }
}

It means that if tomorrow, heat-sensor wants to switch from Guava to another collections library, it can do it without any impact to its consumers. But in practice, it’s only possible if we cleanly separate those dependencies into 2 buckets:

dependencies {
   api 'org.apache.commons:commons-math3:3.6.1'
   implementation 'com.google.guava:guava:21.0'
}

The api bucket is used to declare dependencies that should transitively be visible by downstream consumers when they are compiled. The implementation bucket is used to declare dependencies which should not leak into the compile classpath of consumers (because they are purely internal details).

Now, when a consumer of heat-sensor is compiled, it will only find commons-math3.jar on its compile classpath, not guava.jar. So if home-automation accidentally tries to use a class from Google Guava, it will fail at compile time, and the consumer needs to decide whether it really wants to introduce Guava as a dependency or not. On the other hand, if it tries to use a class from Apache Math3, which is an API dependency, then it will succeed, because API dependencies are absolutely required at compile time.
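
On the consumer side, nothing special is required. Here is a sketch of home-automation’s build script (the project path is illustrative):

// home-automation/build.gradle
dependencies {
    // only the api dependencies of heat-sensor (commons-math3) end up
    // on home-automation's compile classpath; guava stays hidden
    implementation project(':heat-sensor')
}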

Better POMs than Maven

So when does implementation matter? It matters at runtime only! This is why, now, the pom.xml file that Gradle generates whenever you choose to publish on a Maven repository is cleaner than what Maven can offer! Let’s look at what we generate for heat-sensor, using the maven-publish plugin:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.acme</groupId>
  <artifactId>heat-sensor</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-math3</artifactId>
      <version>3.6.1</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>21.0</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</project>

What you see is the pom.xml file that is published, and therefore used by consumers. And what does it say?

  • to compile against heat-sensor, you need commons-math3 on compile classpath
  • to run against heat-sensor, you need guava on runtime classpath

This is very different from having the same pom.xml for both compiling the component and consuming it, because to compile heat-sensor itself, you would need guava on the compile classpath. In short: Gradle generates better POM files than Maven, because it distinguishes between the producer and the consumer.

More use cases, more configurations

You might be aware of the compileOnly configuration that was introduced in Gradle 2.12, which can be used to declare dependencies which are only required when compiling a component, but not at runtime (a typical use case is libraries which are embedded into a fat jar or shadowed). The java-library plugin provides a smooth migration path from the java plugin: if you are building an application, you can continue to use the java plugin. Otherwise, if it’s a library, just use the java-library plugin. But in both cases:

  • instead of the compile configuration, use implementation
  • instead of the runtime configuration, use runtimeOnly to declare dependencies which should only be visible at runtime (see the sketch below)
  • to resolve the runtime of a component, use runtimeClasspath instead of runtime.
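
For the first two points, a minimal before-and-after sketch; the coordinates are just examples:

// before, with the java plugin
dependencies {
    compile 'com.google.guava:guava:21.0'
    runtime 'mysql:mysql-connector-java:5.1.41'
}

// after, with the java or java-library plugin
dependencies {
    implementation 'com.google.guava:guava:21.0'
    runtimeOnly 'mysql:mysql-connector-java:5.1.41'
}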

Impact on performance

To show you what the impact on performance can be, we added a benchmark which compares two scenarios:

  • making an ABI-compatible change in a library, then recompile
  • making an ABI-incompatible change in a library, then recompile

Only Gradle 3.4 supports the concept of a library, and therefore uses the Java Library plugin. And to make it even clearer, this benchmark does not use the incremental compiler (which would make things even faster; updates would almost be a no-op):

As you can see, in addition to better modelling, there’s a strong impact on performance!

Conclusion

Gradle 3.4 brings dramatic improvements to the Java ecosystem. Better incremental compilation and compile avoidance will significantly improve your productivity, while clean separation of API and implementation dependencies will avoid accidental leakage of dependencies and help you better model your software. Note that we have more goodness to come. In particular, separation of API and implementation is key to Java 9 success, with the awakening of Project Jigsaw. We’re going to add a way to declare what packages belong to your API, making it even closer to what Jigsaw will offer, but supported on older JDKs too.

In addition, Gradle 4.0 will ship with a build cache, which will strongly benefit from the improvements described in this post: it’s a mechanism which allows reusing, and sharing, the results of task execution on a local machine or over the network. Typical use cases include switching branches, or simply checking out a project which has already been built by a colleague or on CI. Said differently, if you, or someone else, has already built something you need, you would get it from the cache instead of having to build it locally. For this, the build cache needs to generate a cache key which, for a Java compile task, is typically sensitive to the compile classpath. The improvements that ship in 3.4 will make this cache key more likely to be hit, because we can ignore what is not relevant to consumers (only the ABI matters).

We encourage you to upgrade now, take a look at the documentation of the new Java Library plugin and discover all it can do for you!

Announcing Buildship 2.0

We are pleased to announce that version 2.0 of Buildship—our official Gradle support for Eclipse—is now available via the Eclipse Marketplace. This release adds support for composite builds, greatly reducing development turnaround time. The UI has been redesigned based on plenty of community feedback during the 1.x line. Project synchronization is now more accurate and project import requires one less step. We’ve added support for Gradle’s offline mode (thanks Rodrigue!), and last but not least, third-party integrators can take advantage of our new InvocationCustomizer extension point. Read on for details about each of these new features.

Composite build support

What is a composite build?

The composite build feature in Gradle allows you to handle several distinct Gradle builds as if they were one big multi-project build. This dramatically shortens the turnaround time when you need to work on several projects that are normally developed separately.

Let’s assume you have written a Java library lib, used by many of your applications. You find a bug which only manifests itself in the special-app. The traditional development workflow would be to change some code in lib and install a snapshot into the local repository. Then you would have to change the build script of special-app to use that new snapshot and check if the bug is actually fixed.

With composite builds, however, you can tell Gradle to treat both of these projects as one. This will let special-app depend directly on the output of the lib project.
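
If the library’s published coordinates cannot be inferred automatically, you can also declare the substitution explicitly in special-app’s settings.gradle. This is a sketch; the coordinates are illustrative:

// special-app/settings.gradle
includeBuild('../lib') {
    dependencySubstitution {
        // use the local lib project wherever the published jar is requested
        substitute module('com.acme:lib') with project(':')
    }
}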

You can learn more about composite builds in this introductory blog post.

Composite builds in the IDE

If you develop special-app you probably have it imported in Eclipse with lib referenced as a binary dependency.

Buildship workspace

There is not much difference between working with composite builds at the command line and working with them within Eclipse. To include lib you need only add an entry to your settings.gradle file, telling Gradle from which folder the additional build should be included.

includeBuild '../lib'

Then, to apply the changes right-click on the project and select Gradle > Refresh Project. After the synchronization finishes, you should see two things: the project from the included build is imported and the binary dependency is replaced with a project dependency.

Imported composite

Now, you can make changes to both projects with the benefit of complete IDE support: error markers, code completion, refactoring and more. Also, if you execute Gradle tests—or any other build task—from the Gradle Tasks view, the execution considers changes from the included builds.

Task execution with included builds

Limitations

When using composite builds from the IDE you should be aware of the following limitations:

  • Composite builds support only works if the imported project uses Gradle 3.3 or above.
  • Task execution is disabled on included builds due to a task addressing limitation in Gradle.
  • Including WTP projects into a composite is not supported.

Design overhaul

We updated the Buildship user interface to align it with current Gradle branding as well as with the Eclipse design guidelines. The icons are now distinguishable by color-blind people and work well with Eclipse’s dark theme. Finally, high-definition images have been put in place for use with High-DPI displays.

New Buildship design

Import wizard simplification

We removed JAVA_HOME, program arguments, and JVM arguments configuration from the import and new project wizards. Users can still configure these properties via the gradle.properties file.

More accurate project synchronization

In Buildship 1.x, if the project being imported had Eclipse descriptors, then a dialog was shown to determine whether those descriptors should be updated or deleted. This behavior was error-prone and distracting for users.

To avoid showing a dialog, we improved the project synchronization algorithm in the following way: if the Gradle version used by the project can provide a specific attribute (e.g. project natures), that attribute is completely overwritten. Manual modifications are kept only if Gradle provides no information about that attribute. This allows users of older Gradle versions to work around missing information in the model, while giving users of newer Gradle versions a much more consistent experience.

Offline mode support

Users can now set Buildship to work offline via the workspace preferences. Once enabled, all Gradle invocations will receive an extra --offline argument.

Offline mode support in preferences

InvocationCustomizer extension point

The InvocationCustomizer extension point enables Eclipse plugins to contribute extra arguments to Gradle builds. This allows integrators to add init scripts or control project properties from the IDE. For a sample implementation check out the Buildship documentation.

Breaking changes

This release introduces the following breaking changes:

  • Minimum Java version set to 1.7
  • Minimum Eclipse version is set to 4.2
  • Project renaming is disabled for projects that are located directly under the Eclipse workspace location.
  • Projects migrating from Eclipse Mars directly to Eclipse Oxygen need to be reimported
  • JAVA_HOME can no longer be configured on import, please use gradle.properties instead
  • Java arguments and Gradle properties can no longer be configured on import, please use gradle.properties instead

Installation

Buildship 2.0 is available from the Eclipse Marketplace or from the eclipse.org update sites. Please note that the update site URL has changed; therefore, no automatic update is available for this release.

Custom Data in Build Scans

Build scans are a great way to easily share data about your build, but what if your team wants to add their own data to those build scans? They can! In addition to the extensive information automatically captured in build scans, you can attach your own custom data to provide even deeper insights into your build. This custom data can take the form of tags, links, and arbitrary custom values in a key-value format.

By adding custom data to your build scans you can make it easy to find builds of a certain type, give quick links to the applicable source code commit on GitHub, add helpful CI build information, and much more. Then, when you share the single build scan link with a teammate, they get quick and easy access to a plethora of information about your build, making it easier for them to diagnose build environment issues, fix test failures, and so on.

If build scans are new to you, you can learn about them in our introductory blog post on the topic. You can also find more details in the Build Scans User Manual, explore some example build scans or experiment with this sample build scan project.

Now let’s go through some examples of adding custom data into your build scans (see the user manual for additional examples).

Tags

Let’s start with the simplest type of custom data: tags. Tags are a way to add simple pieces of metadata to your build scan. You can use tags to add context to your build, such as whether the build was run locally or on a CI server, whether the build had any local changes, the error type of a failing build, etc. Here is an example build scan that tags the build as having:

  • run on CI
  • come from the master branch
  • included local code changes (“dirty”)

For example, to attach a tag showing whether the build ran locally or on a CI server, you can add the following to its build script:

if (System.getenv("CI")) {
    buildScan.tag "CI"
} else {
    buildScan.tag "LOCAL"
}

The tag is then displayed under the project name when viewing the build scan:

Build scan tag

Links

In addition to tags, you can include links that readers of your build scan might find useful. For example, you could include a convenient link to the project source on GitHub or a link to the CI results of the Gradle build. This example build scan demonstrates what such links look like.

Let’s say your CI tool makes the build results URL available as an environment variable. You could grab that value and add it as a custom link by using the following code in your build script:

if (System.getenv("CI")) {
    buildScan.link "CI build", System.getenv("BUILD_URL")
}

You also have the flexibility to add a link to the current revision or commit of the project’s source code. The following example links the build scan to its corresponding commit on GitHub (as long as the Git command line tools are available):

String commitId = 'git rev-parse --verify HEAD'.execute().text.trim()

buildScan.link "Source", "https://github.com/gradle/gradle-build-scan-quickstart/tree/" + commitId

Links are displayed in the top section when viewing the build scan:

Build scan links

Custom values

Custom values can be used to make any information part of the build scan. In this example build scan, you can see the corresponding CI build date, CI build number and the name of the Git branch as custom values. These values are available when viewing the build scan or when searching for build scans in Gradle Enterprise. Let’s go through a couple of examples showing how you can add custom values to your build scan.

In our first example, we assume your CI tool injects build information into the build via environment variables. You could then use the following code in your build script to attach the build number and date to the build scan:

if (System.getenv("CI")) {
    buildScan.value "CI build number", System.getenv("BUILD_NUMBER")
    buildScan.value "CI build date", System.getenv("BUILD_DATE")
}

Since we are setting these custom values from inside a Gradle build script, you have the power to do things like run external commands to capture more information about the project status. For example, you could add the current Git branch of the build by running a Git command and setting a custom value with the result:

String branchName = 'git rev-parse --abbrev-ref HEAD'.execute().text.trim()

buildScan.value "Git branch", branchName

The custom values are displayed on the main page when viewing the build scan:

Build scan custom values

Command line

To give you greater flexibility in how you pass custom data to your build scan, you can also specify tags, links, and custom values on the command line. For example, you can quickly attach ad-hoc information to your build scan in order to:

  • help debug a specific local build failure
  • tag an experimental build
  • add CI-specific custom data without modifying your build script

You do this by specifying system properties with the appropriate names, as demonstrated by these examples:

$ gradle build -Dscan.tag.EXPERIMENTAL

$ gradle build -Dscan.link.buildUrl=$CI_BUILD_URL

$ gradle build -Dscan.value.buildNumber=$CI_BUILD_NUMBER

The first adds a tag named “EXPERIMENTAL”, the second adds a link titled “buildUrl”, and the third adds a custom value called “buildNumber”.

Searching based on custom data

When using build scans on-premises with Gradle Enterprise, you can search for build scans based on custom data such as tags and custom values. For example, you can search for all builds that ran on CI against the master branch using the terms shown in this screenshot:

Build scan search results

Live Demo

For a live demo of adding custom data with even more examples, check out this recent talk by Luke Daley and Etienne Studer at the Bay Area Gradle Users meetup. The video starts with an overview of build scans and dives into the details of adding custom data around the 22:30 mark.

Adding custom data to your build scans gives you the power and flexibility to augment your build scans with tags, links, or other data tailored to your team’s specific needs. Then you have even more information available to easily share with your teammates in a build scan—reducing the guesswork of diagnosing build failures. Happy scanning!

Save the Date: Free Gradle Training in January

Getting started with a new technology can be daunting. Learning the basics by reading manuals and blog posts and searching forums can be time-consuming. And getting a whole team up to speed is a challenge all its own. That’s why for years now, we’ve offered a range of Gradle training courses to help teams fast-track the process of learning and mastering Gradle.

Our flagship Introduction to Gradle course has always been our most popular, and in October we ran an experiment with it: we gave it away for free. That’s a steep discount from the usual price of $900 per seat, but we wanted to see just how many people we could help to learn Gradle if cost were not a factor.

We’re happy to report that this first free training was an overwhelming success, and we are even happier to announce that we’ll offer a second free Introduction to Gradle training on January 11th and 12th. The course will be led by Gradle veterans Gary Hale and Ben Muschko over two consecutive 4-hour training days.

In it, you and your colleagues will get everything you need to start creating and running your own Gradle builds with confidence. There will be plenty of hands-on labs and opportunities for Q&A with the instructors. We look forward to seeing you there!

Details:

  • When? 8:30am–12:30pm PST on January 11–12, 2017
  • Where? Online via Zoom webinar - Register Now

November 15th Bay Area Gradle Users Meetup: Recap and Videos

We’d like to thank everyone that came along to our Bay Area Gradle Users meetup last week, and we’d like to thank LinkedIn once again for hosting us—it was a great event! For those who were unable to attend for reasons of distance, time or anything else, we filmed both sessions and are delighted to make the videos available to everyone.

As described in that earlier blog post, Szczepan Faber and Hans Dockter talked in detail about Gradle’s new composite build feature:

In particular, Szczepan demonstrated the potential for working with multi-repository projects in an IDE as if they were part of the same multi-project build. You’ll find that in the first 10 minutes of the video.

That talk was followed by Luke Daley and Etienne Studer presenting the advantages of using custom data in your build scans:

It’s well worth setting aside some time to watch both of these if you can.

We hope you find these videos useful and we look forward to seeing many of you at the next meetup!

Webinar: Customizing Build Scan Data

For those of you who can’t make our Bay Area meetup on November 15th, we’re putting on a webinar a couple of days later that will cover one of the same topics: customizing build scan data. Even better, the webinar will be delivered by one of Gradle’s best: Mark Vieira! So come join us for a half-hour of valuable learning and discover how to maximize the benefit of your build scans.

Details:

  • When? 11:00 - 11:30am PT on November 17, 2016
  • Where? Online via Zoom webinar - Register Now

There will be an opportunity for Q&A and a recording of the webinar will be made available to attendees. We hope you’ll join us!

November 15th Bay Area Gradle Users Meetup

Everyone has an opportunity to engage with the Gradle team online through a variety of channels, but nothing beats meeting people face to face. If you are around the Silicon Valley area on November 15th, you can meet three of the team at the Bay Area Gradle Users meetup along with an expert user and build master from LinkedIn.

We have two great talks lined up, the first of which introduces you to an exciting new feature within Gradle—composite builds—while the second shows you how to get more out of your build scans with custom data.

Details:

  • Who? Hans Dockter, Szczepan Faber (LinkedIn), Luke Daley, Etienne Studer
  • When? 6:00pm PT on November 15, 2016
  • Where? LinkedIn, 605 W. Maude, Sunnyvale, CA
  • RSVP: If you plan to attend, please register beforehand via Eventbrite

We hope to see you there!

Hans Dockter and Szczepan Faber on Composite Builds

Many of you will be familiar with Gradle’s multi-project build support, which allows you to set up dependencies between projects, e.g. where the output of one project—say, a JAR file—is required by another. But this only works if the projects are part of the same multi-project build.

In this talk, Hans and Szczepan will explain how Gradle’s new support for composite builds extends this behavior to projects that are otherwise independent of one another. Want to test a local fix to a library one of your projects depends on? Now you can—without having to publish a new version of that library. Composite builds also enable structuring your projects in new ways, since you no longer need to keep all projects in one repository or directory hierarchy.

Hans won’t just talk about the topic, he’ll also show you how the feature works in practice. You’ll come away with a firm understanding of the value of composite builds and how you might put them to use in your own projects.

Luke Daley and Etienne Studer on Customizing Build Scan Data

Build scans already provide deep insights into your build by reporting key metrics and data. These are incredibly useful on their own, but Gradle is designed around the understanding that no two builds are the same. That’s why build scans allow you to add custom tags, links, and values from your builds. These custom annotations can help you bring out insights that are very specific to your build and to the environment in which your build is run.

In this talk, Luke and Etienne will show examples of how you can use this feature to add extra value to your build scans. The aim is to sow the seeds of inspiration for your own builds using a feature that you might otherwise overlook.