Introducing the new C++ plugins

This post introduces some new plugins for C++ that we’ve been working on. These plugins can build C++ libraries and applications. They work on macOS, Linux, and Windows with GCC, Clang and Visual C++/Visual Studio.

The plugins will eventually replace the software model plugins and take advantage of many new features baked into Gradle core, such as a rich dependency management engine, build cache, composite builds, finer grained parallel execution, build scans, and more. For background, see our post on the State and Future of the Gradle Software Model.

We welcome any feedback you may have about these plugins. You can leave feedback on the Gradle forums or raise issues on the Gradle native GitHub repository.

Building an application

You can find all of the samples from this post in the Gradle native samples GitHub repository. Let’s look at building a simple application.

The build script should look familiar to anyone who has used Gradle’s Java plugins:

plugins {
    id 'cpp-application'
}

This application has no dependencies, and the C++ source files and headers live in the default location: the src/main/cpp directory. Since this is Gradle, you can easily configure the source locations to match whatever layout your project has, including the common pattern of putting everything in one directory.
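For example, a layout that keeps all sources and headers in a single src directory might be configured like this (a hedged sketch based on the new plugin's DSL; the `application` extension with `source` and `privateHeaders` properties is assumed):

```groovy
// build.gradle -- sketch, assuming the cpp-application plugin's DSL
plugins {
    id 'cpp-application'
}

application {
    // Compile all C++ sources from a single 'src' directory
    source.from file('src')
    // Resolve private headers from the same directory
    privateHeaders.from file('src')
}
```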

Here’s the result of running ./gradlew assemble on this sample:

./gradlew assemble

Take a look at the build scan for this build to see what happened in more detail.

compile debug timeline

The plugins automatically find the compiler, linker and other tools to build the application. The result ends up installed in the build/install directory ready to run.

IDE support

Xcode is currently supported for C++ projects. You can simply run ./gradlew xcode and open the generated workspace. Support for generating Visual Studio solutions will be added early this year, and support for other IDEs will follow gradually after that.

Here is the result of running ./gradlew xcode on the sample:

./gradlew xcode

This is how the workspace looks in Xcode:

Xcode integration


Dependencies

The plugin uses Gradle’s dependency management features, just like other plugins such as the Java or Android plugins. This means, for example, that transitive dependencies work just fine.

Let’s add a dependency on a library to the application. In this sample, the C++ library is downloaded from a Maven repository. You don’t have to install the library anywhere manually, and everyone who runs the build will use the version specified in the build script, rather than whatever version happens to be installed on their machine.

The build script defines a Maven repository and declares a dependency on another sample C++ library:

repositories {
    maven {
        // In this sample, we used a local Maven repository,
        // but Maven Central or an Artifactory server could be used.
        url 'http://localhost:8000/'
    }
}

dependencies {
    implementation 'org.gradle.cpp-samples:math:1.5'
}

Here is the result of running ./gradlew assemble. Gradle downloads the headers and shared library binary and compiles and links against these:

./gradlew assemble

The build scan shows more detail, including the downloads.

app assemble build scan network activity

Here is how this project looks in Xcode:

Xcode integration


Unit testing

Basic unit testing is supported out of the box. Here is a sample that uses Google Test, downloaded from a Maven repository. We published the binaries using this fork of Google Test, which simply adds a Gradle build.

The build script declares a dependency on Google Test and a Maven repository that can be used to locate the Google Test binaries:

plugins {
    id 'cpp-unit-test'
}

repositories {
    maven {
        url ''
    }
}

dependencies {
    // Currently we have to encode the operating system and architecture in
    // the dependency name. This will disappear in later releases.
    unitTestImplementation 'org.gradle.cpp-samples:googletest_macosx_x86-64_4.5:1.9.0-SNAPSHOT'
}

Here is the result of running ./gradlew check. Gradle downloads the Google Test library, compiles the C++ sources and tests, and then runs the tests:

./gradlew check

build scan for math check

Richer reporting, build scan support, parallel execution and filtering for Google Test will be added this year with support for other C++ testing frameworks after that.

Fast builds

The plugins can produce debug and release builds of the application or library using Gradle’s new variant-aware dependency management, so that debug builds are compiled and linked against debug library binaries, and release builds are compiled and linked against release library binaries. When you build the debug build, which is the default, Gradle builds only the debug builds of the libraries that you need, rather than building everything.

Developer and CI builds are fast. C++ compilation is a cacheable task, so you can avoid unnecessary and long compilation times when using the build cache. Gradle Enterprise comes with a build cache backend. You don’t need to use the --parallel option as Gradle does incremental and parallel compilation and linking by default.
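The build cache can be switched on per invocation with --build-cache, or persistently for every build of the project:

```properties
# gradle.properties — enable the build cache for all builds of this project
org.gradle.caching=true
```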

Let’s run some clean builds that use the build cache:

./gradlew assemble with build cache

You can see that the second build is faster, as the result is fetched from the build cache rather than recompiled. Compare the build scans for the non-cached and cached builds.

cached vs non-cached assemble

Publishing C++ libraries

The new plugins can publish C++ libraries to a Maven or Ivy repository. Support for other kinds of repositories will be added later. Here is the build for the library we saw earlier.

The build script adds a Maven repository to publish the binaries to:

plugins {
    id 'cpp-library'
    id 'maven-publish'
}

group = 'org.gradle.cpp-samples'
version = '1.5'

publishing {
    repositories {
        maven {
            // In this sample, we used a local Maven repository,
            // but Maven Central or an Artifactory server could be used.
            url 'http://localhost:8000/'
        }
    }
}

Here is the result of running ./gradlew publish:

./gradlew publish

Composite builds

Composite builds also work the same as in Java projects. This sample combines several builds so they can be worked on together. The settings script names the root project and includes the library builds:

rootProject.name = 'app'

includeBuild 'list-library'
includeBuild 'utilities-library'
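In the application's own build script, the libraries are still declared as ordinary module dependencies; the composite substitutes them with the included builds. A hedged sketch (the coordinates here are illustrative, not taken from the sample):

```groovy
// app/build.gradle -- sketch; module coordinates are illustrative
dependencies {
    // Resolved from the included 'utilities-library' build
    // instead of being downloaded from a repository
    implementation 'org.gradle.cpp-samples:utilities:1.0'
}
```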

Here’s the result in Xcode. The application and the libraries it uses are available to edit, build and test together:

Xcode integration

Finally, it’s easy to set up a CI build for our application. We’ve added configuration for Travis CI.

Native Samples Travis CI

Your Feedback Wanted

These plugins are a work in progress and have some limitations. For example, binary publishing doesn’t understand operating system or architecture yet. We’ll continue to improve these plugins, make them stable, and eventually will deprecate the software model plugins.

Please try these plugins out and let us know what you think. The easiest way to get started is to clone the native samples repository and follow the instructions. Our samples use a Gradle nightly build, so you’ll see the latest and greatest developments there. Some changes are already showing up in the 4.5 release candidates.

We’d love to hear what you think works well, what’s confusing, and what is missing that would block you from using Gradle to build C++ software. You can also leave feedback on the Gradle forums or raise issues on the Gradle native GitHub repository.

State and future of the Gradle Software Model

We’ve received many inquiries about the status and direction of Gradle’s Software Model, especially from users building native libraries and applications.

In this blog post, we will explain the current state and future of the Software Model, and in particular how it relates to native development with Gradle. A lot of exciting improvements are planned for the remainder of 2017; see the roadmap below.

Situation with the Software Model

In a nutshell, the Software Model is a very declarative way to describe how a piece of software is built and the other components it needs as dependencies in the process. It also provides a new, rule-based engine for configuring a Gradle build. When we started to implement the Software Model we set ourselves the following goals:

  • Improve configuration and execution time performance.
  • Make customizations of builds with complex tool chains easier.
  • Provide a richer, more standardized way to model different software ecosystems.

As we were developing the Software Model, the Gradle engineering team constantly tried to dogfood its concepts in existing software ecosystems. The Gradle plugins for building native applications are currently fully based on the Software Model. Similarly, experimental Software Model-based plugins were developed for ecosystems like Android and Java.

Gradle adoption in the native ecosystem is picking up, and so is our investment. Since its inception, Gradle’s native support has proved itself a welcome alternative to builds based on Make. With its declarative and expressive model, support for different tool chains and platforms, and features like parallel compilation, it offers a revolutionary way to build native libraries and applications.

It took us longer than expected to evolve the new configuration and Software Model and make it as powerful as the current Gradle model for Java and Android. Meanwhile, Gradle adoption skyrocketed: there are many complex builds out there using the current model, as well as a vibrant ecosystem of more than 1500 community plugins. We underestimated how complex it would be for those builds and plugins to migrate to the new model, and we saw understandable resistance from many of our partners to undergoing this migration.

In hindsight, the scope of the new software and configuration model was too big. That is why at the Gradle Summit 2016, Hans Dockter announced that we were backporting many of its features to the current model. One year later, most of the features for the Java and Android ecosystem have been backported. This includes variant-aware dependency resolution and separation of API and implementation for Java components. Those features were game changers in terms of work avoidance and performance. Furthermore, we found other ways to drastically improve Gradle configuration performance, with more to come. There is no longer any need for a drastic, incompatible change in how Gradle builds are configured.

A way forward

You may therefore be wondering what is happening to the Software Model. We’re in the process of porting the configuration DSL of the native support to the current model. So the declarative nature and strong modelling language will be the same. The rule engine that was part of the Software Model will be deprecated. Everything under the model block will be ported as extensions to the current model. Native users will no longer have a separate extension model compared to the rest of the Gradle community, and they will be able to make use of the new variant aware dependency management.

What does the roadmap look like? Here are the areas of focus until the end of the year:

  • Current model support for native. New sets of plugins based on the current model are in development and are improved with every nightly release. They still need more work to achieve feature parity and stability, but already provide a lot of functionality. Try them out and give us feedback.
  • Parallel-by-default for compile and link tasks. Performance improvements are planned for the native ecosystem by enabling parallelism by default for compile and link tasks. This will have a positive impact on everyone building native software with Gradle.
  • Transitive dependency resolution. We are porting this powerful feature from our JVM ecosystem to help native developers declare rich dependencies between native projects.
  • New native plugins on current model. Our plan is to have plugins that have most of the functionality of the Software Model plugins and will also have substantial new features like build caching and external source dependencies for native.
  • Improved tool chain support. We are looking at ironing out some of the wrinkles with tool chain declaration which is particularly important for embedded development.

For the most complete and up-to-date progress, we recommend having a look at the gradle-native project, the home for the native ecosystem feature planning.

User migration from Software Model plugins to the new ones will be pretty seamless. All core native tasks will be reused and the tool chain concept will be ported to the current model. We expect that a lot of your build logic can be simply reused. We will support the Software Model-based plugins for an extended period of time to ensure everyone has a successful migration.

If you are currently using, or are planning to use, Gradle to build native projects, by all means keep doing so. Gradle’s native support has proven time and time again to be more performant, flexible, and easier to use than currently available tools.

Exciting things afoot

Today, we’re working on IDE integration and XCTest support, with out-of-the-box HTML report generation and full build scan support. Tool chain definition will also be improved to allow easier integration with alternative tool chain families; this is especially exciting for users invested in the embedded world.

For multi-repository developers, you will be happy to learn that composite builds will work for all native projects.

C++ Build Scan

The new plugins will integrate with the Kotlin DSL which gives Gradle users proper IDE support including auto-completion and refactoring.

We will first implement the complete workflow for native development in the current model without any customization - i.e. no platforms, build types or tool chains configuration. By workflow we mean everything related to building binaries, as well as testing, packaging, deploying and integration with your favorite IDE. At first, the workflow will work for the most common cases. In the subsequent releases, we will proceed by adding customization incrementally to the whole workflow.

Community involvement

Our native community is one of the most active and engaged on the forum, and we want to encourage and grow that engagement even more. Please keep helping each other find the answers you are seeking in the forum, but also engage with us by trying the various native sample projects, subscribing to the gradle-native project and filing issues, voting on the issues that are most important to you, and even submitting pull requests if you’re excited to roll up your sleeves and pitch in.

The best ways to stay passively up-to-date on native platform support are to subscribe to the monthly newsletter or more frequent announcements on Twitter.

We look forward to working with you to develop the best native build tool!

Blazing Fast Android Builds

At Google I/O today, the Android Studio team released the first preview version of the Android Gradle plugin 3.0, based on Gradle 4.0 M2. It brings major performance improvements, especially for builds with plenty of subprojects. In this blog post, we will explain what you can expect from this preview version and how the Android Studio and Gradle team achieved these improvements. Before diving into this let’s look back at what goals led to the creation of the current Android build system.

The Complexity of Mobile Development

Developing mobile applications is inherently more complex than building traditional web or server applications of similar size. An app needs to support a wide array of devices with different peripherals, different screen sizes, and comparatively slow hardware. The popular freemium model adds another layer of variety, requiring different code paths for free and paid versions of the app. In order to provide a fast, slim app for every device and target audience, the build system needs to do a lot of the heavy lifting up front.

To improve developer productivity and to reduce runtime overhead, the Android build tools provide several languages and source generators, e.g. Java, RenderScript, AIDL and native code. Packaging an app together with its libraries involves highly customizable merging and shrinking steps. The Android Studio team was faced with the challenge of automating all of this without exposing the underlying complexity, so that developers can focus on writing their production code.

Last but not least, developers expect a build tool to manage their dependencies, be extensible and provide deep IDE integration.

Gradle is ideally suited for those challenges and the Android Studio team created a fantastic Android build tool on top of the Gradle platform.

The performance challenge

No matter how elegant and extensible the plugin and no matter how seamless the IDE integration, when things take too long, developers become unproductive and frustrated. The Android Studio team has made steady progress on performance over the last years. The emulators became much faster, and the time to deploy an app decreased by orders of magnitude with Instant Run and other improvements. These steps have now exposed the build itself as the final bottleneck. The Android Studio team and the Gradle team have continuously improved the performance of the plugin and the platform, but so far this has not been enough: fundamental design issues were preventing great performance.

So Gradle Inc. and Google teamed up in late 2016 to get this situation under control. The work was split up into three areas:

  • General improvements to Gradle and its Java support: Faster up-to-date checking, compile avoidance, stable incremental compilation and parallel dependency downloads.
  • General improvements to the Android tools, like dex and code shrinking, including incremental dexing.
  • New APIs for variant aware dependency management in Gradle and an Android plugin that uses these new APIs.

The latter allowed the Android Studio team to finally get rid of a lot of inefficient workarounds that they had to build because of these missing APIs.

To understand why variant aware dependency management is so important, imagine you have an app which depends on a single library. Both of them support ARM and x86 architectures, both have a free and a paid version and both of them can be built for debug and production. This creates a total of 8 variants. But at any given point in time, a developer is only working on exactly one variant, e.g. the “free x86 debug” variant.
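With the Android DSL, those variant dimensions might be declared roughly like this (a hedged sketch; the flavor names and ABI list are illustrative, not from a real project):

```groovy
// build.gradle (app module) -- sketch of the variant dimensions
android {
    // one flavor dimension with a free and a paid flavor
    flavorDimensions 'tier'
    productFlavors {
        free { dimension 'tier' }
        paid { dimension 'tier' }
    }
    // debug and release build types exist by default; combined with
    // per-ABI splits this yields the 8 variants described above
    splits {
        abi {
            enable true
            include 'x86', 'armeabi-v7a'
        }
    }
}
```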

Up until now, the Android plugin had to inspect the app’s dependencies very early in the build lifecycle to select the right variant of the library to build. This early phase is called configuration time, during which Gradle determines what tasks it needs to run in what order. More work at configuration time means slower builds no matter which tasks the user selected. It also affects how long it takes to synchronize the build with the IDE. The Android plugin’s eager dependency inspection led to a combinatorial explosion of configuration time as more subprojects were added to a build.

This completely changes with Gradle’s new variant aware dependency management. The Android plugin can now provide matching strategies for the different variant dimensions (like product flavor and build type), which Gradle uses during dependency resolution to select the correct variant of the upstream library. This completely removes the need to resolve dependencies at configuration time and also allows the Android plugin to only build the parts of the library that the app needs.
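Conceptually, the matching builds on Gradle's generic attributes API: a consumer states the attribute values it wants, and Gradle picks the producer variant whose attributes match at resolution time. A simplified sketch (this is not the Android plugin's actual implementation):

```groovy
// sketch: a consumer configuration advertises the variant it wants;
// Gradle matches it against the producer's variants during resolution
def buildType = Attribute.of('buildType', String)

configurations {
    debugRuntime {
        attributes {
            // request the 'debug' variant of every dependency
            attribute(buildType, 'debug')
        }
    }
}
```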

In a particularly large app with 130 subprojects, the time it took to configure the project dropped from 3 minutes with the Android 2.3 tools, first to 10 seconds, and now to under 2 seconds with Android 3.0. The clean build time dropped from over 5 minutes to about 1 minute. The effect on incremental builds is dramatic when combined with the new compile avoidance functionality: making a single-line change and assembling the project is down to about 9 seconds. For monolithic projects these numbers won’t be as impressive, but they show that the build system now works very efficiently with modularized apps.

Android performance comparison

Last but not least, the Android Studio team is going to make the Android plugin 3.0 compatible with the Gradle build cache. The build cache allows build outputs to be reused across clean builds and across machine boundaries. This means that developers can reuse build outputs generated by CI, and build pipelines can reuse results from earlier stages. It also speeds up switching between feature branches on developer machines. Preliminary tests are promising: the clean build for the large Android app mentioned above dropped from 60s to about 20s when using the cache.

Give it a try

The Android Studio team has written up a comprehensive migration guide. There may be compatibility issues with community plugins, as many of them depended on internals that work differently now.

If you are developing Android projects, give the preview a try and tell us how much your build times improved out of the box. Try modularizing your app a bit more and splitting api and implementation dependencies for even bigger performance gains. You can use Build Scans and its timeline view to get deep insight into the performance of your build, which tasks were executed and how long they took.
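Splitting api and implementation dependencies follows the java-library plugin conventions (module coordinates here are illustrative):

```groovy
dependencies {
    // part of this module's public API:
    // visible on consumers' compile classpaths
    api project(':core-models')

    // internal implementation detail: consumers don't see it
    // and aren't recompiled when only this dependency changes
    implementation 'com.squareup.okhttp3:okhttp:3.8.0'
}
```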

If you are an Android plugin author, the new version might require some changes for your plugin to stay compatible. Please file an issue if you encounter any problems while migrating.

What’s next?

You can expect more improvements on the Gradle side. For instance, we are currently working on parallel task execution by default.

It is also safe to expect more performance smartness from the Android Studio team including Android Studio optimizations to do as little work as possible when syncing the project. The Gradle and Android Studio teams are collaborating on this as well.

Support for community plugins will improve as the alpha versions mature and plugin authors adjust to it. The more people provide feedback, the faster these great improvements can be released as stable.

Introducing Gradle Build Cache Beta

The build cache was introduced in Gradle 3.5 to reduce build time.

What does it do?

The build cache reuses the outputs of Gradle tasks locally and shares task outputs between machines. In many cases, this will accelerate the average build time.

The build cache is complementary to Gradle’s incremental build features, which optimize build performance for local changes that have not been built already. Many Gradle tasks are designed to be incremental, so that if the inputs and outputs of the task do not change, Gradle can skip the task. Even when the task’s inputs have changed, some tasks can rebuild only the parts that have changed. Of course, these techniques only work if there are already outputs from previous local builds. In the past, building on fresh checkouts or executing “clean” builds required building everything from scratch again, even if the result of those builds had already been created locally or on another machine (such as the continuous integration server).

Now, Gradle uses the inputs of a task as a key to uniquely identify the outputs for a task. With the build cache feature enabled, if Gradle can find that key in a build cache, Gradle will skip task execution and directly copy the outputs from the cache into the build directory. This can be much faster than executing the task again.

In particular, if you’re using a continuous integration server, you can configure Gradle to push task outputs to a shared build cache. When a developer builds, task outputs already built on CI are copied to the developer’s machine. This can greatly improve the developer’s local build experience.

When using the local build cache, instead of rebuilding large parts of the project whenever you switch branches, Gradle can skip task execution and pull the previous outputs from the local cache.

How does it work?

A cacheable Gradle task is designed to declare everything that can affect the output of the task as an input. Gradle calculates a build cache key by hashing over all of the inputs to a task. That build cache key uniquely identifies the outputs of the task. This is an opt-in feature for each task implementation, so not every task is cacheable or needs to be. Several built-in Gradle tasks (JavaCompile, Test, Checkstyle) have caching enabled to speed up the typical Java project.

The build cache key for a task takes into account:

  • The values of all inputs defined by the task via annotations (e.g. @InputFiles) or the runtime TaskInputs API.
  • The contents (and relative paths) of any file inputs.
  • The classpath of the task, which includes any plugins and Gradle version used.
  • The classpath of any task actions, which can include the build script.
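A custom task opts in by declaring its inputs and outputs through annotations and adding @CacheableTask; a minimal sketch of such a task:

```groovy
// sketch of a cacheable custom task: all inputs and outputs
// are declared, so Gradle can compute a stable cache key
@CacheableTask
class ConcatFiles extends DefaultTask {
    @InputFiles
    @PathSensitive(PathSensitivity.RELATIVE)
    FileCollection sources

    @OutputFile
    File target

    @TaskAction
    void concat() {
        // deterministic output: sort the files before joining
        target.text = sources.files.sort().collect { it.text }.join('\n')
    }
}
```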

When the build cache feature is enabled, Gradle will check if any of the configured build caches contain a match for the task’s build cache key when a task is not up-to-date. If Gradle does not find a match, the task will be executed as normal. After execution, Gradle will gather all of the task’s outputs and push them to the build caches, if configured to do so. If Gradle does find a match, the task’s outputs are deleted and the previous outputs are copied into the output directories.

Does it help?

We have been using the build cache for the Gradle CI builds since November 2016. We also have some partners who have been trying the build cache in their builds. We can’t share their data directly, but they’ve seen similar improvements in CI and developer builds as we have. On average, we see a 25% reduction in total time spent building each commit, but some commits are even better (80% reduction) and the median build saw a 15% reduction.

Stage 3 %-Improved

Here’s another look at the number of minutes spent in the cached and non-cached builds for Gradle. You can see how the reductions translate into about 90 minutes saved in a 360-minute build for us.

Stage 3 comparison

The build cache is a generic feature that avoids re-executing a task when it can, so builds large and small can benefit in some way. The structure of your project will influence how much you can gain overall. If your project consists of a single monolithic module, Gradle has other features that may also help, such as incremental compilation or composite builds. We’ll provide more information about how to get the most out of the build cache in a future blog post and at the Gradle Summit.

Make your build faster today

The Gradle 3.5 release is the first release to include the build cache feature.

We expect the build cache feature to reach general availability in the next release, but we would like every project to give the build cache beta a try. To do that, we’d like you to try three things for us.

1) Try it on a simple project

After upgrading to 3.5, pick a simple Java project and run:

gradle --build-cache clean assemble
gradle --build-cache clean assemble

The second build should be faster because some task outputs are reused from the first build. These outputs will be pulled from your local build cache, which is located in a directory in your GRADLE_USER_HOME.

2) Try to share outputs between machines

To use a shared, remote cache, we provide a recommended configuration that uses your continuous integration builds to populate a shared build cache and allows all developers to pull from that build cache.

You’ll need a remote build cache backend to share between developers. We provide a build cache node docker image which operates as a remote Gradle build cache, and can connect with Gradle Enterprise for centralized management. The cache node can also be used without a Gradle Enterprise installation with restricted functionality.
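With a cache node running, the build can be pointed at it from the settings script. A hedged sketch (the URL and the CI-only push policy are assumptions, not part of the recommended configuration above):

```groovy
// settings.gradle -- sketch of a shared build cache configuration
buildCache {
    local {
        enabled = true
    }
    remote(HttpBuildCache) {
        url = 'https://my-cache-node.example.com/cache/'
        // only CI builds populate the shared cache;
        // developer builds just pull from it
        push = System.getenv('CI') != null
    }
}
```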

3) Give us feedback

If you have feedback, we’d love to hear it. If you have a build scan you can share, that’s even better.

We’re excited to get the Gradle Build Cache feature out for feedback in Gradle 3.5, but we know there’s more we need to do to make the build cache stable and performant. We have some known issues that you should check before raising new issues on GitHub.

At this time, we don’t recommend leaving the build cache enabled for production builds without understanding the risks. There are known issues that can cause your builds to fail or produce incorrect output, but your feedback on the problems or successes you encounter is very valuable for maturing the build cache feature. You can configure the build cache in your build and enable it on a trial basis by setting org.gradle.caching=true or running with --build-cache, without impacting all builds.

For dogfooding the build cache for Gradle, we used a separate CI job to run a build with the build cache enabled. This allowed us to compare the build times with and without the build cache for the same set of changes.

Thanks and roadmap

After trying the build cache, you’ll probably have some questions about why more parts of your build are not cacheable. Regardless of the build cache backend you are using, Gradle Enterprise 2017.2 comes with features to understand build cache usage and behavior by collecting data, whether the build cache is enabled or not. Build scans keep track of the reason that a task was not cached: a task might not be cached if it has particular problems, such as having no outputs or not having cacheability enabled for it. You can search the build scan timeline for each of these reasons.

In future versions of Gradle and Gradle Enterprise, we’ll collect more information related to the build cache and task cacheability, to make it easier to diagnose build failures or poor build cache performance.

For the next release, Gradle 4.0, we intend to focus on making the build cache safe to enable for all well behaved builds and providing feedback for situations where Gradle cannot safely cache outputs from a task. This also means we’ll be providing a well-behaved local build cache and several validation checks.

For the releases following that, we intend to spend time on expanding our documentation and Gradle guides to make it easier for you to cache more tasks and develop cache-friendly tasks.

Thanks for your continued help and support. Please consider making your build faster with the build cache with the three steps we outline above.

Announcing Gradle Enterprise 2017.1

We are excited to announce the release of Gradle Enterprise 2017.1. This release includes many new features and bug fixes, further expanding the build insights that build scans provide you and your team. Here are some of the highlights of this release. Contact us if you’re interested in a demo or a trial.

Easily find changes to dependencies between two builds

Dependency changes between builds can be a common source of problems. For example, upgrading a version of one library can unintentionally bring in different versions of transitive dependencies into your project. In turn, these newer versions can cause you all kinds of frustration by breaking compatibility with other libraries that your project uses.

The new build comparison feature allows you to quickly find dependency changes between builds, including differences in transitive dependencies.

You can easily select two builds to compare:

Select builds for dependency comparison

And quickly see the dependency differences between the two builds:

Dependency comparison

Visualize your build’s task execution with the timeline

When trying to make your build faster, it can be really helpful to know whether all processes are utilized efficiently. Are there optimization opportunities such as long-running tasks that could be split into smaller tasks and run in parallel? To find these optimization opportunities you first need to identify where the bottlenecks are in your build.

The new timeline feature gives you a visual representation of the tasks executed during your build. Using this visualization you can quickly identify bottleneck tasks in your build, places in your build where you could speed up execution by running more tasks in parallel, and other optimization opportunities.


You can also filter tasks by name/path, type and more, making it easy to inspect and highlight particular tasks.

Try out the timeline with this example scan.

View dependency downloads

Time spent downloading dependencies can have a significant impact on your build time. The new “Network Activity” tab in the “Performance” section shows all the downloads triggered by dependency resolution in your build, including the size of each download and how long it took.

You can identify big or slow downloads that are dragging down your build speed. Are there downloads from slow remote repositories that you could cache on-site? Or large downloads that are no longer needed in your build and could be removed entirely?

Also, you can see the overall number of downloads in your build, total download size, and average download speed across the downloads to quickly gauge overall network performance during your build.

Network activity

This feature requires the upcoming Gradle version 3.5 and build scan plugin 1.6 or later.

See network activity on this example scan.

Integrate your build data with other systems

The new Export API provides a mechanism for consuming the raw build data that powers build scans. It is an HTTP interface based on Server Sent Events (SSE) that supports real-time data integration. Libraries for consuming SSE streams are available for most programming languages.
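The Export API's event schema is specific to Gradle Enterprise, but the SSE wire format itself is simple line-oriented text. As a rough illustration (the event name and JSON below are invented, not the real Export API schema), a consumer only has to pick out the `data:` lines from the stream:

```java
import java.util.ArrayList;
import java.util.List;

public class SseLineParser {
    // A Server Sent Events stream is plain text; each event's payload arrives
    // on lines prefixed with "data:". This collects those payloads.
    static List<String> dataLines(String stream) {
        List<String> payloads = new ArrayList<>();
        for (String line : stream.split("\n")) {
            if (line.startsWith("data:")) {
                payloads.add(line.substring(5).trim());
            }
        }
        return payloads;
    }

    public static void main(String[] args) {
        // Illustrative event only -- not the actual Export API event schema.
        String stream = "event: BuildStarted\ndata: {\"buildId\":\"abc123\"}\n\n";
        System.out.println(dataLines(stream)); // prints [{"buildId":"abc123"}]
    }
}
```

In a real integration, an SSE client library would hand you these payloads as parsed events; the point is that any language with an HTTP client can consume the stream.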

The video below demonstrates a real time build duration dashboard built on the Export API. The code for this is available as part of the gradle-enterprise-export-api-samples repository on GitHub.

See why a task wasn’t cacheable

Gradle 3.3 introduced the build cache feature, which saves you time by reusing task outputs from other builds without needing to execute the task on your machine. For a given task to use the build cache, certain conditions must be met. Gradle Enterprise now indicates which tasks are cacheable and not cacheable.

To give you the opportunity to make more tasks cacheable and improve your build performance, you can see the reasons why tasks were not cacheable. The “Settings and Suggestions” tab of the “Performance” section now indicates if there were tasks that were not cacheable.

Not-cacheable tasks suggestion

And in the new timeline view you can search for cacheable and non-cacheable tasks as well as see why individual tasks were not cacheable.

Not-cacheable task

This feature requires Gradle version 3.4 and build scan plugin 1.6 or later.

Try it out with this example scan.

Better understand task performance

Gradle can save you build time by not re-executing tasks that don’t need to be executed again. For example, tasks that are already up-to-date, or where the outputs can be pulled from the build cache.

The “Task Execution” tab of the “Performance” section summarizes which tasks were executed and which were avoided. The summary gives you an understanding of how cacheable your build currently is, making it easier to find optimization opportunities by tuning tasks to make them cacheable. You can also click from the summary into the timeline to see all tasks in a particular category.

Task execution breakdown

Try it out with this example scan.

Find builds by absence of a tag

You can annotate a build scan with one or more tags to easily categorize the build. For example, to indicate which builds were executed on your continuous integration server.

Previously you could find scans that had one or more specific tags, and now you can also do the inverse - find scans that don’t have a specific tag. To do that, use the not: prefix when searching tags. For example, if you tag all of your continuous integration builds with the “CI” tag, you can find all non-CI builds by searching not:CI.

Negative tag filtering

Please see this Custom Data in Build Scans post for more about how and when to use tags.

Find builds faster

Gradle Enterprise gives you the ability to find exactly the builds you need by filtering builds by project name, start time, outcome, and more. With the latest release, searching for build scans is much faster, especially when you are searching through a large number of builds.

Try it today!

We hope you are as excited as we are about these great new features. Contact us today for a trial! You can also check out the release notes to see what else is new.

Incremental Compilation, the Java Library Plugin, and other performance features in Gradle 3.4

We are very proud to announce that the newly released Gradle 3.4 has significantly improved support for building Java applications, for all kinds of users. This post explains in detail what we fixed, improved, and added. In particular, we will focus on:

  • Extremely fast incremental builds
  • The end of the dreaded compile classpath leakage

The improvements we made can dramatically improve your build times. Here’s what we measured:

The benchmarks are public, and you can try them out yourself; they are synthetic projects representing real-world issues reported by our users. In particular, what matters in a continuous development process is being incremental (making a small change should never result in a long build):

For those who work on a single project with lots of sources:

  • changing a single file, in a big monolithic project and recompiling
  • changing a single file, in a medium-sized monolithic project and recompiling

For multi-project builds:

  • making a change in an ABI-compatible way (change the body of a method, for example, but not method signatures) in a subproject, and recompiling
  • making a change in an ABI-incompatible way (change a public method signature, for example) in a subproject, and recompiling

For all those scenarios, Gradle 3.4 is much faster. Let’s see how we did this.

Compile avoidance for all

One of the greatest changes in Gradle 3.4 regarding Java support just comes for free: upgrade to Gradle 3.4 and benefit from compile avoidance. Compile avoidance is different from incremental compilation, which we will cover later. So what does it mean? It’s actually very simple. Imagine that your project app depends on project core, which itself depends on project utils:

In app:

public class Main {
    public static void main(String... args) {
        WordCount wc = new WordCount();
        wc.collect(new File(args[0]));
        System.out.println("Word count: " + wc.wordCount());
    }
}

In core:

public class WordCount {  // WordCount lives in project `core`
    // ...
    void collect(File source) {
        IOUtils.eachLine(source, WordCount::collectLine);
    }
}

In utils:

public class IOUtils { // IOUtils lives in project `utils`
    void eachLine(File file, Callable<String> action) {
        try {
            try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
                // ...
            }
        } catch (IOException e) {
            // ...
        }
    }
}
Then, change the implementation of IOUtils. For example, change the body of eachLine to introduce the expected charset:

public class IOUtils { // IOUtils lives in project `utils`
    void eachLine(File file, Callable<String> action) {
        try {
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), "utf-8"))) {
                // ...
            }
        } catch (IOException e) {
            // ...
        }
    }
}

Now rebuild app. What happens? Until now, utils had to be recompiled, but then it also triggered the recompilation of core and eventually app, because of the dependency chain. It sounds reasonable at first glance, but is it really?

What changed in IOUtils is purely an internal detail. The implementation of eachLine changed, but its public API didn’t. Any class file previously compiled against IOUtils is still valid. Gradle is now smart enough to realize that. This means that if you make such a change, Gradle will only recompile utils, and nothing else! And while this example may sound simple, it’s actually a very common pattern: typically, a core project is shared by many subprojects, and each subproject has dependencies on different subprojects. A change to core would trigger a recompilation of all projects. With Gradle 3.4 this will no longer be the case, meaning that it recognizes ABI (Application Binary Interface) breaking changes, and will trigger recompilation only in that case.
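To make “ABI-compatible” concrete, here is an invented example (not from the benchmark projects): the first change below only touches a method body and leaves the class's binary interface intact, while the commented-out variant would break it.

```java
public class AbiExample {
    // ABI-compatible change: only the method body differs; classes compiled
    // against the previous version remain valid, so Gradle can skip
    // recompiling them.
    public int wordCount(String text) {
        return text.trim().isEmpty() ? 0 : text.trim().split("\\s+").length;
    }

    // An ABI-incompatible change would alter the signature, e.g.:
    //   public long wordCount(String text)
    // The changed return type invalidates compiled callers, so dependents
    // must be recompiled.

    public static void main(String[] args) {
        System.out.println(new AbiExample().wordCount("hello gradle world")); // prints 3
    }
}
```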

This is what we call compile avoidance. But even in cases where compilation cannot be avoided, Gradle 3.4 makes things much faster with the help of incremental compilation.

Improved incremental compilation

For years, Gradle has supported an experimental incremental compiler for Java. In Gradle 3.4, not only is this compiler stable, but we also have significantly improved both its robustness and performance! Use it now: we’re going to make it the default soon! To enable Java incremental compilation, all you need to do is to set it on the compile options:

tasks.withType(JavaCompile) {
    options.incremental = true // one flag, and things will get MUCH faster
}

If we add the following class in project core:

public class NGrams {  // NGrams lives in project `core`
    // ...
    void collect(String source, int ngramLength) {
        collectInternal(StringUtils.sanitize(source), ngramLength);
    }
    // ...
}

and this class in project utils:

public class StringUtils {
    static String sanitize(String dirtyString) { ... }
}

Imagine that we change the class StringUtils and recompile our project. We can easily see that we only need to recompile StringUtils and NGrams, but not WordCount: NGrams is a dependent class of StringUtils, while WordCount doesn’t use StringUtils at all, so why would it need to be recompiled? This is what the incremental compiler does: it analyzes the dependencies between classes, and only recompiles a class when it has changed, or when one of the classes it depends on has changed.

Those of you who have already tried the incremental Java compiler before may have seen that it wasn’t very smart when a changed class contained a constant. For example, this class contains a constant:

public class SomeClass {
    public static final int MAGIC_NUMBER = 123; // static final: a compile-time constant
}

If this class was changed, then Gradle gave up and recompiled not just all the classes of that project, but also all the classes in projects that depend on it. If you wonder why, you have to understand that the Java compiler inlines constants like this one. So when we analyze the result of compilation and the bytecode of a class contains the literal 123, we have no idea where that literal was defined: it could be in the class itself, or in a constant of any dependency found anywhere on its classpath. In Gradle 3.4, we made that behavior much smarter, and we only recompile the classes which could potentially be affected by the change. In other words, if the class is changed but the constant is not, we don’t need to recompile. Similarly, if the constant is changed, but a dependent class didn’t have a literal of the old value in its bytecode, we don’t need to recompile it: we only recompile the classes that have candidate literals. This also means that not all constants are born equal: a constant value of 0 is much more likely to trigger a full recompilation when changed than a constant value of 188847774.
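To see the inlining in action, consider this sketch (the class names are invented): after javac compiles `Dependent`, its bytecode contains the bare literal, with no field reference surviving.

```java
public class ConstantInliningDemo {

    static class SomeClass {
        // Only static final primitives and Strings are compile-time constants;
        // javac copies their value into every class that reads them.
        static final int MAGIC_NUMBER = 123;
    }

    static class Dependent {
        static int answer() {
            // After compilation, the bytecode here holds the literal 123
            // directly -- no reference back to SomeClass.MAGIC_NUMBER remains,
            // which is why the compiler output alone cannot tell Gradle
            // where the constant came from.
            return SomeClass.MAGIC_NUMBER;
        }
    }

    public static void main(String[] args) {
        System.out.println(Dependent.answer()); // prints 123
    }
}
```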

Our incremental compiler is also now backed with in-memory caches that live in the Gradle daemon across builds, and thus make it significantly faster than it used to be: extracting the ABI of a Java class is an expensive operation that used to be cached, but on disk only.

If you combine all those incremental compilation improvements with the compile avoidance that we described earlier in this post, Gradle is now really fast when recompiling Java code. Even better, it also works for external dependencies. Imagine that you upgrade from foo-1.0.0 to foo-1.0.1. If the only difference between the two versions of the library is, for example, a bugfix, and the API hasn’t changed, compile avoidance will kick in and this change in an external dependency will not trigger a recompile of your code. If the new version of the external dependency has a modified public API, Gradle’s incremental compiler will analyze the dependencies of your project on individual classes of the external dependency, and only recompile where necessary.

About annotation processors

Annotation processors are a very powerful mechanism that allows generation of code just by annotating sources. Typical use cases include dependency injection (Dagger) or boilerplate code reduction (Lombok, Autovalue, Butterknife, …). However, using annotation processors can have a very negative impact on the performance of your builds.

What does an annotation processor do?

Basically, an annotation processor is a Java compiler plugin. It is triggered whenever the Java compiler recognizes an annotation that is handled by a processor. From the build tool point of view, it’s a black box: we don’t know what it’s going to do, in particular what files it’s going to generate, and where.

Therefore, whenever the annotation processor implementation changes, Gradle needs to recompile everything. That is not too bad by itself, as it probably doesn’t happen very often. But for reasons explained below, things are much worse: Gradle has to disable compile avoidance when annotation processors are not declared explicitly. But first, let’s understand what’s going on. Today, annotation processors are typically added to the compile classpath.

While Gradle can detect which jar contains annotation processors, what it cannot detect is which other jars on the compile classpath are used by the annotation processor implementation: processors have dependencies too. That means potentially any change to the compile classpath may affect the behavior of the annotation processor in a way Gradle cannot understand. Therefore any change to the compile classpath triggers a full recompile, and we are back to square one.

But there is a solution to this.

Explicitly declaring the annotation processor classpath

Should the fact that an annotation processor, which is a compiler plugin that uses external dependencies, influence your compile classpath? No, the dependencies of the annotation processor should never leak into your compile classpath. That’s why javac has a specific -processorpath option which is distinct from -classpath. Here is how you can declare this with Gradle:

configurations {
    apt
}

dependencies {
    // The dagger compiler and its transitive dependencies will only be found on the annotation processing classpath
    apt ''

    // And we still need the Dagger annotations on the compile classpath itself
    compileOnly ''
}

compileJava {
    options.annotationProcessorPath = configurations.apt
}

Here, we’re creating a configuration, apt, that will contain all the annotation processors we use, and therefore also their specific transitive dependencies. Then we set the annotationProcessorPath of the compile task to this configuration. This gives us several things:

  • it disables automatic annotation processor detection on the compile classpath, making the task start faster (faster up-to-date checks)
  • it makes use of the -processorpath option of the Java compiler, properly separating compile dependencies from the annotation processing path
  • it enables compile avoidance: by explicitly saying that you use annotation processors, we can now make sure that everything found on the compile classpath contributes only binary interfaces

In particular, you will notice how Dagger cleanly separates its compiler from its annotations: we have dagger-compiler as an annotation processing dependency, and dagger (the annotations themselves) as a compile dependency. For Lombok, you would typically have to put the same dependency on both the compileOnly and apt configurations to benefit from compile avoidance again.

However, some annotation processors do not separate these concerns properly and thus leak their implementation classes onto your classpath. Compile avoidance still works in this scenario: you need just put the jar on both the apt and compileOnly configurations.
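For example, Lombok ships its annotations and its processor implementation in a single jar. Assuming the standard org.projectlombok:lombok coordinates (the version shown is illustrative) and the apt configuration declared earlier, the same artifact goes on both paths:

```groovy
dependencies {
    // Same jar on both configurations: annotations are visible at compile
    // time, while the processor runs from the annotation processing path.
    compileOnly 'org.projectlombok:lombok:1.16.16'
    apt         'org.projectlombok:lombok:1.16.16'
}
```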

Incremental compile with annotation processors

As said above, with annotation processors, Gradle does not know which files they are going to generate, where, or based on what conditions. Therefore Gradle disables the Java incremental compiler if annotation processors are in use, even if you declare them explicitly as we just did. It is however possible to limit the impact of this to the set of classes that really use annotation processors. In short, you can declare a separate source set, with its own compile task, that uses the annotation processor, and leave the other compile tasks without any annotation processing: any change to a class that doesn’t use annotation processors then benefits from incremental compilation, whereas any change to the sources that use annotations triggers a full recompilation, but of that source set only. Here’s an example of how to do it:

configurations {
    apt
    aptCompile
}

dependencies {
    apt ''
    aptCompile ''
}

sourceSets {
    processed {
        java {
            compileClasspath += configurations.aptCompile
        }
    }
    main {
        java {
            compileClasspath += processed.output
        }
    }
}

compileProcessedJava {
    options.annotationProcessorPath = configurations.apt
}

In practice this may not be an easy split to perform, depending on how much the main sources depend on classes found in the processed source set. We are, however, exploring options to enable incremental compilation even when annotation processors are present, which means that this shouldn’t be an issue in the future.

Java libraries

We at Gradle have been explaining for a long time why the Maven dependency model is broken, but it’s often hard to realize without a concrete example, because users just get used to the defect and deal with it as if it was natural. In particular, the pom.xml file is used both for building a component and for its publication metadata. Gradle has always worked differently, by having build scripts which are the “recipe” to build a component, and publications, which can be done to Maven, Ivy, or whatever other repositories you need to support. The publication contains metadata about how to consume the project, meaning that we clearly separate what you need to build a component from what you need as its consumer. Separating the two roles is extremely important, and it allows Gradle 3.4 to add a fundamental improvement to Java dependency management. There are multiple benefits you get with this new feature. One is better performance, as it complements the other performance features we have described above, but there are more.

We’ve all been doing it wrong

When building a Java project, there are two things being considered:

  • what do I need to compile the project itself?
  • what do I need at runtime to execute the project?

Which drives us naturally to declaring dependencies in two distinct scopes:

  • compile : the dependencies I need to compile the project
  • runtime : the dependencies I need to run the project

Maven and Gradle have both been using this for years. But since the beginning, we knew we were wrong. This view is overly simplistic, because it doesn’t consider the consumers of your project. In particular, there are (at least) two kinds of projects in the Java world:

  • applications, which are standalone, executable, and don’t expose any API
  • libraries, which are used by other libraries, or other applications, as bricks to build software, and therefore expose an API

The problem with the simplistic approach of having two configurations (Gradle) or scopes (Maven) is that you don’t consider what is required in your API versus what is required by your implementation. In other words, you are leaking the compile dependencies of your component to downstream consumers.

Imagine that we are building an IoT application home-automation which depends on a heat-sensor library that has commons-math3.jar and guava.jar on its compile classpath. Then the compile classpath of home-automation will include commons-math3.jar and guava.jar. There are several consequences to this:

  • home-automation may start using classes from commons-math3.jar or guava.jar without really realizing they are transitive dependencies of heat-sensor (transitive dependency leakage).
  • the compile classpath of home-automation is bigger:
    • this increases the time spent on dependency resolution, up-to-date checking, classpath analysis and javac.
    • the new Gradle compile avoidance will be less efficient, because changes in the classpath are more likely to happen, so compile avoidance will not kick in. Especially when you are using annotation processors, where Gradle’s incremental compilation is disabled, this comes at a high cost.
  • you are increasing the chances of dependency hell (different versions of the same dependency on classpath)

But the worst issue is this: if the usage of guava.jar is a purely internal detail of heat-sensor, and home-automation starts using it just because it was found on the classpath, then it becomes very hard to evolve heat-sensor, because changes would break consumers. The leakage of dependencies is a dreaded issue that leads to slowly evolving software and feature freezes, for the sake of backwards compatibility.

We know we’ve been doing this wrong, it’s time to fix it, and introduce the new Java Library plugin!

Introducing the Java Library plugin

Starting from Gradle 3.4, if you build a Java library, that is to say a component aimed at being consumed by other components (a component that is a dependency of another), then you should use the new Java Library plugin. Instead of writing:

apply plugin: 'java'

you should now write:

apply plugin: 'java-library'

They both share a common infrastructure, but the java-library plugin exposes the concept of an API. Let’s migrate our heat-sensor library, which itself has 2 dependencies:

dependencies {
   compile 'org.apache.commons:commons-math3:3.6.1'
   compile ''
}

When you study the code in heat-sensor, you understand that commons-math3 is exposed in the public API, while guava is purely internal:

import org.apache.commons.math3.stat.descriptive.SummaryStatistics;

public class HeatSensor {
    public SummaryStatistics getMeasures(int lastHours) {
        List<Measurement> measures = Lists.newArrayList(); // Google Guava is used internally, but doesn't leak into the public API
        // ...
        return stats;
    }
}
It means that if tomorrow heat-sensor wants to switch from Guava to another collections library, it can do so without any impact on its consumers. But in practice, that’s only possible if we cleanly separate those dependencies into two buckets:

dependencies {
   api 'org.apache.commons:commons-math3:3.6.1'
   implementation ''
}

The api bucket is used to declare dependencies that should transitively be visible by downstream consumers when they are compiled. The implementation bucket is used to declare dependencies which should not leak into the compile classpath of consumers (because they are purely internal details).

Now, when a consumer of heat-sensor is compiled, it will only find commons-math3.jar on its compile classpath, not guava.jar. So if home-automation accidentally tries to use a class from Google Guava, it will fail at compile time, and the consumer needs to decide whether it really wants to introduce Guava as a dependency. On the other hand, if it tries to use a class from Apache Math3, which is an API dependency, it will succeed, because API dependencies are absolutely required at compile time.

Better POMs than Maven

So when does implementation matter? It matters at runtime only! This is why, now, the pom.xml file that Gradle generates whenever you choose to publish on a Maven repository is cleaner than what Maven can offer! Let’s look at what we generate for heat-sensor, using the maven-publish plugin:

<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="" xmlns="">
  <!-- ... group, artifact and version of heat-sensor ... -->
  <dependencies>
    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-math3</artifactId>
      <version>3.6.1</version>
      <scope>compile</scope>
    </dependency>
    <!-- guava is listed here too, with runtime scope -->
  </dependencies>
</project>

What you see is the pom.xml file that is published, and therefore used by consumers. And what does it say?

  • to compile against heat-sensor, you need commons-math3 on compile classpath
  • to run against heat-sensor, you need guava on runtime classpath

This is very different from having the same pom.xml for both compiling the component and consuming it, because to compile heat-sensor itself, you would need guava on the compile classpath. In short: Gradle generates better POM files than Maven, because it distinguishes the producer from the consumer.

More use cases, more configurations

You might be aware of the compileOnly configuration that was introduced in Gradle 2.12, which can be used to declare dependencies which are only required when compiling a component, but not at runtime (a typical use case is libraries which are embedded into a fat jar or shadowed). The java-library plugin provides a smooth migration path from the java plugin: if you are building an application, you can continue to use the java plugin. Otherwise, if it’s a library, just use the java-library plugin. But in both cases:

  • instead of the compile configuration, you should use implementation
  • instead of the runtime configuration, you should use runtimeOnly to declare dependencies which should only be visible at runtime
  • to resolve the runtime dependencies of a component, use runtimeClasspath instead of runtime.
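Putting those substitutions together, a migrated build script might look like this sketch (the com.example coordinates are placeholders, not real artifacts):

```groovy
apply plugin: 'java-library'

dependencies {
    api            'org.apache.commons:commons-math3:3.6.1' // part of the public API, visible to consumers
    implementation 'com.example:internal-utils:1.0'         // internal detail, hidden from consumers' compile classpath
    runtimeOnly    'com.example:fast-logger:1.0'            // only needed when the library actually runs
}
```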

Impact on performance

To show you what the impact on performance can be, we added a benchmark which compares two scenarios:

  • making an ABI-compatible change in a library, then recompile
  • making an ABI-incompatible change in a library, then recompile

Only Gradle 3.4 supports the concept of a library, and therefore uses the Java Library plugin. And to make it even clearer, this benchmark does not use the incremental compiler (which would make things even faster; updates would almost be a no-op):

As you can see, in addition to better modelling, there’s a strong impact on performance!


Gradle 3.4 brings dramatic improvements to the Java ecosystem. Better incremental compilation and compile avoidance will significantly improve your productivity, while clean separation of API and implementation dependencies will avoid accidental leakage of dependencies and help you better model your software. Note that we have more goodness to come. In particular, separation of API and implementation is key to Java 9 success, with the awakening of Project Jigsaw. We’re going to add a way to declare what packages belong to your API, making it even closer to what Jigsaw will offer, but supported on older JDKs too.

In addition, Gradle 4.0 will ship with a build cache, which will strongly benefit from the improvements described in this post: it’s a mechanism which allows reusing, and sharing, the results of task execution on a local machine or over the network. Typical use cases include switching branches, or simply checking out a project which has already been built by a colleague or on CI. Said differently, if you, or someone else, has already built something you need, you would get it from the cache instead of having to build it locally. For this, the build cache needs to generate a cache key which, for a Java compile task, is typically sensitive to the compile classpath. The improvements that ship in 3.4 will make this cache key more likely to be hit, because we can ignore what is not relevant to consumers (only the ABI matters).
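Once you are on a Gradle version that includes the build cache (a sketch; the property below exists from Gradle 3.5 onwards as an incubating feature), opting in is a one-line switch:

```properties
# gradle.properties -- opt in to the (incubating) build cache
org.gradle.caching=true
```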

We encourage you to upgrade now, take a look at the documentation of the new Java Library plugin and discover all it can do for you!

Announcing Buildship 2.0

We are pleased to announce that version 2.0 of Buildship—our official Gradle support for Eclipse—is now available via the Eclipse Marketplace. This release adds support for composite builds, greatly reducing development turnaround time. The UI has been redesigned based on plenty of community feedback during the 1.x line. Project synchronization is now more accurate and project import requires one less step. We’ve added support for Gradle’s offline mode (thanks Rodrigue!), and last but not least, third-party integrators can take advantage of our new InvocationCustomizer extension point. Read on for details about each of these new features.

Composite build support

What is a composite build?

The composite build feature in Gradle allows you to handle several distinct Gradle builds as if they were one big multi-project build. This dramatically shortens the turnaround time when you need to work on several projects that are normally developed separately.

Let’s assume you have written a Java library lib, used by many of your applications. You find a bug which only manifests itself in the special-app. The traditional development workflow would be to change some code in lib and install a snapshot into the local repository. Then you would have to change the build script of special-app to use that new snapshot and check if the bug is actually fixed.

With composite builds, however, you can tell Gradle to treat both of these projects as one. This will let special-app depend directly on the output of the lib project.

You can learn more about composite builds in this introductory blog post.

Composite builds in the IDE

If you develop special-app you probably have it imported in Eclipse with lib referenced as a binary dependency.

Buildship workspace

There is not much difference between working with composite builds at the command line and working with them within Eclipse. To include lib you need only add an entry to your settings.gradle file, telling Gradle from which folder the additional build should be included.

includeBuild '../lib'

Then, to apply the changes right-click on the project and select Gradle > Refresh Project. After the synchronization finishes, you should see two things: the project from the included build is imported and the binary dependency is replaced with a project dependency.

Imported composite

Now, you can make changes to both projects with the benefit of complete IDE support: error markers, code completion, refactoring and more. Also, if you execute Gradle tests—or any other build task—from the Gradle Tasks view, the execution considers changes from the included builds.

Task execution with included builds


When using composite builds from the IDE you should be aware of the following limitations:

  • Composite builds support only works if the imported project uses Gradle 3.3 or above.
  • Task execution is disabled on included builds due to a task addressing limitation in Gradle.
  • Including WTP projects into a composite is not supported.

Design overhaul

We updated the Buildship user interface to align it with current Gradle branding as well as with the Eclipse design guidelines. The icons are now distinguishable by color-blind people and work well with Eclipse’s dark theme. Finally, high-definition images have been put in place for use with High-DPI displays.

New Buildship design

Import wizard simplification

We removed JAVA_HOME, program arguments, and JVM arguments configuration from the import and new project wizards. Users can still configure these properties via the file.

More accurate project synchronization

In Buildship 1.x if the project being imported had Eclipse descriptors then a dialog was shown to determine if those descriptors should be updated or deleted. This behavior was error-prone and distracting for users.

To avoid showing a dialog, we improved the project synchronization algorithm the following way: If the Gradle version used by the project can provide a specific attribute (e.g. project natures), it is completely overwritten. Manual modifications are only kept if Gradle provides no information about that attribute. This allows users of older Gradle versions to work around missing information in the model, while giving users of new Gradle versions a much more consistent experience.

Offline mode support

Users can now set Buildship to work offline via the workspace preferences. Once enabled, all Gradle invocations will receive an extra --offline argument.

Offline mode support in preferences

InvocationCustomizer extension point

The InvocationCustomizer extension point enables Eclipse plugins to contribute extra arguments to Gradle builds. This allows integrators to add init scripts or control project properties from the IDE. For a sample implementation check out the Buildship documentation.

Breaking changes

This release introduces the following breaking changes:

  • Minimum Java version is set to 1.7
  • Minimum Eclipse version is set to 4.2
  • Project renaming is disabled for projects that are located directly under the Eclipse workspace location.
  • Projects migrating from Eclipse Mars directly to Eclipse Oxygen need to be reimported
  • JAVA_HOME can no longer be configured on import, please use instead
  • Java arguments and Gradle properties can no longer be configured on import, please use instead


Buildship 2.0 is available from the Eclipse Marketplace or from the update sites. Please note that the update site URL has changed; therefore, no automatic update is available for this release.

Custom Data in Build Scans

Build scans are a great way to easily share data about your build, but what if your team wants to add their own data to those build scans? They can! In addition to the extensive information automatically captured in build scans, you can attach your own custom data to provide even deeper insights into your build. This custom data can take the form of tags, links, and arbitrary custom values in a key-value format.

By adding custom data to your build scans you can make it easy to find builds of a certain type, give quick links to the applicable source code commit on GitHub, add helpful CI build information, and much more. Then, when you share the single build scan link with a teammate, they get quick and easy access to a plethora of information about your build, making it easier for them to diagnose build environment issues, fix test failures, and so on.

If build scans are new to you, you can learn about them in our introductory blog post on the topic. You can also find more details in the Build Scans User Manual, explore some example build scans or experiment with this sample build scan project.

Now let’s go through some examples of adding custom data into your build scans (see the user manual for additional examples).

Tags

Let’s start with the simplest type of custom data: tags. Tags are a way to add simple pieces of metadata to your build scan. You can use tags to add context to your build, such as whether the build was run locally or on a CI server, whether the build had any local changes, the error type of a failing build, etc. Here is an example build scan that tags the build as having:

  • run on CI
  • come from the master branch
  • included local code changes (“dirty”)

For example, to attach a tag showing whether the build ran locally or on a CI server, you can add the following to its build script:

if (System.getenv("CI")) {
    buildScan.tag "CI"
} else {
    buildScan.tag "LOCAL"
}
The tag is then displayed under the project name when viewing the build scan:

Build scan tag

Links

In addition to tags, you can include links that readers of your build scan might find useful. For example, you could include a convenient link to the project source on GitHub or a link to the CI results of the Gradle build. This example build scan demonstrates what such links look like.

Let’s say your CI tool makes the build results URL available as an environment variable. You could grab that value and add it as a custom link by using the following code in your build script:

if (System.getenv("CI")) {
    buildScan.link "CI build", System.getenv("BUILD_URL")
}

You also have the flexibility to add a link to the current revision or commit of the project’s source code. The following example links the build scan to its corresponding commit on GitHub (as long as the Git command line tools are available):

String commitId = 'git rev-parse --verify HEAD'.execute().text.trim()
buildScan.link "Source", "" + commitId

Links are displayed in the top section when viewing the build scan:

Build scan links

Custom values

Custom values can be used to make any information part of the build scan. In this example build scan, you can see the corresponding CI build date, CI build number and the name of the Git branch as custom values. These values are available when viewing the build scan or when searching for build scans in Gradle Enterprise. Let’s go through a couple of examples showing how you can add custom values to your build scan.

In our first example, we assume your CI tool injects build information into the build via environment variables. You could then use the following code in your build script to attach the build number and date to the build scan:

if (System.getenv("CI")) {
    buildScan.value "CI build number", System.getenv("BUILD_NUMBER")
    buildScan.value "CI build date", System.getenv("BUILD_DATE")
}
Since we are setting these custom values from inside a Gradle build script, you have the power to do things like run external commands to capture more information about the project status. For example, you could add the current Git branch of the build by running a Git command and setting a custom value with the result:

String branchName = 'git rev-parse --abbrev-ref HEAD'.execute().text.trim()

buildScan.value "Git branch", branchName

The custom values are displayed on the main page when viewing the build scan:

Build scan custom values

Command line

To give you greater flexibility in how you pass custom data to your build scan, you can also specify tags, links, and custom values on the command line. For example, you can quickly attach ad-hoc information to your build scan in order to:

  • help debug a specific local build failure
  • tag an experimental build
  • add CI-specific custom data without modifying your build script

You do this by specifying system properties with the appropriate names, as demonstrated by these examples:

$ gradle build -Dscan.tag.EXPERIMENTAL

$ gradle build -Dscan.link.buildUrl=$CI_BUILD_URL

$ gradle build -Dscan.value.buildNumber=$CI_BUILD_NUMBER

The first adds a tag named “EXPERIMENTAL”, the second adds a link titled “buildUrl”, and the third adds a custom value called “buildNumber”.

Searching based on custom data

When using build scans on-premises with Gradle Enterprise, you can search for build scans based on custom data such as tags and custom values. For example, you can search for all builds that ran on CI against the master branch using the terms shown in this screenshot:

Build scan search results

Live Demo

For a live demo of adding custom data with even more examples, check out this recent talk by Luke Daley and Etienne Studer at the Bay Area Gradle Users meetup. The video starts with an overview of build scans and dives into the details of adding custom data around the 22:30 mark.

Adding custom data to your build scans gives you the power and flexibility to augment your build scans with tags, links, or other data tailored to your team’s specific needs. Then you have even more information available to easily share with your teammates in a build scan—reducing the guesswork of diagnosing build failures. Happy scanning!

Introducing Composite Builds

It’s not every day that we get to announce a feature that revolutionizes several software engineering workflows, but today is that day. Composite builds, a new feature in Gradle 3.1, enables an entirely new dimension in project organization.

Composite builds are a way to join multiple independent Gradle builds and build them together. The brevity of that statement does not fully convey all of the new possibilities, so let me show you how this will make your life as a developer a lot easier.

Joining projects

Many organizations split their code base into several independent projects, each having a dedicated repository and release cycle. Integration between the projects is managed using binary dependencies, e.g. JAR files published to a binary repository like Artifactory. This approach has many advantages, but can be inefficient when trying to rapidly develop and test changes that affect two or more of these projects at once.

Imagine for a moment that you are fixing a bug in a Java library that your application depends on. Your workflow probably looks something like the following:

  1. Change the library
  2. Publish the library to a local repository
  3. Add the local repository to your application’s repositories
  4. Change your application’s dependency to the new library version
  5. Test your application
  6. Repeat until the problem is fixed or you lose your mind

With composite builds, you can short-circuit this workflow by including the library’s build into your application’s build. Gradle will then automatically replace the binary dependency on the library with a project dependency—meaning that changes you make to the library become available to the application instantaneously:

The same approach works for plugins that your project depends on. You can now include a locally checked-out version of a plugin into your project’s build and get into the same kind of tight development loop between them:

The new includeBuild() API in settings.gradle even lets you write a Gradle build that dynamically includes other builds if they are available on the local file system. You could then import this composite into your IDE and do cross-repository refactoring or debugging.
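As a rough sketch, with hypothetical project names and paths, the application's settings.gradle might include the library's build like this:

```groovy
// settings.gradle of the application build
// ('my-app' and '../my-library' are illustrative names/paths)
rootProject.name = 'my-app'

// Include the library's own build when a local checkout is present;
// Gradle then substitutes the binary dependency with a project dependency.
if (new File(settingsDir, '../my-library').exists()) {
    includeBuild '../my-library'
}
```

Since Gradle 3.1 you can also form an ad-hoc composite straight from the command line, e.g. `gradle --include-build ../my-library build`.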

Splitting Monoliths

Organizations that want to avoid the integration pains of multiple repositories tend to use a “monorepo”—a repository containing all projects, often including their dependencies and necessary tools. The upside is that all code is in one place and downstream breakages become visible immediately. But this convenience can come at the cost of productivity: a given developer will usually work only on a small part of a monorepo, but will still be forced to build all upstream projects, and that can mean a lot of waiting and wasted time. Likewise, importing large monorepo projects into an IDE often results in an unresponsive and overwhelming experience.

With composite builds, you can break your monorepo up into several independent builds within the same repository. Developers can work with the individual builds to get fast turnarounds or work with the whole composite when they want to ensure that everything still plays well together:

If you’re planning to move from a monolithic application to multiple independent ones, composite builds now offer a seamless migration strategy.

This is just the beginning

We plan to add a number of improvements to composite builds in upcoming releases:

  • Targeting tasks in an included build from the command line
  • Richer dependency substitution API with support for custom publications
  • Executing included builds in parallel
  • Integration with Gradle’s continuous build capabilities
  • Out of the box support for composite builds in IntelliJ and Eclipse

Of course, nothing is more important than feedback from real world usage. So please give composite builds a try in your own projects or have a look at the samples. Let us know about any problems, suggestions and cool things you built with it on the Gradle Forum.

Introducing Build Scans

A few months ago at this year’s Gradle Summit conference, we announced a new part of the Gradle platform called Gradle Cloud Services. In this post, I want to introduce you to the first of these services—the Gradle Build Scan Service—and the build scans it makes possible.

What is a build scan?

A build scan is a representation of data captured as you run your build. The Build Scan Plugin does the work of capturing the data and sending it to the Build Scan Service. The service then transforms the data into information you can use and share with others. Here’s a quick example of using a build scan to investigate a failure:

Publishing and Viewing a Build Scan

It’s all about that link! Here—click it for yourself:

As you can see, the information that scans provide can be a big help when troubleshooting, collaborating on, or optimizing the performance of your builds. For example, with a build scan in the mix, it’s no longer necessary to copy and paste error messages or include all the details about your environment each time you want to ask a question on Stack Overflow or the Gradle Forum. Instead, just include a link to your latest build scan. It contains much, if not all, of the information the person answering your question might need to know. It’ll save you both time, and they’ll probably thank you for it.

Who’s using them?

We’re excited that a number of prominent open source projects like Hibernate and JUnit 5 have already integrated build scans into their workflow. You can take a look through sample scans from each of these projects at

Open Source Gradle Build Scans

Put build scans to use for yourself

If you’re new to build scans, now is a great time to start using them. We’re continually rolling out new features, and we’ll cover each of them in subsequent posts. In the meantime, you can learn how to enable build scans for your existing projects via our getting started instructions, or get up and running with a sample project by cloning our quick start repository and following the steps in its README.
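As a sketch of what enabling build scans involves (the plugin version and license settings below are placeholders; see the getting started instructions for the current ones), it is a small addition to your build script:

```groovy
// build.gradle — illustrative only; check the getting started
// instructions for the current plugin version and license settings
plugins {
    id 'com.gradle.build-scan' version '1.0'
}

buildScan {
    licenseAgreementUrl = 'https://gradle.com/terms-of-service'
    licenseAgree = 'yes'
}
```

With the plugin applied, you can publish a scan for any build by adding `-Dscan` to the command line.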

Happy scanning, and we look forward to your feedback!

Gradle 3.0 M2: Initial Java 9 Support and Performance Improvements

The second milestone of Gradle 3.0 has just been released, and this version comes with initial support for Java 9!

This means that Gradle now runs properly when executed on the latest Java 9 EAP builds, and that you can build and run tests using early versions of JDK 9. It is important to understand, however, that while you can compile and test applications with JDK 9, we do not yet support modules, nor any JDK 9-specific compile options (like -release or -modulepath). We would greatly appreciate any feedback from trying it out on your own projects.

More performance improvements

This milestone is also a good moment to check out our latest and greatest performance improvements. It’s always better to perform measurements on real life builds, so the example below uses the Golo programming language as a guinea pig, and compares the execution time of a clean build of this project. The left pane is using Gradle 2.12 while the right pane is using Gradle 3.0 M2 with a “hot” daemon:

As you can see having the daemon by default makes your builds significantly snappier, although the performance improvements we’ve made since Gradle 2.12 go beyond just using the daemon. For those of you who were already enabling it in previous versions of Gradle, you should also see better performance, as this next screencast shows:

Since Gradle 2.12, we’ve made significant progress that can be summarized in a few lines:

  • configuration time is now faster, meaning that the time between the moment you invoke a Gradle task and the moment the task actually executes is much shorter. This is especially apparent on large multi-module builds.
  • execution with the daemon has been optimized, meaning that in Gradle 3.0, where it is enabled by default, you will immediately benefit from faster builds
  • build script caching has been reworked so that subsequent builds are not only faster to configure, but builds running concurrently will no longer hang. This is particularly important for non-isolated builds running on a CI server

As an illustration of those improvements, we executed gradle help with the daemon enabled on the Apereo CAS project, a large multi-project build that benefits greatly from these improvements. Again, the left side is using Gradle 2.12, while the right side uses 3.0 M2:

Last but not least, we also took a look at the rare cases where Gradle was still slower than Maven and fixed those. The following screencast is an illustration of what you can expect from Gradle 3.0 compared to Maven. This project features a build with 25 subprojects, each having around 200 files and unit tests. Then we ask both Gradle and Maven to assemble it without running tests.

Ultimately, one of the biggest differences between Gradle and Maven is that Gradle is aware of all input/outputs of tasks. As such, it’s smart enough to know about when it has to do something or not. So when we execute the same tasks again, it will not re-execute them if nothing changed:
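This awareness comes from tasks declaring their inputs and outputs. As a small hypothetical example, a custom task that declares both gets up-to-date checking for free:

```groovy
// build.gradle — a hypothetical custom task; because its input property and
// output file are declared, Gradle skips it as UP-TO-DATE when neither changed
task generateVersionFile {
    def outputFile = file("$buildDir/version.txt")
    inputs.property 'version', project.version
    outputs.file outputFile
    doLast {
        outputFile.parentFile.mkdirs()
        outputFile.text = "version=${project.version}"
    }
}
```

Run it twice in a row and the second invocation does nothing, because neither the version property nor the output file has changed.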

Check out our performance guide

Having high performance builds is key to build happiness! As such, we focus heavily on performance improvements to Gradle itself. However, there are also many things that users can do to make their builds faster. To that end, we’re currently writing a performance guide, and we invite everyone to take a look at it. It’s in draft form at the moment, but already contains many valuable hints about how to make your Gradle builds even snappier. Please do give it a read, and we’d love to hear your feedback via the guide’s GitHub Issues.

Kotlin Meets Gradle

Many readers will be familiar with JetBrains’ excellent Kotlin programming language. It’s been under development since 2010, had its first public release in 2012, and went 1.0 GA earlier this year.

We’ve been watching Kotlin over the years, and have been increasingly impressed with what the language has to offer, as well as with its considerable uptake—particularly in the Android community.

Late last year, Hans sat down with a few folks from the JetBrains team, and they wondered together: what might it look like to have a Kotlin-based approach to writing Gradle build scripts and plugins? How might it help teams—especially big ones—work faster and write better structured, more maintainable builds?

The possibilities were enticing.

Because Kotlin is a statically-typed language with deep support in both IDEA and Eclipse, it could give Gradle users proper IDE support from auto-completion to refactoring and everything in-between. And because Kotlin is rich with features like first-class functions and extension methods, it could retain and improve on the best parts of writing Gradle build scripts—including a clean, declarative syntax and the ability to craft DSLs with ease.

So we got serious about exploring these possibilities, and over the last several months we’ve had the pleasure of working closely with the Kotlin team to develop a new, Kotlin-based build language for Gradle.

We call it Gradle Script Kotlin, and Hans just delivered the first demo of it onstage at JetBrains’ Kotlin Night event in San Francisco. We’ve published the first pre-release toward version 1.0 of this work today, along with open-sourcing its repository at

So what does it look like, and what can you do with it? At a glance, it doesn’t look too different from the Gradle build scripts you know today:
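A minimal sketch, assuming the early Gradle Script Kotlin conventions (the exact DSL of the first pre-release may differ):

```kotlin
// build.gradle.kts — an illustrative sketch only
apply { plugin("java") }

repositories {
    jcenter()
}

dependencies {
    compile("com.google.guava:guava:19.0")
    testCompile("junit:junit:4.12")
}
```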


But things get very interesting when you begin to explore what’s possible in the IDE. You’ll find that, suddenly, the things you usually expect from your IDE just work, including:

  • auto-completion and content assist
  • quick documentation
  • navigation to source
  • refactoring and more

The effect is dramatic, and we think it’ll make a big difference for Gradle users. Now, you might be wondering about a few things at this point—like whether existing Gradle plugins will work with Gradle Script Kotlin (yes, they will), and whether writing build scripts in Groovy is deprecated (no, it’s not). You can find complete answers to these and other questions in the project FAQ. Do let us know if you have a question that’s not answered there.

Of course, all this is just the beginning. We’re happy to announce that Kotlin scripting support will be available in Gradle 3.0, and we’ll be publishing more information about our roadmap soon. In the meantime, there’s no need to wait—you can try out Gradle Script Kotlin for yourself right now by getting started with our samples.

And we hope you do, because we’d love your feedback. We’d love to hear what you think, and how you’d like to see this new work evolve. You can file issues via the project’s GitHub Issues and please come chat with us in the #gradle channel of the public Kotlin Slack.

I’d like to say a big thanks to my colleague Rodrigo B. de Oliveira for the last few months of working together on this project—it’s been a lot of fun! And a big thanks to the Kotlin team, in particular Ilya Chernikov and Ilya Ryzhenkov for being so responsive in providing us with everything we needed in the Kotlin compiler and Kotlin IDEA plugin. Onward!

Performance is a Feature

At Gradle Inc., we take build performance seriously. While we bundle performance improvements into every Gradle release, we’ve kicked off a concerted effort called a performance burst from Gradle 2.13 in order to make building software faster and more enjoyable for all of our users. In this blog post, we will explore how we approach performance issues, as well as what improvements to expect in the 2.13 release and beyond.

The fastest thing to do is nothing

Building software takes time, which is why the biggest performance improvement is cutting steps out of it entirely. That’s why, unlike traditional build tools such as Maven or Ant, Gradle focuses on incremental builds. Why would you ever run clean when you don’t need to? For some developers, running clean became a conditioned response to a broken build tool. Gradle doesn’t have such an issue: aware of all inputs and outputs of a task, it is reliably capable of handling incremental builds. Most builds will be incremental, and that’s why we focus so heavily on optimizing this case. One way that we accomplish this is through the Gradle daemon.

The Gradle daemon can dramatically improve your build performance by allowing build data to persist in memory between build invocations and avoiding JVM startup times on each build. The daemon is a hot JVM hosting the Gradle runtime, making it possible to run subsequent builds much faster: instead of spawning a new JVM for each build, we can benefit from all the goodness of having a cached JVM – in particular, we realize a strong benefit from JIT (just in time compilation). While turning on the daemon has a cost for the first build, the amount of time that you will gain for each subsequent build more than offsets the initial cost. In Gradle 2.13, we focused our improvements when the daemon is activated, and we’re preparing to enable this by default in Gradle 3.0. Other performance improvements we’ve implemented will benefit all users, independently of whether they use the daemon or not (and if you don’t use the daemon yet, we strongly encourage you to try it out!).
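Until the daemon becomes the default, you can enable it per project or per user via a gradle.properties entry:

```properties
# gradle.properties (project root, or ~/.gradle/gradle.properties for all builds)
org.gradle.daemon=true
```

Alternatively, pass --daemon on the command line for a single invocation.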

As you can read in our release notes, we’ve emphasized several categories of performance improvements:

  • reducing the build configuration time, that is to say, reducing the fixed cost of creating and configuring a Gradle build
  • reducing the test execution time; i.e., reducing the overhead of Gradle compared to just executing tests in an IDE
  • improving the performance of importing a project in an IDE
  • reducing communication latency between the interactive Gradle client and the daemon process

Reducing configuration time

Here’s an idea of the improvement you can expect:

many empty.png

The example above shows a typical performance test metric: we compare the average execution time of gradle help for a project that contains a lot of subprojects (10000 of them). You can see that when we started optimizing configuration time, the master branch was slower than Gradle 2.7. Now, Gradle 2.13 is faster than ever: we have measured up to a 25% reduction on our own builds! More important than the numbers themselves, though, is how we got there. Improving performance is a process, and here is how it works.

Performance test suite

The Gradle sources contain a sub-project dedicated to performance tests. This test suite is quite particular, and allows us:

  • to compare the performance of the master branch with previous releases of Gradle
  • to compare various build scenarios against a single version of Gradle

So typically, in the example above, we’re comparing the average execution time of a build, when we run gradle help, in a specific scenario (an empty build with 10000 sub-projects), and compare it with previous Gradle releases. It’s worth noting that this performance test suite is executed daily, allowing us to catch performance regressions very early in the development phase.

Writing a performance test scenario

So how, in practice, do we write a performance test? It all starts with a scenario we want to test. For example, we want to make sure that we reduce the duration of test execution. The first step is to write a build template that lets us test Gradle against this scenario. A template has various parameters: the number of sub-projects, the number of (test) classes in the sources, external dependencies, and so on. This lets us generate sample Gradle builds that are used to measure performance. Of course, those performance test builds are generated with Gradle.

All the graphs you see below were generated using fully automated performance tests, and aimed at testing specific scenarios. Should you find a performance issue with Gradle, this is a great way to get started: create a new template, then send us a pull request to show the problem. Of course, all our performance tests are regular test cases, which means that we can fail the build if we introduce a regression.

Since Gradle 2.13 is primarily a performance-enhancing release, let’s focus on some of the improvements.

Gradle vs Maven

In this scenario, we are comparing the time it takes to execute gradle clean test vs mvn clean test. As we mentioned earlier, cleaning is not necessary in Gradle, but we do it here for the sake of comparison against Maven, and to assess the “cold build” time. Here are the results:

gradle vs maven clean build.png

At the end of February, Maven and Gradle were comparable. Since then, the new performance improvements in Gradle 2.13 have resulted in a 10% speedup! You will notice that the graph contains some glitches: on April 2nd, the time increased considerably, but it did so in both scenarios, Maven and Gradle alike. What you need to keep in mind when reading such graphs is that the results are only comparable relative to each other for the same date. This is important because:

  • we could change the templates between two executions of the performance build, resulting in an increase or decrease of the build time.
  • we could change hardware between two executions, leading to the same side effects

Profiling is better than guessing

So how did we manage to improve this? First of all, once a scenario is written and performance tests running, we need to profile the builds. For that purpose, we’re using different tools, from YourKit Java Profiler to Java Mission Control, the JIT logs or simply good old System.out.println statements. In the end, we try to identify what is causing slowdown and write a document summarizing our findings. Those documents are all public, and you can find them in our GitHub repository. Once we’ve identified hotspots and written down the profiling results, we extract stories for improvement and actually go to the implementation phase. This “profiling to stories” phase is very important, because while a profiler will be very helpful in identifying hotspots, it will be no help when it comes to interpreting the results: often, rewriting an algorithm can be much more efficient than trying to optimize a SAX parser…

Optimizing the communication between the daemon and the client

As we explained, we’re primarily (but not only) focusing on improving performance when the daemon is activated. One issue with the daemon is that you have a forked JVM. When you run gradle, the client process, the one from the command-line, starts communicating with a long-living process, the daemon, which is effectively executing the build. And typically, to see the logs as the build is running, you need to forward events from the daemon to the client. Before 2.13, this communication was synchronous. This means that the log messages were sent synchronously between the daemon and the client. This was inefficient, because we were blocking on network I/O where we could actually perform some build operations. In 2.13, not only is communication asynchronous, but we also optimized the protocol that is used to communicate between the client and the daemon and how the client responds to these events.

Forked processes start up faster

Another improvement that was made is visible in the following scenario:

gradle vs maven cleanTest test.png

This scenario is “unfair” to Gradle and meant to compare what happens when we just want to re-execute the tests. As you may know, when running mvn test, Maven will re-execute the tests even if nothing has changed. Gradle does nothing in that case, because everything is “up-to-date”. So to emulate the behavior of Maven, we need to clean up the test results so that the tests are re-executed and the reports regenerated. As you can see, Gradle used to be significantly slower than Maven in this scenario; now it is faster, while also doing more work: Gradle not only runs the tests, but also generates 3 types of reports: a binary one, an XML one (for CI integration), and an HTML report (for use by us, poor humans, but you can disable this behavior). Gradle 2.12 is 15% slower in this scenario, and a large part of the improvement comes from optimizing the classpath of the forked JVMs used for tests. In 2.12, almost the whole Gradle classpath was used in the forked VMs, when in reality we just need a subset of Gradle classes (basically those needed to communicate between the forked VM and the daemon). By optimizing this classpath, we reduce classpath scanning and significantly improve the time it takes to start executing tests. If you ever noticed a “pause” when Gradle was about to execute tests, it is now gone!
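If you do not need the HTML report locally, you can opt out of part of that extra work. A sketch for a typical test task:

```groovy
// build.gradle — optionally skip HTML test report generation
test {
    reports {
        html.enabled = false
    }
}
```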

Reports are generated in parallel

Part of the improvement on test execution is also obtained thanks to parallel generation of reports. As we explained, Gradle generates more reports than Maven by default. This is usually what you want, because when you’re developing an application and run tests locally, having to decipher XML test reports can be very frustrating. With Gradle 2.13, now, the HTML and XML reports are generated in parallel, which significantly reduces the time required before starting the test suite of the next project. The more modules your project has, the more likely you will see a significant reduction in build duration.

Improving build startup time

Faster script compilation

When executing a Gradle build for the first time, you can see, as part of the “configuration” phase, that Gradle is compiling the build scripts. Although they are scripts, Gradle build files are written in Groovy and are compiled to bytecode. This is time consuming, but has been optimized by the Gradle team. In particular, Gradle has to compile each script several times, with different classpaths, in order to handle scripts that contain references to remote resources such as plugins.

In Gradle 2.13, we changed the way Gradle scripts are compiled, and optimized two scenarios:

  • running several builds concurrently from the same directory (this often happens on CI). Previously, the “script cache” that Gradle uses was locked during the execution of a build, so if a build script changed during the execution of a build, all concurrent builds were blocked until the first one finished.
  • re-using build scripts independently of their location. Imagine that you have multiple projects using the same remote scripts. This is typically the case in corporate environments, where a script defines the credentials, conventions, or plugins to be used in all builds of the company. Previously, each project had to compile the script before being able to use it. Gradle 2.13 changes that, and now compiles scripts based on their actual contents (and classpath) rather than their location. This means that if you have 2 projects with identical build files in different locations, the script is only compiled once. To still report build errors against the correct build file, we use a “relocation technique” that takes a compiled script class and remaps it to the actual script file, so that errors are reported correctly.

Optimized classpath

Another area of work in 2.13 was optimizing Gradle’s own classpath, so that services are located faster. When you have a lot of jars on the classpath, their ordering matters, and so does the number of classes. Even if you “only” gain 10ms, it can add up to significant differences when builds are executed often, in particular from the IDE, which leads to the last area of improvement we worked on in 2.13.


Sometimes, improving performance is a matter of serendipity. We recently discovered that some performance tests were executing significantly faster on our CI server than locally, but we were unsure of the cause. After doing some profiling, we realized that the code that propagates properties from the various properties files to the actual Project was very inefficient: the more properties you had in your various files, the longer it took to start the build! We identified the problem and fixed it.

Faster IDE integration

The Tooling API is what allows IDE vendors to integrate Gradle; it is what we use in Buildship. It has very specific needs, and in particular it has to be both backwards and forwards compatible, meaning a given version of the TAPI can execute Gradle builds with both older and newer versions of Gradle. Of course, a developer only benefits from the latest improvements by using the latest versions of both the Tooling API and Gradle, but it leads to an interesting architecture.

In this case, the Tooling API heavily relies on reflection to invoke methods. In Gradle 2.13, we significantly improved caching, which led to spectacular results:


This scenario illustrates how long it takes to import a typical build with 500 sub-projects into Eclipse. While it took 25s with the 2.12 version of the Tooling API, it now takes only 10s. You can see even more spectacular results in IntelliJ IDEA, which uses “custom models”: importing and synchronizing projects becomes orders of magnitude faster.

There’s more to come!

We cannot close this blog post without illustrating what we mean by “doing nothing is better”. In the Maven vs Gradle examples above, we tried to “emulate” the behavior of Maven with Gradle. Here is the graph you would typically get when running proper incremental builds with Gradle: you open and edit several files from different sub-modules, then re-execute the tests. Remember, with Gradle you no longer have to clean, but to be fair we didn’t clean with Maven either:

maven vs gradle incremental.png

Yes, Gradle is almost 6x as fast in this scenario. Now imagine doing this 10 or 100 times a day, multiplied by the number of developers in your company, and consider how much time and money that represents.

Thanks for reading, and don’t worry: there’s more to come. Stay tuned for more performance improvements in Gradle 2.14!

Introducing Compile-Only Dependencies

One of the most highly-anticipated Gradle features has just arrived in Gradle 2.12: support for declaring compile-only dependencies. For Java developers familiar with Maven, compile-only dependencies function similarly to Maven’s provided scope, allowing you to declare non-transitive dependencies used only at compilation time. While a similar capability has been available for users of the Gradle War Plugin, compile-only dependencies can now be declared for all Java projects using the Java Plugin.

Compile-only dependencies address a number of use cases, including:

  • Dependencies required at compile time but never required at runtime, such as source-only annotations or annotation processors;
  • Dependencies required at compile time but required at runtime only when using certain features, a.k.a. optional dependencies;
  • Dependencies whose API is required at compile time but whose implementation is to be provided by a consuming library, application or runtime environment.

Compile-only dependencies are distinctly different from regular compile dependencies. They are not included on the runtime classpath and they are non-transitive, meaning they are not included in dependent projects. This is true when using Gradle project dependencies and also when publishing to Maven or Ivy repositories. In the latter case, compile-only dependencies are simply omitted from published metadata.

As part of our commitment to quality IDE support, compile-only dependencies continue to work with Gradle’s IDEA and Eclipse plugins. When used within IntelliJ IDEA, compile-only dependencies are mapped to IDEA’s own provided scope. Within Eclipse, compile-only dependencies are not exported via project dependencies.

In the Gradle model we consider tests to be a “consumer” of the production code. With this in mind, compile-only dependencies are not inherited by the test classpath. The intention is that tests, like any other runtime environment, should provide their own implementation, either in the form of mocks or some other dependency.
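If the tests do need to compile against such a dependency, one common idiom is to declare it again for the test configuration. Here is a hedged sketch, reusing the servlet API coordinates used elsewhere in this post; nothing here is required by the plugin:

```groovy
dependencies {
    compileOnly 'javax.servlet:servlet-api:2.5'
    // tests supply their own copy, since compileOnly is not inherited
    testCompile 'javax.servlet:servlet-api:2.5'
}
```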

Declaring compile-only dependencies is simple—just assign dependencies to the new compileOnly configuration for the appropriate source set:

dependencies {
    compileOnly 'javax.servlet:servlet-api:2.5'
}

As a result of the addition of the compileOnly configuration, the compile configuration no longer represents a complete picture of all compile time dependencies. When it’s necessary to reference a compile classpath in build scripts or custom plugins, the appropriate source set’s compileClasspath property should be used instead.
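As a hedged illustration (the task name below is invented for this post), the full compile classpath of the main sources can be inspected through the source set:

```groovy
// Ad-hoc task listing everything the main sources compile against,
// including compileOnly dependencies.
task printCompileClasspath {
    doLast {
        sourceSets.main.compileClasspath.each { println it.name }
    }
}
```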

For more information, see the Java Plugin chapter of the Gradle user guide. And as always, we welcome your feedback and questions in the comments below or via the Gradle Forum.

Introducing TestKit: A Toolkit for Functionally Testing Gradle Build Logic

Automated testing is a necessary prerequisite for enabling software development practices like refactoring, Continuous Integration and Delivery. While writing unit, integration and functional tests for application code has become an industry norm, it is fair to say that testing for the build automation domain hasn’t made its way into the mainstream yet.

But why is it that we don’t apply the same proven practice of testing to build logic? Ultimately, build logic is as important as application code. It helps us to deliver production software to the customer in an automated, reproducible and reliable fashion. There might be many reasons to skip testing; however, one of the reasons that stands out is the data definition format used to formulate build logic. In the past, writing tests for XML-based build logic definitions was a daunting, almost impossible task without the right tooling.

In this regard, Gradle makes your life easier. Build code can be structured properly, organized based on functional boundaries, and developed as actual class implementations with the help of concepts like custom tasks and binary plugins. Automated testing of build logic becomes approachable, and when combined with the appropriate tooling is easily attainable.

Meet TestKit

One way to test build logic is to declare and execute it the same way as the end user would. In practice, this means creating a build script, adding the configuration you want to test and executing it with the Gradle runtime. The outcome of the build, such as the console output, the executed tasks, and produced artifacts, can be inspected and verified against expected assertions. This type of testing is commonly referred to as functional testing.

Let’s have a look at an example. In the following build script, we apply the Java plugin.


apply plugin: 'java'

Executing this build script with the compileJava task should produce class files for the Java source files found in the directory src/main/java. As an end user, we’d expect these class files to be located in the directory build/classes/main. Of course, you could verify this behavior by executing the given build script manually with the Gradle command and inspect the output directory. I hope the last sentence gave you the itch. We are automation engineers, so obviously we’ll want to automate as much as we can.

Meet the Gradle TestKit: a toolkit for executing functional tests in an automated fashion. TestKit is bundled with Gradle starting with version 2.6 and is available to be used in your projects now.

Using TestKit

There are typically two different use cases for adding TestKit to a project.

  1. Cross-version compatibility testing. You want to verify if a build script is compatible with a specific Gradle version. Organizations often apply this technique in preparation for a Gradle version upgrade of an existing build or when multiple versions of Gradle must be supported by the same build logic.

  2. Custom build logic testing. You want to test if your custom task or plugin behaves as expected under certain conditions that resemble real-world usage by a build script author. A typical example could be: “If a user applies this plugin and configures a property of my exposed extension, then a provided task should observe a specific runtime behavior and produce the output x when executed.” On top of this scenario, cross-version compatibility could play a role as well.

Given the last example, let’s have a look at how we can implement the scenario with the TestKit API. Note that the following test class uses the Spock test framework.


import org.gradle.testkit.runner.GradleRunner
import static org.gradle.testkit.runner.TaskOutcome.*
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification

class BuildLogicFunctionalTest extends Specification {
    @Rule final TemporaryFolder testProjectDir = new TemporaryFolder()
    File buildFile

    def setup() {
        buildFile = testProjectDir.newFile('build.gradle')
    }

    def 'produces class files when compiling Java source code'() {
        given:
        buildFile << "apply plugin: 'java'"
        // a minimal source file so that compileJava has work to do
        def javaSrcDir = new File(testProjectDir.root, 'src/main/java')
        javaSrcDir.mkdirs()
        new File(javaSrcDir, 'Hello.java') << 'public class Hello {}'

        when:
        def result = GradleRunner.create()
            .withProjectDir(testProjectDir.root)
            .withArguments('compileJava')
            .build()

        then:
        result.task(':compileJava').outcome == SUCCESS
        new File(testProjectDir.root, 'build/classes/main').exists()
    }
}

Even if you haven’t used Groovy or Spock before, it becomes apparent how easy it is to formulate a functional test case with the help of TestKit.

Tell Me More About TestKit

The previous code example uses Spock for implementing a test case. If you are not familiar with Spock or prefer a different test framework, you can still use TestKit. By design, TestKit is test framework-agnostic. It’s up to you to pick the test framework you are most comfortable with, whether that’s JUnit, TestNG or any other test framework out there in the wild.

For test scenarios that require you to execute a test with multiple Gradle distributions, e.g. in the context of cross-version compatibility testing, TestKit exposes API methods for providing the appropriate Gradle distribution information. You can either point to a local installation of Gradle, a distribution identified by version on a server hosted by Gradle Inc. or a distribution identifiable by URI.
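A sketch of what this looks like with the GradleRunner API (the version number and variables below are placeholders, not from the original post):

```groovy
// Cross-version testing: run the same build against a specific
// Gradle version, a local installation, or a distribution URI.
def result = GradleRunner.create()
    .withProjectDir(testProjectDir.root)
    .withArguments('build')
    .withGradleVersion('2.6')
    // .withGradleInstallation(someLocalGradleDir)
    // .withGradleDistribution(someDistributionUri)
    .build()
```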

As you execute the test, you might also want to step through the build logic under test for debugging purposes from the IDE of your choice. TestKit allows you to execute tests in debug mode to track down unexpected test runtime behavior.
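For example, debug mode is toggled on the runner itself (a sketch; the argument list is illustrative):

```groovy
// Run the build under test in the current JVM so that breakpoints
// set in custom task or plugin code are hit by the IDE debugger.
def result = GradleRunner.create()
    .withProjectDir(testProjectDir.root)
    .withArguments('build')
    .withDebug(true)
    .build()
```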

What’s on the Roadmap for TestKit

There’s more to come for TestKit. In the future, we want to make it even more convenient to use the TestKit API. You can read all about it in the design document. Let us know if you are interested in contributing! We’d love to see TestKit evolve.

Buildship: From Inception to Inclusion in Eclipse

At Gradle, we believe that maintaining developer “flow state” is essential to building good software. And because we believe flow is essential, we assert that developers should not have to leave the IDE to build, and they should not have to know what functions are being performed by the IDE and what is delegated to the build system. It is also our vision that all build logic is kept exclusively in the build system and thus all work to calculate the project configuration, to build the project, to run the tests and to run an executable is delegated from the IDE to the build system. Hence the IDE maps the projects of the build, visualizes the build models, and displays the progress of executing a build.

To realize our vision of ideal IDE-Gradle interaction, we resolved to build and offer our own reference implementation to guide implementers of other IDEs. Because of this, in the fall of 2014, Gradle Inc. decided to provide its own Eclipse plugin for Gradle to give users the best experience when working with Gradle from within Eclipse.

Buildship invited to join the Eclipse Mars release train

Soon after we had started with the implementation, the Eclipse Foundation asked us whether we would like to contribute an Eclipse plugin for Gradle and have the plugin become part of the Eclipse Mars release train. This fit well with our vision for demonstrating how IDEs and builds should interact, and further allowed us to serve the large global group of Eclipse users with the best Gradle support possible. We agreed, and the project onboarding process defined by the Eclipse Foundation started immediately.

Our first step was to find a name independent of either Gradle or Eclipse and get it trademarked, thus the Buildship project was born and made official. The suffix “ship” has a nice feel, denoting condition, character, office, skill such as “Fellowship” or “Statesmanship”, but also emphasizes the significance of shipping software. From the very beginning, Wayne Beaton from the Eclipse Foundation assisted us with all the countless steps involved in the formal process of going from a project without a name to a project that is part of the Eclipse Simultaneous Release.

Gradle builds Eclipse bundles

On the implementation side, in January 2015, we started by creating a Gradle build to compile, test, assemble, and deploy Eclipse bundles. No satisfying solutions existed that we could leverage. The work on our own new build was very incremental and the buildSrc feature of Gradle proved very valuable to quickly mature our build logic. Today, our build is very stable, its logic is packaged into Gradle plugins defined in the buildSrc folder, and the plugins are generic enough to be used by other Eclipse bundle projects. Looking at the advanced logic of our Gradle build, there is no way we could have achieved the same conciseness and expressiveness with Maven, which is still ubiquitous for Eclipse bundle projects.

Buildship debuts at EclipseCon NA in March

Once the Gradle build was established and a Continuous Integration pipeline set up on top of it, we were able to start focusing on the content of the Eclipse plugin for Gradle. We started with the Tasks View, took on the Project import, added Run Configurations and a Console View, and then integrated into the existing Eclipse Test Runner. This was the state that we presented at EclipseCon NA in March 2015.

Gradle dedicates more developers to Buildship

To accelerate the development of Buildship and to ensure that we would meet the deadline for the Eclipse Mars release in June 2015, we got Simon Scholz from Vogella GmbH to help us with work that required in-depth Eclipse knowledge, which proved to be invaluable. We also dedicated one more Gradle core developer to work on Buildship. Replacing our integration into the Eclipse Test Runner with our own Executions View was next. This view visualizes what happens when running a build, like what tasks are run, what tests are executed, and so on. The visualization happens based on events that Buildship receives from Gradle via the Tooling API. The Tooling API is a separate, standalone Gradle library with its own API that allows the IDE to communicate with a Gradle build through inter-process communication. The architecture where the Tooling API serves as a proxy of the Gradle build comes with many advantages, like process isolation, backward/forward compatibility, and contained build logic. Many enhancements have been added to the Tooling API during the development of Buildship, like event broadcasting, optimized classpath dependency calculation for Eclipse, and more.

Buildship released simultaneously with Eclipse Mars at EclipseCon France in June

With the Task View, the Executions View, the Console View, the Run Configurations, and the Project Import Wizard in place, we were ready to become part of Eclipse Mars. Unfortunately, not everyone was convinced that we were ready yet to become part of the Simultaneous Release, and so in June 2015, we shipped Buildship on the same day as Eclipse Mars but not yet in Eclipse Mars. The Eclipse Foundation had created a new entry in the Eclipse Marketplace for that purpose, with a select few plugins hosted there. This allowed all users to select and install the Buildship plugin right from within Eclipse. This was the state that we presented at EclipseCon France in June 2015.

Gradle continues to invest in Buildship features based on your feedback

After the release, the work on enhancing Buildship continued without interruption. We focused on enhancing and polishing the existing functionality, primarily based on feedback we had received through our Buildship Forum. The Forum is actively used to report issues, to request new features, and to ask questions. One topic that came up repeatedly was to extend the import functionality and to allow the user to explicitly refresh imported projects. Thus, we invested a significant amount of work into consolidating and at the same time extending the logic of importing a project, explicitly refreshing a project, and opening a project. We also added a new feature that allows a user to execute tests through Gradle from the Executions View and from the Source Editor. This is the state that we will present at EclipseCon Europe in November 2015.

Community tracks and fixes a mysterious bug

In time for Eclipse Mars.1, we finished our enhancements, became part of the Simultaneous Release, were included in three important Eclipse distributions (EPPs), and the Buildship project itself graduated out of incubation to become a full project. Due to a bug that was found in Buildship during the quiet period of the Mars.1 release, we had to provide a new version of Buildship, which delayed the final Mars.1 release by one week. The bug had been present since June, but nobody had made the connection between how the bug manifested itself and Buildship being the plugin causing the problem. The severity of the bug and how to go about fixing it were discussed and resolved openly on the Eclipse mailing list.

Buildship released in Eclipse Mars.1

On October 2nd, 2015, Eclipse Mars.1 was officially released with Buildship being the most prominent new feature in the release. Developing Buildship over the past ten months has been very interesting to everyone involved since it included working on Gradle core, on the Tooling API, on Buildship itself, and participating in the formal process.

Even more to come

The work on Buildship will continue. We want to bring much more of our vision on how the IDE should integrate with Gradle to reality: visualizing detailed information about the rich model behind a Gradle build, supporting the project configuration for web applications, debugging tests and web applications, code assistance, and more.

Try it now

The easiest way to try out Buildship is to download Eclipse Mars for Java developers, and choose File > Import… > Gradle Project and point to an existing Gradle Java project. Let us know if you experience build happiness (or some other sensation) while working with Buildship.

Introducing Continuous Build Execution

Optimizing the build-edit-build loop

In the past, we’ve recommended that you enable the Gradle Daemon (and parallel execution, with some caveats) to get the best performance out of Gradle. We’ve also talked about using incremental builds to speed up your build-edit-build feedback loop by skipping unnecessary work. Now there’s another optimization available—one that allows you to get out of the way and let Gradle start the build for you.

As of 2.5, Gradle supports continuous build execution, which automatically re-executes a build when changes to its inputs are detected. A few community plugins have added a similar Gradle “watch” mode in the past.

With Maven, the same watch functionality needs to be implemented for each plugin or you have to use a plugin that has a predefined set of goals. We wanted to do better than that. We wanted any plugin to be able to leverage the power of continuous builds without having to supply additional information. We also wanted the set of tasks to execute to be completely ad-hoc. Since Gradle needs to know a task’s inputs and outputs for incremental builds, we had all the information necessary to start watching for changes.

Using continuous build

Continuous build can be used with any task or set of tasks that have defined inputs and outputs. If you’re using well-behaved tasks, this shouldn’t be a problem for most builds. If you find that your build isn’t rebuilding with continuous build as you think it should, it could point to a problem in your build script.
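As a sketch of what “well-behaved” means here, an ad-hoc task just needs to declare what it reads and writes (the task name and paths below are invented for illustration):

```groovy
task copyDocs {
    inputs.dir 'src/docs'            // continuous build watches this
    outputs.dir "$buildDir/docs"
    doLast {
        copy {
            from 'src/docs'
            into "$buildDir/docs"
        }
    }
}
```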

Command-line option

You enable continuous build with the -t or --continuous command-line option along with whichever tasks you want to run (we call these task selectors). At least one task that runs needs to define inputs to enter continuous build mode.

For example, on a typical Java project,

$ gradle -t test 

would enable continuous build and re-run tests any time the main sources or test sources change.

We’re not limited to a single task, so we could also re-run tests and FindBugs on the main sources using 

$ gradle -t test findBugsMain

Determining when to run another build

When you run Gradle with the continuous build option, Gradle executes the build as usual, except Gradle also registers the inputs to all tasks with a file watch service. Even tasks that are UP-TO-DATE will have their inputs recorded, so all inputs can be considered when triggering a new build. This means that you don’t have to start from a clean build for Gradle to know which inputs could change in continuous build mode.

After the end of the build, Gradle will start watching for file system changes based on the collected inputs. The Gradle command-line interface will display the message Waiting for changes to input files of tasks on the console and will wait for changes to inputs. If any of the input files are changed or deleted, Gradle will execute another build with the identical set of task selectors. Gradle can detect changes to simple files (deleted, modified) and changes to directories (deleted or new files).

See a demo of this in action:

Exiting continuous build

Once Gradle is running in continuous build, it will not exit, even if the build is not successful. To get out of continuous build, you should use Ctrl-D to cancel the build. On Microsoft Windows, you must also press ENTER or RETURN after Ctrl-D.

If you use Ctrl-C, Gradle will exit abruptly and also kill the Gradle Daemon.

UPDATE: As of Gradle 3.1, Ctrl-C no longer kills the Gradle Daemon.


The User Guide chapter describes all limitations and quirks with continuous build.

Requires Java 7 or better

Gradle uses Java 7’s WatchService to watch for changes to inputs. This functionality is only available on JDK 7 or later.

Mac OS X performance

For GNU/Linux and Microsoft Windows, the file system change events are provided through a kernel service. For Mac OS X, Java falls back to a polling-based system. This means on Mac OS X only, change detection on a very, very large number of input files may be delayed and, in some cases, cause a deadlock. Both of these issues are tracked as JDK bugs: JDK-7133447 and JDK-8079620.

Changes to build scripts

Gradle doesn’t consider changes to your build logic when in continuous build mode. Build logic is created from build.gradle, settings.gradle, and other sources. If you make changes to your build scripts, you’ll have to exit continuous build and restart Gradle. Future versions of Gradle will make it easier to describe inputs to your build logic so that continuous build can work with this as well.

Future improvements

In addition to mitigating some of the limitations with the current implementation, there are other interesting things we can use continuous build to accomplish.

Right now, there is no supported, public way of managing a process started by Gradle that needs to live on between builds. Gradle expects that any process it starts (e.g., via Exec) will exit as part of the build.

In the next release (2.6), Play support is coming to Gradle, and with that you’ll be able to start Play applications in a separate JVM for local development. With continuous build enabled, Gradle will hot-reload the Play application whenever classes or assets are changed. The Play plugin accomplishes this by registering the Play JVM with Gradle in a way that survives between builds.

We want to eventually evolve this Play specific reload functionality into a general feature, so plugins can have their own “hot-reload”-like behavior.

Another opportunity for improvement is up-to-date checking. For very large projects, up-to-date checking can be time consuming for the no-op case. When looking for out-of-date files, Gradle must scan entire directories or recalculate file checksums. When using continuous build, Gradle must already keep track of file and directory changes, so in some cases, Gradle may be able to skip checks for files that are known to have not changed.


Please let us know on the forums if you run into any surprises with this new feature.

Introducing Incremental Build Support

Task inputs, outputs, and dependencies

Built-in tasks, like JavaCompile, declare a set of inputs (Java source files) and a set of outputs (class files). Gradle uses this information to determine whether a task is up-to-date and needs to perform any work. If none of the inputs or outputs have changed, Gradle can skip that task. Altogether, we call this behavior Gradle’s incremental build support.

To take advantage of incremental build support, you need to provide Gradle with information about your tasks’ inputs and outputs. It is possible to configure a task to only have outputs. Before executing the task, Gradle checks the outputs and will skip execution of the task if the outputs have not changed. In real builds, a task usually has inputs as well—including source files, resources, and properties. Gradle checks that neither the inputs nor outputs have changed before executing a task.

Often a task’s outputs will serve as the inputs to another task. It is important to get the ordering between these tasks correct, or the tasks will run in the wrong order or not at all. Gradle does not rely on the order in which tasks are defined in the build script. New tasks are unordered, so execution order can change from build to build. You can explicitly tell Gradle about the ordering between two tasks by declaring a dependency between one task and another, for example consumer.dependsOn producer.

Declaring explicit task dependencies

Let’s take a look at an example project that contains a common pattern. For this project, we need to create a zip file that contains the output from a generator task. The manner in which the generator task creates files is not interesting—it produces files that contain an incrementing number.


apply plugin: 'base'

task generator() {
    doLast {
        def generatedFileDir = file("$buildDir/generated")
        generatedFileDir.mkdirs()
        for (int i=0; i<10; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

task zip(type: Zip) {
    dependsOn generator
    from "$buildDir/generated"
}

The build works, but the build script has some issues. The output directory for the generator task is repeated in the zip task, and the dependencies of the zip task are explicitly set with dependsOn. Gradle appears to execute the generator task each time, but not the zip task. This is a good time to point out that Gradle’s up-to-date checking is different from that of other tools, such as Make: Gradle compares the checksums of the inputs and outputs instead of only the timestamps of the files. Even though the generator task runs each time and overwrites all of its output files, the content does not change, so the zip task does not need to run again: the checksum of the zip task’s inputs has not changed. Skipping up-to-date tasks lets Gradle avoid unnecessary work and speeds up the development feedback loop.

Declaring task inputs and outputs

Now, let’s understand why the generator task seems to run every time. If we take a look at Gradle’s info-level logging output by running the build with --info, we will see the reason:

Executing task ':generator' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.

We can see that Gradle does not know that the task produces any output. By default, if a task does not declare any outputs, it is considered out-of-date. Outputs are declared through the task’s TaskOutputs object. Task outputs can be files or directories. Note the use of outputs below:


task generator() {
    def generatedFileDir = file("$buildDir/generated")
    outputs.dir generatedFileDir
    doLast {
        generatedFileDir.mkdirs()
        for (int i=0; i<10; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

If we run the build two more times, we will see that the generator task says it is up-to-date after the first run. We can confirm this if we look at the --info output again:

Skipping task ':generator' as it is up-to-date (took 0.007 secs).

But we have introduced a new problem. If we increase the number of files generated (say, from 10 to 20), the generator task does not re-run. We could work around this by doing a clean build each time we need to change that parameter, but this workaround is error-prone.

We can tell Gradle what can impact the generator task and require it to re-execute. We can use TaskInputs to declare certain properties as inputs to the task as well as input files. If any of these inputs change, Gradle will know to execute the task. Note the use of inputs below:


task generator() {
    def fileCount = 10
    inputs.property "fileCount", fileCount
    def generatedFileDir = file("$buildDir/generated")
    outputs.dir generatedFileDir
    doLast {
        generatedFileDir.mkdirs()
        for (int i=0; i<fileCount; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

We can check this by examining the --info output after we change the value of the fileCount property:

Executing task ':generator' (up-to-date check took 0.007 secs) due to:
Value of input property 'fileCount' has changed for task ':generator'

Inferring task dependencies

So far, we have only worked on the generator task, and we have not reduced any of the repetition in the build script. We still have an explicit task dependency and a duplicated output directory path. Let’s try removing the explicit task dependency by relying on how CopySpec#from evaluates its arguments with Project#files: Gradle can automatically add task dependencies for us. This also adds the output of the generator task as inputs to the zip task.


task zip(type: Zip) {
    from generator
}

Inferred task dependencies can be easier to maintain than explicit task dependencies when there is a strong producer-consumer relationship between tasks. When you only need some of the output from another task, explicit task dependencies will usually be cleaner. There is nothing wrong with using both explicit task dependencies and inferred dependencies, if that is easier to understand.

Simplifying with a custom task

We call tasks like generator ad-hoc tasks. They do not have well-defined properties nor predefined actions to perform. It is okay to use ad-hoc tasks to perform simple actions, but a better practice is to move ad-hoc tasks into custom task classes. Custom tasks let you remove a lot of boilerplate and standardize common actions within your build.

Gradle makes it really easy to add new task types. You can start playing around with custom task types directly in your build file. When using annotations like @OutputDirectory, Gradle will create output directories before your task executes, so you do not have to worry about making the directories yourself. Other annotations, like @Input and @InputFiles, have the same effect as manually configuring a task’s TaskInputs.

Try creating a custom task class named Generate that produces the same output as the generator task above. Your build file should look like the following:


task generator(type: Generate) {
    fileCount = 20
}

task zip(type: Zip) {
    from generator
}

Here is our solution:


class Generate extends DefaultTask {
    @Input
    int fileCount = 10

    @OutputDirectory
    File generatedFileDir = project.file("${project.buildDir}/generated")

    @TaskAction
    void perform() {
        for (int i = 0; i < fileCount; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}
Notice that we no longer need to create the output directory manually; the @OutputDirectory annotation on generatedFileDir takes care of this for us. The @Input annotation on fileCount tells Gradle that this property should be considered an input, just as we declared before with inputs.property. Finally, the @TaskAction annotation on perform() defines the action for Generate tasks.

Final notes about incremental builds

When developing your own build scripts, plugins and custom tasks, declaring task inputs and outputs is an important technique to keep in your toolbox. All of the core Gradle tasks use this to great effect. If you would like to learn about other ways to make your tasks incremental at a lower level, take a look at the incubating incremental task support. Incremental tasks provide a fine-grained way of building only what has changed when a task needs to execute.
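To give a taste of that lower-level support, an incremental task action can receive information about exactly which input files changed. The following is a sketch based on the incubating IncrementalTaskInputs API as it existed at the time; the class name IncrementalTransform is hypothetical and the API details may have evolved since:

```groovy
import org.gradle.api.tasks.incremental.IncrementalTaskInputs

class IncrementalTransform extends DefaultTask {
    @InputDirectory
    File inputDir

    @OutputDirectory
    File outputDir

    @TaskAction
    void execute(IncrementalTaskInputs inputs) {
        // Process only the input files that are new or have changed
        inputs.outOfDate { change ->
            new File(outputDir, change.file.name).text = change.file.text
        }
        // Clean up outputs whose corresponding inputs were deleted
        inputs.removed { change ->
            new File(outputDir, change.file.name).delete()
        }
    }
}
```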

Gradle's Support for Maven POM Profiles

Maven profiles provide the ability to customize build-time metadata under certain conditions, for example if a specific system property is set. A typical use case is applying certain configuration for different runtime environments such as Linux versus Windows. For example, the project being built may require different dependencies on these different platforms.

Implementing build-time profiles in Gradle projects

If you think about it, Maven profiles are logically nothing more than limited if statements, and Gradle needs no special construct for them. Why? Gradle uses a programming language, not XML, to define the build model. This gives you the full range of expression of a programming language for defining the criteria under which certain parts of the configuration apply. Gradle's model is richer and more flexible, allowing a more fine-grained expression of the kinds of things Maven profiles are typically used for. One simple and widely used pattern is to split the conditional logic into separate script plugins, keeping the conditional build logic as modular and maintainable as possible. The following code snippet demonstrates such an implementation:

if (project.hasProperty('env') && project.getProperty('env') == 'prod') {
    apply from: 'gradle/production.gradle'
} else {
    apply from: 'gradle/development.gradle'
}

The above approach allows ad-hoc expression of different configuration under whatever circumstances you need; here, for example, running ./gradlew build -Penv=prod selects the production configuration. This is analogous to the Maven profile approach, but cleaner and more flexible.

An alternative approach to conditional configuration for expressing variance is to make variance a first class citizen. The Android and C/C++ Gradle plugins take this approach. This is indicative of what the Gradle team believes to be a much better approach for most scenarios: making the variance in the build definition a first-class citizen as opposed to something that is expressed conditionally. This variance-as-first-class-citizen approach is being added to the JVM-based plugins to cater for variance such as JDK compatibility, and deeply embedded into Gradle’s dependency management engine.
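For example, the Android plugin expresses variance declaratively rather than through conditional logic. The following is an illustrative sketch only; the flavor names are hypothetical and not part of the original post:

```groovy
android {
    productFlavors {
        free {
            // configuration specific to the free variant
        }
        paid {
            // configuration specific to the paid variant
        }
    }
}
```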

Consuming Maven dependencies that rely on profiles

We have discussed the role of Maven profiles in configuring a build and alternative approaches in Gradle. As Gradle supports depending on POM-based artifacts we must also consider how Maven profiles affect dependency resolution.

Some published Maven artifacts, especially in the public Maven Central repository, rely on the information declared in their POM profiles during dependency resolution. This practice is considered an anti-pattern to be avoided, as it undermines the reproducibility of a build. Nevertheless, it is used in practice, and Gradle 1.12 introduces support for some profile usage that affects dependency resolution.

Supported profile criteria in Gradle

There are two activation criteria Gradle considers:

  1. Profiles that are active by default (available with Gradle 1.12 and higher)
  2. Profiles that are active in the absence of a system property (available with Gradle 2.0 and higher)

Let’s have a look at examples that demonstrate both use cases.

Profiles that are active by default

Profiles can be considered active by default. This behavior is controlled by the activation element activeByDefault. Let’s consider the following profile section of a published POM file:


We can see that the element activeByDefault is set to true. When Gradle resolves the corresponding artifact, the profile named profile-1 becomes active. This profile declares the dependency org.apache.commons:commons-lang3:3.3.2, which becomes a transitive dependency of the consuming project; Gradle takes care of resolving it as well.
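The profile section described above might look like the following. This is a reconstruction based on the surrounding text, since the original POM snippet is not reproduced here:

```xml
<profiles>
    <profile>
        <id>profile-1</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <dependencies>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-lang3</artifactId>
                <version>3.3.2</version>
            </dependency>
        </dependencies>
    </profile>
</profiles>
```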

Profiles that are active in the absence of a system property

An alternative to activeByDefault is to trigger the activation of a profile by the absence of a specific system property. Let’s have a look at another POM file that uses this type of profile activation:


This POM file will activate the profile named profile-2 whenever the system property env.type is not defined.
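An activation section matching this description might look like the following. This is a reconstructed sketch, since the original POM snippet is not reproduced here; in Maven's activation syntax, a leading exclamation mark means the profile activates when the named property is absent:

```xml
<profiles>
    <profile>
        <id>profile-2</id>
        <activation>
            <property>
                <name>!env.type</name>
            </property>
        </activation>
        <!-- dependencies declared here become active when env.type is not set -->
    </profile>
</profiles>
```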

Alternative approaches to profiles for dependency resolution

We have seen how Maven profiles are used to express variance in build configuration and how that affects Gradle dependency resolution when consuming a Maven artifact that uses profiles as part of its runtime definition. We have also seen how the next generation of Gradle plugins (Android, C/C++) take a different approach to variance in making it first class, and that this concept is also coming to Gradle’s support for general JVM languages. This also has deep implications for dependency resolution, in a similar manner to how Maven profiles affect dependency resolution.

One significant benefit of making variance a first-class citizen in the build model is that this information can be leveraged to provide variant-aware dependency resolution. For example, when building a binary for the x86 architecture with debug symbols, the x86 variants of the declared dependencies should be used, preferably variants that include debug symbols if available. This will be enforced automatically for the whole transitive dependency graph. There are countless cases of such variance in the building of software and its dependency management.

Hans Dockter, the founder of Gradle and CEO of Gradle Inc., recently published the 2014 Gradle Roadmap, where you'll see Variant Aware Dependency Management as a key item currently being worked on. Expect to see new Gradle features roll out in this area over the coming year.