Hello, Again

Welcome to the new Gradle blog. On behalf of the whole team, it’s my pleasure to write this first post and share a bit about what we’re up to.

Mainly, we’re putting together this new blog because we want to make sure users find out about the most important developments in Gradle-land. Our team has grown rapidly over the last couple years, and as a result we’ve shipped many new features and improvements. All too often, though, we find that would-be users never hear about them. This recent tweet provides a perfect example:

Cristian’s question is a fair one. We first shipped Play support over a year ago; we mentioned it in our Gradle 2.6 forum announcement and release notes, and we wrote a chapter about it in our user manual. Still, Cristian—and probably many others—missed it. How is that?

The answer is pretty straightforward. Forum announcements and release notes are useful documents, but they get buried as new releases pile up. Reference documentation is important too, but our user manual has grown large, meaning that Chapter 70: Building Play Applications is easy to miss if you’re not already looking for it.

This situation isn’t surprising, nor is it unique to Gradle. As any project grows, it becomes a challenge to communicate about it effectively. In any case, we can no longer expect every current and future Gradle user to dig through past release notes or to read our user manual cover to cover simply to discover what Gradle can do. It’s on us to find better ways to get the word out.

And that’s where this new blog comes in. We’ll post here whenever there’s something new in Gradle that we don’t want you to miss. We’ll keep it focused on things we think are important and worth your time. We hope it’ll become a trusted resource—not only for you to stay up to date with Gradle, but also for us to get feedback through your comments.

A better blog isn’t a silver bullet, but we think it’s a great place to start. Indeed, it’s just the first step in a much larger effort to make working with and learning about Gradle as easy and enjoyable as possible. In the weeks to come you’ll see further improvements, including new guides and tutorials, a new gradle.org website, and a simpler process for filing bugs and feature requests.

Naturally, we’ll announce each of these changes here on the blog. To stay tuned, subscribe to the blog’s Atom feed or follow @Gradle on Twitter, where we’ll link to new posts as we publish them.

Gradle 3.0 M2: Initial Java 9 Support and Performance Improvements

The second milestone of Gradle 3.0 has just been released, and this version comes with initial support for Java 9!

This means that Gradle now runs properly when executed on the latest Java 9 EAP builds, and also that you can build and run tests using early versions of JDK 9. It is important to understand, however, that while you can compile and test applications with JDK 9, we do not yet support modules, nor any JDK 9-specific compile options (like -release or -modulepath). That said, we would greatly appreciate feedback from your own projects.
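As a sketch of what trying this out might look like, you can fork compilation so that javac runs from a JDK 9 install (the path below is a hypothetical placeholder; point it at your own JDK 9 EAP home):

```groovy
// build.gradle — fork compilation so javac runs from a JDK 9 EAP build.
// The installation path below is hypothetical.
apply plugin: 'java'

compileJava {
    options.fork = true
    options.forkOptions.executable = '/path/to/jdk9-ea/bin/javac'
}
```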

More performance improvements

This milestone is also a good moment to check out our latest and greatest performance improvements. It’s always better to perform measurements on real life builds, so the example below uses the Golo programming language as a guinea pig, and compares the execution time of a clean build of this project. The left pane is using Gradle 2.12 while the right pane is using Gradle 3.0 M2 with a “hot” daemon:

As you can see, having the daemon on by default makes your builds significantly snappier, although the performance improvements we’ve made since Gradle 2.12 go beyond just using the daemon. Those of you who were already enabling it in previous versions of Gradle should also see better performance, as this next screencast shows:

Since Gradle 2.12, we’ve made significant progress that can be summarized in a few lines:

  • configuration time is now faster, meaning that the time between the moment you invoke a Gradle task and the moment the task actually executes is much shorter. This is especially apparent on large multi-module builds.
  • execution with the daemon has been optimized; since Gradle 3.0 enables it by default, you will immediately benefit from faster builds.
  • build script caching has been reworked so that subsequent builds are not only faster to configure, but builds running concurrently will no longer hang. This is particularly important for non-isolated builds running on a CI server.

As an illustration of these improvements, we executed gradle help with the daemon enabled on the Apereo CAS project, a large multi-project build that typically benefits greatly from these improvements. Again, the left side is using Gradle 2.12, while the right side uses 3.0 M2:

Last but not least, we also took a look at the rare cases where Gradle was still slower than Maven and fixed those. The following screencast is an illustration of what you can expect from Gradle 3.0 compared to Maven. This project features a build with 25 subprojects, each having around 200 files and unit tests. Then we ask both Gradle and Maven to assemble it without running tests.

Ultimately, one of the biggest differences between Gradle and Maven is that Gradle is aware of all the inputs and outputs of tasks. As such, it’s smart enough to know when it has to do something and when it doesn’t. So when we execute the same tasks again, it will not re-execute them if nothing has changed:
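To give a feel for how this works, here is a minimal sketch of a task that declares its inputs and outputs (the task and file names are made up for illustration):

```groovy
// build.gradle — a task that declares its inputs and outputs.
// Gradle snapshots these declarations to decide whether the task
// needs to run again. Task and file names here are illustrative.
task generateVersionFile {
    def input = file('version.txt')
    def output = file("$buildDir/generated/version.properties")
    inputs.file input
    outputs.file output
    doLast {
        output.parentFile.mkdirs()
        output.text = "version=${input.text.trim()}"
    }
}
```

Assuming version.txt exists, running the task twice in a row should leave the second invocation reported as UP-TO-DATE, since neither the input nor the output has changed.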

Check out our performance guide

Having high performance builds is key to build happiness! As such, we focus heavily on performance improvements to Gradle itself. However, there are also many things that users can do to make their builds faster. To that end, we’re currently writing a performance guide, and we invite everyone to take a look at it. It’s in draft form at the moment, but already contains many valuable hints about how to make your Gradle builds even snappier. Please do give it a read, and we’d love to hear your feedback via the guide’s GitHub Issues.

Gradle 3.0 M1: Unleash the Daemon!

The first milestone release toward Gradle 3.0 has just been published, and among many smaller improvements, it contains two major features we’d like to get your feedback on.

The first feature is support for writing Gradle build scripts in Kotlin, and you can read all about it in last month’s Kotlin meets Gradle blog post. While still in the early stages of development, this functionality is now available out of the box with Gradle 3.0 M1, and we’d love to hear what you think. See the Gradle Script Kotlin 0.1.0 release notes for complete details and getting started instructions.

The second feature is that the Gradle Daemon is now enabled by default. This is a big deal, and I want to focus on it for the rest of this post.

What is the Gradle Daemon?

The Gradle Daemon is a long-running background process—a “warm JVM”—that Gradle can connect to from the command line or from within your IDE. The daemon keeps track of all kinds of information about your project’s build and makes that information available instantaneously. The result is that Gradle doesn’t have to recalculate all that information each time you run gradle build or gradle test or any other task. And the benefit to you, of course, is that your build times get a whole lot faster.

The daemon has been available since way back in Gradle 0.9, but until now it’s been an opt-in feature, meaning that users have had to provide the --daemon option at the command line or otherwise set Gradle’s org.gradle.daemon property to true in order to take advantage of it. This approach worked perfectly well, but only if you knew about it. And while many Gradle users do know about the daemon, and make use of it every day, we know from interacting with our community that the majority of Gradle users do not yet take advantage of the daemon.
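Concretely, opting in before 3.0 (or opting out from 3.0 onward) is a one-line entry in your gradle.properties file:

```
# ~/.gradle/gradle.properties
# Opt in before Gradle 3.0:
org.gradle.daemon=true
# From Gradle 3.0 onward the daemon is on by default;
# uncomment the following line to opt out:
# org.gradle.daemon=false
```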

So if you haven’t been familiar with the daemon until now, or if you’ve struggled to make your colleagues aware of this feature, don’t worry—you’re not alone. It’s for these reasons that we’ve made enabling the daemon by default a key part of Gradle 3.0.

Unleashing the Daemon

Over the years, the daemon has become an advanced and highly capable piece of software. But enabling it by default isn’t as simple as just “flipping a switch”. This change is profound—it means that from 3.0 forward, every Gradle user will interact with the daemon on every build unless they explicitly opt out. That’s a whole lotta builds, and with numbers like that, it means that even the tiniest bugs and most obscure corner cases are likely to surface where they haven’t before. This is why we’ve invested significant engineering effort over the last several months to ensure the daemon is up to this new challenge—and it’s why we’re so eager to get your feedback on this first milestone.

Trying out 3.0 M1 and its auto-enabled daemon is easy. Just update your Gradle wrapper using the following command and then run your builds as you normally would:

gradle wrapper --gradle-version 3.0-milestone-1

And of course, if you have any problems, comments or questions, let us know!

The Daemon in action

The short asciicast below demonstrates upgrading a project’s build from Gradle 2.13 to 3.0 M1 and shows off the resulting performance gains side-by-side in a before-and-after comparison. As you’ll see, the results are pretty dramatic—the project’s clean build time is cut by more than half:

There’s more performance goodness to come

If you run Gradle builds at all, there’s a good chance you run them dozens of times every day. This means that every second counts, and that’s why enabling the daemon by default is just one of a number of performance-related efforts we’ve been up to. Indeed, consistent with the other meaning of the word daemon, you might say we’ve been hell-bent on making Gradle the fastest and most capable build system in the world. There’s plenty more work to do to achieve that goal, and your feedback will help every step of the way. To that end, we hope you’ll check out the 3.0 M1 release notes, try it out for yourself, and let us know how it goes. Thanks!

Kotlin Meets Gradle

Many readers will be familiar with JetBrains’ excellent Kotlin programming language. It’s been under development since 2010, had its first public release in 2012, and went 1.0 GA earlier this year.

We’ve been watching Kotlin over the years, and have been increasingly impressed with what the language has to offer, as well as with its considerable uptake—particularly in the Android community.

Late last year, Hans sat down with a few folks from the JetBrains team, and they wondered together: what might it look like to have a Kotlin-based approach to writing Gradle build scripts and plugins? How might it help teams—especially big ones—work faster and write better structured, more maintainable builds?

The possibilities were enticing.

Because Kotlin is a statically-typed language with deep support in both IDEA and Eclipse, it could give Gradle users proper IDE support from auto-completion to refactoring and everything in-between. And because Kotlin is rich with features like first-class functions and extension methods, it could retain and improve on the best parts of writing Gradle build scripts—including a clean, declarative syntax and the ability to craft DSLs with ease.

So we got serious about exploring these possibilities, and over the last several months we’ve had the pleasure of working closely with the Kotlin team to develop a new, Kotlin-based build language for Gradle.

We call it Gradle Script Kotlin, and Hans just delivered the first demo of it onstage at JetBrains’ Kotlin Night event in San Francisco. We’ve published the first pre-release toward version 1.0 of this work today, along with open-sourcing its repository at github.com/gradle/gradle-script-kotlin.

So what does it look like, and what can you do with it? At a glance, it doesn’t look too different from the Gradle build scripts you know today:

build.gradle.kts

But things get very interesting when you begin to explore what’s possible in the IDE. You’ll find that, suddenly, the things you usually expect from your IDE just work, including:

  • auto-completion and content assist
  • quick documentation
  • navigation to source
  • refactoring and more

The effect is dramatic, and we think it’ll make a big difference for Gradle users. Now, you might be wondering about a few things at this point—like whether existing Gradle plugins will work with Gradle Script Kotlin (yes, they will), and whether writing build scripts in Groovy is deprecated (no, it’s not). You can find complete answers to these and other questions in the project FAQ. Do let us know if you have a question that’s not answered there.

Of course, all this is just the beginning. We’re happy to announce that Kotlin scripting support will be available in Gradle 3.0, and we’ll be publishing more information about our roadmap soon. In the meantime, there’s no need to wait—you can try out Gradle Script Kotlin for yourself right now by getting started with our samples.

And we hope you do, because we’d love your feedback. We’d love to hear what you think, and how you’d like to see this new work evolve. You can file issues via the project’s GitHub Issues and please come chat with us in the #gradle channel of the public Kotlin Slack.

I’d like to say a big thanks to my colleague Rodrigo B. de Oliveira for the last few months of working together on this project—it’s been a lot of fun! And a big thanks to the Kotlin team, in particular Ilya Chernikov and Ilya Ryzhenkov for being so responsive in providing us with everything we needed in the Kotlin compiler and Kotlin IDEA plugin. Onward!

Performance is a Feature

At Gradle Inc., we take build performance seriously. While we bundle performance improvements into every Gradle release, we’ve kicked off a concerted effort, which we call a performance burst, starting with Gradle 2.13, in order to make building software faster and more enjoyable for all of our users. In this blog post, we will explore how we approach performance issues, as well as what improvements to expect in the 2.13 release and beyond.

The fastest thing to do is nothing

Building software takes time, which is why the biggest performance improvement is cutting steps out of the process entirely. That’s why, unlike traditional build tools such as Maven or Ant, Gradle focuses on incremental builds. Why would you ever run clean when you don’t need to? For some developers, running clean has become a conditioned response to a broken build tool. Gradle doesn’t have that problem: because it is aware of all the inputs and outputs of a task, it can handle incremental builds reliably. Most builds will be incremental, and that’s why we focus so heavily on optimizing this case. One way we accomplish this is through the Gradle daemon.

The Gradle daemon can dramatically improve your build performance by allowing build data to persist in memory between build invocations and avoiding JVM startup times on each build. The daemon is a hot JVM hosting the Gradle runtime, making it possible to run subsequent builds much faster: instead of spawning a new JVM for each build, we can benefit from all the goodness of having a cached JVM – in particular, we realize a strong benefit from JIT (just in time compilation). While turning on the daemon has a cost for the first build, the amount of time that you will gain for each subsequent build more than offsets the initial cost. In Gradle 2.13, we focused our improvements on the case where the daemon is activated, and we’re preparing to enable it by default in Gradle 3.0. Other performance improvements we’ve implemented will benefit all users, independently of whether they use the daemon or not (and if you don’t use the daemon yet, we strongly encourage you to try it out!).

As you can read in our release notes, we’ve emphasized several categories of performance improvements:

  • reducing the build configuration time, that is to say, reducing the fixed cost of creating and configuring a Gradle build
  • reducing the test execution time; i.e., reducing the overhead of Gradle compared to just executing tests in an IDE
  • improving the performance of importing a project in an IDE
  • reducing communication latency between the interactive Gradle client and the daemon process

Reducing configuration time

Here’s an idea of the improvement you can expect:

many empty.png

The example above shows a typical performance test metric: we’re comparing the average execution time of gradle help on a project that contains a lot of subprojects (10000 of them). You can see that when we started optimizing configuration time, the master branch was slower than Gradle 2.7. Now, Gradle 2.13 is faster than ever: we have measured up to a 25% reduction on our own builds! More important than the improvement itself, though, is how we got there. Improving performance is a process, and here is how it works.

Performance test suite

The Gradle sources contain a sub-project dedicated to performance tests. This test suite is quite specialized, and allows us:

  • to compare the performance of the master branch with previous releases of Gradle
  • to compare various build scenarios against a single version of Gradle

So typically, in the example above, we’re comparing the average execution time of a build, when we run gradle help, in a specific scenario (an empty build with 10000 sub-projects), and compare it with previous Gradle releases. It’s worth noting that this performance test suite is executed daily, allowing us to catch performance regressions very early in the development phase.

Writing a performance test scenario

So how, in practice, do we write a performance test? It all starts with a scenario we want to test. For example, we want to make sure that we reduce the duration of test execution. The first step is to write a build template that lets us test Gradle against this scenario. A template has various parameters: the number of sub-projects, the number of (test) classes in the source, external dependencies, and so on. This lets us generate sample Gradle builds that are used to measure performance. Of course, those performance test builds are generated with Gradle.

All the graphs you see below were generated using fully automated performance tests, and aimed at testing specific scenarios. Should you find a performance issue with Gradle, this is a great way to get started: create a new template, then send us a pull request to show the problem. Of course, all our performance tests are regular test cases, which means that we can fail the build if we introduce a regression.

Since Gradle 2.13 is primarily a performance-enhancing release, let’s focus on some of the improvements.

Gradle vs Maven

In this scenario, we are comparing the time it takes to execute gradle clean test vs mvn clean test. As we mentioned earlier, cleaning is not necessary in Gradle, but we do it here for the sake of comparison against Maven, and to assess the “cold build” time. Here are the results:

gradle vs maven clean build.png

At the end of February, Maven and Gradle were comparable. Since then, the new performance improvements in Gradle 2.13 have resulted in a 10% speedup! You may notice that the graph contains some glitches: on April 2nd, the time increased considerably. However, it increased in both scenarios, Maven and Gradle. What you need to keep in mind when reading such graphs is that results are only comparable with each other for a given date. This is important because:

  • we could change the templates between two executions of the performance build, resulting in an increase or decrease of the build time
  • we could change hardware between two executions, with the same effect

Profiling is better than guessing

So how did we manage to improve this? First of all, once a scenario is written and the performance tests are running, we need to profile the builds. For that purpose, we use different tools, from YourKit Java Profiler to Java Mission Control, the JIT logs, or simply good old System.out.println statements. In the end, we try to identify what is causing the slowdown and write a document summarizing our findings. Those documents are all public, and you can find them in our GitHub repository. Once we’ve identified hotspots and written down the profiling results, we extract stories for improvement and move on to the implementation phase. This “profiling to stories” phase is very important because, while a profiler is very helpful for identifying hotspots, it is no help when it comes to interpreting the results: often, rewriting an algorithm can be much more effective than trying to optimize a SAX parser…

Optimizing the communication between the daemon and the client

As we explained, we’re primarily (but not only) focusing on improving performance when the daemon is activated. One issue with the daemon is that you have a forked JVM. When you run gradle, the client process, the one from the command-line, starts communicating with a long-living process, the daemon, which is effectively executing the build. And typically, to see the logs as the build is running, you need to forward events from the daemon to the client. Before 2.13, this communication was synchronous. This means that the log messages were sent synchronously between the daemon and the client. This was inefficient, because we were blocking on network I/O where we could actually perform some build operations. In 2.13, not only is communication asynchronous, but we also optimized the protocol that is used to communicate between the client and the daemon and how the client responds to these events.

Forked processes start up faster

Another improvement that was made is visible in the following scenario:

gradle vs maven cleanTest test.png

This scenario is “unfair” to Gradle, and is meant to compare what happens when we just want to re-execute the tests. As you may know, when running mvn test, Maven will re-execute the tests even if nothing changed. Gradle does nothing in that case, because everything is “up-to-date”. So to emulate the behavior of Maven, we need to clean up the test results so that we re-execute the tests and re-generate the reports. As you can see, Gradle used to be significantly slower than Maven in this scenario. Now it is faster, while also doing more work: Gradle not only runs the tests, but also generates 3 types of reports: a binary one, an XML one (for CI integration) and finally an HTML report (for use by us, poor humans, but you can disable this behavior). Gradle 2.12 is 15% slower in this scenario, and a large part of the improvement came from optimizing the classpath of the forked JVMs used for tests. In 2.12, almost the whole Gradle classpath was used in forked VMs, whereas in reality we just need a subset of Gradle classes (basically, those needed to communicate between the forked VM and the daemon). By optimizing this classpath, we can now reduce classpath scanning and significantly improve the time it takes to execute tests. If you ever noticed a “pause” when Gradle was about to execute tests, it is now gone!
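If you don’t need them, the reports can be switched off per test task. A minimal sketch:

```groovy
// build.gradle — disable report generation for the 'test' task
test {
    reports {
        html.enabled = false      // skip the human-readable HTML report
        junitXml.enabled = false  // skip the XML report used by CI servers
    }
}
```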

Reports are generated in parallel

Part of the improvement in test execution also comes from generating reports in parallel. As we explained, Gradle generates more reports than Maven by default. This is usually what you want, because when you’re developing an application and running tests locally, having to decipher XML test reports can be very frustrating. With Gradle 2.13, the HTML and XML reports are generated in parallel, which significantly reduces the time before the test suite of the next project starts. The more modules your project has, the more likely you are to see a significant reduction in build duration.

Improving build startup time

Faster script compilation

When executing a Gradle build for the first time, you can see, as part of the “configuration” phase, that Gradle is actually compiling the build scripts. Despite being scripts, Gradle build files are written in Groovy and are compiled to bytecode. This is time-consuming: in particular, Gradle has to compile each script several times, with different classpaths, in order to handle scripts that contain references to remote resources such as plugins. This is an area the Gradle team has been optimizing.

In Gradle 2.13, we changed the way Gradle scripts are compiled, and optimized two scenarios:

  • running several builds concurrently from the same directory (this often happens on CI). Previously, the “script cache” that Gradle uses was locked during the execution of a build, so if a build script was changed during the execution of a build, all concurrent builds were blocked until the first one finished.
  • re-using build scripts independently of their location. Imagine that you have multiple projects using the same remote scripts. This is typically the case in corporate environments, where a script defines some credentials, conventions, or plugins to be used in all builds of the company. Previously, each project had to compile the script before being able to use it. Gradle 2.13 changes that, and now compiles scripts based on their actual contents (and classpath) rather than their location. This means that if you have 2 projects with identical build files in different locations, the script will only be compiled once. However, to be able to report build errors against the correct build file, we also use a “relocation technique”, which takes a compiled script class and remaps it to an actual script file so that errors are reported correctly.
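Such a shared script is typically pulled into each build with apply from: (the URL below is a hypothetical example of a company-wide convention script):

```groovy
// build.gradle — reuse a script shared across many company builds.
// The URL is hypothetical.
apply from: 'https://intranet.example.com/gradle/conventions.gradle'
```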

Optimized classpath

Other work done in 2.13 involved improving Gradle’s own classpath, so that services are located faster. When you have a lot of jars on the classpath, ordering is important, and so is the number of classes. Even if you “only” gain 10ms, it can add up to a significant difference for builds that are executed often, in particular from the IDE, which leads to the last area of improvement we worked on in 2.13.

Bugfixes

Sometimes, improving performance is a matter of serendipity. We recently discovered that some performance tests were executing significantly faster on our CI server than locally, but were unsure of the cause. After doing some profiling, we realized that the code that propagates properties from the various gradle.properties files to the actual Project was very inefficient: the more properties you had in your gradle.properties files, the longer it took to start the build! We identified the problem and fixed it.

Faster IDE integration

The Tooling API is what allows IDE vendors to integrate with Gradle; it’s what we use in Buildship. It has very specific needs. In particular, it has to be both backwards and forwards compatible, meaning a given version of the Tooling API can execute Gradle builds using both older and newer versions of Gradle. Of course, a developer will only benefit from the latest improvements by using the latest versions of both the Tooling API and Gradle, but this constraint leads to an interesting architecture.
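As a sketch, an IDE-style interaction through the Tooling API looks roughly like this (the project directory is illustrative, and the snippet assumes the Tooling API jar is on the classpath):

```groovy
import org.gradle.tooling.GradleConnector
import org.gradle.tooling.model.eclipse.EclipseProject

// Connect to a build and fetch its Eclipse model, much as an IDE
// import would. The project directory below is illustrative.
def connection = GradleConnector.newConnector()
        .forProjectDirectory(new File('/path/to/project'))
        .connect()
try {
    EclipseProject model = connection.getModel(EclipseProject)
    println "Imported project: ${model.name}"
} finally {
    connection.close()
}
```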

In this case, the Tooling API heavily relies on reflection to invoke methods. In Gradle 2.13, we significantly improved caching, which led to spectacular results:

tapi.png

This scenario illustrates how long it takes to import a build with 500 sub-projects into Eclipse. While it took 25s with the 2.12 version of the Tooling API, it now takes only 10s. And you can see even more spectacular results in IntelliJ IDEA, which uses “custom models”: importing and synchronizing projects can then be orders of magnitude faster.

There’s more to come!

We cannot close this blog post without illustrating what we mean when we say that the fastest thing to do is nothing. In the Maven vs Gradle examples above, we tried to “emulate” the behavior of Maven with Gradle. Here is the graph you would typically get when running proper incremental builds with Gradle; that is, you open and edit several files from different sub-modules, then re-execute the tests. Remember, with Gradle you no longer have to clean, but to be fair we didn’t clean with Maven either:

maven vs gradle incremental.png

Yes, Gradle is almost 6x as fast in this scenario. Now imagine doing this 10 or 100 times a day, multiplied by the number of developers in your company, and you can see how much time, and money, this saves.

Thanks for reading, and don’t worry: there’s more to come. Stay tuned for more performance improvements in Gradle 2.14!

Introducing Compile-Only Dependencies

One of the most highly-anticipated Gradle features has just arrived in Gradle 2.12: support for declaring compile-only dependencies. For Java developers familiar with Maven, compile-only dependencies function similarly to Maven’s provided scope, allowing you to declare non-transitive dependencies used only at compilation time. While a similar capability has been available for users of the Gradle War Plugin, compile-only dependencies can now be declared for all Java projects using the Java Plugin.

Compile-only dependencies address a number of use cases, including:

  • Dependencies required at compile time but never required at runtime, such as source-only annotations or annotation processors;
  • Dependencies required at compile time but required at runtime only when using certain features, a.k.a. optional dependencies;
  • Dependencies whose API is required at compile time but whose implementation is to be provided by a consuming library, application or runtime environment.

Compile-only dependencies are distinctly different from regular compile dependencies. They are not included on the runtime classpath and they are non-transitive, meaning they are not included in dependent projects. This is true when using Gradle project dependencies and also when publishing to Maven or Ivy repositories. In the latter case, compile-only dependencies are simply omitted from published metadata.

As part of our commitment to quality IDE support, compile-only dependencies continue to work with Gradle’s IDEA and Eclipse plugins. When used within IntelliJ IDEA, compile-only dependencies are mapped to IDEA’s own provided scope. Within Eclipse, compile-only dependencies are not exported via project dependencies.

In the Gradle model we consider tests to be a “consumer” of the production code. With this in mind, compile-only dependencies are not inherited by the test classpath. The intention is that tests, like any other runtime environment, should provide their own implementation, either in the form of mocks or some other dependency.

Declaring compile-only dependencies is simple—just assign dependencies to the new compileOnly configuration for the appropriate source set:

dependencies {
    compileOnly 'javax.servlet:servlet-api:2.5'
}
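Because the test classpath does not inherit compileOnly dependencies, tests that do need the dependency at runtime should declare it again on the test configuration, for example:

```groovy
// build.gradle — compileOnly is not inherited by the test classpath,
// so the dependency is declared a second time for tests
dependencies {
    compileOnly 'javax.servlet:servlet-api:2.5'
    testCompile 'javax.servlet:servlet-api:2.5'
}
```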

As a result of the addition of the compileOnly configuration, the compile configuration no longer represents a complete picture of all compile time dependencies. When it’s necessary to reference a compile classpath in build scripts or custom plugins, the appropriate source set’s compileClasspath property should be used instead.
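For example, a build script task that needs the full compile classpath can reference it like so (the task name is made up for illustration):

```groovy
// build.gradle — compileClasspath includes compileOnly dependencies,
// whereas the compile configuration alone does not
task printCompileClasspath {
    doLast {
        sourceSets.main.compileClasspath.each { println it.name }
    }
}
```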

For more information, see the Java Plugin chapter of the Gradle user guide. And as always, we welcome your feedback and questions in the comments below or via the Gradle Forum at discuss.gradle.org.

Introducing TestKit: A Toolkit for Functionally Testing Gradle Build Logic

Automated testing is a necessary prerequisite for enabling software development practices like refactoring, Continuous Integration and Delivery. While writing unit, integration and functional tests for application code has become an industry norm, it is fair to say that testing for the build automation domain hasn’t made its way into the mainstream yet.

But why is it that we don’t apply the same proven practice of testing to build logic? Ultimately, build logic is as important as application code. It helps us to deliver production software to the customer in an automated, reproducible and reliable fashion. There might be many reasons to skip testing; however, one of the reasons that stands out is the data definition format used to formulate build logic. In the past, writing tests for XML-based build logic definitions was a daunting, almost impossible task without the right tooling.

In this regard, Gradle makes your life easier. Build code can be structured properly, organized based on functional boundaries, and developed as actual class implementations with the help of concepts like custom tasks and binary plugins. Automated testing of build logic becomes approachable, and when combined with the appropriate tooling is easily attainable.

Meet TestKit

One way to test build logic is to declare and execute it the same way as the end user would. In practice, this means creating a build script, adding the configuration you want to test and executing it with the Gradle runtime. The outcome of the build, such as the console output, the executed tasks, and produced artifacts, can be inspected and verified against expected assertions. This type of testing is commonly referred to as functional testing.

Let’s have a look at an example. In the following build script, we apply the Java plugin.

build.gradle

apply plugin: 'java'

Executing this build script with the compileJava task should produce class files for the Java source files found in the directory src/main/java. As an end user, we’d expect these class files to be located in the directory build/classes/main. Of course, you could verify this behavior by executing the given build script manually with the Gradle command and inspecting the output directory. I hope the last sentence gave you the itch. We are automation engineers, so obviously we’ll want to automate as much as we can.

Meet the Gradle TestKit: a toolkit for executing functional tests in an automated fashion. TestKit is bundled with Gradle starting with version 2.6 and is available to be used in your projects now.

Using TestKit

There are typically two different use cases for adding TestKit to a project.

  1. Cross-version compatibility testing. You want to verify if a build script is compatible with a specific Gradle version. Organizations often apply this technique in preparation for a Gradle version upgrade of an existing build or when multiple versions of Gradle must be supported by the same build logic.

  2. Custom build logic testing. You want to test if your custom task or plugin behaves as expected under certain conditions that resemble the real-world usage by a build script author. A typical example could be: “If a user applies this plugin and configures a property of my exposed extension, then a provided task should observe a specific runtime behavior and produce the output x when executed.” On top of this scenario cross-version compatibility could play a role as well.

Given the last example, let’s have a look at how we can implement the scenario with the TestKit API. Note that the following test class uses the Spock test framework.

BuildLogicFunctionalTest.groovy

import org.gradle.testkit.runner.GradleRunner
import static org.gradle.testkit.runner.TaskOutcome.*
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification

class BuildLogicFunctionalTest extends Specification {
    @Rule final TemporaryFolder testProjectDir = new TemporaryFolder()
    File buildFile

    def setup() {
        buildFile = testProjectDir.newFile('build.gradle')
    }

    def 'produces class files when compiling Java source code'() {
        given:
        buildFile << "apply plugin: 'java'"

        when:
        def result = GradleRunner.create()
            .withProjectDir(testProjectDir.root)
            .withArguments('compileJava')
            .build()

        then:
        result.task(':compileJava').outcome == SUCCESS
        new File(testProjectDir.root, 'build/classes/main').exists()
    }
}

Even if you haven’t used Groovy or Spock before, it becomes apparent how easy it is to formulate a functional test case with the help of TestKit.

Tell Me More About TestKit

The previous code example uses Spock for implementing a test case. If you are not familiar with Spock or prefer a different test framework, you can still use TestKit. By design, TestKit is test framework-agnostic. It’s up to you to pick the test framework you are most comfortable with, whether that’s JUnit, TestNG, or any other test framework out there in the wild.
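For illustration, the same functional test could be sketched with plain JUnit 4 instead of Spock. The class and method names here are our own; the TestKit calls are the same as in the Spock example above:

```groovy
import org.gradle.testkit.runner.GradleRunner
import static org.gradle.testkit.runner.TaskOutcome.*
import org.junit.Before
import org.junit.Rule
import org.junit.Test
import org.junit.rules.TemporaryFolder

class BuildLogicFunctionalJUnitTest {
    // JUnit requires rule fields to be public
    @Rule public final TemporaryFolder testProjectDir = new TemporaryFolder()
    File buildFile

    @Before
    void setup() {
        buildFile = testProjectDir.newFile('build.gradle')
        buildFile << "apply plugin: 'java'"
    }

    @Test
    void producesClassFilesWhenCompilingJavaSourceCode() {
        def result = GradleRunner.create()
            .withProjectDir(testProjectDir.root)
            .withArguments('compileJava')
            .build()

        assert result.task(':compileJava').outcome == SUCCESS
        assert new File(testProjectDir.root, 'build/classes/main').exists()
    }
}
```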

For test scenarios that require you to execute a test with multiple Gradle distributions, e.g. in the context of cross-version compatibility testing, TestKit exposes API methods for providing the appropriate Gradle distribution information. You can point to a local installation of Gradle, a distribution identified by version on a server hosted by Gradle Inc., or a distribution identified by URI.
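As a sketch, targeting a particular distribution might look like the following; withGradleVersion, withGradleInstallation, and withGradleDistribution are the GradleRunner methods covering the three cases, though note that they appear in later TestKit releases than the one bundled with Gradle 2.6:

```groovy
def result = GradleRunner.create()
    .withProjectDir(testProjectDir.root)
    .withGradleVersion('2.6')   // runs the build against this specific Gradle version
    .withArguments('compileJava')
    .build()
```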

As you execute the test, you might also want to step through the build logic under test for debugging purposes from the IDE of your choice. TestKit allows you to execute tests in debug mode to track down unexpected test runtime behavior.
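For example, GradleRunner’s withDebug method (available in newer TestKit releases) runs the build in the test process rather than in a separate daemon, so breakpoints set in your build logic are honored by the IDE’s debugger:

```groovy
def result = GradleRunner.create()
    .withProjectDir(testProjectDir.root)
    .withArguments('compileJava')
    .withDebug(true)   // executes the build in-process so the debugger can attach
    .build()
```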

What’s on the Roadmap for TestKit

There’s more to come for TestKit. In the future, we want to make it even more convenient to use the TestKit API. You can read all about it in the design document. Let us know if you are interested in contributing! We’d love to see TestKit evolve.

Buildship: From Inception to Inclusion in Eclipse

At Gradle, we believe that maintaining developer “flow state” is essential to building good software. Because flow is essential, developers should not have to leave the IDE to build, and they should not have to know which functions the IDE performs and which are delegated to the build system. It is also our vision that all build logic is kept exclusively in the build system, so all work to calculate the project configuration, build the project, run the tests, and run an executable is delegated from the IDE to the build system. The IDE, in turn, maps the projects of the build, visualizes the build models, and displays the progress of executing a build.

To realize our vision of ideal IDE-Gradle interaction, we resolved to build and offer our own reference implementation to guide implementers of other IDEs. To that end, in the fall of 2014, Gradle Inc. decided to provide its own Eclipse plugin for Gradle to give users the best experience when working with Gradle from within Eclipse.

Buildship invited to join the Eclipse Mars release train

Soon after we had started with the implementation, the Eclipse Foundation asked us whether we would like to contribute an Eclipse plugin for Gradle and have the plugin become part of the Eclipse Mars release train. This fit well with our vision for demonstrating how IDEs and builds should interact and further allowed us to serve the large global group of Eclipse users with the best Gradle support possible. We agreed and the project onboarding process defined by eclipse.org started immediately.

Our first step was to find a name independent of either Gradle or Eclipse and get it trademarked; thus the Buildship project was born and made official. The suffix “ship” has a nice feel, denoting condition, character, office, or skill, as in “Fellowship” or “Statesmanship”, but it also emphasizes the significance of shipping software. From the very beginning, Wayne Beaton from the Eclipse Foundation assisted us with the countless steps involved in the formal process of going from a project without a name to a project that is part of the Eclipse Simultaneous Release.

Gradle builds Eclipse bundles

On the implementation side, in January 2015, we started by creating a Gradle build to compile, test, assemble, and deploy Eclipse bundles. No satisfying solutions existed that we could leverage. The work on our own new build was very incremental and the buildSrc feature of Gradle proved very valuable to quickly mature our build logic. Today, our build is very stable, its logic is packaged into Gradle plugins defined in the buildSrc folder, and the plugins are generic enough to be used by other Eclipse bundle projects. Looking at the advanced logic of our Gradle build, there is no way we could have achieved the same conciseness and expressiveness with Maven, which is still ubiquitous for Eclipse bundle projects.

Buildship debuts at EclipseCon NA in March

Once the Gradle build was established and a Continuous Integration pipeline set up on top of it, we were able to start focusing on the content of the Eclipse plugin for Gradle. We started with the Tasks View, took on the Project import, added Run Configurations and a Console View, and then integrated into the existing Eclipse Test Runner. This was the state that we presented at EclipseCon NA in March 2015.

Gradle dedicates more developers to Buildship

To accelerate the development of Buildship and to ensure that we would meet the deadline for the Eclipse Mars release in June 2015, we got Simon Scholz from Vogella GmbH to help us with work that required in-depth Eclipse knowledge, which proved to be invaluable. We also dedicated one more Gradle core developer to work on Buildship. Replacing our integration into the Eclipse Test Runner with our own Executions View was next. This view visualizes what happens when running a build, like what tasks are run, what tests are executed, and so on. The visualization happens based on events that Buildship receives from Gradle via the Tooling API. The Tooling API is a separate, standalone Gradle library with its own API that allows the IDE to communicate with a Gradle build through inter-process communication. The architecture where the Tooling API serves as a proxy of the Gradle build comes with many advantages, like process isolation, backward/forward compatibility, and contained build logic. Many enhancements have been added to the Tooling API during the development of Buildship, like event broadcasting, optimized classpath dependency calculation for Eclipse, and more.

Buildship released simultaneously with Eclipse Mars at EclipseCon France in June

With the Tasks View, the Executions View, the Console View, the Run Configurations, and the Project Import Wizard in place, we were ready to become part of Eclipse Mars. Unfortunately, not everyone was convinced that we were ready to join the Simultaneous Release, so in June 2015 we shipped Buildship on the same day as Eclipse Mars, but not yet in Eclipse Mars. The Eclipse Foundation had created a new entry in the Eclipse Marketplace for that purpose, with a select few plugins hosted there. This allowed all users to select and install the Buildship plugin right from within Eclipse. This was the state that we presented at EclipseCon France in June 2015.

Gradle continues to invest in Buildship features based on your feedback

After the release, the work on enhancing Buildship continued without interruption. We focused on enhancing and polishing the existing functionality, primarily based on feedback we had received through our Buildship Forum. The Forum is actively used to report issues, to request new features, and to ask questions. One topic that came up repeatedly was to extend the import functionality and to allow the user to explicitly refresh imported projects. Thus, we invested a significant amount of work into consolidating and at the same time extending the logic of importing a project, explicitly refreshing a project, and opening a project. We also added a new feature that allows a user to execute tests through Gradle from the Executions View and from the Source Editor. This is the state that we will present at EclipseCon Europe in November 2015.

Community tracks and fixes a mysterious bug

In time for Eclipse Mars.1, we finished our enhancements, became part of the Simultaneous Release, were included in three important Eclipse distributions (EPPs), and the Buildship project itself graduated out of incubation to become a full project. Due to a bug found in Buildship during the quiet period of the Mars.1 release, we had to provide a new version of Buildship, which delayed the final Mars.1 release by one week. The bug had been present since June, but nobody had made the connection between how it manifested itself and Buildship being the plugin causing it. The severity of the bug and how to go about fixing it were discussed and resolved openly on the Eclipse Mailing List.

Buildship released in Eclipse Mars.1

On October 2nd, 2015, Eclipse Mars.1 was officially released with Buildship being the most prominent new feature in the release. Developing Buildship over the past ten months has been very interesting to everyone involved since it included working on Gradle core, on the Tooling API, on Buildship itself, and participating in the eclipse.org formal process.

Even more to come

The work on Buildship will continue. We want to bring much more of our vision on how the IDE should integrate with Gradle to reality: visualizing detailed information about the rich model behind a Gradle build, supporting the project configuration for web applications, debugging tests and web applications, code assistance, and more.

Try it now

The easiest way to try out Buildship is to download Eclipse Mars for Java developers, and choose File > Import… > Gradle Project and point to an existing Gradle Java project. Let us know if you experience build happiness (or some other sensation) while working with Buildship.

Introducing Continuous Build Execution

Optimizing the build-edit-build loop

In the past, we’ve recommended that you enable the Gradle Daemon (and parallel execution, with some caveats) to get the best performance out of Gradle. We’ve also talked about using incremental builds to speed up your build-edit-build feedback loop by skipping unnecessary work. Now there’s another optimization available—one that allows you to get out of the way and let Gradle start the build for you.

As of 2.5, Gradle supports continuous build execution, which automatically re-executes a build when changes to its inputs are detected. A few community plugins add support for a similar Gradle “watch” mode.

With Maven, the same watch functionality needs to be implemented for each plugin or you have to use a plugin that has a predefined set of goals. We wanted to do better than that. We wanted any plugin to be able to leverage the power of continuous builds without having to supply additional information. We also wanted the set of tasks to execute to be completely ad-hoc. Since Gradle needs to know a task’s inputs and outputs for incremental builds, we had all the information necessary to start watching for changes.

Using continuous build

Continuous build can be used with any task or set of tasks that have defined inputs and outputs. If you’re using well-behaved tasks, this shouldn’t be a problem for most builds. If you find that your build isn’t rebuilding with continuous build as you think it should, it could point to a problem in your build script.

Command-line option

You enable continuous build with the -t or --continuous command-line option along with whichever tasks you want to run (we call these task selectors). At least one task that runs needs to define inputs to enter continuous build mode.

For example, on a typical Java project,

$ gradle -t test 

would enable continuous build and re-run tests any time the main sources or test sources change.

We’re not limited to a single task, so we could also re-run tests and FindBugs on the main sources using

$ gradle -t test findBugsMain

Determining when to run another build

When you run Gradle with the continuous build option, Gradle executes the build as usual, except Gradle also registers the inputs to all tasks with a file watch service. Even tasks that are UP-TO-DATE will have their inputs recorded, so all inputs can be considered when triggering a new build. This means that you don’t have to start from a clean build for Gradle to know which inputs could change in continuous build mode.

After the end of the build, Gradle will start watching for file system changes based on the collected inputs. The Gradle command-line interface will display the message Waiting for changes to input files of tasks on the console and will wait for changes to inputs. If any of the input files are changed or deleted, Gradle will execute another build with the identical set of task selectors. Gradle can detect changes to simple files (deleted, modified) and changes to directories (deleted or new files).

See a demo of this in action:

Exiting continuous build

Once Gradle is running in continuous build, it will not exit, even if the build is not successful. To get out of continuous build, you should use Ctrl-D to cancel the build. On Microsoft Windows, you must also press ENTER or RETURN after Ctrl-D.

If you use Ctrl-C, Gradle will exit abruptly and also kill the Gradle Daemon.

UPDATE: As of Gradle 3.1, Ctrl-C no longer kills the Gradle Daemon.

Limitations

The User Guide chapter describes all limitations and quirks with continuous build.

Requires Java 7 or better

Gradle uses Java 7’s WatchService to watch for changes to inputs, so this functionality is only available on JDK 7 or later.

Mac OS X performance

For GNU/Linux and Microsoft Windows, the file system change events are provided through a kernel service. For Mac OS X, Java falls back to a polling-based system. This means on Mac OS X only, change detection on a very, very large number of input files may be delayed and, in some cases, cause a deadlock. Both of these issues are tracked as JDK bugs: JDK-7133447 and JDK-8079620.

Changes to build scripts

Gradle doesn’t consider changes to your build logic when in continuous build mode. Build logic is created from build.gradle, settings.gradle, gradle.properties and other sources. If you make changes to your build scripts, you’ll have to exit continuous build and restart Gradle. Future versions of Gradle will make it easier to describe inputs to your build logic so that continuous build can work with this as well.

Future improvements

In addition to mitigating some of the limitations with the current implementation, there are other interesting things we can use continuous build to accomplish.

Right now, there are not any supported, public ways of managing a process started by Gradle that needs to exist between builds. Gradle expects that a process started (e.g., via Exec) will exit as part of the build.

In the next release (2.6), Play support is coming to Gradle, and with that you’ll be able to start Play applications in a separate JVM for local development. With continuous build enabled, Gradle will hot-reload the Play application whenever classes or assets are changed. The Play plugin accomplishes this by registering the Play JVM with Gradle in a way that survives between builds.

We want to eventually evolve this Play specific reload functionality into a general feature, so plugins can have their own “hot-reload”-like behavior.

Another opportunity for improvement is up-to-date checking. For very large projects, up-to-date checking can be time consuming for the no-op case. When looking for out-of-date files, Gradle must scan entire directories or recalculate file checksums. When using continuous build, Gradle must already keep track of file and directory changes, so in some cases, Gradle may be able to skip checks for files that are known to have not changed.

Feedback

Please let us know on the forums if you run into any surprises with this new feature.

Introducing Incremental Build Support

Task inputs, outputs, and dependencies

Built-in tasks, like JavaCompile, declare a set of inputs (Java source files) and a set of outputs (class files). Gradle uses this information to determine whether a task is up-to-date and needs to perform any work. If none of the inputs or outputs have changed, Gradle can skip the task. Altogether, we call this behavior Gradle’s incremental build support.

To take advantage of incremental build support, you need to provide Gradle with information about your tasks’ inputs and outputs. It is possible to configure a task to only have outputs. Before executing the task, Gradle checks the outputs and will skip execution of the task if the outputs have not changed. In real builds, a task usually has inputs as well—including source files, resources, and properties. Gradle checks that neither the inputs nor outputs have changed before executing a task.

Often a task’s outputs will serve as the inputs to another task. It is important to get the ordering between these tasks correct, or the tasks will run in the wrong order or not at all. Gradle does not rely on the order in which tasks are defined in the build script. New tasks are unordered, so execution order can change from build to build. You can explicitly tell Gradle about the ordering by declaring a dependency between one task and another, for example consumer.dependsOn producer.

Declaring explicit task dependencies

Let’s take a look at an example project that contains a common pattern. For this project, we need to create a zip file that contains the output from a generator task. The manner in which the generator task creates files is not interesting—it produces files that contain an incrementing number.

build.gradle

apply plugin: 'base'

task generator() {
    doLast {
        def generatedFileDir = file("$buildDir/generated")
        generatedFileDir.mkdirs()
        for (int i=0; i<10; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

task zip(type: Zip) {
    dependsOn generator
    from "$buildDir/generated"
}

The build works, but the build script has some issues. The output directory for the generator task is repeated in the zip task, and the dependencies of the zip task are explicitly set with dependsOn. Gradle appears to execute the generator task each time, but not the zip task. This is a good time to point out that Gradle’s up-to-date checking is different from that of other tools, such as Make: Gradle compares the checksums of the inputs and outputs instead of only the timestamps of the files. Even though the generator task runs each time and overwrites all of its output files, the content does not change, so the checksums of the zip task’s inputs have not changed and the zip task does not need to run again. Skipping up-to-date tasks lets Gradle avoid unnecessary work and speeds up the development feedback loop.

Declaring task inputs and outputs

Now, let’s understand why the generator task seems to run every time. If we take a look at Gradle’s info-level logging output by running the build with --info, we will see the reason:

Executing task ':generator' (up-to-date check took 0.0 secs) due to:
Task has not declared any outputs.

We can see that Gradle does not know that the task produces any output. By default, if a task does not declare any outputs, it must be considered out-of-date. Outputs are declared through the task’s TaskOutputs and can be files or directories. Note the use of outputs below:

build.gradle

task generator() {
    def generatedFileDir = file("$buildDir/generated")
    outputs.dir generatedFileDir
    doLast {
        generatedFileDir.mkdirs()
        for (int i=0; i<10; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

If we run the build two more times, we will see that the generator task says it is up-to-date after the first run. We can confirm this if we look at the --info output again:

Skipping task ':generator' as it is up-to-date (took 0.007 secs).

But we have introduced a new problem. If we increase the number of files generated (say, from 10 to 20), the generator task does not re-run. We could work around this by doing a clean build each time we need to change that parameter, but this workaround is error-prone.

We can tell Gradle what can impact the generator task and require it to re-execute. We can use TaskInputs to declare certain properties as inputs to the task as well as input files. If any of these inputs change, Gradle will know to execute the task. Note the use of inputs below:

build.gradle

task generator() {
    def fileCount = 10
    inputs.property "fileCount", fileCount
    def generatedFileDir = file("$buildDir/generated")
    outputs.dir generatedFileDir
    doLast {
        generatedFileDir.mkdirs()
        for (int i=0; i<fileCount; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

We can check this by examining the --info output after we change the value of the fileCount property:

Executing task ':generator' (up-to-date check took 0.007 secs) due to:
Value of input property 'fileCount' has changed for task ':generator'

Inferring task dependencies

So far, we have only worked on the generator task, and we have not reduced any of the repetition in the build script: there is still an explicit task dependency and a duplicated output directory path. Let’s remove the explicit task dependency by relying on how CopySpec#from evaluates its arguments with Project#files. Gradle can automatically add task dependencies for us. This also adds the output of the generator task as an input to the zip task.

build.gradle

task zip(type: Zip) {
    from generator
}

Inferred task dependencies can be easier to maintain than explicit task dependencies when there is a strong producer-consumer relationship between tasks. When you only need some of the output from another task, explicit task dependencies will usually be cleaner. There is nothing wrong with using both explicit task dependencies and inferred dependencies, if that is easier to understand.

Simplifying with a custom task

We call tasks like generator ad-hoc tasks. They do not have well-defined properties nor predefined actions to perform. It is okay to use ad-hoc tasks to perform simple actions, but a better practice is to move ad-hoc tasks into custom task classes. Custom tasks let you remove a lot of boilerplate and standardize common actions within your build.

Gradle makes it really easy to add new task types. You can start playing around with custom task types directly in your build file. When using annotations like @OutputDirectory, Gradle will create output directories before your task executes, so you do not have to worry about making the directories yourself. Other annotations, like @Input and @InputFiles, have the same effect as manually configuring a task’s TaskInputs.

Try creating a custom task class named Generate that produces the same output as the generator task above. Your build file should look like the following:

build.gradle

task generator(type: Generate) {
    fileCount = 20
}

task zip(type: Zip) {
    from generator
}

Here is our solution:

build.gradle

class Generate extends DefaultTask {
    @Input
    int fileCount = 10

    @OutputDirectory
    File generatedFileDir = project.file("${project.buildDir}/generated")

    @TaskAction
    void perform() {
        for (int i=0; i<fileCount; i++) {
            new File(generatedFileDir, "${i}.txt").text = i
        }
    }
}

Notice that we no longer need to create the output directory manually. The annotation on generatedFileDir takes care of this for us. The annotation on fileCount tells Gradle that this property should be considered an input in the same way we used inputs.property before. Finally, the annotation on perform() defines the action for Generate tasks.

Final notes about incremental builds

When developing your own build scripts, plugins and custom tasks, declaring task inputs and outputs is an important technique to keep in your toolbox. All of the core Gradle tasks use this to great effect. If you would like to learn about other ways to make your tasks incremental at a lower level, take a look at the incubating incremental task support. Incremental tasks provide a fine-grained way of building only what has changed when a task needs to execute.