
Customizing Continuous Integration

PCQ Bureau

Troubled software projects struggle in the early stages of producing working software. As a QA engineer I wasted days, either stalled because there was no build to test or because the build I installed was non-functional. From experienced developers I learnt the survival technique of working in my own little sandbox of code that was days or weeks out of touch with the repository. This worked well enough until that moment we all dreaded: system integration. I am happy that my life is not like that any more.

The team I manage and work on has everyone working with current code, the developers don't suffer from merge hell, and the testers receive builds that are reliably functional. Many factors go into reaching this happy state, but a lot of the credit goes to an automated CI (Continuous Integration) system that has been tailored to our process.

Direct Hit!
Applies to: Software developers
USP: Understand how to deliver reliably functional code to your testers by achieving automated continuous integration
Primary Link: http://cruisecontrol.sourceforge.net/gettingstarted.html
Google keywords: continuous integration

Making the best practice better



CI builds need not produce durable build products to be worthwhile. They let a developer have a conversation with the system, to get reassurance that they have done their part, at least for now. And with a CI build the cycle time is short, the number of affected parties is small and, thus, the cost of failure is low.

This change in the cost of failure makes for a significant change in behavior, if you'll let it. I've met people who want CI failures to be a shaming event, similar to what happens when the nightly build breaks. But given the nature of a CI build, does this make sense? Consider recording programs on punch cards and handing them over to be entered into the system and run in batch mode. A syntax error in that environment was a very expensive loss of time, and significant effort was spent resolving any mistakes so that the code would be as perfect as possible at the first compile attempt. With a modern IDE, who worries about syntax errors as an expensive loss of time?


CruiseControl 1.0



CruiseControl 1.0 was released as open-source software by ThoughtWorks in March 2001 under a BSD-style license. Originally written as a custom Ant task, it was re-architected to its current plug-in architecture and released as version 2.0 in September 2002. It now has improved scheduling capabilities, multiple-project support, and other enhancements. With the latest release, 2.3.1, in October 2005, the project shows no signs of slowing down and has become the de facto standard for CI tools. For more on CruiseControl, refer to http://cruisecontrol.sourceforge.net/gettingstarted.html. Any unanswered questions should be directed to the cruisecontrol-user mailing list.


So a CI build should be tuned to surface failure feedback quickly. This feedback is not a management tool; it's an enabler. It lets the developer take responsibility for each check-in in a way that isn't possible (or at least not cost-effective) in the absence of such a system. There are times when developers stay late, waiting to get their feedback e-mail from the system. So you should not be tracking the failures caused by each individual; that would only discourage the frequent check-ins you want to promote.

This isn't to say the formal nightly build should be abandoned; do both! The CI feedback means that problems are likely to be detected before the nightly build, and the pending nightly build adds urgency to fixing the problems quickly 'so the nightly build will pass'.

Putting the pieces in place



Nightly builds, continuous builds, and developer builds all benefit from an automated way to pull, build and test the latest source code. Having all three types of build use the same script keeps the automated builds live and in sync with any work being done on the project. It also offers an easy way to resolve conflicts. If the build passes on one machine but fails on another, you can always get a third machine as a tie-breaker. This can be used to track down problems on a developer's machine that would otherwise have been written off as a problem with the build machine, and in a few cases it has found very subtle platform-specific issues that would otherwise have remained hidden.



When development teams consider CI systems, they have a strong inclination to roll their own. This inclination is made of equal parts underestimating the work required to build and maintain the code for such a system, and a concern that no third-party tool is going to address exactly their needs. While no one wants to reinvent the wheel, how do you get a solution tailored to your environment? Luckily, the choice between a complete home-built solution and a fixed tool is a false dichotomy. I recommend using an existing framework as the basis of your system so that you don't need to work on the common infrastructure and can instead focus on your own unique requirements.

A standard CI build



Now we'll discuss a specific framework for a continuous build process, the open-source project CruiseControl. It is characteristic of frameworks, as opposed to just reusable components, to provide an inversion of control: the framework defines the canonical application processing steps and then provides explicit hooks through a stable extension interface. True to form, CruiseControl defines the stages of a project build and requires a configuration file (typically named 'config.xml') to specify the specific implementations that should be used at each stage. These implementations are registered as plugins for the given project, and any required configuration data is provided in the same config file.
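As a concrete illustration, a minimal config.xml might look like the sketch below. The element names are standard CruiseControl plugins; the project name, paths, host names and intervals are placeholders for illustration, not taken from the article's setup.

```xml
<cruisecontrol>
  <project name="myproject">
    <!-- check the repository for new commits every 300 seconds;
         wait for 60 quiet seconds so we don't build mid-commit -->
    <modificationset quietperiod="60">
      <cvs localworkingcopy="checkout/myproject"/>
    </modificationset>
    <schedule interval="300">
      <!-- the builder invoked when a modification is detected -->
      <ant buildfile="checkout/myproject/build.xml" target="test"/>
    </schedule>
    <!-- where the XML log files for this project are stored -->
    <log dir="logs/myproject"/>
    <publishers>
      <!-- mail the build result to the team -->
      <htmlemail mailhost="smtp.example.com"
                 returnaddress="builds@example.com"
                 buildresultsurl="http://buildserver/cruisecontrol/buildresults/myproject"/>
    </publishers>
  </project>
</cruisecontrol>
```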


When the build begins, the first action CruiseControl takes is to reload the configuration file and re-read the project settings. After this comes an optional bootstrapping step, where the project can do any preparation needed before the actual build is invoked. Following the bootstrapping is a mandatory modification check. If none of the plugins report a modification, the build loop will terminate and the project will return to its idle state. If a modification is detected, CruiseControl will invoke the builder associated with the current run by the schedule. The builder will execute and return an XML log file indicating whether the build attempt succeeded or failed, and there is then an opportunity to incorporate XML files produced by the build into the main log file. Finally, there is an optional publishing step, where any configured publishers can use the contents of the log file to notify the team of the build results, and the log file is stored in the directory designated for that project.


The functionality provided by CruiseControl includes support for multiple SCM systems, the ability to execute builds with Ant or Maven, a Web (JMX) interface for managing projects, and plugins for publishing results and artifacts via e-mail, FTP, SCP and Lotus Instant Messaging (Sametime). In addition to this build-time functionality, there is a JSP Web app for reporting historical build results from the logs. This application is built on a small library of CruiseControl-aware tags, with most of the display elements handled by XSL files that pull specific information out of the log files. XSL files are provided for reporting information from Ant's javac task, JUnit output, Checkstyle violations, Javadoc errors and more.

With the provided functionality it takes no custom code to get a basic CI build up and running. If I run such a build I know it will detect compile or unit-test errors within a few minutes of a check-in, that I'll be notified by e-mail, and that I'll be able to peruse the Web interface to look at historical results. Now let's consider some adaptations to local conditions.

Time matters



The first such adaptation concerns what to do when the build/compile/test cycle starts taking too much time. As a rule of thumb, I like a developer to get a response within 15 minutes of a check-in. There is a time threshold (less than 24 hours) beyond which automated builds stop being CI builds and become daily builds. If the cycle time passes this threshold, the developer has moved on from their check-in, and responding to the system becomes an interruption of a new activity rather than closure on the previous one. When I find the cycle time rising, I turn to two basic strategies: doing less in the continuous build, and segmenting the continuous build by feedback cycles.



An example of the first strategy is incremental compilation. While it is appropriate dogma that a nightly build should be done from scratch, it is an acceptable trade-off for the quick incremental builds to compile only a more limited set. We were once able to shave a significant chunk off our compile time by doing our incremental builds with javamake rather than Ant's javac task. A second example would be to omit build steps that are required to build a full deliverable but aren't required to generate the compile and test feedback. These steps include generating documentation, building installers, making or signing JARs, and similar.
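In Ant terms, the idea can be sketched as a separate quick target for the continuous build alongside the full nightly target; the target names and dependencies below are hypothetical, not taken from the article's build.

```xml
<!-- hypothetical build.xml fragment: two entry points over the same targets -->
<target name="ci" depends="compile, test"
        description="Quick feedback build: skips docs, installers and JAR signing"/>

<target name="nightly" depends="clean, compile, test, javadoc, installer, sign-jars"
        description="Full from-scratch build of the complete deliverable"/>
```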

As the number of tests grows, especially long-running system tests, it becomes increasingly difficult to fit them all within the target time period. I've found the best solution is to add a second build machine (machine cycles are cheap, opportunity is fleeting), divide the tests into 'quick tests' and 'slow tests', and have two continuous integration loops running throughout the day. Such segmentation provides the quick feedback that developers want, but also gives the benefit of running those system tests as frequently as possible. At Agitar we have three feedback cycles: a 15-min cycle for compilation, unit tests and smoke tests; a 1-2 hr feedback cycle for our system tests and tests using our product, Agitator; and a 24-hr nightly build feedback cycle, where we do a complete build and an exhaustive test cycle that takes approximately 8 hours to complete.
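In CruiseControl, such segmentation can be expressed as two projects with different schedule intervals. This is a sketch with placeholder names, targets and intervals chosen to mirror the cycles described above, not Agitar's actual configuration.

```xml
<cruisecontrol>
  <project name="myproject-quick">
    <!-- modificationset, log and publishers omitted for brevity -->
    <schedule interval="300">
      <!-- compile, unit tests and smoke tests: ~15 min feedback -->
      <ant buildfile="build.xml" target="quick-tests"/>
    </schedule>
  </project>
  <project name="myproject-slow">
    <schedule interval="3600">
      <!-- long-running system tests: 1-2 hr feedback -->
      <ant buildfile="build.xml" target="slow-tests"/>
    </schedule>
  </project>
</cruisecontrol>
```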

A customized view



Getting continuous feedback is wonderful, but people get jaded quickly, and pretty soon they want more than the one-size-fits-all view of the world. This is where the extensibility of the framework shines, with a range of customization strategies, from simple hard-coding of links to adding support for entirely new tools.



One fertile area for customization is modifying the XSL files included with CruiseControl. A simple modification we use at Agitar is to break out some test suites and report them separately. We did this by creating a modified version of CruiseControl's default 'unittests.xsl' that reports only on tests run in one of our suites, and then modifying 'unittests.xsl' to ignore that same suite. At another company, our Ant build validated our JavaScript code, but in the case of an error the default report would only say that the build failed, without showing the error messages from the validator. The solution turned out to be almost the opposite of the previous case: modifying 'compile.xsl' to report all errors and warnings, not only those created by the javac task, a modification that has now become part of the standard 'error.xsl'.

Another example that shows the range of customization possibilities, and the value of a reusable framework, is how we've incorporated Agitator results into our reporting. Agitator is a dynamic developer testing tool that allows the developer to define assertions, and then agitates the class under test while evaluating the assertions to see if they hold true. The primary problem to solve was to notify our developers if one of their classes failed agitation and to let them easily get the result files to debug the problem. In addition to testing the code, we also generate a dashboard that reflects the progress of the team towards its testing goals, so a secondary interest was to make it easy for managers to view the dashboard results for each build. A third goal was to publish a developer scorecard telling each developer at a glance how many classes they own, their progress in testing them, and any failures that surfaced in the run.
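The suite-splitting change can be sketched in XSLT along these lines. The testsuite/testcase elements follow the JUnit report format that 'unittests.xsl' processes; the stylesheet itself and the suite name 'com.example.MySuite' are hypothetical, not the actual Agitar files.

```xml
<?xml version="1.0"?>
<!-- hypothetical stylesheet: report only the test cases of one suite,
     mirroring the split described in the text -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <table>
      <xsl:for-each select="//testsuite[@name='com.example.MySuite']/testcase">
        <tr>
          <td><xsl:value-of select="@name"/></td>
          <!-- flag test cases that contain a failure element -->
          <td><xsl:if test="failure">FAILED</xsl:if></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```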

To solve the primary problem, getting developers feedback on failures, the existing support for Checkstyle provided a partial guide: generate an XML result file, merge it into the log, create an XSL file to transform the results into pretty output, and then tell the e-mail publisher and the Web reporting module to use our XSL file. When run, the Agitator Ant task generates a 'console.xml' file that serves as the result file, and merging that result is as easy as configuring the log element:

<log dir="logs/myproject">
    <!-- fold the Agitator result file into the CruiseControl log;
         the paths shown here are placeholders -->
    <merge file="checkout/myproject/agitar/console.xml"/>
</log>

After configuring CruiseControl to merge our result file, I created 'agitate.xsl' to display the agitation results in a format consistent with the other report data, and added this file to the cruisecontrol-2.3.1/reports/jsp/webcontent/xsl directory. Having this file used by buildresults.jsp was a matter of modifying 'buildresults.xsl' to include my XSL in the same manner as the other XSL files. Because 'buildresults.xsl' is also used by the HtmlEmailPublisher, this step gave me our custom output in the Web application and in our e-mail at the same time. As a further customization of the reporting application, I was also able to reuse one of CruiseControl's JSP tags to create links directly to the specific build artifacts of greatest interest: our failure data and the management dashboard.
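Assuming 'buildresults.xsl' aggregates the individual reports via xsl:import, as the stock stylesheets suggest, the change can be sketched as follows; only 'agitate.xsl' is the custom addition, and the surrounding file names are illustrative.

```xml
<!-- sketch of buildresults.xsl with the custom stylesheet added -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:import href="compile.xsl"/>
  <xsl:import href="unittests.xsl"/>
  <xsl:import href="checkstyle.xsl"/>
  <xsl:import href="agitate.xsl"/> <!-- our custom agitation report -->
  <!-- templates applying each imported report follow -->
</xsl:stylesheet>
```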







<%-- the artifactsLink tag and its 'artifacts_url' scripting variable are
     assumptions based on the CruiseControl reporting taglib; the link
     targets are elided --%>
<table cellpadding="2" align="center">
    <tr>
        <td>
            <cruisecontrol:artifactsLink>
                <a href="<%= artifacts_url %>/...">Agitator Dashboard</a>
            </cruisecontrol:artifactsLink>
        </td>
    </tr>
    <tr>
        <td>
            <cruisecontrol:artifactsLink>
                <a href="<%= artifacts_url %>/...">failures by owner</a>
            </cruisecontrol:artifactsLink>
        </td>
    </tr>
</table>


In the default Web app, the build results page shows only the unit tests that failed, and there is a tab for viewing the details of all the unit tests. Similarly, I wanted a tab to show all the agitation results, and once again there was a JSP tag provided to do the trick.

<%-- the enclosing tag name is an assumption; the tab contents are elided --%>
<cruisecontrol:tab name="agitateResults" label="Agitation Messages">
    ...
</cruisecontrol:tab>

The final requirement was to publish our developer scorecard at the end of each build, and for this we turned to another custom publisher, PagePublisher, which would both send the scorecard generated by the build and modify any links in the scorecard to point to the full report published in the artifacts directory. I could thus integrate our test results from Agitator into the CruiseControl report in a way that feels completely natural for our workflow, and that should be the goal of any such effort. Make the tool conform to your process, not your process to the tool. The advantage of using a flexible framework like CruiseControl is that you get the benefit of a custom fit, but your effort is spent only on the customization.

Jeffrey Fredrick, Director of Engineering, Agitar Software; founding member, JBuilder development team
