(Download tool described in this post.)
For team development, you need a way to ensure that team members don't inadvertently break the shared code, e.g., when a developer forgets to commit all files related to a change, something not easily fixable by other developers.
Also, the team might want to run post-compile tasks, such as unit tests, on a dedicated machine to avoid the works-on-my-machine syndrome.
Ideally, the project’s code should always go from one stable state to another so that a developer doesn’t get broken code the next time he synchronizes with source control.
One way to achieve this is by continuous integration of changes into the code: either roll your own build script that monitors source control for changes, triggers a build, and informs developers of the result, or leverage an existing continuous integration solution such as CruiseControl.Net (ccnet), which consists of a continuous integration server, an ASP.NET-based front-end, and a desktop client.
Although CruiseControl (both the Java and .Net incarnations) appears to be a popular choice among developers, I find it somewhat immature: it violates the DRY principle in its projects configuration file and, worst of all, it doesn't support build queues to ensure that only one of several projects is actually building at once.
It’s not a problem for ccnet itself to perform parallel continuous integration. However, the build chain is no stronger than the weakest link, which in my case is devenv.exe, the Visual Studio executable used by ccnet to build the application (by running devenv.exe from the command line with a couple of arguments, you can leverage the build system within Visual Studio from a script).
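Driving Visual Studio's build system from a script boils down to a single devenv.exe invocation. A minimal sketch of how such a call might be assembled (the solution path is made up; devenv's /build switch is its documented command-line build mode):

```python
import subprocess

# Hypothetical solution path; devenv.exe's /build switch invokes
# Visual Studio's build system for the given configuration.
solution = r"C:\projects\MyApp\MyApp.sln"
command = ["devenv.exe", solution, "/build", "Release"]

# In a build script you would run it and check the exit code, e.g.:
# exit_code = subprocess.call(command)

print(" ".join(command))
```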
The downside to using devenv.exe, however, is that all running instances share a temporary build folder for compiler output. Thus, when ccnet does continuous integration of branches in parallel, the instances of devenv.exe spawned by ccnet compete for write access to identically named files, namely the executables generated by the compiler.
Not surprisingly, this results in build failures for one or more ccnet projects, although there is nothing wrong with the code.
In my experience from the trenches this happens too often to be ignored for any serious use of ccnet: it's just too common for a developer to cross-commit a bugfix to several branches, or for independent developers to commit to their working branches, between ccnet's periodic checks of the branches for updates.
These false negatives make team members distrust ccnet and not take proper action when they see a build failure, thus rendering continuous integration useless.
One solution to the above file locking problem is to serialize the builds manually by building branches at predefined intervals, interleaving the builds so that they shouldn't be able to overlap. As a result, a developer may have to wait a fair amount of time to be sure his latest commit doesn't break anything. Also, if a build hasn't finished when the next is scheduled to start, it may cause a cascade of broken builds until the interleaving is properly realigned.
Another solution is to head over to Jay Flowers, who has branched off the official 1.0 version of ccnet and created a version that allows for many more constraints than just serializing builds. It comes at the expense of running an unofficial, older version of ccnet, though.
My solution to the parallel build problem was to write a front-end for ccnet that implements build queues without the knowledge of ccnet, i.e., by means of checking for changes on a configurable number of branches. If a change is detected, the branch is enqueued. Then when all branches are checked, the tool force builds each branch, monitoring the state of the ccnet server to ensure that only one branch is building at a time.
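The core loop of that approach can be sketched roughly as follows. This is a simplified illustration, not the tool's actual code; the helper functions (for change detection, server state, and forcing a build) are passed in as hypothetical callables, whereas the real tool spawns cvs.exe and polls the ccnet webgui:

```python
import time

def run_build_queue(branches, branch_has_changes, server_is_building, force_build):
    """Check every branch, enqueue the changed ones, then build them one at a time."""
    # Phase 1: detect changes on all configured branches.
    queue = [branch for branch in branches if branch_has_changes(branch)]

    # Phase 2: force-build each queued branch, serialized so that only
    # one branch is building on the ccnet server at any given time.
    for branch in queue:
        while server_is_building():
            time.sleep(1)  # wait for the server to go idle
        force_build(branch)
```

Because the helpers are injected, the serialization logic itself stays trivial to reason about and to test.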
With this solution you combine the nice features of ccnet, such as the webgui for displaying what went wrong with a build and the systray application that goes green, yellow, or red depending on the state of your builds, with rapid feedback in a way that is compatible with Visual Studio's integrated build engine.
The biggest downside, however, is that you can no longer see the changes that triggered a build from the web gui as ccnet is no longer in charge of cvs operations.
The tool is roughly 100 lines of IronPython code and works by spawning cvs.exe, synchronizing a number of working copies with the cvs repository, and in the process monitors the output of cvs.exe for changes, additions, and removals of files to the working copies.
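The output-scanning part might look something like this sketch. It relies on cvs prefixing each affected file in `cvs update` output with a one-letter status code; treating "U" (updated) and "P" (patched) as repository-side changes is my assumption about what the tool should react to:

```python
def detect_changes(cvs_output):
    """Scan `cvs update` output for files changed in the repository.

    cvs prints one line per affected file, prefixed with a status code,
    e.g. 'U' (updated), 'P' (patched), 'M' (locally modified), '?' (unknown).
    """
    interesting = {"U", "P"}  # codes indicating new commits in the repository
    changed = []
    for line in cvs_output.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in interesting:
            changed.append(parts[1])
    return changed
```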
The state of CruiseControl is determined by screen scraping the webgui and builds are forced by simulating a click on the “build” button for the ccnet project in the webgui for which a change was detected.
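The scraping and forcing parts can be sketched as below. Note the heavy assumptions: the "Building" marker and the form field names are placeholders for whatever a particular ccnet dashboard actually renders, so you'd inspect the real HTML before relying on them:

```python
import urllib.parse
import urllib.request

def server_is_building(dashboard_html):
    """Screen-scrape the dashboard page for any project in a building state.
    The 'Building' marker is an assumption about the page's markup."""
    return "Building" in dashboard_html

def force_build_request(webgui_url, project):
    """Construct the POST request the webgui's 'build' button would submit.
    The field names are hypothetical; check the dashboard's form HTML."""
    body = urllib.parse.urlencode({"projectName": project, "ForceBuild": "Force"})
    return urllib.request.Request(webgui_url, body.encode("ascii"))

# Sending it would then be:
# urllib.request.urlopen(force_build_request("http://buildserver/ccnet", "MyApp"))
```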
The URL of the ccnet webgui, the path to cvs.exe, and mappings between the on-disk path of each working copy and the corresponding ccnet project name are stored in the tool's configuration file, which is itself Python code loaded into the tool on startup.
With Python as the configuration language, you don't have to invent your own language or use verbose XML, which would require you to write code to parse it.
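A configuration file in this style might look like the following (all paths and project names are made up for illustration):

```python
# Hypothetical configuration file: plain Python, loaded by the tool at startup.
webgui_url = "http://buildserver/ccnet"
cvs_path = r"C:\Program Files\cvsnt\cvs.exe"

# Maps each working copy on disk to its ccnet project name.
projects = {
    r"C:\build\trunk": "MyApp-trunk",
    r"C:\build\release-1.0": "MyApp-release-1.0",
}
```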
PS: I recently stumbled across the Sequential Task Plug In that I want to try out sometime.
Update, April 16: It has come to my attention that there is a way to minimize duplication in the configuration file. It involves "… users to set up DTD entities within the CCNet configuration" as mentioned here. I haven't tried it, though.
Update, August 10: CruiseControl.net 1.3 now comes with a built-in queue feature.