Jacksum – a java checksum utility – Introduction and usage

jacksum-introduction

Jacksum, a java checksum utility

Software Name : Jacksum

Website : http://jacksum.net/en/index.html

Version : 1.7.0

Jacksum is a platform independent checksum utility (written entirely in Java) for computing and verifying (integrity check) checksums, CRC and hashes (fingerprints). It supports 58 popular hash algorithms and a lot of unique features.

The Jacksum is a freeware tool and suitable for more advanced users verify checksum values. It is platform independent software (support Windows and Unix) and supports 58 popular standard algorithms (Adler32, BSD sum, Bzip2’s CRC-32, POSIX cksum, CRC-8, CRC-16, CRC-24, CRC-32 (FCS-32), CRC-64, ELF-32, eMule/eDonkey, FCS-16, GOST R 34.11-94, HAS-160, HAVAL (3/4/5 passes, 128/160/192/224/256 bits), MD2, MD4, MD5, MPEG-2’s CRC-32, RIPEMD-128, RIPEMD-160, RIPEMD-256, RIPEMD-320, SHA-0, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512, Tiger-128, Tiger-160, Tiger, Tiger2, Tiger Tree Hash, Tiger2 Tree Hash, Unix System V sum, sum8, sum16, sum24, sum32, Whirlpool-0, Whirlpool-1, Whirlpool and xor8).
The Jacksum is written in Java and open source, we can download the source code and try to understand how Jacksum implement the above algorithm.
Use of Jacksum

  • Often time, while we downloading files or software via some secure website, “Checksum” values in MD5 or SHA1 format are provided to protect users from downloading corrupted files or trojan infected files. Unfortunately, many users has no idea how to verify the downloaded file with “Checksum” value, end up in downloading virus or trojan infected files.
  • With Jacksum you can check if a filetransfer was successful. If you download software or huge files, like CD-images (so called iso files) from the internet, often there is a checksum or a hashcode provided. With Jacksum you can calculate such a checksum or hashcode from your local copy. If both checksequences are identical you know that the filetransfer was successful.
  • As Jacksum supports recursively file processing, you can compare two directory trees. Therefore you will be able to verify, if your copies or backups are identical with the original source, even if you don’t have access to both trees at the same time (compare two DVDs with just one drive for example).
  • Jacksum can assist you if you want to perform an unidirectional sync of two directory trees, even if they are on two different computer without a connection in between.
    As Jacksum reads each byte of a file, you can check if all files are still readable on your CD-ROM or DVD.
  • As Jacksum supports a platform independent and compatible file format, it helps you to verify data integrity of burned data on CD-ROMs or DVDs even after many years and even if you will have changed your Operating System.
  • Jacksum can help you to create incremental backups. If you are a developer, Jacksum can help you to create patches for your customers.
  • You can use Jacksum for intrusion detection, because Jacksum can check whether and what files have been changed or deleted on your system. Jacksum can not only check the content of each file you want, but also the file timestamps.
  • Use Jacksum for website content change detection, so get informed if something has changed on your favorite website (can be useful if the website has no announcement-mail-alias).
  • This software is OSI Certified Open Source software
  • It is entirely free software (it runs on free platforms and therefore it is listed also at the Free
  • Software Foundation directory)
  • It is free of charge, it costs nothing
  • It is free of registration (well, I’m happy if you write an e-mail to jonelo@jonelo.de for feedback
  • It never expires
    Requirements:· JRE version 1.3.1 or later

Links:
http://manpages.ubuntu.com/manpages/jaunty/man1/jacksum.1.html
http://linux.softpedia.com/get/Utilities/Jacksum-7175.shtml
http://packages.ubuntu.com/lucid/jacksum
https://jacksum.dev.java.net/
http://www.jonelo.de/java/jacksum/
http://sourceforge.net/projects/jacksum/

Tagged : / / / / / / / / / / / / / / /

Introduction of RFT(Rational Functional Testing)

rational-functional-testing-intro

Introduction of RFT(Rational Functional Testing)

Functional Tester is available in two integrated development environments and two scripting languages.
Functional Tester, Java™ Scripting uses the Java language and the IBM® Rational® Software Delivery Platform.
Functional Tester, VB.NET Scripting uses the VB.NET language and the Microsoft® Visual Studio .NET development environment.
Use Functional Tester to:

  • Perform full functional testing. Record and play back scripts that navigate through your application and test the state of objects through verification points.
  • Create and edit simple and easy-to-read, object-oriented test scripts. In addition to automatically recording test scripts, Functional Tester contains wizards for generating code, for example, for automatically creating a verification point. Functional Tester’s test scripts are implemented in your choice of Java or VB.NET.

Object Based Testing
The object-oriented recording technology in Functional Tester lets you generate scripts quickly by recording applications against the application-under-test. Functional Tester uses object-oriented technology to identify objects by their internal object properties, not by screen coordinates. If the location or text of an object changes, Functional Tester can still find it on playback.
The object testing technology in Functional Tester enables you to test any object in the application-under-test, including the object’s properties and data. You can test objects in Java, VB.NET, Windows®, and Web-based applications, whether they are visible or hidden in the interface.
When you record a script, Functional Tester automatically creates a test object map for the application-under-test. The Functional Tester test object map lists the test objects available in the application, whether they are currently displayed or not. You can also create a new test object map, either by pasing it on an existing map or by adding objects as required. The object map provides a quick way to add objects to a script. Since the test object map contains recognition properties for each object, you can easily update the recognition information in one central location. Any scripts that use this test object map also share the updated information.

Verification points
During recording you can insert verification points into the script to confirm the state of an object across builds of the application-under-test. The verification point captures object information (based on the type of verification point) and stores it in a baseline data file. The information in this file becomes the baseline of the expected state of the object during subsequent builds. Functional Tester has an object properties verification point and five data verification points (menu hierarchy, table, text, tree hierarchy, and list). You can use the Verification Point Comparator to analyze differences across builds and update the baseline file.

Platform-independent
Functional Tester features platform-independent and browser-independent test playback. For example, you can record a script on Windows and play it back on Linux®. You can record a script using Firefox, Mozilla, Internet Explorer or Netscape. Because the script contains no references to the browser used during recording, you can play back the script using any of the supported versions of Firefox, Mozilla, Internet Explorer or Netscape.

Integrated with Rational TestManager
Functional Tester is integrated with Rational TestManager, which enables you to record and play back a Functional Tester script from TestManager and make use of TestManager features, such as the Log. If you have TestManager installed, you can use these integrated features. See Understanding Functional Tester integrations for information.

Integrated with Rational ClearQuest
Functional Tester is also integrated with Rational ClearQuest® Test Manager, which enables you to play back a functional test script from ClearQuest TestManager, generate logs, and track defects. If you have ClearQuest Test Manager installed, you can use these integrated features.

Tagged : / / / / / / / / / / / / / / /

Advance Features of Smart Build Tools

smart-build-tools-features

Table of Contents
Agile Development Challenges
Deployment Challenges
Build Acceleration Challenges
Integration with Elastic and Cloud Computing
Workflow Management
Reference:

 

Agile Development Challenges

Two of the core principles of agile development are: “deliver working code frequently” and “working software is the primary measure of progress.” More and more teams, from ISVs to enterprise IT groups, are recognizing the quality and productivity benefits of integrating early and often. Whether you want to just build and test more frequently or implement a comprehensive agile process, you’ll need to ensure that your build and test process is:

  • fast
  • automated (not requiring manual intervention)
  • reliable (broken builds are the enemy of an agile approach)

Addressing the build-test-deploy process is one of the best first steps toward an agile development model.

Agile Development  Challenge

http://electric-cloud.com/images/common/spacer.gif

http://electric-cloud.com/images/common/spacer.gif

Impact

http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif
Builds require manual intervention http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif Integrating often will overwhelm a manual build process http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif
Slow build cycle (whether long individual builds or a large number of build targets) http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif Long builds limit the number of iterations possible in a day
Builds longer than ~30 minutes rule out Continuous Integration
http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif
Developers introducing errors during integration or production builds http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif If developers can’t do preflight builds and tests on all targets/platforms prior to check-in, Continuous Integration can turn into Continuously Broken Builds http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif http://electric-cloud.com/images/common/spacer.gif

Deployment Challenges

Deployment of a real world application is a complex, and all-too-often, a manual process. The complexity comes from the fact that the application may need to be deployed to any number of environments: Development (DEV), Quality Assurance (QA), User Acceptance Testing (UAT), Pre-production (STAGE), and Production (PROD). Each of these environments is typically slightly different requiring different accounts, password, and setting and thus it may seem like manual intervention is required.
Complicating matters is the fact that today’s business applications comprise multiple moving parts that all need to be synchronized and deployed together. For example, a typical J2EE application can be made up of an EAR file, database changes, some static content, and environment changes (such as JDBC connection pools, Message Queues, etc.). Making sure that all these pieces get deployed on the correct machines and in the correct order can feel like a daunting task, especially if you need to repeat it for each environment. Keep in mind that each environment may have a different number of database servers, application server, HTTP server, etc.

Build Acceleration Challenges

Slow software build cycles are more than just an annoyance. Whether you have long monolithic builds or a large number of smaller builds that take a long time, they are having a real impact on your development organization.

  • Wasted time, frustration waiting for builds
  • Fixing broken builds delays testing
  • Fewer bugs fixed, fewer features added
  • Developers avoid builds
  • QA may skip tests to meet deadlines

A recent survey of more than 350 development professionals found that 68% reported broken builds at least a few times per month (and often weekly), and fixing those typically wasted 2 weeks per year. The majority said their builds were each more than 1 hour long, with 24% reporting builds of 2-4 hours, which translates to time not spent creating great software.

Integration with Elastic and Cloud Computing

In a large infrastructure with many projects, the CI server often experiences variable task loads with unpredictable load peaks (such as during releases). In an environment with fixed resources, this leads to build queue growths, increased build times, and poor build scheduling control.
Computing clouds such as Amazon EC2 provide a nice and efficient way to scale resources to fulfill variable demands with random load spikes. Such rapid demand growths is exactly what happens in Build Server during CI “rush hours,” when everyone starts personal builds before going home, and also during pre-release days, when every new fix needs another round of unit tests, and build feedback time becomes mission-critical.
Build Server utilizes EC2 via virtual build agents. Virtual build agents are similar to standard ones, except that they run on virtual instances on the Amazon EC2. This means that Build Server is able to dynamically start as many instances of such agents as needed, in order to keep the build queue under control under high loads. Additionally, Build Server can shut down virtual build agents when they are not needed anymore, to minimize EC2 instances uptime.
While being a great solution for mitigating peak loads and maintaining constant build feedback time, Amazon EC2 integration also provides great cost optimization opportunities, as each resource accumulates costs only while in use.

Workflow Management

In recent years, build management solutions have provided even more relief when it comes to automating the build process. Both commercial and open source solutions are available to perform more automated build and workflow processing. Some solutions focus on automating the pre and post steps around the calling of the build scripts, while others go beyond the pre and post build script processing and drive down into streamlining the actual compile and linker calls without much manual scripting.

Tools Comparison

 

  OpenMake Bamboo AnthillPro TeamCity Electric Cloud
Agile Support YES YES YES YES YES
Deployment YES NO YES No YES
Remote Agent YES YES YES YES YES
Cloud – Elastic NO YES No YES YES
WorkFlow Mgmt YES YES – Limited YES YES – Limited YES, Limited
           

Reference:

http://www.electric-cloud.com
www.anthillpro.com
http://www.atlassian.com
www.jetbrains.com/teamcity
http://www.openmakesoftware.com

Tagged : / / / / / / / / / / / / / / / / / / / /

Installation and Configuration Guide: Bamboo

bamboo-installation-and-configuration

Bamboo is available in two ‘distributions’ — Standalone or EAR-WAR. The Standalone distribution is recommended (even for organisations with an existing application server environment).
Standalone Installation Guide — Windows

Download and Install Bamboo Standalone for Windows (Windows Installer)

·  Download Bamboo Standalone for Windows. Bamboo Standalone for Windows is available for download from the Bamboo Download Center. Choose the Windows Installer (.exe) download.
·  Launch the Bamboo Windows installer (atlassian-bamboo-x.x-standalone.exe) to begin the installation wizard.
·  The installer requires you to specify two directories:
Bamboo installation directory — This is the directory where Bamboo’s application files will be installed. The default is:
C:/Program Files/Bamboo
Bamboo home directory — This is the directory where Bamboo will store its configuration data. If the directory you specify doesn’t exist, Bamboo will create the directory when it launches. The default is:
C:/Documents and Settings//Bamboo-home
Note: You must use forward-slashes in your directory path. Backslashes are not recognised by Bamboo. Please ensure that the Bamboo home directory is not located inside the Bamboo installation directory

Download and Install Bamboo Standalone for Windows (ZIP Archive)

·  Download Bamboo Standalone for Windows. Bamboo Standalone for Windows is available for download from the Bamboo Download Center. Choose the ZIP Archive (.zip) download (click the ‘Show all‘ link to show the ‘ZIP Archive‘ download link).
·  Extract the files from the ZIP Archive (atlassian-bamboo-x.x-standalone.zip) to a Bamboo installation directory of your choice. By default, the root directory in your zip file is named “Bamboo”.
·  Set up your Bamboo home directory — this is the directory where Bamboo will store its root configuration data. To do this, edit the file named bamboo-init.properties in the Bamboo/webapp/WEB-INF/classes directory. In this file, insert the property “bamboo.home”, with an absolute path to your Bamboo home directory. Your file should look something like this:

    bamboo.home=C:/test/bamboo-home

Alternatively, you can specify an environment variable ‘BAMBOO_HOME’ which specifies the absolute path to your {BAMBOO_HOME} directory. Bamboo will check if an environment variable is defined.
·  If you are going to use Bamboo remote agents, set the following in the bamboo-init.properties file in the /webapp/WEB-INF/classes directory:
bamboo.jms.broker.uri=tcp://localhost:54663

  • Replace ‘localhost’ with the real host name or IP address of your Bamboo server.
  • If port number 54663 is already in use, specify a different port number.

Launch Bamboo

Launch via the Start Menu
If you have used the ‘Windows Installer’ to install Bamboo, you can start Bamboo via the Start Menu in Windows (generally under the ‘Bamboo’ folder by default). The following options will be available in your Start Menu:

  • Bamboo Continuous Integration Server Uninstaller‘ — uninstalls Bamboo from your computer
  • Install Service‘ — installs Bamboo as a Windows service (note, this will not start Bamboo)
  • Remove Service‘ — removes the Bamboo Windows service, if you have previously installed it (note, Bamboo will not be uninstalled from your computer)
  • Start in Console‘ — starts Bamboo in a Windows console
  • Start Service‘ — starts your installed Bamboo Windows service
  • Stop Service‘ — stops your installed Bamboo Windows service

You can run Bamboo in two modes, either in a Windows console or as a Windows service:

  • To run Bamboo in a Windows console, click the ‘Start in Console‘ option.
  • To run Bamboo as a Windows service, click the ‘Install service‘ option. After the service is installed, click ‘Start Service‘. Once you have installed Bamboo as a service, Bamboo will start up automatically every time Windows restarts.

Launch via batch file
You can start Bamboo via the batch files that are shipped with Bamboo. If you have installed Bamboo via the ZIP Archive, you will need to use the batch files to start Bamboo. You can find the following batch files in your installation directory:

  • BambooConsole.bat‘ — this starts Bamboo in a Windows console.
  • InstallAsService.bat‘ — this installs Bamboo as a Windows service. Note that this will not start Bamboo.
  • StartBamboo.bat‘ — this starts your installed Bamboo Windows service.
  • StopBamboo.bat‘ — this stops your installed Bamboo Windows service
  • UninstallService.bat‘ — this un-installs the Bamboo Windows service from your machine. Note that your Bamboo installation still remains.

You can run Bamboo in two modes, either in a Windows console or as a Windows service:

  • To run Bamboo in a Windows console, run ‘BambooConsole.bat
  • To run Bamboo as a Windows service, run ‘InstallAsService.bat‘. After the service is installed, run ‘StartBamboo.bat‘. Once you have installed Bamboo as a service, Bamboo will start up automatically every time Windows restarts.

Configure Bamboo

  1. Access your running Bamboo instance by going to your web browser and entering the address: http://localhost:8085/.
  2. Configure Bamboo via the Setup Wizard which will display. Read Running the Setup Wizard for further instructions.

For more details on Installation and Configuration, please review following links
http://confluence.atlassian.com/pages/viewpage.action?pageId=55246864

Tagged : / / / / / / / / / / / / / / / / / / / / /

Disadvantages of Bamboo – Bamboo Expert Review

bamboo-disadvantage

Bamboo Disappointment
Being a big fan of Atlassian’s Confluence and Jira, it was with much anticipation that installed Bamboo, the continuous integration (CI) engine they’ve released. Perhaps these high expectations led to my ultimate disappointment with Bamboo, but truly the features I’ve come to expect in a commercial CI product are nowhere to be found.

No concept of inherited project structure.
If you have 20 modules you would like to build, defining behavior and properties for each is a daunting task. QuickBuild and AntHillPro both allow for a hierarchal organization of modules, so that a child may inherit properties (like environmental variables, build targets, etc) of its parent. In Bamboo, when creating a “Plan”, I can clone an existing module, but that’s it. Should I have the need to change a property for all plans, I’ll be forced to configure each through the web GUI. A tedious process — even with Cruisecontrol I could search & replace config.xml in a text editor to make wholesale changes.

Alien nomenclature
In Bamboo, you have top-level “projects”, and beneath them you have “plans”, which represent the modules being built. I’ve never used the word “plan” before when describing a module’s build, and frankly the limited options offered by Bamboo to govern build behavior makes it a dubious word choice.

No passing of properties
It is sometimes desirable to direct a CI engine to pass arbitrary properties to one’s build process, and vice versa. I don’t mean “static” environmental variables, rather I refer to dynamic properties like “version number”. No such functionality is present in Bamboo. Luntbuild and Quickbuild both allow for this using OGNL expressions.

No concept of build promotion
Most commercial CI products have evolved to include “Application Lifecycle Management”, so you may describe how a build can go from being development-status to gold. Implicit in this is a workflow allowing QA to promote and release builds. None of this is even hinted at in Bamboo — it does not even tag your build.

Reference:
http://poorinnerlife.blogspot.com/2007/09/bamboo-disappointment.html

Tagged : / / / / / / / / / / / / /

Bamboo Vs TeamCity Vs CruiseControl – Continuous Integration Expert Review

bamboo-vs-teamcity-vs-crui

Difference between Bamboo Vs TeamCity Vs CruiseControl
TEAMCITY

  • TC pre-tested commit is good.
  • TC integrates to Visual Studio which is our main IDE.
  • JetBrains are more focused on supporting .NET builds than Atlassian is, since JetBrains actually has .NET products so they use it internally.
  • Support for .Net projects, as well as Java, in the same product (nice if you need it).
  • Server-side code coverage analysis (you could get the same results by running EMMA from the Ant build.xml)
  • Server-side static code analysis using IDEA inspections (nice but relies on using IDEA for development – Checkstyle and FindBugs could do something similar from Ant).
  • Pre-tested commits. Sends your changes to the CI server for building before committing to version control. Your changes are only checked-in if the build succeeds and all tests pass.

BAMBOO

  • Bamboo JIRA integration is awesome. I wish TeamCity provides that kind of integration.
  • Bamboo on the other hand looks like a great tool too. It has so many plugins (just like JIRA). It looks nice as well. I can live without VSTD integration, but I really wanted pre-tested commit. Just a note: I did create a feature request ticket to Atlassian telling them about TC’s pre-tested commit feature.

CruiseControl

  • “CruiseControl is a framework for a continuous build process.”
  • “Bamboo is a continuous integration build server that offers heaps of insight on build processes and patterns via solid reporting metrics.”
  • “TeamCity is an innovative, IDE independent, integrated team environment targeted for .NET and Java software developers and their managers.”
  • “AnthillPro3 is a third generation Build Management Server.”

Links and Reference:
http://blog.chris-read.net/2007/02/21/quick-comparison-of-teamcity-12-bamboo-10-and-cruisecontrol-26/
http://blog.uncommons.org/2006/12/08/teamcity/
http://poorinnerlife.blogspot.com/2007/09/bamboo-disappointment.html

Tagged : / / / / / / / / / / / / / / / / / / / / / / / / / / /

Bamboo – A Continuous Integration Server – Complete Guide

bamboo-a-continuous-integration-server

Bamboo – A Continues Integration Server

Continuous integration (CI) brings faster feedback to your development process, preventing bugs from piling up and reducing the risk of project delays.

Bamboo enables development teams of any size to adopt CI in minutes, easily integrate it with their work day and scale their build farm using elastic resources in the Amazon EC2 cloud.

Continuous integration (CI) brings faster feedback to your development process, preventing bugs from piling up and reducing the risk of project delays.

Bamboo enables development teams of any size to adopt CI in minutes, easily integrate it with their work day and scale their build farm using elastic resources in the Amazon EC2 cloud.

Bamboo makes every stage of continuous integration adoption easy, intuitive and pain-free.

Set up your first CI build in minutes
Integrate/ Collaboration CI with your current tools and workflow
Scale your build farm on-premises or in the cloud!
Analyse and improve your build performance
Extend Bamboo with plugins and the REST API
Full feature list and system requirements

Integrate/ Collaboration

Bamboo lets you pick how and when you’re notified about builds and integrates easily with tools you’re already using, so your team will be able to work together to keep your builds green!

Notifications via email, RSS, IM or IDE pop-up:
With Bamboo each team member can choose how and when to be notified:

Email, RSS, IM or IDE pop-up notifications
Customised email templates (HTML or plain-text)

Choose which builds to be notified about:

All builds for a project
Specific build plans
Every time a build finishes, only when it fails X times, only when it hangs, or only when it times out

Priorities your build queue
When your build agents are busy, Bamboo builds go into a queue.

Need to see the results of a build ASAP? You can:

Escalate builds to the front of the queue with one click.
Stop in-progress builds.
Move lower-priority builds to the back of the line.

Apply labels and comments to build results
Why did a build fail? What did you fix to turn it green again? Which builds have been tested by your QA team? What builds are approved for release to customers? Bamboo let’s you provide content to your builds results using labels and comments.

  • Apply labels and comments via your Web browser, IDE, or Bamboo’s unique 2-way IM system
  • Subscribe to an RSS feed of all build results with certain labels (e.g. “QA_FAILED”, “PATCH”,etc.)

Run and fix builds from Eclipse and IntelliJ
Bamboo integrates with Eclipse and IntelliJ IDEA to bring build management and notification into the IDE:

  • Run, label and comment builds
  • Receive pop-up notifications for builds you care about
  • Quickly identify failing tests and re-run them locally with just one click

Link builds to JIRA issues
Bamboo integrates with JIRA and allows you to easily:

  • View all builds related to an issue
  • View all issues related to a build, and mark them resolved
  • Embed build status and summary gadgets into your JIRA project dashboards

View changes that triggered builds with FishEye
By integrating Bamboo with FishEye, you can quickly see what files were changed to trigger a build and what JIRA issue the changes were made for. Want to see exactly what changed? FishEye diffs are just a click away!

Run Clover Test Optimization and code coverage
Integrate Clover for Java Test Optimization and code coverage, and you’ll instantly get faster builds and better code quality insights!

  • Clover Test Optimization can make your Java unit and functional tests run several times faster
  • View method, branch and statement coverage
  • See when and where coverage drops over time

Display results in Confluence dashboard portlets
Bamboo provides portlets that can easily be embedded on any Confluence page, so you can keep every project stakeholder up to date on project status.

  • See pass/fail status, duration, and number of failed tests for recent runs of a build plan.
  • View the latest status for all builds in a project.

Embed JavaScript widgets in any HTML page
Bamboo provides JavaScript widgets that can be embedded into any HTML page. With just a few lines of code, create custom pages that include:

  • Latest build results
  • Latest changes
  • Last result of all builds in a project
  • Plan summary graphs

Scale Your Build System

As your team runs more and more CI builds, you’ll want to add more computing power to maintain fast feedback on your build and test results.

Bamboo makes scaling your build system a snap with:

  • Remote agents that run on-premises
  • Elastic agents in the Amazon EC2 cloud.

Remote agents

Your Bamboo server can manage dozens of remote agents simultaneously, taking advantage of available computing power to provide the fastest feedback possible. With remote agents you can:

  • Run multiple builds at once to reduce feedback times
  • Test on different platform configurations

Elastic agents

Elastic agents are remote agents that run in the Amazon Elastic Compute Cloud (EC2). By using the cloud, you can instantly scale your build environment as your development cycles ebb and flow and build queues become longer. Bamboo makes it easy to customize and manage your elastic agents:

  • Schedule agents to start and stop based on known peaks and valleys in your need for CI builds.
  • Cut operational costs by taking advantage of EC2’s reserved instance pricing and availability zones.
  • Customise agent images with different operating systems, installed software, and computing power to create the most flexible building and testing system possible.
  • Reduce data transfer and startup times by using Amazon Elastic Block Storage (EBS) for persistence.

Every build agent (remote or local) has specific capabilities that are used to match the requirements of queued builds.

Bamboo’s web interface makes it easy to manage all your agents and view a log of recent activity for each agent.

Analyse and Improve Your Builds

Your team is running builds on every commit, comprehensive performance and functional tests are running every night, and your build farm has been scaled out. Wondering where and how you can make improvements? Bamboo makes it easy to see the performance of your builds and identify trouble spots and possible improvements.

  • Find out why a build failed
  • View build plan performance across time
  • See what’s breaking most so you can investigate
  • Compare several plans
  • Compare team performance and drill into author details

What happened?
When your builds turn red, you want to fix them as fast as possible! Bamboo provides information to your team that makes root-cause analysis easier, so you can turn the build green again ASAP.

  • Full stack traces for compilation failures
  • Full stack traces for test failures
  • Highlighting of newly failed tests
  • Click-through to the failing code from within the IDE

Build plan summaries at a glance
Bamboo’s plan summary view presents a wealth of information about each build plan including:

  • Results of the latest build
  • Historical pass percentage and average build time
  • Result, duration, and number of failed tests for recent builds
  • Duration and failure trends across time

Find your trouble spots
Bamboo identifies problem areas within each build based on its performance history including:

  • Most common failures
  • Tests that take longest to fix
  • Long-running tests

Compare your build plans
Which builds are turning red the most often? Which builds are taking longest to run? Bamboo reports provide useful information within a few clicks, for the exact set of plans you care about. Reports include:

  • Success percentage
  • Duration
  • Activity
  • of tests
  • of failed tests
  • Time to fix failures
  • Clover code coverage

View the CI scorecard
Who’s submitting bullet-proof code? Who’s going to buy beer for breaking the most builds? Bamboo provides insightful reports for:

  • Full Teams: # of builds triggered / failed / fixed
  • Individuals: build history, last 10 triggered / failed / fixed
  • Groups: Success percentage, # of failed / fixed builds

Extend Bamboo

Bamboo works great right out of the box, and it can be extended to fit your exact needs:

  • Install 3rd party plugins from the Atlassian Plugin Exchange to add support for additional SCMs, test tools and more.
  • Create your own Bamboo plugins. Get help from Atlassian and other Bamboo developers in the Bamboo Development Forum and then share your plugins on the Atlassian Plugin Exchange
  • Use Bamboo’s comprehensive REST API to integrate Bamboo with other tools or automate common tasks.

System requirements and supported development tools

CI server and agent operating systems Windows, Linux, Mac OS X

Cloud platforms

Amazon EC2 (Linux, Windows)

SCM repositories

Built-in support: Subversion, CVS, Perforce
Supported via plugin: Git, Github, Mercurial, Clearcase, Accurev, Dimension

Programming languages

All languages supported — Java, C/C++, C#, VB.net, PHP, Ruby, Python, perl, …

Builders

Ant, Maven, Maven2, make, NAnt, Visual Studio (devenv, MSBuild), custom command line, shell scripts

Test tools

JUnit, any tool with JUnit XML output including: Selenium, TestNG, NUnit, CppUnit, PHPUnit, PyUnit (plugin), PMD (plugin)

Code coverage tools

Atlassian Clover, Corbertura (Plugin), RCov (Plugin)

Build and agent management

Build configuration
  • Plan to agent capability matching
  • Build artifact management
  • Build notification configuration
  • Bulk editing of multiple plans
  • Build result and artifact expiration

Build triggers

  • Commit-triggered builds
  • Manual builds
  • Scheduled builds
  • Dependency-triggered builds

Build queue management

  • Build-queue re-ordering
  • Hung-build detection
  • Configurable queued build timeouts
  • Elastic agent startup

Build result management

  • Label build results via Web browser, IDE, or 2-way IM
  • Comment build results via Web browser, IDE, or 2-way IM
  • Create a Mylyn task to fix failed builds
  • View the most popular labels, all build results with a label, or all labels applied to a build plan.

Agent configuration

  • local, remote and elastic agents
  • Builder, JDK and custom capabilities

Agent management

  • Agent status monitoring

Build result notifications

RSS feed
  • All builds or all failed builds across all plans
  • All builds or all failed builds of a specific plan
  • All builds with a specific label

Email

  • Customized email templates
  • All results or all failed results for a build plan

Instant message

  • Google Talk, Jabber, other XMPP-based clients
  • All results or all failed for a build plan
  • Commment on build results via IM
  • Label build results via IM

IDE notification

  • Pop-up notifiers
  • Pass/fail icons in status bar

External tool integrations

IDE connectors Eclipse, IntelliJ IDEA

JIRA

  • View and manage issues related to a build result
  • View all builds related to an issue
  • View builds related to a JIRA project
  • Display JIRA dashboard gadgets for latest build status and build plan summaries

Confluence

  • View latest result of a build plan
  • View recent results for projects, plans, or authors
  • Charts for recent build duration and test failure count
  • Charts for average build duration and test failure % over time

FishEye

  • View committed changes that triggered a build
  • One-click diff and change history from Bamboo build results

Other tools

  • JavaScript widgets including latest builds, plan status, and summary graphs

Elastic Bamboo

Elastic agent configuration
  • Amazon Machine Image (AMI) customisation
  • Elastic Block Storage (EBS) persistence

Elastic Agent Management

  • Web browser and SSH management
  • Start agents from build queue
  • Agent scheduling
  • Agent usage tracking

Build analysis and reporting

Build plan reports
  • Duration, failed tests for recent builds
  • % Successful builds, average build duration over time
  • Test statistics per plan
  • Individual test history
  • Clover — code coverage per plan
  • Clover — lines of code per plan
  • Avg. time to fix builds

Author reports

  • Build statistics per author
  • Build results per author
  • Activity, failures, fixes per author

Security and user management

Authentication
  • Single sign-on with Atlassian Crowd
  • LDAP integration

Permissions and access control

  • User and group definitions and permissions
  • Anonymous user permissions
  • Plan-level permissions

Extending Bamboo

Plugins
  • Bamboo plugin framework
  • Dozens of 3rd party plugins available for download

API

  • REST API

Reference:

Features:
http://www.atlassian.com/software/bamboo/features/
Bamboo: getting Started
http://confluence.atlassian.com/display/BAMBOO/Bamboo+101
Forum
http://forums.atlassian.com
Plugins
https://plugins.atlassian.com/search/by/bamboo
http://confluence.atlassian.com/display/BAMBOO/Bamboo+Plugin+Guide
Jira and Bamboo
http://www.atlassian.com/better-together/progress_report.jsp

Tagged : / / / / / / / / / / / / / / / / / / / / / /

Deployment Foundation Issues

deployment-foundation-issue

Deployment Foundation Issues

Establish Key Roles/Charter for Deployment

The very first order of business is to firmly establish “who’s on first” for getting deployment done. Senior management is crucial at this point for making sure all their direct reports and managers are on board with this
and that it comes from the top. I mention this because at one place I worked, we immediately got into interdepartment squabbling due to a lack of senior management support and direction. If you hear a manager
say things like “do what you want — but don’t touch my area,” you will have deployment problems. I strongly recommend the formation of a process group as the focal point for all matters related to process and process deployment. This group has to have both the authorization and responsibility for process. If you have a distributed set of “process owners,” consolidate that responsibility and authority to this new group. My requirements for membership in this process group are:

Six to eight people. Larger process groups tend to be less efficient and more cumbersome. A smaller group tends to be ineffective. It is not necessary to have representatives from all corners of your organization. It is important that these domain experts get called in as necessary for process development and inspection. One company had a 15-person process group established by a non–process-oriented vice president. It was a disaster to get a
repeatable quorum present for any meeting. We spent subsequent meetings repeating stuff from earlier meetings to accommodate a different set of participants at every meeting.

Process-group commitments. My most successful process group was when I insisted that members commit 5 percent of their workweek to process-group meetings. Group members and their managers had to sign the commitment. The 5 percent figure is doable — even for busy people. Two one-hour meetings per week reflect that percentage. I also had fixed time meetings both by time and day of week. It became automatic to show up. To make this really work, I was the process-group lead and I dedicated 100 percent to this effort. I had clerical support services available to me. The most effective process-group meetings are concentrated sessions
with a time-stamped agenda and where my support staff and I do all extracurricular activities. You want to restrict extra time (beyond actual process-group meeting time) needed by your key process participants because they tend to be super busy.

Showing up on time. We could not tolerate people wandering in five or ten minutes late. We started promptly on the hour and stopped promptly on the hour. At one company, I removed a person for being late because it held everyone up. Promptness became so important at one commercial company that other process- group members would be “all over” tardy people. The tardiness stopped quickly when peers got involved in any discipline.

People who are process oriented. Do not have people in this group who don’t fit this requirement! At one company, a vice president insisted on naming people to the group (which became double the size I had wanted) who were almost completely ignorant about process. We spent almost all our precious process-group time just getting these people to understand the most fundamental aspects of process. It was painful. The VP wondered why progress was slow. Duh!

People who are opinionated — i.e., not afraid to speak up on issues. You cannot afford to have people just show up and suck air out of the room and not participate. The best processes I’ve developed came from sessions where it was not clear who would walk out alive after spirited process discussions.

People that others look up to. They may be leads or workers. Every organization has these types of people and they may not be in the management ranks. The reason for this requirement is to form an initial set of process champions right out of the box. These initial process champions will develop more champions.

People who are willing to have an enterprise perspective versus an organizational perspective. This could be a huge problem if process- group discussions degenerate into preservation of turf — no matter what. At one place, I actually went to a paint store, bought disposable painting hats, placed a big “E” for enterprise on the hats, and made process-group members wear the hats at our meetings to reinforce that enterprise focus. It got a few laughs and some grumbles but it worked.

People who are not “who” oriented. A process group avoids the “who” question and concentrates on the “whats.” Once the “what you have to do” is addressed, the “who” looks after itself. When process-group meetings degenerated into discussing “who does this” and “who does that,” I routinely stopped the meeting and reminded everyone that when you have a hole in the bottom of the boat, this is not the time to discuss whose hole it is! I got laughs but my point was taken.

This is your key group for process development and deployment. It’s obvious, but if you have this marvelous group put together without regard to an overall process architectural goal, you will fail. This is where this software process model will help you enormously. Ideally, the processgroup lead has an in-depth knowledge of the targeted process architecture with an initial goal to get the process group up to speed on this aspect first — before any company processes are tackled. If you are under pressure to “just get on with it” (without getting all process members up on the target process architecture), you will fail. You will end up flailing around for a large amount of time. You will also end up with a hodgepodge of process elements and no encompassing architecture. You want to end up with a hierarchy of goals supported by tasks that are measurable for earned value and progress reporting by the process group itself. Essentially, you want to create a balanced scorecard for process progress. This makes your process group accountable for progress just like any other project team.
For deployment success, I will repeat an important division of labor within the process group itself. You absolutely need to develop advocates for the process framework architecture itself and make sure the integrity
of the process model is maintained. This book will be invaluable for that aspect. These people are very different from most process-group members, who should be domain experts. The process framework advocates are
the folks that put the “meat on the bone” for process and they will make sure that the process parts all fit within that framework architecture, whereas the domain folks make sure to develop process elements that are useful and make sense.

I make this point because uneducated management personnel may pressure you to “just get on with it” without considering the importance of making sure that all process elements fit within a framework architecture.
The worst thing you can do is crank out process into an ever larger pile of stuff that increasingly gets more and more useless for the organization. The main litmus test for process is that it is useful. I have run into
managers who seem to think that bigger piles mean success. In reality, you may have just the opposite result. Resist those who are pushing you in that direction for success. The most successful process group I led was when I was not only the lead but also the process architect and had management backing to do what was needed. I mention management backing because at another place, I had the exact same situation but had a boss who was so insecure that all my suggestions and recommendations were either ignored or rejected because they didn’t come from him! Anything from me was dead on arrival. If you’re ever in that position, run, don’t walk! You cannot succeed. There are people like that out there and (sadly) some are in senior management positions. I simply didn’t want to manipulate him to have him believe that all ideas were his ideas. That’s what it would take
to deal with this kind of person.

Ensure an Inspection Procedure Is in Place

When actually doing process deployment for the software process model, there is one how-to procedure that absolutely needs to be addressed early on: the inspection procedure. This particular procedure is fundamental to
all the activities within this software process model as a quality gate. If you have a lousy how-to procedure here, you will have an awful time in getting people to buy into this model. Conversely, a good how-to will
take off like wildfire and become engrained in an organization real fast. The software process model wants quality built in the “what you have to do” world by placing the quality responsibility on the producer’s back.
The inspection procedure is critical to this end goal. I worked at one place that had a “review” procedure in place. It was hardly used, did not work well, and the management protected it with
their lives. I had the gall to suggest a better way of doing things. I had to present this new way at three different hearings to this management group, finally receiving a disposition of “rejected.” They could not handle
the fact that this software process model allows for better mousetraps. Both methods could coexist in this model. I knew that once the better way was an option, the bad way would drop off for usage very naturally.
These managers had a personal and vested interest in preserving the status

quo — regardless of usefulness. They had invested time in the existing process element. They wanted no interlopers on their possessive world. This company was very closed in their thinking. Consequently, we had
no effective inspection procedure at this company and had a huge management barrier to ever getting a better way proposed or deployed. This same company has the same ineffectual review procedure in place
today that is really bad and is barely used. Go figure! In another job, I had the privilege of working for a section of a very large company and had incredible support from the head person. In that
environment, I was able to provide this part of the company with a slick, efficient, Web-based inspection procedure that was up to ten times faster than the existing inspection procedure. My new inspection procedure also
produced higher-quality inspections and had built-in defect prevention to boot. What happened was incredible. The word spread like wildfire within my own group about how great this procedure was. That worker enthusiasm spilled over to other organizational elements that clamored to get onboard with our solution. I was deluged with training requests and guest appearances to various “all-hands” meetings regarding this way of doing
things. I didn’t have to do a thing to sell this. It sold itself. I knew that the software process model approach encourages better ways of doing things and encourages variances in scale or location quite naturally.

Why is the inspection procedure so critical to this software process model?

 Every activity at the “what you need to do” level has built-in inspections across the board (i.e., the inspection procedure is a how-to elaboration on all the “Inspect” verbs in all activities).
 A bad inspection procedure can have a huge detrimental effect on all activities’ elapsed completion times. Conversely, an efficient inspection procedure can vastly improve activity execution times across the board.
 A good inspection procedure increases work product quality and reduces rework. Rework is expensive and should be avoided at all costs.
 A good inspection procedure gives you the basis for defect prevention — in addition to defect detection. With the software process model, you now have the ability to ask, “Where should this defect have been found?” This provides the mechanism to improve any earlier inspection checklist associated with any earlier work product. With this inspection procedure you have a built-in process-improvement mechanism in this software process model.
 Finally, an efficient inspection procedure will be used and will become part of the company culture. A bad one will not be used.

Get at Pain Issues

To be successful with process deployment, you really want to keep coming back to pain issues for any organization. The big question is, how do you do that? And how do you do it so that the data is believable? This
is independent of the type of process model you’re using. You will achieve higher levels of buy-in from all levels of the company if the perception is that you’re solving real-world problems. If you separate
process initiatives from “pain” issues, you will get a lot of cold shoulders about this process stuff. An absolute killer is to tie process initiatives to a maturity model (like CMMI) in a vacuum. As I mentioned before, a
particular model or standard can be viewed as the flavor of the month. Some people may view all this with an “if I keep a low profile, this too shall pass” attitude. There’s nothing like solving real problems — especially
if people can reduce their 60-hour weeks to something more reasonable. I learned one big lesson when I got married — don’t discount the power of a spouse! As Dr. Phil has said repeatedly, “If Mom’s not happy, no one
is happy.” For most employees, you really have a shadow employee to deal with as well — the employee’s spouse. If the employee can get home earlier, play with the kids more, do family things more, etc., how
do you think that family unit is going to support you? Do you think you’ll get early support for your next process initiative? The people part of process improvement can be enormous as a huge positive factor or a
huge negative factor. The process group needs to come to grips with this aspect of deploying new processes in an organization. It is not enough to have a marvelous process framework architecture into which all the
process parts fit nicely. Personal interviews have mixed results for actually getting at pain issues. Can you be trusted as an interviewer? Will the person being interviewed be forthright or will he or she give you politically correct data? Will there be retribution if he or she dares to be totally honest? For these reasons, I would not get process problem data this way. Two companies where I worked tried the survey route. In my opinion,
surveys are best suited for getting simple check-off answers to specific questions. They are not suitable for open-ended responses. I still laugh at a British sitcom called “Yes, Prime Minister,” where you can organize
sets of questions and get a totally opposing poll result based on the question set — even by surveying the same people. My point here is that polls and surveys can be manipulated. Busy people tend to kick and
scream about surveys and certainly want to get them off their plates as fast as possible. This means that open-ended surveys don’t end up with a lot of useful data. For these reasons, surveys are not the way to go.
As an adjunct for getting at pain issues, always leave the door open for having process practitioners critique or suggest things directly or via

An Implementation Technique for Getting at Pain Issues

I have used two of the 7 M tools (modified somewhat) very successfully to get at both enterprise process pain issues and project pain issues (as a project postmortem). These two techniques have fancy names:

 Infinity brainstorming
 Interrelational digraphs

I don’t use these terms when I conduct these techniques — I just call them “focus groups,” “action groups,” or “postmortem.” Using fancy terms will turn people off. Don’t do it. A focus group is fast (it usually takes
less than two hours) and is totally anonymous (no retribution). This particular technique levels the playing field for quiet, introverted people versus loud, dominant people. That quiet, shy person may be the very
person with a lot to express anonymously. The most successful focus group in my experience was done with about 35 people in a single session of about an hour and a half. At this point, you’re probably thinking it’s impossible to have a successful session with 35 people. Conventional wisdom says the success of any meeting is conversely proportionate to the number of attendees. The higher number of people produces lower success. The lower number of people produces the higher success. This technique is just the opposite. You need at least 12 people to be successful. A small group simply won’t work for this technique.

Here are the supplies needed to conduct these sessions:
 Large Post-it notes — enough for about 20 Post-its minimum per participant.
 Butcher paper or flip-chart paper —

these are taped to three walls of the conference room. Four or five charts are taped to one wall. Five to six charts are taped to the opposite wall. One chart is taped
on a third wall (for infinity brainstorming rules). One chart will be used to capture the major impact analysis after we collect the data from the infinity brainstorming part of the session. The size of the room will affect how many walls are actually used. No matter what, you need two walls for charts.

 Masking tape for the large paper sheets above.
 Fine-point felt pens — enough for participants and facilitator.
You need a large conference room that will hold all the participants and has wall space onto which you can tape large paper charts on three walls. Reserve this room for about two and a half to three hours to allow
time for the facilitator to set up, for the actual session, and for wrapping up. The participants show up about half an hour after the room’s reserved start time. At that point, all supplies should be out and the paper should
be up on the walls. This is what you need to do ahead of time:
 Write down the session rules on a single chart. The rules are:
– One finding per Post-it
– You can write as many Post-its as you want within the allotted
time
– Use only the supplied fine-point felt pen for writing
– No handwriting — print your finding
– No names (i.e., anonymous)
– Don’t get personal — it’s process related
– Be businesslike (not crude) in your remarks
– Make finding clear as to your intent: Can another person understand
your point?
– Be quiet when writing findings
 Take a few minutes to explain what you will be doing to the assembled group. Make sure the group knows about your expectations and desired results. I have even put this in written form and sent it to the group ahead of time to make sure that everyone is onboard with this technique. This sets the foundation. (5 minutes maximum)
• Announce that participants are to write one finding per Post-it note, on as many Post-it notes as they want, within a ten-minute time frame. This is a totally quiet part of the technique. After writing, participants take their individual Post-its and stick them onto one wall's paper charts. Random placement is in order. This part captures all the pain issues as experienced by the participants in a nonretributional way because no names are used. (10 minutes maximum)

• Explain that the findings should be placed into "like" groupings by moving Post-its from one wall into Post-it clusters on another wall. Like things should be grouped together; some adjustments may need to be made later. Also point out that there is a predetermined category called "orphans." (When conducting a project postmortem, I add a "good" category for the things we did right on a project.) Don't try to establish any category names yet. (About 1 minute)

• Have everyone stand up, grab a pile of Post-its from one wall, and place them on the other wall as Post-it clusters. Remind them that once a finding is placed, it can't be removed. Some talk among people can happen at this point. If you do this correctly, the clusters will be limited to about 10–12 groups at a maximum. Have orphaned Post-its placed under "orphans." (About 10–12 minutes)
• Identify a "reader" from the group. This individual will read the Post-its to the entire group and possibly rearrange some Post-its. (About 1–2 minutes)
• Have the reader stand up and read each Post-it finding in each cluster out loud. This accomplishes the following:
– Everyone gets to hear all the findings.
– Everyone gets a chance to persuade the reader to remove a Post-it if it is not in a "like" group.
– Finally, the group establishes a mailbox name for each cluster of Post-its. Keep the name short if possible. (For project postmortems, I found that using the names from one project as predetermined names for subsequent postmortems was helpful for metrics data. However, one group disagreed with this and felt it was stifling to have a set of mostly predetermined names, especially when they disagreed with an earlier group over those names.)
• The reader repeats this for all Post-it clusters until every cluster has a category name. During this time frame, some Post-it notes may be moved from one group to another. Finally, an attempt is made to place any and all orphaned Post-it notes into a named category; those that don't fit stay as orphans. This part takes the findings and attempts to categorize them for the interrelationship digraph part of the technique. (15–20 minutes)
• The moderator takes a large blank matrix and writes all the category names down the left side, then writes the same set across the top. The moderator shades out each cell where a category intersects with itself, leaving a diagonal line of shaded boxes from the top left down to the bottom right. This is the foundation for the interrelationship digraph: we want to end up with some idea of what to work on first, second, third, etc., to get the biggest bang for the buck in process improvement. (About 2 minutes)

• The moderator reads each category name down the left side of the matrix and asks, for each one, "What other categories have a major impact on this category?" The group identifies the impacting categories, and the moderator places an "X" in that category's row under each impacting category's column. This is repeated for each category name down the left side until done. (10 minutes maximum)

• The moderator tallies the number of "X" marks per column and writes the total at the bottom of each column. The columns with the highest totals are the categories with the most impact on everything else, so they are the ones to attack first. (About 2 minutes; a minimal code sketch of this tally follows the list.)

• Thank the group for their time and dismiss them.
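To make the tally mechanics concrete, here is a minimal sketch in Python; the category names and impact votes are hypothetical examples, not data from a real session.

```python
# Minimal sketch of the interrelationship-digraph tally step.
# Category names and impact votes are hypothetical examples.

categories = ["Requirements churn", "Tool problems", "Training", "Schedule pressure"]

# impacts[target] = the categories the group says have a MAJOR impact on target.
# A category never impacts itself (the shaded diagonal on the wall chart).
impacts = {
    "Requirements churn": {"Schedule pressure"},
    "Tool problems": {"Training"},
    "Training": {"Schedule pressure"},
    "Schedule pressure": {"Requirements churn", "Tool problems"},
}

# One "X" per impacting category per row, totaled by column,
# exactly like writing totals at the bottom of each matrix column.
totals = {c: 0 for c in categories}
for target, impacting in impacts.items():
    for c in impacting:
        totals[c] += 1

# Highest totals first: these categories impact the most other ones,
# so working them first promises the biggest bang for the buck.
for c in sorted(categories, key=lambda c: -totals[c]):
    print(f"{c}: {totals[c]}")
```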

Is this a perfect technique? No. Is it fast? Yes. Does it get at process pain issues? You bet. By spending about one and a half to two hours on this, you will extract pain issues from everybody. There is no retribution because there are no names involved. The quiet person can write stuff down anonymously just like the extroverted person can. The inputs come from the very people seeing and suffering from those pain issues.

What I have done after the session is to record all the findings by category into an Excel spreadsheet. This is a great application for counting things and coming up with percentages. The completed spreadsheet gets sent back to all the participants immediately. I caution the group to keep this data under wraps because it is confidential.

The next step is to convene a senior management meeting to go over the findings and categories. The senior staff needs to understand what went on and that this technique gathers data rapidly. As the moderator, take the top three categories in particular and concentrate on those for this senior management group. This is done to:

• Acquaint senior management with pain issues "from the trenches," in written, unsanitized form
• Identify the top three categories that, if worked, should give the biggest bang for the buck in improving or removing pain issues
• Have this top-level management group develop an initial plan to attack the top three categories (or a subset of them)

Finally, I arrange for a feedback meeting with all the participants, so that a member of senior staff:

• Tells participants that management has heard their pain issues
• Informs participants of the plan to attack those pain issues

This feedback meeting can be powerful for all involved. It closes the loop with participants and makes them feel that they have not wasted their time. It puts senior management directly in touch with unsanitized pain issues. They can't say they didn't know about this or that. There's no place to hide. They have to do something about it. It does cause action.

When any improvements are made, keep going back to these pain issues. You don't tell the rank and file that you've now satisfied the first goal of some part of the CMMI! They will not relate to that at all. Tell them that these processes directly address the pain issues that were established. When regular folks see less pain, you will rapidly develop more and more champions for your cause. If upper management sees smoother operations, better quality, shorter time to market, lower costs, better repeatability, etc., all of which contribute to a healthier bottom line, you will get more champions at that level.

You can do this periodically to see how you're doing. You can do it as part of a preappraisal drill for process maturity, or as a preaudit drill. The periodic approach will give you some powerful metrics related to pain issues. There's nothing like solid numbers to show your workforce that you are serious about reducing workforce pain.

Develop a Top-Level Life-Cycle Framework

This may be obvious, but you really need to provide a top-level life-cycle framework into which to fit all the process pieces being developed. Without that top-level picture, there is no cohesive way of creating process elements that "fit" into anything. One vice president I worked for insisted on forming various Process Action Teams (PATs) to get some deployment items done without this in place. I was even ordered to get these groups going despite my strong objections. The result of this VP's order was absolute chaos and a huge waste of time. I sure hope none of you will have to deal with some of the characters I've had to endure in process development and improvement! People like that are out there. Some of them even get promoted!

Hopefully, the top-level life cycle has been developed before insertion takes place. You can do a subset top-level life cycle if your initial deployment efforts only deal with that part of the overall life cycle. For example, if you are attacking proposal-related processes, you can get away with just developing the proposal part of your life cycle. The bottom line is that you absolutely need a framework into which to fit any process elements, so that you develop once and don't need rework.

With that top-level life cycle laid out with PADs per life-cycle phase, you now have the ability to tie your pain issues to activities and to associated procedures. You also have the ability to tie event-driven procedures to any and all life-cycle phases.

Reference: Defining and Deploying Software Processes


Why Worry About Versioning? – A Complete Versioning Guide

versioning

Why Worry About Versioning?

Having a good version scheme for your software is important for several reasons. The following are the top five things a version scheme allows you to do (in no particular order):

• Track your product binaries back to the original source files.
• Re-create a past build by having meaningful labels in your source tree.
• Avoid "DLL hell" — multiple versions of the same file (a library in this case) on a machine.
• Help your setup program handle upgrades and service packs.
• Provide your product support and Q/A teams with an easy way to identify the bits they are working with.

So, how do you keep track of files in a product and link them back to the owner? How can you tell that you are testing or using the latest version of a released file? What about the rest of the reasons in the list? This chapter describes my recommendations for the most effective and easiest way to set up and apply versioning to your software. Many different schemes are available, and you should feel free to create your own versioning method.

Ultimately, like most of the other topics in this book, the responsibility to apply versioning to the files in a build tends to fall into the hands of the build team. That might be because it is usually the build team that has to deal with the headaches that come from having a poor versioning scheme. Therefore, it is in their best interest to publish, promote, and enforce a good versioning scheme. If the build team does not own this, then someone who does not understand the full implications of versioning will make the rules. Needless to say, this would not be desirable for anybody involved with the product.

File Versioning

Every file in a product should have a version number; a four-part number separated by periods (for example, 1.2.3.4) is the established best practice. There are many variations of what each part represents. I will explain what I think is the best way of defining these parts; a minimal code sketch follows the definitions.

Major version— The component owner usually assigns this number. It should be the internal version of the product. It rarely changes during the development cycle of a product release.

Minor version— The component owner usually assigns this number. It is normally used when an incremental release of the product is planned instead of a full feature upgrade. It rarely changes during the development cycle of a product release.

Build number— The build team usually assigns this number based on the build that the file was generated with. It changes with every build of the code.

Revision— The build team usually assigns this number. It can have several meanings: bug number, build number of an older file being replaced, or service pack number. It rarely changes. This number is used mostly when servicing the file for an external release.
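As a minimal illustration of the four parts working together, here is a sketch in Python; the numbers are made up.

```python
# Minimal sketch: a four-part file version (major.minor.build.revision).
# The numbers below are made-up examples.
from typing import NamedTuple

class FileVersion(NamedTuple):
    major: int     # internal product version; owned by the component owner
    minor: int     # incremental release; owned by the component owner
    build: int     # incremented by the build team with every build
    revision: int  # used mostly when servicing an external release

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.build}.{self.revision}"

v = FileVersion(major=1, minor=0, build=142, revision=0)
print(v)  # 1.0.142.0
# Named tuples compare field by field, so newer builds sort higher:
assert FileVersion(1, 0, 143, 0) > v
```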

Build Number

Each build that is released out of the Central Build Lab should have a unique version stamp (also known as the build number). This number should be incremented just before a build is started. Don’t use a date for a build number or mix dates into a version number simply because there are so many date formats out there that it can be difficult to standardize on one. Also, if you have more than one build on a given date, the naming can get tricky. Stick with an n=n+1 build number. For example, if you release build 100 in the morning and rebuild and release your code in the afternoon, the afternoon build should be build 101. If you were to use a date for the build number, say 010105, what would your afternoon build number be? 010105.1? This can get rather confusing.
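A minimal sketch of the n = n + 1 approach: a tiny script that bumps a counter file just before each build starts. The counter file name buildnum.txt is an assumption for illustration.

```python
# Minimal sketch: increment the build number just before a build starts.
# The counter file name "buildnum.txt" is an assumption for illustration.
from pathlib import Path

def next_build_number(counter_file: str = "buildnum.txt") -> int:
    path = Path(counter_file)
    current = int(path.read_text().strip()) if path.exists() else 0
    build = current + 1  # n = n + 1; never a date
    path.write_text(str(build))
    return build

print(f"Starting build {next_build_number()}")
```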

It is also a good idea to touch all the files just prior to releasing the build so that you get a current date/time stamp on the files. By “touching” the files, I simply mean using a tool (touch.exe) to modify the date/time stamp on a file. There are several free tools available for you to download, or you can write one yourself. Touching the files helps in the tracking process of the files and keeps all of the dates and times consistent in a build release. It also eliminates the need to include the current date in a build number. You already have the date by just looking at the file properties.
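In the same spirit, here is a sketch of touching every file in a release tree using Python's standard library instead of a separate touch.exe; the directory path is a made-up example.

```python
# Minimal sketch: give every file in the release tree the same date/time stamp.
# The release directory path is a made-up example.
import os
import time
from pathlib import Path

def touch_tree(root, stamp=None):
    stamp = time.time() if stamp is None else stamp
    for f in Path(root).rglob("*"):
        if f.is_file():
            os.utime(f, (stamp, stamp))  # set access and modified times together

touch_tree("release/build_142")
```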

In addition, try to avoid tools that inject version numbers into binaries (as a post-build step). Although the tools might seem reliable, they introduce an instability factor into your released binaries by hacking hexadecimal code. Most Q/A teams become distressed if this happens to the binaries they are testing—and justifiably so. The build number should be built into the binary or document files.

Source Code Control Trees

All the source code control (SCC) software that I have seen has some kind of labeling function to track either checked-in binaries or source code. (Remember that I am not for checking in binaries, but I mention this for the groups or teams that do.) This type of versioning (more appropriately called labeling) is typically used to track a group of sources that correspond to a product release. Most of the time, labeling of the sources is combined with branching of the source code lines.

Should There Be Other Fields in the File Version Number?

I am of the opinion that no, there shouldn’t be other fields in the file version number. Let’s look at some other fields that might seem like they should be included but really don’t need to be:

  • Virtual Build Lab (VBL) or offsite development group number— You can use this number to track a check-in back to a specific site or lab that the code was worked on. If you have a golden tree or mainline setup, and all your VBLs or offsite trees have to reverse integrate into the golden tree, this extra field would be overkill. That’s because you can trace the owner through the golden tree check-in. Having a field in which you would have to look up the VBL or offsite number would take just as long.

    The reality is that when you check a version number, you won’t care where the file came from. You’ll only care if it is a unique enough number to accurately trace the file to the source that created it. Most likely, you’ll already know the version number you’re looking for and you’ll just need to confirm that you’re using it.

  • Component— If you break your whole project into separate components, should each component have its own identification number included in the version string? No; for reasons similar to those in the previous bullet, you can track this information through the name of the binary or through other properties of the file. It would probably only come into play if you were filing a bug, and you would have other resources available to determine which component the file belonged to.
  • Service Pack Build Number— If you're doing daily builds of a service pack release, you should use the earlier example; increment the build number and keep the revision number at the current in-place file build number. This seems like a good argument for a fifth field, but it isn't if the revision field is used properly.

DLL or Executable Versions for .NET (Assembly Versions)

This section applies to you only if you are programming in .NET.

When Microsoft introduced .NET, one of its goals was to get rid of DLL hell and all the extra setup steps I talk about later in this chapter. In reviewing where .NET is today, it looks like Microsoft has resolved the side-by-side DLL hell problem, but now the problem is "assembly version hell." Without going into too much detail about the .NET infrastructure, let's touch on the difference between file versioning and assembly versioning.

Assembly versions are meant for binding purposes only; they are not meant for keeping track of different daily versions. Use the file version for that instead. It is recommended that you keep the assembly version the same from build to build and change it only after each external release. For more details on how .NET works with these versions, refer to Jeffrey Richter's .NET book. Don't link the two versions, but make sure each assembly has both an assembly version and a file version (visible when you right-click the file and view its properties). You can use the same four-part format described earlier for file versioning for the assembly version.

How Versioning Affects Setup

Have you ever met someone who reformats his machine every six months because “it just crashes less if I do” or it “performs better?” It might sound draconian, but the current state of component versioning and setup makes starting from scratch a likely solution to these performance issues, which are further complicated by spyware.

Most of the problems occur when various pieces of software end up installing components (DLLs and COM components) that are not quite compatible with each other or with the full set of installed products. Just one incorrect or incorrectly installed DLL can make a program flaky or prevent it from starting up. In fact, DLL and component installation is so important that it is a major part of the Windows logo requirement.

If you are involved in your product’s setup, or if you are involved in making decisions about how to update your components (produce new versions), you can do some specific things to minimize DLL hell and get the correct version of your file on the machine.

Installing components correctly is a little tricky, but with these tips, you can install your components in a way that minimizes the chance of breaking other products or your own.

Install the Correct Version of a Component for the Operating System and Locale

If you have operating system (OS)-specific components, make sure your setup program(s) check which OS you are using and install only the correct components. Also, you cannot give two components the same name and install them in the same directory. If you do, you overwrite the component on the second install on a dual-boot system. Note that the logo requirements recommend that you avoid installing different OS files if possible. Related to this problem is the problem caused when you install the wrong component or typelib for the locale in use, such as installing a U.S. English component on a German machine. This causes messages, labels, menus, and automation method names to be displayed in the wrong language.
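As a toy illustration of the idea (not from the chapter), here is a sketch that picks an OS- and locale-specific component name at install time; all file names and the layout are invented for the example.

```python
# Toy sketch: choose OS- and locale-specific components at install time.
# All file names and the layout are invented for this example.
import locale
import platform

def pick_component() -> str:
    os_name = platform.system()  # e.g. "Windows", "Linux", "Darwin"
    lang, _ = locale.getdefaultlocale() or ("en_US", None)
    lang = (lang or "en_US").split("_")[0]
    # Never give two different OS builds the same name in the same directory,
    # or the second install on a dual-boot system overwrites the first.
    return f"mycomponent_{os_name.lower()}_{lang}.dll"

print(pick_component())  # e.g. mycomponent_windows_en.dll
```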

Write Components to the Right Places

Avoid copying components to a system directory. An exception to this is if you are updating a system component. For that, you must use an update program provided by the group within Microsoft that maintains the component.

In general, you should copy components to the same directory that you copy the EXE. If you share components between applications, establish a shared components directory. However, it is not recommended that you share components between applications. The risks outweigh the benefits of reduced disk space consumption.

Do Not Install Older Components Over Newer Ones

Sometimes, the setup writer might not properly check the version of an installed component when deciding whether to overwrite the component or skip it. The result can be that an older version of the component is written over a newer version. Your product runs fine, but anything that depends on new features of the newer component fails. Furthermore, your product gets a reputation for breaking other products. We address the issue of whether it makes sense to overwrite components at all. But if you do overwrite them, you don’t want to overwrite a newer version.
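Here is a minimal sketch of the version check a setup writer should make before overwriting. It compares the four-part version strings described earlier; how you obtain the installed file's version string (for example, from a Windows version resource) is glossed over.

```python
# Minimal sketch: never copy an older component over a newer one.
# Reading the installed file's actual version resource is platform-specific
# and glossed over here; we compare four-part version strings directly.
def parse_version(s: str):
    major, minor, build, revision = (int(p) for p in s.split("."))
    return (major, minor, build, revision)

def should_overwrite(installed: str, incoming: str) -> bool:
    return parse_version(incoming) > parse_version(installed)

assert should_overwrite("1.0.142.0", "1.0.143.0")      # newer: overwrite
assert not should_overwrite("2.1.500.3", "2.1.499.0")  # older: skip it
```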

“Copy on Reboot” If Component Is in Use

Another common mistake is to avoid dealing with the fact that you cannot overwrite a component that is in use. Instead, you have to set up the file to copy on reboot. Note that if one component is in use, you probably should set up all the components to copy on reboot. If you don’t, and if the user doesn’t reboot promptly, your new components could be mixed with the old ones.

Register Components Correctly; Take Security into Account

Sometimes setups don't register COM components correctly, including the proxy and stub. Note that Windows CE requires that you register DLLs as well. Note, too, that when installing DCOM components, you must be vigilant about permissions and security.

Copy Any Component That You Overwrite

It is smart to make a copy of any component that you overwrite before you overwrite it. You won’t want to put it back when you uninstall unless you’re sure that no product installed after yours will need the newer component—a difficult prediction! But by storing the component in a safe place, you make it possible for the user to fix his system if it turns out that the component you installed breaks it. You can let users know about this in the troubleshooting section of your documentation, the README file, or on your Web site. Doing this might not save a call to support, but it does at least make the problem solvable. If the component is not in use, you can move it rather than copying it. Moving is a much faster operation.

Redistribute a Self-Extracting EXE Rather Than Raw Components

If your component is redistributed by others (for instance, your component is distributed with several different products, especially third-party products), it is wise to provide a self-extracting EXE that sets up your component correctly. Make this EXE the only way that you distribute your component. (Such an EXE is also an ideal distribution package for the Web.) If you just distribute raw components, you have to rely on those who redistribute your components to get the setup just right. As we have seen, this is pretty easy to mess up.

Your EXE should support command-line switches for running silently (without a UI) and for forcing overwrites, even of newer components, so that product support can step users through overwriting if a problem arises.

If you need to update core components that are provided by other groups, use only the EXE that is provided by that group.

Test Setup on Real-World Systems

If you're not careful, you can end up doing all your setup testing on systems that already happen to have the right components installed and the right registry entries made. Be sure to test on raw systems, on all operating systems, and with popular configurations and third-party software already installed. Also, test other products to make sure they still work after you install your components.

Even Installing Correctly Does Not Always Work

Even if you follow all the preceding steps and install everything 100 percent correctly, you can still have problems caused by updating components. Why? Well, even though the new component is installed correctly, its behavior might be different enough from the old component that it breaks existing programs. Here is an example. The specification for a function says that a particular parameter must not be NULL, but the old version of the component ran fine if you passed NULL. If you enforce the spec in a new version (you might need to do this to make the component more robust), any code that passes NULL fails. Because not all programmers read the API documentation each time they write a call, this is a likely scenario.
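The NULL-parameter scenario is easy to picture in code. A hypothetical sketch: the old component silently tolerated a missing argument, the new one enforces the documented contract, and a caller that never read the spec breaks.

```python
# Hypothetical sketch of the NULL-parameter break described above.

def format_name_v1(name):   # old component: the spec says name must not be
    if name is None:        # NULL, but the implementation quietly tolerates it
        name = ""
    return name.strip().title()

def format_name_v2(name):   # new component: enforces the spec to be more robust
    if name is None:
        raise ValueError("name must not be None")
    return name.strip().title()

print(format_name_v1(None))      # worked for years: returns ""
try:
    format_name_v2(None)         # the same careless caller now fails
except ValueError as e:
    print(f"v2 breaks the old caller: {e}")
```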

It is also possible for a new version to introduce a bug that you simply didn’t catch in regression testing. It is even possible for clients to break as a result of purely internal improvements if they were relying on the old behavior. Typically, we assume that nothing will break when we update a component. In fact, according to Craig Wittenberg, one of the developers of COM who now works in the ComApps group at Microsoft, if you don’t have a plan for versioning your components in future releases, it is a major red flag for any component development project. In other words, before you ship Version 1, you need to have a plan for how you will update Version 1.1, Version 2, and beyond—besides how to bug-fix your updates.

Reference: The Build Master: Microsoft's Software Configuration Management Best Practices

Build Management Recommendation – Build Management Guidance

build-management-recommendation

Recommendation from Microsoft’s Software Configuration Management Best Practices

Build Management Recommendation

Recommendation for Defining a Build

  • Define terms in your development process, and keep a glossary of them on an internal build Web page. If you like, standardize on the definitions in this chapter.
  • Clean build your complete product at least once per week, or every day if possible.
  • Use incremental builds on a daily basis if clean builds are not possible or practical.
  • Start charting the quality of your product, and post it where everyone involved in the project can see it.
  • Release LKG (or IDW) builds weekly; then switch to daily releases toward the end of the shipping cycle.
  • Follow the Software Development Flow diagram.

Recommendation for Source Tree Configuration for Multiple Sites and Parallel (Multi-Version) Development Work

  • Create the mainline (public) and virtual build labs (private) codelines.
  • Make sure the mainline is pristine and always buildable and consumable. Create shippable bits on a daily basis. Use consistent, reliable builds.
  • Build private branches in parallel with the main build at a frequency set by the CBT.
  • Use consistent reverse and forward integration criteria across teams.
  • Be aware that dev check-ins are normally made only into a private branch or tree, not the mainline.
  • Know that check-ins into a private branch are only reverse integrated (RI'd) into main when stringent, division-wide criteria are met.
  • Use atomic check-ins (RI) from private into main. Atomic means all or nothing. You can back out changes if needed.
  • Make project teams accountable for their check-ins, and empower them to control their build process with help from the CBT.
  • Configure the public/private source so that multisite or parallel development works.
  • Optimize the source tree or branch structure so that you have only one branch per component of your product.

Recommendation for Daily, Not Nightly, Builds

  • Hire a consulting firm to come in and review your processes and tools.
  • Start your build during the day, not during the evening.
  • Publish the build schedule on an internal Web site.
  • Release daily builds as sure as the sun comes up, but make sure they are quality, usable builds. Don’t just go through the motions.
  • Discourage build breaks by creating and enforcing consequences.

Recommendation for The Build Lab and Personnel

  • Set up a build lab if you do not already have one.
  • Purchase all necessary hardware, and do not skimp on quality or number of machines.
  • Keep the lab secure.
  • Show a lot of appreciation for your build team. They have a difficult job that requires a special skill set and personality to be successful.

Recommendation for Build Tools and Technologies

  • Use command-line builds for the central build process.
  • Use Make or ANT for non-Microsoft platforms or tools.
  • Use MSBuild for .NET builds.
  • Use VCBuild for non-.NET builds.
  • Write your scripts in an easy language such as Perl or Batch files.
  • Learn XML because it is ubiquitous.

Recommendation for SNAP Builds—aka Integration Builds

  • Develop your own integration build tool to be used as a precheck-in build test to the golden source trees, or wait for Microsoft to release a tool similar to this one. Keep the points in this chapter in mind when you’re building this “silver bullet.”
  • If you deploy a Virtual Build Lab process, require that the VBLs use a SNAP system to stay in sync with the Central Build Lab. This is the whole concept of an integration build system such as this.
  • Do not rely on the SNAP system as your mainline build tool. Although I mention that some groups at Microsoft use this tool as their Central Build Team build tool, this can be a little problematic because the check-in queues can get really backed up.
  • Understand that no magic tool is out there to do your work for you when it comes to builds and merging code. But SNAP is a good tool, and some groups at Microsoft cannot live without it.
  • Make sure you have all the other processes down in this book before trying to roll out a SNAP system. These processes include source tree configuration, build schedules, versioning, build lab, and so on.

Recommendation for The Build Environment

Here is a short list of what the build environment example in this chapter is about. If everyone is using the same batch files to launch a build environment, re-creating a build will be less painful. (A minimal sketch of such a launcher follows the list.)

  • Keep the build environment consistent, and control it through the Central Build Team’s intranet Web page.
  • Use batch file commands similar to the ones used in the examples in this chapter.
  • Enforce through checks and balances in the batch files that everyone is using the published project build environment.
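A minimal sketch of the idea, written in Python purely for illustration (the chapter's examples use batch files): one shared launcher that pins the environment variables every build uses, with a simple check that people are actually using it. All variable names and values are invented.

```python
# Minimal sketch: a shared build-environment launcher.
# Python is used purely for illustration; the chapter's examples use batch files.
# All variable names and values below are invented.
import os
import subprocess
import sys

BUILD_ENV = {
    "PROJECT_ROOT": os.getcwd(),      # a real team pins a fixed, published path
    "BUILD_TOOLS": "buildtools-v42",  # invented tool-set identifier
    "BUILD_CONFIG": "release",
}

def launch_build(cmd):
    env = os.environ.copy()
    env.update(BUILD_ENV)  # everyone builds with the same published settings
    # A simple check and balance: refuse to run outside the published root.
    if not os.path.isdir(env["PROJECT_ROOT"]):
        sys.exit("PROJECT_ROOT missing; launch via the published environment")
    return subprocess.call(cmd, env=env)

launch_build([sys.executable, "-c",
              "import os; print('building', os.environ['BUILD_CONFIG'])"])
```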

Recommendation for Versioning

What I recommend is to read and re-read this chapter to make sure you fully understand everything in it and to grasp the importance of reliable versioning. Here is a short list of what you need to do:

  • Use a four-part number separated by periods for your file version string.
  • Increment your build number before starting each build.
  • Avoid the use of dates in the build number.
  • Use your build number as the label on the sources you just built.
  • Don’t try to include product marketing versions in your file version string.
  • For .NET programmers, don’t link assembly versioning to file versioning.
  • During your setup program, do the following:
    • Check for the OS that you are installing on.
    • Avoid copying anything to the system directory.
    • Copy all components to the same directory as the executable.
    • Avoid installing older components over newer ones.
    • Make a copy of any component you overwrite.
    • Use a self-extracting executable to update.

Recommendation for Build Security

With all the talk about security on the Internet and in the applications that are out there, we must not forget about keeping the company "jewels" safe. Here are some recommendations that were covered in this chapter:

  • At a minimum, use the four-layer approach talked about in detail in this chapter:
    • Physical security— Doors, locks, cameras, and so on.
    • Tracking source changes— The build process.
    • Binary/release bits assurance— The tools process.
    • IT infrastructure— Company-wide policy and security.
  • Consider the .NET platform as a means of security.
  • Look into software restriction policies that are in Microsoft Windows XP and Windows Server 2003.
  • Start worrying about security before a breach occurs.

Recommendation for Building Managed Code

You will find that building projects for the .NET Framework is a bit different from the classic "unmanaged code" builds that were around before Web services were ever dreamed up. This chapter went over the parts of building .NET code that tend to trip people up; the following is a quick list of recommendations:

  • If you build managed code, learn the basic terms of the .NET Framework and the compilation process explained in this chapter.
  • Use delayed signing when developing your project to avoid having to sign the assemblies in conjunction with your daily build.
  • Understand the risk of exposing your developers' machines to external attacks because of the skip-verification list that is created when you delay signing.
  • Decide what is the most practical way of setting up your solution files for your .NET projects. Then enforce whatever policy you come up with through your CBT.

Recommendation for International Builds

Here is what is done in most of the bigger groups at Microsoft, and it is thus our preferred way of internationalizing products:

  • Write single binary code as the first step toward truly world-ready products.
  • Implement a multilingual user interface that allows users to switch among all supported languages as the next logical step.
  • Write Unicode-aware code and create satellite DLLs for your language resources to make these goals much easier to achieve.

Recommendation for Build Verification Tests and Smoke Tests

Some of the general points of the chapter are:

  • Run Visual File Information and VerCheck on each build.
  • Establish a quality bar that the build will be measured against.
  • Let the test team own the BVTs, but let the build team run them.
  • Automate BVTs and smoke tests.
  • Track BVT results on a public intranet page.
  • Know the difference between BVTs and smoke tests.
  • Always have the testers or developers run the smoke tests, not the build team.

Recommendation for Building Setup

To improve your setup reliability, do the following:

  • Decide which setup tool you will use: Wise, InstallShield, WiX, or another brand.
  • Track all files in your product in a spreadsheet or, better yet, a database. If you are using WiX, the files are listed in the .wxs file.
  • Build setup every day, and practice deploying your product to test machines every day. Do not release the build until setup has been created successfully.
  • Start pushing the setup responsibility back to the developers who own the modules.

Recommendation for Ship It!

Do the following as the final steps to shipping a great product:

  • Define what and when shipping is for your group. Follow Jim McCarthy’s Rule 21.
  • Start restructuring source trees before you ship, not after.
  • Establish a standard release process for your group.
  • Release a new build process after you release.
  • Add the shipping process time to the product schedule.

Recommendation for Customer Service and Support

From reading this chapter you should understand why you need to:

  • Create and invest in a strong support organization.
  • Make sure your support group has good paths back to the developers but is not intrusive to the developers' work writing code.
  • Outline specific escalation paths in support that give useful feedback to the product group.
  • Involve your support group as early as possible in the WAR meetings to discuss impacts.

Recommendation for Managing Hotfixes and Service Packs

A lot in this chapter was specific to VSS, but when you read it you can draw parallels to the source code control tool you use and copy the basic architecture. These recommendations can be followed with any source code control tool, though they are most specific to VSS.

  • Use sharing to reuse common code modules.
  • Use labeling to isolate and mark different builds.
  • Use share and pin with branching to create new minor releases.
  • Use cloning to create new major releases.
  • Remember: Sharing is the first step of branching.

Recommendation for Suggestions to Change Your Corporate or Group Culture

  • Follow the seven suggestions to change or create a successful culture:
    • Involve as high a level of management as you can when rolling out new processes or tools. In fact, have the executive send the e-mail on the new announcement.
    • Hire a consulting firm to come in and perform an analysis.
    • Match company values to what you are trying to accomplish.
    • Do the math.
    • Not worrying about getting fired gives you incredible power.
    • Never take no from someone who does not have the power to say yes.
    • Publish policies, processes, and tools on the intranet!
  • If you can’t get 100 percent behind what you are doing, you should find something else to do or somewhere else to do it.

Recommendation for Future Build Tools from Microsoft

Looking into a crystal ball that predicts the future, you should:

  • Adopt MSBuild as soon as possible.
  • Start researching VSTS and see if there are tools you can implement in your current build process and future processes.