Sonar Vs Squale

Based on feedback from Fabrice from Squale, please find below the differences between Sonar and Squale.

In a nutshell, we could say that Sonar is good at gathering code metrics and displaying them in various visualisations, mainly targeting technical people, while Squale is good at aggregating those metrics into high-level factors that address top-level managers.

In fact, Squale and Sonar share several similarities. Both:
– analyse code in different languages
– get metrics from code and store them in a database
– display them so that it is possible to drill down into the code and analyse where complexity and risks are

In terms of differences:
– Sonar relies on Maven, while Squale does not have this requirement
– Sonar does not offer advanced quality models to aggregate raw quality data into high-level factors
– Squale does not display code in its web interface (we think this is not a major feature, as you can only read code but not modify it: the most important thing is to have the feedback in the IDE)
– it is a bit more complex to extend Squale, while Sonar has a good extension mechanism


Introduction of MSBuild

Hi Guys,

Can anyone please help me with an introduction to MSBuild?

Regards,

Rambabu.M

  • Rajesh Kumar


    MSBuild Overview

    The Microsoft Build Engine (MSBuild) is the new build platform for Microsoft and Visual Studio. MSBuild is completely transparent with regards to how it processes and builds software, enabling developers to orchestrate and build products in build lab environments where Visual Studio is not installed. This topic provides brief overviews of:

    • The basic elements of an MSBuild project file.
    • How MSBuild is used to build projects.
    • The advanced features of MSBuild.
    • How Visual Studio uses MSBuild to build projects.

    Project File


    MSBuild introduces a new XML-based project file format that is simple to understand, easy to extend, and fully supported by Microsoft. The MSBuild project file format enables developers to fully describe what items need to be built as well as how they need to be built with different platforms and configurations. In addition, the project file format enables developers to author re-usable build rules that can be factored into separate files so that builds can be performed consistently across different projects within their product. The following sections describe some of the basic elements of the MSBuild project file format.

    Items

    Items represent inputs into the build system and are grouped into item collections based on their user-defined collection names. These item collections can be used as parameters for tasks, which use the individual items contained in the collection to perform the steps of the build process.

    Items are declared in the project file by creating an element with the name of the item collection as a child of an ItemGroup element. For example, the following code creates an item collection named Compile, which includes two files.

    <ItemGroup>
        <Compile Include = "file1.cs"/>
        <Compile Include = "file2.cs"/>
    </ItemGroup>
    
    

    You reference item collections throughout the project file with the syntax @(ItemCollectionName). For example, you reference the item collection in the example above with @(Compile).

    Items can be declared using wildcards and may contain additional metadata for more advanced build scenarios. For more information on items, see MSBuild Items.
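
    For example, here is a sketch combining both features (the paths and the Group metadata name below are illustrative, not part of the original example):

    <ItemGroup>
        <!-- Wildcard include: every .cs file under src, minus designer files -->
        <Compile Include="src\**\*.cs" Exclude="src\**\*.Designer.cs">
            <!-- Custom metadata attached to each item, readable as %(Compile.Group) -->
            <Group>Core</Group>
        </Compile>
    </ItemGroup>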

    Properties

    Properties represent key/value pairs that can be used to configure builds. Items and properties differ in the following ways:

    • Items are stored in collections, while properties contain a single scalar value.
    • Items cannot be removed from item collections, while properties can have their values changed after they are defined.
    • Items can contain metadata and can use the %(ItemMetadata) notation, while properties cannot.

    Properties are declared by creating an element with the name of the property as a child of a PropertyGroup element. For example, the following code creates a property named BuildDir with a value of Build.

    <PropertyGroup>
        <BuildDir>Build</BuildDir>
    </PropertyGroup>
    
    

    You reference properties throughout the project file with the syntax $(PropertyName). For example, you reference the property in the example above with $(BuildDir). For more information on properties, see MSBuild Properties.

    Tasks

    Tasks are reusable units of executable code used by MSBuild projects to perform build operations. For example, a task might compile input files or run an external tool. Once created, tasks can be shared and reused by different developers in different projects.

    The execution logic of a task is written in managed code and mapped to MSBuild with the UsingTask element. You can write your own task by authoring a managed type that implements the ITask interface. For more information on writing tasks, see How to: Write a Task.

    MSBuild ships with many common tasks such as Copy, which copies files, MakeDir, which creates directories, and Csc, which compiles Visual C# source code files. For a complete list of available tasks and usage information, see MSBuild Task Reference.

    You execute a task in an MSBuild project file by creating an element with the name of the task as a child of a Target element. Tasks usually accept parameters, which are passed as attributes of the element. MSBuild item collections and properties can be used as parameters. For example, the following code calls the MakeDir task and passes it the value of the BuildDir property declared in the previous example.

    <Target Name="MakeBuildDirectory">
        <MakeDir
            Directories="$(BuildDir)" />
    </Target>
    
    

    For more information on tasks, see MSBuild Tasks.

    Targets

    Targets group tasks together in a particular order and expose sections of the project file as entry points into the build process. Targets are often grouped into logical sections to allow for expansion and increase readability. Breaking the build steps into many targets allows you to call one piece of the build process from other targets without having to copy that section of code into each target. For example, if several entry points into the build process require references to be built, you can create a target that builds references and run that target from every necessary entry point.

    Targets are declared in the project file with the Target element. For example, the following code creates a target named Compile, which then calls the Csc task with the item collection declared in the previous example.

    <Target Name="Compile">
        <Csc Sources="@(Compile)" />
    </Target>
    
    

    In more advanced scenarios targets can describe relationships between each other and perform dependency analysis, which allows whole sections of the build process to be skipped if that target is up-to-date. For more information on targets, see MSBuild Targets.
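
    As a rough sketch building on the earlier examples (the output assembly name is illustrative), a target can declare a dependency together with inputs and outputs, so that MSBuild can skip the work when the outputs are already up-to-date:

    <Target Name="Compile"
            DependsOnTargets="MakeBuildDirectory"
            Inputs="@(Compile)"
            Outputs="$(BuildDir)\MyApp.exe">
        <!-- Runs only when a source file is newer than the output assembly -->
        <Csc Sources="@(Compile)" OutputAssembly="$(BuildDir)\MyApp.exe" />
    </Target>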

    Building with MSBuild


    You run MSBuild from the command line by passing a project file to MSBuild.exe with the appropriate command line options. Command line options allow you to set properties, execute specific targets, and specify loggers. For example, you would use the following command line syntax to build the file MyProj.proj with the Configuration property set to Debug.

    MSBuild.exe MyProj.proj /property:Configuration=Debug
    
    

    For more information on MSBuild command line options, see MSBuild Command Line Reference.
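
    For instance, a specific target can be invoked together with a property (the target and property values here are only illustrative):

    MSBuild.exe MyProj.proj /target:Compile /property:Configuration=Release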

    Security Note:
    Before you build a downloaded project, determine the trustworthiness of the code. MSBuild project files have the ability to execute tasks that can damage your system.

    Advanced concepts


    MSBuild can be used for more advanced operations during builds, such as logging errors, warnings, and messages to the console or other loggers, performing dependency analysis on targets, and batching tasks and targets on item metadata. For more information on these advanced concepts, see MSBuild Advanced Concepts.

    Visual Studio Integration


    Visual Studio uses the MSBuild project file format to store build information about managed projects. Project settings added and changed through Visual Studio are reflected in the .*proj file that is generated for each project. Visual Studio uses a hosted instance of MSBuild to build managed projects, meaning that a managed project can be built in Visual Studio and from the command line (even without Visual Studio installed), with identical results. For more information on how Visual Studio uses MSBuild, see MSBuild Advanced Concepts.

    Source:

    http://msdn.microsoft.com/en-us/library/ms171452%28VS.90%29.aspx

     

  • Rambabu Muppuri


    Hi Raj, thanks for your comments. Could you please let me know how we can increment version numbers through the MSBuild tool? I have written the XML below, but it is not working. Please help me. If possible, send an example that increments version numbers.

    <Target Name="Version">
        <Message Text="Version: $(Major).$(Minor).$(Build).$(Revision)" />
        <AssemblyInfo CodeLanguage="VB"
            OutputFile="My Project\AssemblyInfo.vb"
            AssemblyTitle=""
            AssemblyDescription=""
            AssemblyCompany=""
            AssemblyProduct=""
            AssemblyCopyright=""
            ComVisible="false"
            CLSCompliant="true"
            Guid="d038566a-1937-478a-b5c5-b79c4afb253d"
            AssemblyVersion="$(Major).$(Minor).$(Build).$(Revision)"
            AssemblyFileVersion="$(Major).$(Minor).$(Build).$(Revision)"
            Condition="$(Revision) != '0'" />
    </Target>

  • Rajesh Kumar


    A long time back I did the same thing using Apache Ant; you can find the logic here: http://www.scmgalaxy.com/component/content/article/62-apache-ant/129-ant-script-to-reset-buildnumber.html

     

    Meanwhile, I will try to create one using MSBuild in my free time.
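
    In the meantime, here is a minimal sketch of one common approach, assuming the third-party MSBuild Community Tasks library (which provides the Version and AssemblyInfo tasks used above). The import path, task parameters and file name below are illustrative and should be checked against that library's documentation:

    <!-- Sketch only: requires MSBuild Community Tasks to be installed -->
    <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    <Target Name="Version">
        <!-- Reads version.txt, increments the build number, writes the file back,
             and exposes the four parts as MSBuild properties -->
        <Version VersionFile="version.txt" BuildType="Increment" RevisionType="None">
            <Output TaskParameter="Major" PropertyName="Major" />
            <Output TaskParameter="Minor" PropertyName="Minor" />
            <Output TaskParameter="Build" PropertyName="Build" />
            <Output TaskParameter="Revision" PropertyName="Revision" />
        </Version>
        <Message Text="Version: $(Major).$(Minor).$(Build).$(Revision)" />
    </Target>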


ANTHILLPRO COMPARISON WITH ATLASSIAN BAMBOO


Last month I was discussing with Eric Minick from AnthillPro why a build engineer should go for AnthillPro instead of Bamboo, and I found some interesting inputs, which I am sharing below.

Introduction

Bamboo is a respectable team-level continuous integration server. Continuous integration servers are focused on providing feedback to developers about the quality of their recent builds, and how that compares to previous builds. While AnthillPro also provides continuous integration features, it pays special attention to what happens after build time. Where is the build deployed? How does it get tested in the hours, days and weeks after the build occurs? Who releases the software and how?

The distinction in focus between the two solutions shows up in their features. Both AnthillPro and Bamboo provide continuous integration support and integrations with numerous tools. Only AnthillPro provides the features required to take a build through the release pipeline into production – rich security, build lifecycle management, etc.

Lifecycle Management

There is a lot more to implementing true lifecycle management than simply using the term in marketing and sales materials. The lifecycle extends across multiple processes in addition to the build process. Most tools have had a very narrow view of this space and have focused their energies purely on the build process. The end result is that true lifecycle management is an afterthought, and it shows in the features (or lack thereof) in their products. A continuous integration server alone does not manage the rest of the lifecycle.

Pipeline Management

As the lifecycle is made up of multiple processes (such as the build, deployments, tests, release, and potentially others), a lifecycle management tool must provide some means of
tracking and managing the movement of a build through the lifecycle stages. Without this feature, there is nothing to connect a build process execution to a deployment process
execution to a test process execution; thus the end user has no way of knowing what build  actually got tested. Without this pipeline management feature (which we call the Build
Life), traceability between processes is completely absent from the tool.

Atlassian Bamboo: No pipeline management out of the box.
Anthillpro: Provides pipeline management out-of-the-box. AnthillPro has a first-class concept called the Build Life. The Build Life represents the pipeline and connects the build process to later processes like deployments into QA, approvals by managers, functional testing, and release to production. The pipeline (Build Life) provides guaranteed traceability throughout all processes in the lifecycle, and provides a context for collecting logs, history, and other data gathered throughout the lifecycle.

Artifact Management

Key to lifecycle management is the ability to connect the outputs of a prior process (such as the build) to the inputs of a subsequent process (such as a deployment). After all, the deployment process needs to have something to deploy. Ideally, the deployment process would deploy the artifacts produced by the build process. And the test process would run tests on those same artifacts. The ability to capture and manage the artifacts created by a build and other processes is central to this effort. Ideally, the artifacts would be managed by an artifact repository (a Definitive Software Library (DSL) under ITIL). Further, as hundreds or thousands of builds happen, support for discarding old builds needs to intelligently remove builds that are no longer interesting. AnthillPro bundles a binary artifact repository called CodeStation.

Atlassian Bamboo: Bamboo does capture built artifacts but does not have a robust artifact management system. It does not maintain artifact checksums for validation. Old builds may be archived after a certain number of weeks, but there is no designation for builds that have been to (or are potentially going to) production that would use a different retention policy. Artifacts are available for user download, but are not accessible for reuse by other plans or deployments.

Anthillpro: Built-in artifact management system (DSL) called CodeStation. The capture, fingerprinting and management of artifacts is essential to the solution. This allows AnthillPro to guarantee traceability of artifacts from the build, through deployment, through testing, and into release (in other words, AnthillPro guarantees that what is released into production is what was tested and built). A maximum number of builds, or a maximum age, to keep can be set per project and per status. This means that builds that were released can be kept longer than a simple continuous integration build.

Security

Especially as servers address functionality beyond the build – deployments or tests to various environments – controlling who can do what within the system can be a key element of securing the system and providing clear separation of duties. Once something has been done, it can be equally important to find out who ran which processes.

Authentication and Authorization

Atlassian Bamboo: Basic role-based security. Users may be assigned roles and permissions at the project level. Integration with LDAP complements internally managed security.

Anthillpro: AnthillPro provides a rich role-based security system, allowing fine-grained control over who can see which project, run which workflows and interact with which environments. The authentication system supports internally managed users, single sign-on systems, LDAP, Kerberos (Active Directory), and JAAS modules.

Secure Value Masking

Many “secrets” are used when building and deploying. Passwords to source control, servers, and utilities are often needed to execute build, deploy, test processes.

Atlassian Bamboo: No facility for securely storing application passwords or obfuscating them from the logs. Bamboo does manage to write libraries for some integrations that avoid passing the password on a line the logs can see. It has no facility that we can see for flagging a command-line parameter as secure and filtering that value from the log.

Anthillpro: Sensitive values like application passwords are automatically filtered out of logs, hidden in the user interface, entered through password fields, and stored in the database encrypted with a triple-DES one-time key.

Process Automation & The Grid

Grouping Agents
In a distributed environment, managing your build and deployment grid needs to be easy.

Atlassian Bamboo: Agents are added into a fairly uniform pool. Agents can declare the broad capabilities they provide, and jobs can define what capabilities they need, to perform matchmaking.

Anthillpro: AnthillPro provides the concept of an environment. Environments are groups of servers. A build farm for a class of projects could be one environment, while the QA environment for another project would be another environment. This allows roaming – deploying to everything – to span just the machines in an environment. Jobs can be assigned to a single machine, or roam, or select machines based on criteria like processor type, operating system, or customized machine capabilities.

Complex Process Automation

Atlassian Bamboo: Bamboo runs full plans on a single agent. While different agents can be running various builds in parallel, any given plan is executed on just a single agent.

Anthillpro: AnthillPro provides a rich workflow engine, which allows jobs to be run in sequence, parallel, and combinations thereof. Jobs can also be iterated so that they run multiple times with slight variations in their behavior on each execution. This allows parallelization that takes advantage of numerous agents. This facility also makes sophisticated deployments possible.

Cross Site Support


Atlassian Bamboo: 
Bamboo provides no special support for agents (slaves) that exist outside the local network.

Anthillpro: AnthillPro is architected with support for a cross-site, even international, grid. Agent relays and location-specific artifact caches assist in easing the configuration and performance challenges inherent in deployments involving multiple sites.

Dependency Management

Component-based development and reuse are concepts that get a lot of lip service but few if any features from most vendors. Only AnthillPro provides features to enable component-based development and software reuse. A flexible dependency management system is part of the built-in feature set of AnthillPro. The dependency management system is integrated with the bundled artifact repository and with the build scheduler so that builds can be pushed up the dependency graph and pulled down the dependency graph as configured. Integration with Maven dependency management provides an integrated system.

Atlassian Bamboo: Provides some basic support for build scheduling based on dependencies. A build of one project can kick off a build of its dependents, and some blocking strategies can prevent wild numbers of extra builds being generated. Bamboo does not provide any tie-in between dependency triggering and build artifacts – sharing artifacts between projects is left to the team to figure out with an external tool such as Apache Maven.

Anthillpro: Support for dependency relationships between projects out-of-the-box. AnthillPro provides a rich set of features for relating projects together. Large projects often have tens or hundreds of dependencies on sub-projects, common libraries and third-party libraries. At build time the dependency system can calculate which projects need to be rebuilt based on changes coming in from source control. At build time, artifacts from dependency projects are provided to the dependants with version traceability and tracking.

AnthillPro provides highly customizable build scheduling and artifact sharing to these projects. In a “pull” model, anytime a top-level project is built, its dependencies are inspected to see if they are up-to-date. If not, they are first built, then the top-level project is built. In a “push” model, builds of dependencies will trigger builds of their dependents. AnthillPro interprets the dependency graph to avoid extra builds or premature builds. In the case of Maven projects, AnthillPro can simply provide the scheduling, or cooperate with Maven to provide traceable artifact reuse.

Summary

While both tools have a lot of similarities, AnthillPro’s Lifecycle Management, Dependency Management, and full-featured security capabilities set it apart. Only AnthillPro supports complete end-to-end traceability across all the phases of Build, Deploy, Test, and Release. While Bamboo is likely an effective team-level continuous integration server, AnthillPro is a proven solution for enterprises looking to automate the full lifecycle of a build. For build and release automation, the technology leader since 2001 is AnthillPro. We were the first to release a Build Management Server. We were the first to recognize the need for comprehensive lifecycle management (beyond just build management), and we were the first to release the features required to deliver on the vision. We have been very successful at enterprise-level RFPs and have added hundreds of customers, including some of the leading banks, insurance companies, and high-technology companies in the world. Our dedication to solving the problems faced by our customers means that we are very responsive to feature and enhancement requests, with turnaround times measured in days or weeks instead of months and quarters. Urbancode delivers the leading product in its space, the expertise to roll it out, and caring support for our customers to ensure their continued success.


Introduction of Perl

What is Perl

  1. Perl is a programming language. It’s object oriented, simple to learn and very powerful. Perl stands for “Practical Extraction and Reporting Language”.
  2. Perl is an interpreted language, so you don’t have to compile it like you do with Java, C, C++ etc. For fast development work, that’s a godsend.
  3. Perl is a versatile, powerful programming language used in a variety of disciplines, ranging from system administration to web programming to database manipulation.
  4. Perl is a different language to different people. It is a quick scripting tool for some, and a fully-featured object-oriented language for others.
  5. Perl is used in so many places because Perl is a glue language. A glue language is used to bind things together.
  6. What Perl is good at is tying these elements together. Perl can take your database, convert it into a spreadsheet-ready file, and, during the processing, fix the data if you want.
  7. Perl can also take your word processing documents and convert them to HTML for display on the Web.

WHY PERL.

There are many reasons why Perl is a great language for use in development and general processing.
Following are some of them…

  • Learning: Perl has all the same abilities, data constructs and methods of other languages, and it’s easier to learn than most. If you understand Perl, you will have far less trouble learning other languages like C, C++, Java, PHP etc. than if you were starting from scratch.
  • Interpreted language means less time spent debugging.
  • mod_perl for CGI work means Perl can be as fast as compiled languages without the need to manually compile. mod_perl is an advanced implementation of Perl that runs in the Apache web server. It provides extremely fast performance and full access to Apache internals via Perl.
  • CPAN.org, a massive collection of Perl modules that can do almost anything; someone has usually done the work for you. CPAN, the Comprehensive Perl Archive Network, is one of the largest repositories of free code in the world. If you need a particular type of functionality, chances are there are several options on the CPAN, and there are no fees or ongoing costs for using it.
  • Online support. Perl has been around since the early ’90s; it’s exceptionally well known, and thousands of tutorial and help sites abound on the internet. Perl has a very strong user community and this is the primary avenue for support.
  • ISP support. Perl runs on nearly anything; it comes standard on the vast majority of Unix/Linux servers and is available free for Windows servers. As a result it’s the most commonly supported language on ISP (Internet Service Provider) hosting servers.
  • Text processing. Because Perl’s initial reason for living was text processing, its regular expression engine is exceptionally powerful. That means advanced text manipulation is easier than ever. (And let’s face it: nearly all programming is text manipulation of some sort. A short sketch follows this list.)
  • Database connectivity. Thanks to the DBI module, Perl can talk to a great many different databases with the same syntax. That means that you only have to learn one interface to talk to over a dozen different database servers, as opposed to learning each database’s syntax and commands separately. Perl provides an excellent interface to nearly all available databases, along with an abstraction layer that allows you to switch databases without re-writing all of your code.
  • Freebies. Since Perl has been around for ages, there are thousands of scripts on the internet that are free to use and/or modify. Perl, Apache, and related technologies are open source and free. Ongoing overhead cost to vendors for code that continues to run is $0.
  • Multi-platform: Perl runs on Linux, MS Windows and all of the platforms listed here: http://www.cpan.org/ports/
  • Rich Community Support: The main point of these stats is that Perl has a large and broad user community. With any technology you choose, you don’t want to be the only one using it. These numbers show that Perl is still widely used for web development, among other things, and the user community is very active.
  • Re-usable code architecture (modules, OO, etc.): Perl is architected to allow and encourage re-use. The core block of re-use, the module, makes it very easy to leverage business logic across platforms in web applications, batch scripts, and all sorts of integration components.
  • Multi-use: Perl can be used to develop Web apps, batch processing, data analysis and text manipulation, command-line utilities and apps, GUI apps.
  • Multi-language integration: can interact with C, C++, Java, etc. from within Perl code.
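
To give a flavour of the text processing mentioned above, here is a minimal, self-contained sketch (the data and pattern are invented for illustration):

    #!/usr/bin/perl
    # Minimal text-processing sketch: pull "key = value" pairs out of a string.
    use strict;
    use warnings;

    my $config = "host = example.com\nport = 8080\nuser = deploy\n";

    # /m lets ^ and $ match at each line; /g walks through every match
    while ( $config =~ /^(\w+)\s*=\s*(\S+)$/mg ) {
        print "$1 => $2\n";   # e.g. "host => example.com"
    }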

WHY NOT PERL.

All languages have areas in which they excel, and others in which they don’t. Perl is no different. Technically, you could write anything in Perl, even a complete operating system, but that does not mean you should. It’s a matter of considering your requirements and deciding on the best language to suit them. Here are some reasons why Perl might not be your best choice:

  • Speed. If, for example, you were writing a huge word processor like MS Word or WordPerfect, the sheer size of it would make it extremely slow to compile at runtime. For this you would be much better served by a language like C or C++ where the compilation is done before you run it.

Reference:

http://www.scmgalaxy.com/component/content/article/64-perl/223-introduction-of-perl.html


Agile Software Development Methodology

What is Agile Software Development Methodology?

Agile development practices increase the velocity at which software teams deliver customer value by improving everyone’s visibility into project features, quality and status.

BROAD DEFINITION: 
Agile software development refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
Agile methods generally promote a disciplined project management process that encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-organization and accountability, a set of engineering best practices intended to allow for rapid delivery of high-quality software, and a business approach that aligns development with customer needs and company goals.

General Definition
There are many specific agile development methods. Most promote development, teamwork, collaboration, and process adaptability throughout the life-cycle of the project.

Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle including planning, requirements analysis, design, coding, unit testing, and acceptance testing.

 


Specification

This helps minimize overall risk, and lets the project adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.[1] Multiple iterations may be required to release a product or new features.

Team composition in an agile project is usually cross-functional and self-organizing without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration’s requirements.

Agile methods emphasize face-to-face communication over written documents when the team is all in the same location. When a team works in different locations, they maintain daily contact through videoconferencing, voice, e-mail, etc.

Most agile teams work in a single open office (sometimes called a bullpen), which facilitates such communication. Team size is typically small (5-9 people) to help make team communication and team collaboration easier. Larger development efforts may be delivered by multiple teams working toward a common goal or on different parts of an effort. This may also require coordination of priorities across teams.

No matter what development disciplines are required, each agile team will contain a customer representative. This person is appointed by stakeholders to act on their behalf and makes a personal commitment to being available for developers to answer mid-iteration problem-domain questions. At the end of each iteration, stakeholders and the customer representative review progress and re-evaluate priorities with a view to optimizing the return on investment and ensuring alignment with customer needs and company goals.

Most agile implementations use a routine and formal daily face-to-face communication among team members. This specifically includes the customer representative and any interested stakeholders as observers. In a brief session, team members report to each other what they did yesterday, what they intend to do today, and what their roadblocks are. This standing face-to-face communication prevents problems from being hidden.

Specific tools and techniques, such as continuous integration, automated unit testing (xUnit), pair programming, test-driven development, design patterns, domain-driven design, code refactoring and other techniques, are often used to improve quality and enhance project agility.

 


Some of the principles behind the Agile Manifesto are:

* Customer satisfaction by rapid, continuous delivery of useful software
* Working software is delivered frequently (weeks rather than months)
* Working software is the principal measure of progress
* Even late changes in requirements are welcomed
* Close, daily cooperation between business people and developers
* Face-to-face conversation is the best form of communication (co-location)
* Projects are built around motivated individuals, who should be trusted
* Continuous attention to technical excellence and good design
* Simplicity
* Self-organizing teams
* Regular adaptation to changing circumstances

Reference: http://en.wikipedia.org/wiki/Agile_software_development


Jetty – Java-based HTTP server

Jetty is a 100% pure Java-based HTTP server and servlet container (application server). Jetty is a free and open source project under the Apache 2.0 License. Jetty is used by several other popular projects including the Geronimo application server and by the Google Web Toolkit plug-in for Eclipse.
Jetty development focuses on creating a simple, efficient, embeddable and pluggable web server. Jetty’s small size makes it suitable for providing web services in an embedded Java application.
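
To illustrate how small an embedded setup can be, here is a minimal sketch against the org.eclipse.jetty server API (handler details simplified; the exact signatures should be checked against the Jetty version in use):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class EmbeddedJettyExample {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080); // HTTP server listening on port 8080
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                        throws IOException, ServletException {
                    response.setContentType("text/plain");
                    response.getWriter().println("Hello from embedded Jetty");
                    baseRequest.setHandled(true); // mark the request as processed
                }
            });
            server.start(); // begin accepting connections
            server.join();  // block the main thread while the server runs
        }
    }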

In January 2009, Webtide announced that Jetty would be moving to the Eclipse Foundation. A Jetty project was created there, and the initial Jetty 7 code (with a refactored org.eclipse.jetty package) was checked into Jetty Subversion at Eclipse.
The Java support in Google App Engine is built on Jetty.

Jetty provides an HTTP server, HTTP client, and javax.servlet container. These components are open source and available for commercial use and distribution.
Jetty is used in a wide variety of projects and products. Jetty can be embedded in devices, tools, frameworks, application servers, and clusters. See the Jetty Powered page for more uses of Jetty.

The core Jetty project is hosted by the Eclipse Foundation. The Codehaus provides Jetty accessories, integrations, and extensions, as well as hosting older versions of Jetty. See the About page for information about the project structure.

Jetty Features

  • Full-featured and standards-based
  • Open source and commercially usable
  • Flexible and extensible
  • Small footprint
  • Embeddable
  • Asynchronous
  • Enterprise scalable
  • Dual licensed under Apache and Eclipse


Reference: 
http://jetty.codehaus.org/jetty/
http://en.wikipedia.org/wiki/Jetty_%28web_server%29
http://www.mortbay.org/jetty/
http://www.eclipse.org/jetty/


Types of Build in Remote Agent


Build management tools have a capability whereby they can share infrastructure to build the product in less time with the help of remote agent machines.

Remote agents support the following types of builds.

Distributed processing build
A distributed build is one in which individual steps in the Workflow are sent to be executed on multiple machines. In doing this you are able to leverage more machine power instead of attempting to run the entire Workflow on a single machine.

Parallel processing build
Remote Agents also support parallel processing. A parallel process executes Workflows for each unique stage of the development lifecycle, including continuous integration builds for developers, pre-production builds for testers and emergency builds for production control. A Remote Agent can be configured to execute builds according to the location where the binaries will be distributed.

Multi-platform builds
Workflows can be configured to call Remote Agents that are running different operating systems. This allows you to execute a Workflow that builds the application across multiple operating systems or builds specific components of the application on the appropriate machine. For example, a Workflow that needs to build Windows .NET components as well as AIX Oracle back-end components would use two Remote Agents, one for Windows and one for AIX.

Multi-language builds
Workflows can be configured to call Remote Agents that are running different language toolchains. This allows you to build the components of an application that are written in different languages on the machines that have the appropriate compilers and tools.
Dedicated builds

Remote Agents can also be used as dedicated build machines. A dedicated build machine is often bigger and faster than a regular desktop machine. The dedicated build machine would be configured as a Remote Agent on which Workflows could be executed in order to improve workflow processing. In addition, if you are using Meister’s Build Automation, a dedicated machine with multi-processing power can be used to manage the calls to the compilers and linkers and accelerate the building of C and Java applications.

 



Comparison between RSM and Sonar

Metrics Tools

Function Metrics

Metric | Comment | RSM | Sonar
LOC (Lines of Code) | Per Function, All Functions | Yes | Yes
eLOC (Effective LOC) | Per Function, All Functions | Yes | Yes
lLOC (Logical Statements LOC) | Per Function, All Functions | Yes | No
FP (Function Points, derived from LOC metrics) | Per Function, All Functions | Yes | No
Comment Lines | Per Function, All Functions | Yes | Yes
Blank Lines | Per Function, All Functions | Yes | Yes
Physical Lines | Per Function, All Functions | Yes | Yes
Number of Input Parameters | Per Function | Yes | No
Number of Return Points | Per Function | Yes | No
Interface Complexity (Parameters + Returns) | Per Function | Yes | Yes
Cyclomatic Complexity (Logical Branching) | Per Function, All Functions | Yes | No
Functional Complexity (Interface + Cyclomatic) | Per Function, All Functions | Yes | No
Functional Quality Analysis | Per Function | Yes | Yes
Number of Functions (Total, Average, Maximum, Minimum) | All Functions | Yes | Yes
Logical Lines | All Functions | Yes | No
Return Points | All Functions | Yes | No
Function Parameters | All Functions | Yes | No
Total Quality Profile | All Functions | Yes | ?

Class Metrics

Metric | Comment | RSM | Sonar
Number of public, private, protected data attributes | Per Class, All Classes | Yes | Yes
Number of public, private, protected methods | Per Class, All Classes | Yes | Yes
Template Type | Per Class | Yes | No
Inheritance | Per Class | Yes | No
Depth of Inheritance Tree | Per Class | Yes | Yes
Number of derived child classes per base class | Per Class | Yes | Yes
LOC (Lines of Code) | Per Class, All Classes | Yes | Yes
eLOC (Effective LOC) | Per Class, All Classes | Yes | No
lLOC (Logical Statements LOC) | Per Class, All Classes | Yes | No
Comment Lines | Per Class, All Classes | Yes | Yes
Blank Lines | Per Class, All Classes | Yes | Yes
Physical Lines | Per Class, All Classes | Yes | Yes
Number of Input Parameters | Per Class, All Classes | Yes | No
Number of Return Points | Per Class, All Classes | Yes | No
Interface Complexity (Parameters + Returns) | Per Class, All Classes | Yes | No
Cyclomatic Complexity (Logical Branching) | Per Class, All Classes | Yes | No
Class Complexity (Interface + Cyclomatic) | Per Class, All Classes | Yes | Yes
Class Quality Analysis (RSM Quality Analysis) | Per Class | Yes | No
Total number of classes | All Classes | Yes | Yes
Inheritance Tree | All Classes | Yes | No
Number of Base Classes | All Classes | Yes | Yes
Number of Derived Classes | All Classes | Yes | Yes
Derived/Base Class Ratio | All Classes | Yes | No
Maximum and Average Inheritance Depth | All Classes | Yes | No
Maximum and Average Number of Child Classes | All Classes | Yes | No
Total Quality Profile | All Classes | Yes | ?

Namespace or Package Metrics

Metric | Comment | RSM | Sonar
Number of classes | Per Namespace, All Namespaces/Packages | Yes | Yes
Number of functions | Per Namespace, All Namespaces/Packages | Yes | Yes
Average functions per class | Per Namespace | Yes | Yes
Number of public, private, protected data attributes | Per Namespace, All Namespaces/Packages | Yes | Yes
Number of public, private, protected methods | Per Namespace, All Namespaces/Packages | Yes | Yes
LOC (Lines of Code) | Per Namespace, All Namespaces/Packages | Yes | Yes
eLOC (Effective LOC) | Per Namespace, All Namespaces/Packages | Yes | No
lLOC (Logical Statements LOC) | Per Namespace, All Namespaces/Packages | Yes | No
Comment Lines | Per Namespace, All Namespaces/Packages | Yes | Yes
Blank Lines | Per Namespace, All Namespaces/Packages | Yes | Yes
Physical Lines | Per Namespace, All Namespaces/Packages | Yes | Yes
Number of Input Parameters | Per Namespace, All Namespaces/Packages | Yes | No
Number of Return Points | Per Namespace, All Namespaces/Packages | Yes | No
Interface Complexity (Parameters + Returns) | Per Namespace, All Namespaces/Packages | Yes | No
Cyclomatic Complexity (Logical Branching) | Per Namespace, All Namespaces/Packages | Yes | No
Package/Namespace Complexity (Interface + Cyclomatic) | Per Namespace, All Namespaces/Packages | Yes | No
Quality Analysis (RSM Quality Analysis) | Per Namespace | Yes | No
Total Quality Profile | All Namespaces/Packages | Yes | ?

File Metrics

Metric | RSM | Sonar
LOC (Lines of Code) | Yes | Yes
eLOC (Effective LOC) | Yes | No
lLOC (Logical Statements LOC) | Yes | No
FP (Function Points, derived from LOC) | Yes | No
Comment Lines | Yes | Yes
Blank Lines | Yes | Yes
Logical and Physical Lines | Yes | Yes
Comment and whitespace percentages | Yes | Yes
Average character line length | Yes | No
Memory allocation and de-allocation metric | Yes | No
Language keyword use | Yes | No
Language construct use | Yes | No
Extract comments per file for understandability rating and spell checking | Yes | No
Extract strings per file for spell checking | Yes | No
Create files in line-numbered format for code reviews | Yes | No
Number of Quality Notices per file | Yes | No
Metrics differentials between two file versions | Yes | No

Project Metrics

Metric | RSM | Sonar
Total LOC, eLOC, lLOC, Comments, Blanks, Lines | Yes | Yes
FP (Function Points, derived from LOC metrics) | Yes | No
Total Function Metrics | Yes | No
Total Class Metrics | Yes | Yes
Total Namespace Metrics | Yes | No
Inheritance Tree and Metrics | Yes | No
Language keywords, constructs and metrics | Yes | No
Quality Profile | Yes | ?
Metric estimation factors for software estimates | Yes | No
Total Language Metrics Example | Yes | No
Total C, C++ and Header Files | Yes | No
Total Java Files | Yes | No
Total Number of Files | Yes | Yes
Baseline Metric Differential | Yes | Yes

General SCM Interview Questions

The previous chapters outlined the state of CM technology from the standpoint of a spectrum of concepts underlying automated CM, and from the standpoint of the reflection of some of these concepts in commercial CM products. Clearly, no CM product supports all CM concepts; similarly, not all CM concepts are necessary in the support of all possible end-user requirements. That is, different CM tools (and the concepts which underlie these tools) may be required by different organizations or projects, or within projects at different phases of the software development life cycle. This observation, coupled with the observed, continuing industry effort to adopt computer-aided software engineering (CASE) tools, leads us to conclude that integration is key to providing automated CM support in software development environments.
In this chapter we define what we mean by integration by way of a three-level model of integration. We illustrate where CM integration fits into this three-level model. We then describe the advantages and disadvantages of current approaches to achieving integration in software development environments. We close with a brief discussion on the relationship between future integration technology and the three levels of integration.

CM Services in Software Environments: A Question of Integration

There is no consensus regarding where CM services should reside in software environment architectures, despite the diversity of approaches that have been explored. For example, CM services have been offered via:

· Tools such as RCS, SCCS, CCC.
· Operating system extensions at the file-system level such as DSEE and NSE.
· Shared data models such as in the CIS specifications [18] and the PCTE PACT [53] environment.

A further complication is the emergence of a robust CASE tool industry, wherein many popular CASE tools provide their own tool-specific repository and CM services. As a result, CM functions are increasingly provided by, and distributed across, several CASE tools in an environment.
We have found it useful to think of integration in terms of a three-level model. This model, illustrated in Figure 5-1, corresponds to the ANSI/SPARC [48] three-schema approach used to describe database architectures. A useful intuition is that this correspondence is more than accidental. The bottom level of integration, called “mechanism” integration, corresponds to the ANSI/SPARC physical schema level. Mechanism integration addresses the implementation aspects of software integration, including, but not limited to: software interfaces provided by the environment infrastructure, e.g., operating system or environment framework interfaces;

software interfaces provided by individual tools in the environment; and architectural aspects of the tools, such as process structure (e.g., client/server) and data management structure (derivers, data dictionary, database). In the case of CM, mechanism integration can refer to integration with CM systems such as SCCS, RCS, CCC and DSEE; and CM implementation aspects such as transparent repositories and other operating-systems level CM services.

The middle level of integration, called “services” integration, corresponds to the ANSI/SPARC logical schema level. Services refers to the high-level functions provided by tools, and integration at this level can be regarded as the specification of how services can be related in a coherent fashion. In the case of CM, these services refer to elements of the spectrum of concepts discussed in chapter 3, e.g., workspaces and transactions, and services integration constitutes a kind of unified model of CM services.

The top level of integration, called “process” integration, corresponds to the ANSI/SPARC external schema (also called “end-user”) level. Process integration can be regarded as a kind of process specification for how software will be developed; this specification can define a view of the process from many perspectives, spanning individual roles through larger organizational perspectives. In the case of CM, process integration refers to policies and procedures for carrying out CM activities.

Integration occurs within each of these levels of integration; thus, mechanisms are integrated with mechanisms, services with services, and process elements with process elements. There are also relationships that span the levels. The relationship between the mechanism level and the services level is an implementation relationship: a CM concept in the services layer may be implemented by different tools in the mechanism level, and conversely, a single mechanism may implement more than one CM concept. The relationship between the services level and the process level is a process adaptation relationship: different CM services may be combined, and tuned, to support different process requirements.

[Figure 5-1: The three-level model of integration]

This three-level model provides a working context for understanding integration. For the moment, however, existing integration technology does not match exactly this somewhat idealized model of integration. For example, many services provided by CASE tools (including CM) embed process constraints that should logically be separate, i.e., reside in the process level. Similarly, tool services are often closely coupled to particular implementation techniques.

The level of adaptability required of integrating CM—both in terms of adaptability for project-specific requirements as well as adaptability to multiple underlying CM implementations—pushes the limits of available environment integration techniques. The following sections describe the current state of integration technology and its limitations. The next chapter discusses how future generation integration technology can address these shortcomings.

Reference: 
The State of Automated Configuration Management.
A. Brown, S. Dart, P. Feiler, K. Wallnau


Definition of Configuration Management

Software CM is a discipline for controlling the evolution of software systems. Classic discussions about CM are given in texts such as [6] and [8]. A standard definition taken from IEEE standard 729-1983 [42] highlights the following operational aspects of CM:

  • Identification: an identification scheme reflects the structure of the product, identifies components and their types, making them unique and accessible in some form.
  • Control: controlling the release of a product and changes to it throughout the lifecycle by having controls in place that ensure consistent software via the creation of a baseline product.
  • Status Accounting: recording and reporting the status of components and change requests, and gathering vital statistics about components in the product.
  • Audit and review: validating the completeness of a product and maintaining consistency among the components by ensuring that the product is a well-defined collection of components.

The definition includes terminology such as configuration item, baseline, release and version. When analyzing CM systems—automated tools that provide CM—it becomes evident that these incorporate functionality of varying degrees to support the above definition. Some CM systems provide functionality that goes beyond the above definition though. This is due (among other reasons) to the recognition of different user roles, disparate operating environments such as heterogeneous platforms, and programming-in-the-large support such as enabling teams of software programmers to work on large projects synergistically. To capture this extra functionality, it is necessary to broaden the definition of CM to include:

  • Manufacture: managing the construction and building of the product in an effective and optimal manner.
  • Process management: ensuring the carrying out of the organization’s procedures, policies and lifecycle model.
  • Team work: controlling the work and interactions between multiple users on a product.

In summary, the capabilities provided by existing CM systems encompass identification, control, status accounting, audit and review, manufacture, process management and team work.
