Work on remote Subversion repositories locally with Git

Version control is great stuff, and being able to combine different version control mechanisms is even better. Subversion is a very popular version control system and a lot of repositories (public or otherwise) use Subversion to manage files. Git is another popular one, but what happens if you are working on a project where Subversion is used but Git is your preferred version control system?

With the git-svn plugin, you can have the best of both worlds. You can convert a Subversion repository to Git, use Git tools, then push the changes back to Subversion.

To begin, you will need the git-svn plugin installed. Most likely, if your distribution of choice provides Git, it will also provide git-svn. On Fedora, install it using:

# yum install git-svn

Then use git-svn to check out your Subversion repository into Git format:

% mkdir -p ~/git/code

% cd ~/git/code

% git svn init http://svn.host.com/svn/code

Initialized empty Git repository in /home/user/git/code/.git/

% git svn fetch

This may take a while on large repositories

r267 = 079b7c1cff49187d1aabc4b16f316102088fdc0d (refs/remotes/git-svn)

W: +empty_dir: trunk

r268 = 3f1944530a092c811c55720bd9322b8c150a535b (refs/remotes/git-svn)

r351 = e2af3c12e5ed174d23ffc5917f03a6136f8ebb6b (refs/remotes/git-svn)

Checked out HEAD:

http://svn.host.com/svn/code r351

At this point, the Subversion repository located at http://svn.host.com/svn/code has been checked out in Git format. On individual files and directories, you can use the git log command as you would the svn log command in order to get history information on the item in question. With Git, you will also see the Subversion commit that corresponds to the log entry, for instance:

commit 23f705cd87e1e9c6dd841ca88a14d808e0c4292a

Author: user@HOST.COM

Date:   Sat Mar 20 18:25:38 2010 +0000

correct logic on the buildrequires extractor, add stats on BuildRequires to showdbstats output

git-svn-id: http://svn.host.com/svn/code@350 7a5473d1-2304-0410-9229-96f37a904fa4

From the above, you can see that user@HOST.COM made the commit, the log message, and the corresponding Subversion revision (r350).

To work with these files, make changes as normal. git diff works like svn diff does, to see the changes made. To commit changes, use git commit like you would use svn:

% git commit -m "some minor change" file

[master 2454be1] some minor change

1 files changed, 2 insertions(+), 0 deletions(-)

To update your local copy from Subversion, instead of using svn update, use git svn rebase. This will merge in any changes found in the Subversion repository.
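For example, a typical update (run from inside the working tree, with no uncommitted local changes) is simply:

% git svn rebase

This fetches any new revisions from Subversion and replays your local Git commits on top of them.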

When committing files using git commit, you are committing your changes to the local Git repository. None of these changes are pushed to the Subversion repository until you specifically tell Git to do so. This is done with the git svn dcommit command, which takes each commit made in Git and pushes it to Subversion as an individual commit, retaining all of your history and log comments:

% git svn dcommit

Committing to http://svn.host.com/svn/code …

M      trunk/rqp

Committed r352

M      trunk/rqp

r352 = 0557314a580c4390ff646380baa3aa33d1f6a5cd (refs/remotes/git-svn)

No changes between current HEAD and refs/remotes/git-svn

Resetting to the latest refs/remotes/git-svn

Unstaged changes after reset:

M      trunk/rqp

M      trunk/rqp

Committed r353

M      trunk/rqp

r353 = 249e97283ad28126bf84ccaffb32873e12d15b7b (refs/remotes/git-svn)

No changes between current HEAD and refs/remotes/git-svn

Resetting to the latest refs/remotes/git-svn

Now, if you look at the changed file(s) in Subversion (via another Subversion working copy or something like ViewVC), you will see the individual commits. Above, there were two changes made to the trunk/rqp file, each committed locally to Git. The "dcommit" command pushed those changes as individual commits to the Subversion repository. In this way you can do all local development with Git, and when you have something you want to commit to the Subversion repository, you can push all of the relevant changes at once, retaining each separate commit.

Using the git-svn plugin makes it extremely easy to use Git locally with a remote Subversion repository. If you are in a project or organization that, for whatever reason, does not want to convert to Git, you can continue to work with that Subversion repository, without the restriction of using Subversion yourself.
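To recap, the entire round trip described above comes down to a handful of commands (using the same placeholder URL as before):

% git svn init http://svn.host.com/svn/code

% git svn fetch

% git commit -m "some minor change" file

% git svn rebase

% git svn dcommit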


Anthillpro Comparison with Atlassian Bamboo – Continuous Integration Tools Review

Last month I was discussing with Eric Minick from AnthillPro why a build engineer should go for AnthillPro instead of Bamboo, and I found some interesting inputs which I am sharing below.

Introduction

Bamboo is a respectable team-level continuous integration server. Continuous integration servers are focused on providing feedback to developers about the quality of their recent builds, and how that compares to previous builds. While AnthillPro also provides continuous integration features, it pays special attention to what happens after build time. Where is the build deployed? How does it get tested in the hours, days and weeks after the build occurs? Who releases the software and how?

The distinction in focus between the two solutions shows up in their features. Both AnthillPro and Bamboo provide continuous integration support and integrations with numerous tools. Only AnthillPro provides the features required to take a build through the release pipeline into production – rich security, build lifecycle management, etc.

Lifecycle Management

There is a lot more to implementing true lifecycle management than simply using the term in marketing and sales materials. The lifecycle extends across multiple processes in addition to the build process. Most tools have had a very narrow view of this space and have focused their energies purely on the build process. The end result is that true lifecycle management is an afterthought, and it shows in the features (or lack thereof) in their products. A continuous integration server alone is not a lifecycle management tool.

Pipeline Management

As the lifecycle is made up of multiple processes (such as the build, deployments, tests, release, and potentially others), a lifecycle management tool must provide some means of
tracking and managing the movement of a build through the lifecycle stages. Without this feature, there is nothing to connect a build process execution to a deployment process
execution to a test process execution; thus the end user has no way of knowing what build actually got tested. Without this pipeline management feature (which we call the Build
Life), traceability between processes is completely absent from the tool.

Atlassian Bamboo: No pipeline management out of the box.
Anthillpro: Provides pipeline management out‐of‐the‐box. Anthillpro has a first‐class concept called the Build Life. The Build Life represents the pipeline and connects the build process to later processes like deployments into QA, approvals by managers, functional testing, and release to production. The pipeline (Build Life) provides guaranteed traceability throughout all processes in the lifecycle, and provides a context for collecting logs, history, and other data gathered throughout the lifecycle.

Artifact Management

Key to lifecycle management is the ability to connect the outputs of a prior process (such as the build) to the inputs of a subsequent process (such as a deployment). After all, the deployment process needs to have something to deploy. Ideally, the deployment process would deploy the artifacts produced by the build process. And the test process would run tests on those same artifacts. The ability to capture and manage the artifacts created by a build and other processes is central to this effort. Ideally, the artifacts would be managed by an artifact repository (a Definitive Software Library (DSL) under ITIL). Further, as hundreds or thousands of builds happen, support for discarding old builds needs to intelligently remove builds that are no longer interesting. Anthillpro bundles a binary artifact repository called CodeStation.

Atlassian Bamboo: Bamboo does capture built artifacts but does not have a robust artifact management system. It does not maintain artifact checksums for validation. Old builds may be archived after a certain number of weeks, but there is no designation for builds that have been to or are potentially going to production that would use a different retention policy. Artifacts are available for user download, but are not accessible for reuse by other plans or deployments.

Anthillpro: Built‐in artifact management system (DSL) called CodeStation. The capture, fingerprinting and management of artifacts is essential to the solution. This allows AnthillPro to guarantee traceability of artifacts from the build, through deployment, through testing, and into release (in other words, AnthillPro guarantees that what is released into production is what was tested and built). A maximum number of builds or age to keep can be set per project and per status. This means that builds that were released can be kept longer than a simple continuous integration build.

Security

Especially as servers address functionality beyond the build – deployments or tests to various environments – controlling who can do what within the system can be a key element in securing the system and providing clear separation of duties. Once something has been done, it can be equally important to find out who ran which processes.

Authentication and Authorization

Atlassian Bamboo: Basic role-based security. Users may be assigned roles and permissions at the project level. Integration with LDAP complements internally managed security.

Anthillpro: AnthillPro provides a rich role-based security system, allowing fine‐grained control over who can see which project, run which workflows and interact with which environments. The authentication system supports internally managed accounts, single sign-on systems, LDAP, Kerberos (Active Directory), and JAAS modules.

Secure Value Masking

Many “secrets” are used when building and deploying. Passwords to source control, servers, and utilities are often needed to execute build, deploy, test processes.

Atlassian Bamboo: No facility for securely storing application passwords or obfuscating them from the logs. Bamboo does manage to write libraries for some integrations that avoid passing the password where the logs can see that line. It has no facility that we can see for flagging a command line parameter that will be logged as secure and filtering that value from the log.

Anthillpro: Sensitive values like application passwords are automatically filtered out of logs, hidden in the user interface, entered through password fields, and stored in the database encrypted with a triple-DES one-time key.

Process Automation & The Grid

Grouping Agents
In a distributed environment, managing your build and deployment grid needs to be easy.

Atlassian Bamboo: Agents are added into a fairly uniform pool. Agents can define broad capabilities they provide, and jobs can define what capabilities they need, to perform matchmaking.

Anthillpro: AnthillPro provides the concept of an environment. Environments are groups of servers. A build farm for a class of projects could be one environment while the QA environment for another project would be another environment. This allows for roaming – or deploying to everything – to span just the machines in an environment. Jobs can be
assigned to a single machine, or roam, or select machines based on criteria like processor type, operating system, or customized machine capabilities.

Complex Process Automation

Atlassian Bamboo: Bamboo runs full plans on a single agent. While different agents can be running various builds in parallel, any given plan is executed on just a single agent.

Anthillpro: AnthillPro provides a rich workflow engine, which allows jobs to be run in sequence, parallel, and combinations thereof. Jobs can also be iterated so that they run multiple times with slight variations in their behavior on each execution. This allows parallelization that takes advantage of numerous agents. This facility also makes sophisticated deployments possible.

Cross Site Support


Atlassian Bamboo:
Bamboo provides no special support for agents (slaves) that exist outside the local network.

Anthillpro: AnthillPro is architected with support for a cross‐site, even international, grid. Agent relays and location-specific artifact caches assist in easing the configuration and performance challenges inherent in deployments involving multiple sites.

Dependency Management

Component-based development and reuse are concepts that get a lot of lip service but few if any features from most vendors. Only AnthillPro provides features to enable component-based development and software reuse. A flexible dependency management system is part of the built‐in feature set of AnthillPro. The dependency management system is integrated with the bundled artifact repository and with the build scheduler so that builds can be pushed up the dependency graph and pulled down the dependency graph as configured. Integration with Maven dependency management provides an integrated system.

Atlassian Bamboo: Provides some basic support for build scheduling based on dependencies. A build of one project can kick off a build of its dependents, and some blocking strategies can prevent wild numbers of extra builds being generated. Bamboo does not provide any tie-in between dependency triggering and build artifacts – sharing artifacts between projects is left to the team to figure out with an external tool such as Apache Maven.

Anthillpro: Support for dependency relationships between projects out‐of‐the‐box. AnthillPro provides a rich set of features for relating projects together. Large projects often have tens
or hundreds of dependencies on sub‐projects, common libraries and third party libraries. At build time the dependency system can calculate which projects need to be rebuilt based on changes coming in from source control. At build time, artifacts from dependency projects are provided to the dependants with version traceability and tracking.

AnthillPro provides highly customizable build scheduling and artifact sharing to these projects. In a “pull” model, anytime a top-level project is built, its dependencies are inspected to see if they are up‐to‐date. If not, they are first built, then the top-level project is built. In a “push” model, builds of dependencies will trigger builds of their dependents. AnthillPro interprets the dependency graph to avoid extra builds or premature builds. In the case of Maven projects, AnthillPro can simply provide the scheduling or cooperate with Maven to provide traceable artifact reuse.

Summary

While both tools have a lot of similarities, AnthillPro’s Lifecycle Management, Dependency Management, and full-featured Security capabilities set it apart. Only AnthillPro supports
complete end‐to‐end traceability across all the phases of Build, Deploy, Test, and Release. While Bamboo is likely an effective team level continuous integration server, AnthillPro is a proven solution for enterprises looking to automate the full lifecycle of a build. For build and release automation the technology leader since 2001 is AnthillPro. We were
the first to release a Build Management Server. We were the first to recognize the need for comprehensive lifecycle management (beyond just build management), and we were the
first to release features required to deliver on the vision. We have been very successful at enterprise level RFPs and have added hundreds of customers including some of the leading banks, insurance companies, and high‐technology companies in the world. Our dedication to solving the problems faced by our customers means that we are very responsive to feature and enhancement requests with turn around times measured in days or weeks instead of months and quarters. Urbancode delivers the leading product in its space, the expertise to roll it out, and caring support for our customers to ensure their continued success.


Introduction of Perl – Complete Overview


What is Perl

  1. Perl is a programming language; it is object-oriented, simple to learn, and very powerful. Perl stands for “Practical Extraction and Report Language”.
  2. Perl is an interpreted language, so you don’t have to compile it as you do Java, C, C++, etc. For fast development work, that’s a godsend.
  3. Perl is a versatile, powerful programming language used in a variety of disciplines, ranging from system administration to web programming to database manipulation.
  4. Perl is a different language to different people. It is a quick scripting tool for some, and a fully-featured object-oriented language for others.
  5. Perl is used in so many places because Perl is a glue language. A glue language is used to bind things together.
  6. What Perl is good at is tying these elements together. Perl can take your database, convert it into a spreadsheet-ready file, and, during the processing, fix the data if you want (see the one-liner sketch after this list).
  7. Perl can also take your word processing documents and convert them to HTML for display on the Web.
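As a tiny illustration of the glue idea, here is a minimal sketch (the file name and the colon-delimited field layout are invented for the example) that turns a database export into a comma-separated, spreadsheet-ready file:

% perl -ne 'chomp; print join(",", split /:/), "\n";' export.txt > export.csv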

WHY PERL.

There are many reasons why Perl is a great language for use in development and general processing.
Following are some of them…

  • Learning: Perl has all the same abilities, data constructs and methods of other languages, and it’s easier to learn than most. If you understand Perl, you will have far less trouble learning other languages like C, C++, Java, PHP, etc. than if you were starting from scratch.
  • Interpreted language means less time spent debugging.
  • Mod_perl for CGI work means Perl can be as fast as compiled languages without the need to manually compile. mod_perl is an advanced implementation of Perl that runs in the Apache web server. It provides extremely fast performance and full access to Apache internals via Perl.
  • CPAN.org, a massive collection of Perl modules that can do almost anything; someone has usually done the work for you. CPAN, the Comprehensive Perl Archive Network, is one of the largest repositories of free code in the world. If you need a particular type of functionality, chances are there are several options on the CPAN, and there are no fees or ongoing costs for using it.
  • Online support. Perl has been around since the early ’90s; it is exceptionally well known, and thousands of tutorial and help sites abound on the internet. Perl has a very strong user community, and this is the primary avenue for support.
  • ISP support. Perl runs on nearly anything; it comes standard on the vast majority of unix/linux servers and is available free for windows servers. As a result it’s the most commonly supported language on ISP (Internet Service Provider) hosting servers.
  • Text processing. Because Perl’s initial reason for living was text processing, its regular expression engine is exceptionally powerful. That means advanced text manipulation is easier than ever. (And let’s face it: nearly all programming is text manipulation of some sort.) See the one-liner sketches after this list.
  • Database connectivity. Thanks to the DBI module, Perl can talk to a great many different databases with the same syntax. That means that you only have to learn one interface to talk to over a dozen different database servers, as opposed to learning each DB’s syntax and commands separately. Perl provides an excellent interface to nearly all available databases, along with an abstraction layer that allows you to switch databases without re-writing all of your code.
  • Freebies. Since Perl has been around for ages, there are thousands of scripts on the internet that are free to use and/or modify. Perl, Apache, and related technologies are open source and free. The on-going overhead cost to vendors for code that continues to run is $0.
  • Multi-platform: Perl runs on Linux, MS Windows and all of the platforms listed here: http://www.cpan.org/ports/
  • Rich Community Support: The main point of these stats is that Perl has a large and broad user community. With any technology you choose, you don’t want to be the only one using it. These numbers show that Perl is still widely used for web development, among other things, and the user community is very active.
  • Re-usable code architecture (modules, OO, etc.): Perl is architected to allow and encourage re-use. The core block of re-use, the module, makes it very easy to leverage business logic across platforms in web applications, batch scripts, and all sorts of integration components.
  • Multi-use: Perl can be used to develop web apps, batch processing, data analysis and text manipulation, command-line utilities and apps, and GUI apps.
  • Multi-language integration: Perl can interact with C, C++, Java, etc. from within Perl code.
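As a quick taste of the text-processing strength mentioned above, here are two minimal one-liner sketches (file names invented for the example): the first prints only the lines matching a pattern, much like grep; the second edits a file in place:

% perl -ne 'print if /error/i' server.log

% perl -pi -e 's/foo/bar/g' config.txt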

WHY NOT PERL.

All languages have areas that they excel in, and others that they don’t. Perl is no different. Technically, you could write anything in Perl, even a complete operating system, but that does not mean you should. It’s a matter of considering your requirements and deciding on the best language to suit them. Here are some reasons why Perl might not be your best choice:

  • Speed. If, for example, you were writing a huge word processor like MS Word or WordPerfect, the sheer size of it would make it extremely slow to compile at runtime. For this you would be much better served by a language like C or C++ where the compilation is done before you run it.

 


How to configure and use SSH authentication with a CVS server

cvs (Concurrent Versions System) is a very popular version control tool. Although it is not as powerful as Perforce, Subversion and others, its easy configuration, simple usage and long history have made it widely used in all kinds of software projects.

The first step is to install FreeBSD and apply security patches. This step is very simple: download a FreeBSD mini ISO (the upcoming 5.2-RELEASE is a good choice; by the time this article is published, that version may already be available for download), burn it to CD-ROM, and install it (choose the "Minimum" installation). Note that, in order to be able to use ssh authentication, be sure to install the crypto distribution set. Configure the network for the machine, and after a restart you are ready.

Then install the necessary packages. FreeBSD 4.x had perl built in, but in 5.x perl was removed from the base system. In order to use features such as ACLs and commit mail, you must install perl.

I personally recommend using the 5.8.x version of perl from ports. Before installing, first make sure that perl 5.6.x is not installed; on a new system, it is safe to remove any existing packages. In general, the majority of FreeBSD users use cvsup to update the system, but cvsup-without-gui is not included on the release CD, while the ordinary cvsup package requires a series of X11 libraries. To avoid trouble, consider the following two different ways to complete the first code update:

1. Using FreeBSD's cvs to synchronize the code:

cvs -d :pserver:anoncvs@anoncvs.jp.FreeBSD.org:/home/ncvs login

(enter "anoncvs" as the password)

cd /usr

cvs -R -d :pserver:anoncvs@anoncvs.jp.FreeBSD.org:/home/ncvs export -r RELENG_5_2 src ports

Note: If you are not using 5.2-RELEASE, please make the appropriate amendment to RELENG_5_2 (for example, 4-STABLE is RELENG_4, -CURRENT is HEAD, etc.)

2. Install cvsup-without-gui:

pkg_add -r cvsup-without-gui

then use cvsup to synchronize the code; space is limited, so I will not go into the details here.

I recommend the first method; of course, the second method should be faster and does not require manually compiling cvsup. Next, install Perl 5.8.x:

cd /usr/ports/lang/perl5.8 && make all install clean && rehash

Then we face a rather serious problem: the use.perl script in ports does not know that the system no longer has perl installed, so it will fail when executed. At this point, just make a symbolic link to perl. Then execute:

use.perl port

The use.perl script will modify a series of configuration files for you (such as /etc/make.conf). That aside, a brief note about the OpenSSH (sshd) configuration: in general, add the following two lines:

Protocol 2

PasswordAuthentication no

The benefits of doing so: (1) only the ssh2 protocol is allowed for login, which provides better security; (2) password login is not allowed, which will no doubt increase security.

1. Create a cvs repository

Well, the pre-configuration is basically over. Do not forget to create a group for cvs, for example ncvs, as well as a user to manage the cvs repository, such as repoman (which of course belongs to the ncvs group), and then create a directory to hold the cvs repository; in this case, we put it in /home/ncvs:

rm -rf /home/ncvs

mkdir -p /home/ncvs

chown -R repoman:ncvs /home/ncvs

chmod -R 775 /home/ncvs

The next step is to initialize the repository, which is simple to do:

su -l repoman

cvs -d /home/ncvs init

and that is it.

At present, all versions of cvs shipped with FreeBSD contain a small security vulnerability. Although this flaw can only be exploited locally, we recommend that you patch it: find the

return current_parsed_root->original;

line in /usr/src/contrib/cvs/src/expand_path.c, and turn it into

return current_parsed_root->directory;

Of course, the next step is to re-make world and the kernel. Note that if you are using 5-CURRENT, you also need to modify some code for it to reach 5.2-RELEASE-like performance (-CURRENT has a large number of debugging options turned on); the specific methods will not be covered here.

2. Configure commitmail and ACLs

Then configure cvs commitmail and ACLs. I personally think commitmail is a very important thing for team software development with cvs; it is especially important because cvs has no atomic commit function, but commitmail is just enough to make up for this.

The FreeBSD development team uses a very good set of perl scripts to implement the commitmail function, and they use cvs hooks to implement a simple access control (ACL) function. My cvs repository is based on FreeBSD's CVSROOT, with a few changes.

Use that CVSROOT to replace your own CVSROOT; you also need to compile mailsend.c from the freebsd directory and put the result into /usr/local/bin (the scripts assume that it lives there). In addition, the CVSROOT needs some modifications before it is put into use (for example, the machine name, etc.). These settings can be found in cfg_local.pm:

$MAILADDRS = 'cvs-all@example.org';

This is where commitmail is sent to.

$MAIL_BRANCH_HDR = "X-Phantasm-CVS-Branch";

This is a header added to the commitmail; if you use a mailing list, it can help the mailing list sort messages automatically.

$MAILBANNER = "The Phantasm Studio repository";

This states in the commitmail which repository the commit went to.

if ($hostname =~ /^cvs\.example\.org$/i)

This checks the host name of the machine where the commit is made.

$CVSWEB_URL = "http://cvsweb.example.org/cgi-bin/cvsweb.cgi";

This is where the cvsweb service is located.

A brief description of the other files in CVSROOT:

avail: this file is used to control access for users and groups.

access: this file is used to control who can perform cvs operations.

exclude: this file is used to control which files do not need to be checked for cvs tags.

options: this file is used to control the cvs tags that are expanded; for example, you can define $Phantasm$, etc.

3. Configure users and restrict ssh permissions

A rather vexing issue with ssh authentication is that ssh means users have a system account and are able to log in. If configured improperly, they can get a shell, which is naturally a potential security risk.

We must therefore be very careful in handling ssh authentication for cvs. The underlying principle is: forbid users any action, unless we explicitly allow them to do so.

Create users in accordance with the following rules:

the user’s “primary” group is ncvs (this not only restricts the user’s permissions, but also lets us more easily control which users of the cvs repository cannot freely commit)

users do not use password authentication, which avoids the security hazards caused by improperly set up ftp and the like

we still give the user a shell, but the “shell” can be a perl script that only allows commands beginning with cvs to be executed

Then let users generate their own key pairs with OpenSSH's ssh-keygen. OpenSSH can be found in most *BSD and Linux distributions; if a user uses the Windows desktop, they need to install cygwin (in particular, the OpenSSH package from the net category); the command to run is, of course, the same:

ssh-keygen -t dsa -b 2048

For the paranoid security enthusiasts: consider replacing 2048 with -b 4096 later. Of course, by the weakest-link principle, if you are that paranoid, then obviously you should also force all your partners to use keys at least as long as yours :)

The administrator should then put each user's (committer's) public key on the server, into the authorized_keys file in that user's corresponding directory. For example, one of my public keys is as follows:

ssh-dss AAAAB3NzaC1kc3MAAAEBAL +1 jinOw +86 RcTEaSM5/Hz4Lr9tIS0IQsX8ebo

TwLzWnqpOHRh2KBCGn/e0xGCIAai7PGz7c + SZCvrLiRvG9mCsMMMue8ZIL + QF4OAmMd

Cz8Qoyg0cc4YXImOd + UEpdOX29PC4aMAz28v/GO2yf58/Qa49Clfq1kHa/8q3IAgs9o

W95 / ArG + IWFOsN1Tv9nh4XJb5AQjpa5uMlB5SEmvKGTXQ2oYiRVIxL8vzHL6MtO/8×1

j8 + RioSH6FCpEXS7UJbYxE7vF3m5Fa5o6g2dIZewphsleOeHkvYJ442Hqvsly3p4 +4 N

dvim4bY2HMDha5r5zeTV8tTlOz4wQVgKyWoEAAAAVAINGzX7uU0vR8l63qhBhUeWGZt

C9AAABADWiO +9 bvV7DApsn08LR1eoEnMjJFQgEfGlbV + EvZHkO0bkHZAdRIKtVmgNUw

G6uufykkt2Tb + q5SbVNZkzeaFVv4ZMtnjSvEPIZrEXcQFFguGk1it5v5EYcmq4G8 + j1

BFTVHef4b1wMTSt11WtEz0LUYncuZ6LA48/WGTuZiSm8JkchgVm8HhR9NqjdeFJH8sO

RUhUBoxyWjo/hv7zFg7HqoJGzeNfrEhFg36psR2RDaRvSP0vN1W2q4j5OZy3gB6ZyVt

nsEPl1HELhlaCFifmdz1LVxDx + FyPy6wMsPQLTmB1g6N1J6PWy3qCTJ0NyQgarSt3 / A

TQ0InF1BOdJn8QAAAEAPb1OgswuMHdEsHk2ETZVmOKOkI9Rjf72vjZ3xG45iEbCH/7p

aTP8OQmJMW9FD4MHjdmtktPVYXDIa9Hj/IM44zhfMHEdKs9LlFUK5dBgNUps + yPj2Ns

Mr2rl771ODR0mB52FwrXm1FCmNTM7WQpFOEy/QhtZRpHK +7 / YZp7PBggt17Fw7rbjP2

zhWnZluoSKLgvfkhxhJuOMm/ElNJx2c + XHdxPqI3eR5UxzLNjDUNh59I8 + h + E69bFB3

b2uhKqziziHOQcqoH5r0Kud / DBBE79lU3mRUF8FQNygCRh/V3yFzed40rc0nF0PQpNZ

6zodDTJByrm6vX5wr2lI4RgA9w == bitripper@grimreaper.delphij.net

Note: a public key must not contain line breaks; it is wrapped here only for typesetting. We have just given the user a shell, which still has potential pitfalls, so we tighten security in this regard by adding the following text in front of the public key entry:

command="/usr/bin/cvs --allow-root=/home/ncvs server"

So, the whole line should look like this:

command="/usr/bin/cvs --allow-root=/home/ncvs server" ssh-dss AAAAB

…… …………………..

X5wr2lI4RgA9w == bitripper@grimreaper.delphij.net

This tells ssh that upon login, /usr/bin/cvs --allow-root=/home/ncvs server is executed, and only this command can be executed. Thus, unless there are holes in cvs itself, the user cannot really get a shell through ssh. --allow-root limits which repository can be used, so users cannot simply specify another repository, which makes attempts to undermine security more difficult. If you need multiple repositories, you can specify several --allow-root parameters.
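For example, a sketch of a forced-command entry that permits the same key to serve two repositories (the second path is invented for illustration, and the key itself is elided):

command="/usr/bin/cvs --allow-root=/home/ncvs --allow-root=/home/ncvs2 server" ssh-dss AAAAB...== user@example.org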

Note that if a user has multiple public keys, you need to put the command= prefix in front of each public key.

4. Configure cvsweb

Configuring cvsweb is quite simple. First install apache (I have tried 1.3.x and 2.0.x) and get it configured and ready; then install cvsweb from ports:

cd /usr/ports/devel/cvsweb && make all install clean

and then modify its configuration file:

cd /usr/local/etc/cvsweb

vi cvsweb.conf

find the

@CVSrepositories = (

line, and following the examples there, add your own repository. For example:

'ncvs' => ['New CVS Repository', '/home/ncvs'],

You can add as many entries as you like; of course, the first entry in @CVSrepositories is the repository shown when you access cvsweb.cgi directly.

Some changes made during the FreeBSD 5.1-CURRENT development process cause the cvsweb in ports to be unable to call the cvs and rcs tools normally, which stops it from working properly. To solve this problem, the latest version of cvsweb can be found at the following web site:

http://www.freebsd.org/projects/cvsweb.html

As of this writing, the latest cvsweb version is 2.9.1 beta. Before installing it, you need to install two other ports:

cd /usr/ports/devel/p5-IPC-Run && make all install clean

cd /usr/ports/net/p5-URI && make all install clean

Then unpack the downloaded tbz file and copy the files to the appropriate locations (the cgi-bin directory and the related images directory), and it is ready for use.


General SCM Interview Questions – SCM Job Interview Kit

  • What do you think about configuration management?
  • What do you understand about Change Management?
  • Branching methodologies and which one they are currently using; show some examples with pros and cons.
  • Concept of merging and why do we need it?
  • What do you think about build Management?
  • What are the key benefits of build automation, and what are the key inputs to automate the build process in a project?
  • Discuss about tools and technology which help to automate the entire build cycle.
  • What is Continuous Build Integration and How this is useful for the project?
  • What are daily builds & nightly builds, and what processes need to be set up to automate & monitor them consistently?
  • Explain in detail how to write a build script for any project.
  • What do you think about release Management?
  • Talk about Release Management on several platforms?
  • What do you understand about Packaging and Deployment?
  • How to Automate Remote Deployment of Builds on Development & Test Servers?
  • What is workflow management? Explain this in detail.
  • What do you understand about code coverage? Describe respective tools & utilities.
  • Describe how to integrate packaging scripts & test automation scripts with the build, and how to monitor build verification test status and tools.
  • How to co-ordinate with the development team to increase their productivity.
  • What do you understand about multisite projects?
  • How does the SCM team perform integration and co-ordination between Dev and QA?
  • Explain Troubleshooting in Build Server and Process
  • Explain Troubleshooting in Configuration Server and Process
  • Explain troubleshooting of the most popular Java compiler issues in the build server.
  • Explain troubleshooting of the most popular C++ compiler issues in the build server.
  • Software packaging tools, if they will be packaging or writing the installers for the releases.
  • Backup your code daily with respect to SVN.
  • Overview of Batch Scripts and top 25 commands
  • Discuss about Web Servers and Application servers
  • What do you think about distributed and multi-site environment
  • Can you name some software development methodologies and describe them?
  • Agile attempts to minimize risk by developing software in short iterations.
  • Extreme Programming employs simplicity, frequent communication, constant customer feedback and decision empowerment.
  • Iterative development is a cyclical methodology that incorporates refactoring into the process.
  • Waterfall software development is a phased methodology. When one phase is complete, it moves onto the next phase.
  • What is an API?
  • What is a web service?
  • What the difference between a global and a local variable?
  • What bug/issue tracking tools are available? Describe them.
  • How does Subversion handle binary files?
  • What is ADO?
  • What is polymorphism?
  • Please explain the difference between BEA WebLogic and IBM WebSphere.

Perforce:

  • What are the basic skills required for Perforce administration, including command line info?
  • How can we develop build summary reports for the management team, and what are the key inputs for the report?
  • Explain the best practice for setting up the process & maintaining the archive of software releases (internal & external) & license management of third-party libraries.
  • Identify the deployment tools for major/minor/patch releases in different environments.
  • Explain Red Hat Linux and some of its daily used features.
  • Explain Perforce & Multisite.
  • Concept of labeling, branching and merging.
  • Labeling, branching and merging in Perforce.

Talk about Release Process

Can you describe some source code control best practices?
# Use a reliable and dedicated server to house your code.
# Backup your code daily.
# Test your backup and restore processes.
# Choose a source control tool that fits your organization’s requirements.
# Perform all tool specific administrative tasks.
# Keep your code repositories as clean as possible.
# Secure access to your code.

Can you describe software build best practices?
# Fully automated build process
# Build repeatability
# Build reproducibility
# Build process adherence

CM tools Comparison

  • Difference Between CVS and SVN
  • Difference Between perforce and SVN
  • Difference Between perforce and ClearCase
  • Difference Between VSS and TFSC
  • Difference Between perforce and MKS

 

 


Unix Command: Grep – Quick Reference – Pattern – Examples – Options


grep scans its input for a pattern, and can display the selected lines, their line numbers, or the names of the files where the pattern occurs. The command uses the following syntax:

grep options pattern filename(s)

grep searches for the pattern in one or more files.

Example for Grep command:

  1. grep "sales" emp.lst
  2. grep "director" emp1.lst emp2.lst
  3. grep 'jai sharma' emp.lst
  4. grep "jai Sharma $var" emp.lst

—————-Grep options—————————

Ignoring Case (-i): When you look for a name but are not sure of the case, grep offers the -i (ignore) option, which ignores case for pattern matching.

> grep -i 'agarwal' emp.lst

Deleting Lines, or Inverse (-v): The -v (inverse) option selects all lines except those containing the pattern. Thus, you can create a file otherlist containing all but the directors.

> grep -v "director" emp.lst > otherlist

Displaying Line Numbers (-n): The -n (number) option displays the line numbers containing the pattern, along with the lines:

> grep -n 'marketing' emp.lst

Counting Lines Containing Patterns (-c): The -c (count) option counts the number of lines containing the pattern (which is not the same as the number of occurrences).

  1. grep -c director emp.lst
  2. grep -c director emp*.lst

Displaying Filenames (-l): The -l (list) option displays only the names of the files containing the pattern.

> grep -l 'manager' *.lst

Matching Multiple Patterns (-e): With the -e option, you can match the three agarwals by using grep like this:

> grep -e "Agarwal" -e "aggarwal" -e "agarwal" emp.lst

Taking Patterns from a File (-f): We can place all patterns in a separate file, one pattern per line. grep takes input from there with the -f option:

> grep -f pattern.lst emp.lst
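For example, you could build the pattern file with a couple of echo commands and then hand it to grep (file names follow the examples above):

> echo "Agarwal" > pattern.lst

> echo "aggarwal" >> pattern.lst

> grep -f pattern.lst emp.lst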

 


Interview Questions Sets: Shell Script Descriptive Questions

What is shell scripting?
Shell scripting is used to program the command line of an operating system. Shell scripting is also used to program the shell, which is the base for any operating system. Shell scripts often refer to programming in UNIX. Shell scripting is mostly used to program operating systems such as Windows, UNIX, and Apple systems. Companies also use such scripts to develop their own operating environments with their own features.

Advantages of Shell scripting?
There are many advantages of shell scripting. For one, you can develop your own operating environment with the features best suited to your organization, rather than relying on costly operating systems. Software applications can also be designed according to their platform.

What are the disadvantages of shell scripting?
There are many disadvantages of shell scripting they are

  • Design flaws can destroy the entire process and could prove a costly error.
  • Typing errors during the creation can delete the entire data as well as partition data.
  • Initially process is slow but can be improved.
  • Portability between different operating systems is a prime concern, as it is very difficult to port scripts.


Explain about the slow execution speed of shells?
Major disadvantage of using shell scripting is slow execution of the scripts. This is because for every command a new process needs to be started. This slow down can be resolved by using pipeline and filter commands. A complex script takes much longer time than a normal script.

Give some situations where typing error can destroy a program?
There are many situations where typing errors can prove to be really costly. For example, a single extra space can change a program's functionality from deleting subdirectories to deleting files. cp, cn, and cd all look similar, but their actual functions are different. A misdirected > can delete your data.
Coding Related Shell Scripting Interview Questions …

Explain about return code?
Return code is a common feature in shell programming. These return codes indicate whether a particular program or application has succeeded or failed during its process. && uses the return code: the command after && is executed only if the previous command succeeded (returned 0).
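A minimal sketch (the command names are just examples):

make && echo "build OK"    # the echo runs only if make returns 0
echo $?                    # prints the return code of the last command; 0 means success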

What are the different variables present in Linux shell?
Variables can be defined by the programmer or developer; they name the location of a particular value in memory. There are two types of variables: system variables and user-defined variables. System variables are defined by the system, and user-defined variables are defined by the user (conventionally in small letters).

Explain about GUI scripting?
Graphical user interface provided the much needed thrust for controlling a computer and its applications. This form of language simplified repetitive actions. Support for different applications mostly depends upon the operating system. These interact with menus, buttons, etc.

Shell Scripting Command Interview Questions …

Explain about echo command?
Echo command is used to display the value of a variable. There are many different options that give different outputs: \c suppresses the trailing newline, \r returns the carriage, and -e enables interpretation of these backslash escapes.
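For instance, a small sketch (with a shell whose echo supports -e, such as bash):

echo -e "first line\nsecond line"    # \n is interpreted as a newline
echo -e "no trailing newline\c"      # \c suppresses everything after it, including the newline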

Explain about Stdin, Stdout and Stderr?
These are known as standard input, output, and error. They are numbered 0, 1, and 2 respectively. Each of these streams has a particular role and should function accordingly for efficient output. Any mismatch among these three could result in a major failure of the shell.
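A short sketch of addressing the three streams by their numbers (file names invented):

sort < input.txt               # 0: stdin read from a file
ls > output.txt                # 1: stdout sent to a file
ls /nosuchdir 2> errors.txt    # 2: stderr captured separately
ls > all.txt 2>&1              # stderr merged into stdout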

Explain about sourcing commands?
Sourcing commands help you to execute scripts within scripts. For example, the sh command makes your program run as a separate shell, while the . (dot) command makes your program run within the current shell. This is an important distinction for beginners and for special purposes.
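For example (the script name is invented):

sh setenv.sh    # runs in a separate child shell; variables it sets disappear afterwards
. setenv.sh     # sourced with the dot command: runs within the current shell, so its variables persist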

Explain about debugging?
Shell can make your debugging process easier because it has several options for the purpose. For example, sh -n helps you debug by reading the shell script without executing it. Similarly, sh -x helps you by displaying the arguments and commands as they are executed.
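For example (the script name is invented):

sh -n myscript.sh    # reads the script and checks syntax without executing it
sh -x myscript.sh    # executes the script, printing each command and its arguments as it runs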

Explain about Login shell?
Login shell is very useful as it creates an environment which is very useful to create the default parameters. It consists of two files they are profile files and shell rc files. These files initialize the login and non login files. Environment variables are created by Login shell.

Explain about non-login shell files?
The non-login shell files are initialized at startup and run to set up variables; setting parameters and the path are some of their important functions. These files can be changed, and your own environment can be set through them. They run each time you start a new shell.

Explain about shebang?
Shebang is nothing but a # sign immediately followed by an exclamation mark (!). It is visible at the top of the script. Developers use a shebang to avoid repetitive work: through the shebang line we pass the path of the interpreter that should run the script.
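A minimal sketch:

#!/bin/sh
# the shebang above tells the kernel to run this script with /bin/sh
echo "hello"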

Explain about the Exit command?
Every program, whether on UNIX or Linux, ends at a certain point in time, and successful completion of a program is denoted by the exit code 0. If the program exits with a code other than 0, it means that there has been some problem with the execution or termination of the program. Whenever your script is called by another program, the caller can check this exit code.
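For example:

#!/bin/sh
if [ ! -f /etc/passwd ]; then
    exit 1    # non-zero exit: report failure to the caller
fi
exit 0        # zero exit: successful completion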

Explore about Environment variables?
Environment variables are set at login time, and every shell that starts from this shell gets a copy of the variable. When we export a variable, it changes from a shell variable to an environment variable; these variables are initialized at the start of the shell.
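A small sketch of promoting a shell variable to an environment variable (variable name invented):

MYVAR=hello            # shell variable: not copied to child processes
sh -c 'echo $MYVAR'    # prints an empty line
export MYVAR           # now an environment variable
sh -c 'echo $MYVAR'    # prints "hello"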

How can you tell what shell you are running on a UNIX system?
Answer :
You can do echo $RANDOM. It will return an undefined variable if you are in the C shell, just a return prompt if you are in the Bourne shell, and a 5-digit random number if you are in the Korn shell.

You could also do a ps -l and look for the shell with the highest PID.

What are conditions on which deadlock can occur while swapping the processes?

All processes in the main memory are asleep. All ‘ready-to-run’ processes are swapped out.
There is no space in the swap device for the new incoming process that are swapped out of the main memory. There is no space in the main memory for the new incoming process.

How do you change File Access Permissions?

Answer :

Every file has following attributes:
owner’s user ID ( 16 bit integer )
owner’s group ID ( 16 bit integer )
File access mode word

‘r w x -r w x- r w x’
(user permission-group permission-others permission)

r-read, w-write, x-execute

To change the access mode, we use chmod(filename,mode).
Example:
To change mode of myfile to 'rw-rw-r--' (ie. read, write permission for user – read, write permission for group – only read permission for others) we give the args as:
chmod(myfile,0664) .

Each operation is represented by discrete values
‘r’ is 4
‘w’ is 2
‘x’ is 1

Therefore, for ‘rw’ the value is 6(4+2).

Example 2:
To change mode of myfile to 'rwxr--r--' we give the args as:
chmod(myfile,0744).
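The same changes made from the shell, rather than via the C library call above, would be:

chmod 664 myfile    # rw-rw-r--
chmod 744 myfile    # rwxr--r--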

List the system calls used for process management.
Answer :

System calls Description
fork() To create a new process
exec() To execute a new program in a process
wait() To wait until a created process completes its execution
exit() To exit from a process execution
getpid() To get a process identifier of the current process
getppid() To get parent process identifier
nice() To bias the existing priority of a process
brk() To increase/decrease the data segment size of a process

What is the difference between Swapping and Paging?
Answer:

Swapping:
Whole process is moved from the swap device to the main memory for execution. Process size must be less than or equal to the available main memory. It is easier to implement but adds overhead to the system. Swapping systems do not handle memory as flexibly as paging systems.

Paging:
Only the required memory pages are moved to main memory from the swap device for execution. Process size does not matter. Gives the concept of the virtual memory.

It provides greater flexibility in mapping the virtual address space into the physical memory of the machine. Allows more number of processes to fit in the main memory simultaneously. Allows the greater process size than the available physical memory. Demand paging systems handle the memory more flexibly.

What is the difference between cmp and diff commands?
Answer :

cmp – Compares two files byte by byte and displays the first mismatch
diff – tells the changes to be made to make the files identical

What is meant by the nice value?
Answer :

Nice value is the value that controls (increments or decrements) the priority of the process. This value is returned by the nice() system call. The equation for using the nice value is:
Priority = ("recent CPU usage"/constant) + (base priority) + (nice value)
Only the administrator can supply a nice value that raises priority. The nice() system call works for the running process only; the nice value of one process cannot affect the nice value of another process.

What is a daemon?
Answer :
A daemon is a process that detaches itself from the terminal and runs, disconnected, in the background, waiting for requests and responding to them. It can also be defined as the background process that does not belong to a terminal session. Many system functions are commonly performed by daemons, including the sendmail daemon, which handles mail, and the NNTP daemon, which handles USENET news. Many other daemons may exist. Some of the most common daemons are:
init: Takes over the basic running of the system when the kernel has finished the boot process.
inetd: Responsible for starting network services that do not have their own stand-alone daemons. For example, inetd usually takes care of incoming rlogin, telnet, and ftp connections.
cron: Responsible for running repetitive tasks on a regular schedule.

What are the process states in UNIX?

Answer :
As a process executes it changes state according to its circumstances. Unix processes have the following states:
Running: The process is either running or it is ready to run.
Waiting: The process is waiting for an event or for a resource.
Stopped: The process has been stopped, usually by receiving a signal.
Zombie: The process is dead but has not been removed from the process table.

How are devices represented in UNIX?
All devices are represented by files called special files that are located in/dev directory. Thus, device files and other files are named and accessed in the same way. A ‘regular file’ is just an ordinary data file in the disk. A ‘block special file’ represents a device with characteristics similar to a disk (data transfer in terms of blocks). A ‘character special file’ represents a device with characteristics similar to a keyboard (data transfer is by stream of bits in sequential order).

What is ‘inode’?
All UNIX files have its description stored in a structure called ‘inode’. The inode contains info about the file-size, its location, time of last access, time of last modification, permission and so on. Directories are also represented as files and have an associated inode. In addition to descriptions about the file, the inode contains pointers to the data blocks of the file. If the file is large, inode has indirect pointer to a block of pointers to additional data blocks (this further aggregates for larger files). A block is typically 8k.
Inode consists of the following fields:
• File owner identifier
• File type
• File access permissions
• File access times
• Number of links
• File size
• Location of the file data

Brief about the directory representation in UNIX
A Unix directory is a file containing a correspondence between filenames and inodes. A directory is a special file that the kernel maintains. Only kernel modifies directories, but processes can read directories. The contents of a directory are a list of filename and inode number pairs. When new directories are created, kernel makes two entries named ‘.’ (refers to the directory itself) and ‘..’ (refers to parent directory).
System call for creating directory is mkdir (pathname, mode).

What are the Unix system calls for I/O?
• open(pathname,flag,mode) – open file
• creat(pathname,mode) – create file
• close(filedes) – close an open file
• read(filedes,buffer,bytes) – read data from an open file
• write(filedes,buffer,bytes) – write data to an open file
• lseek(filedes,offset,from) – position an open file
• dup(filedes) – duplicate an existing file descriptor
• dup2(oldfd,newfd) – duplicate to a desired file descriptor
• fcntl(filedes,cmd,arg) – change properties of an open file
• ioctl(filedes,request,arg) – change the behaviour of an open file
The difference between fcntl and ioctl is that the former is intended for any open file, while the latter is for device-specific operations.


What are links and symbolic links in UNIX file system?
A link is a second name (not a file) for a file. Links can be used to assign more than one name to a file, but cannot be used to assign a directory more than one name or link filenames on different computers.
A symbolic link is a file that only contains the name of another file. Operations on the symbolic link are directed to the file it points to. Both of the limitations of links are eliminated in symbolic links.
Commands for linking files are:
Link ln filename1 filename2
Symbolic link ln -s filename1 filename2

What is a FIFO?
FIFOs are otherwise called 'named pipes'. FIFO (first-in-first-out) is a special file which is said to be data transient: once data is read from a named pipe, it cannot be read again, and data can be read only in the order written. It is used in interprocess communication where one process writes to one end of the pipe (the producer) and another reads from the other end (the consumer).
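A minimal producer/consumer sketch with a named pipe (the names are invented):

mkfifo mypipe           # create the FIFO
cat mypipe &            # consumer: blocks until data arrives
echo "hello" > mypipe   # producer: the consumer prints "hello" and exits
rm mypipe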

How do you create special files like named pipes and device files?
The system call mknod creates special files in the following sequence.
kernel assigns new inode,
sets the file type to indicate that the file is a pipe, directory or special file,
If it is a device file, it makes the other entries like major, minor device numbers.
For example:
If the device is a disk, major device number refers to the disk controller and minor device number is the disk.

Discuss the mount and unmount system calls
The privileged mount system call is used to attach a file system to a directory of another file system; the unmount system call detaches a file system. When you mount another file system onto a directory, you are essentially splicing one directory tree onto a branch in another directory tree. The first argument to the mount call is the mount point, that is, a directory in the current file naming system. The second argument is the file system to mount at that point. When you insert a cdrom into your unix system's drive, the file system on the cdrom is automatically mounted from /dev/cdrom onto a directory in your system.
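For example (device and mount point names vary by system):

mount /dev/cdrom /mnt/cdrom    # splice the CD's file system onto /mnt/cdrom
umount /mnt/cdrom              # detach it again (the command is umount, the call is unmount)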

How does the inode map to data block of a file?
Inode has 13 block addresses. The first 10 are direct block addresses of the first 10 data blocks in the file. The 11th address points to a one-level index block. The 12th address points to a two-level (double in-direction) index block. The 13th address points to a three-level(triple in-direction)index block. This provides a very large maximum file size with efficient access to large files, but also small files are accessed directly in one disk read.

What is a shell?
A shell is an interactive user interface to operating system services that allows a user to enter commands as character strings or through a graphical user interface. The shell converts them to system calls to the OS or forks off a process to execute the command. System call results and other information from the OS are presented to the user through the interactive interface. Commonly used shells are sh, csh, ksh etc.

Brief about the initial process sequence while the system boots up.
While booting, a special process called the ‘swapper’ or ‘scheduler’ is created with process ID 0. The swapper manages memory allocation for processes and influences CPU allocation. The swapper in turn creates 3 children:
• the process dispatcher,
• vhand and
• dbflush
with IDs 1,2 and 3 respectively.
This is done by executing the file /etc/init. The process dispatcher gives birth to the shell. Unix keeps track of all the processes in an internal data structure called the process table (the listing command is ps -el).

What are various IDs associated with a process?
Unix identifies each process with a unique integer called the process ID. The process that executes the request for creation of a process is called the ‘parent process’, whose PID is the ‘parent process ID’. Every process is associated with a particular user called the ‘owner’, who has privileges over the process; the owner is the user who executes the process, and is identified by a ‘user ID’. A process also has an ‘effective user ID’, which determines its access privileges for resources like files.
getpid() -process id
getppid() -parent process id
getuid() -user id
geteuid() -effective user id
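A short sketch that prints these IDs for the current process:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("pid  = %d\n", (int) getpid());
    printf("ppid = %d\n", (int) getppid());
    printf("uid  = %d\n", (int) getuid());
    printf("euid = %d\n", (int) geteuid());
    return 0;
}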

Explain fork() system call.
The fork() system call is used to create a new process from an existing process. The new process is called the child process, and the existing process is called the parent. We can tell which is which by checking the return value from fork(): the parent gets the child’s PID returned to it, while the child gets 0.
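A minimal sketch showing how the return value distinguishes the two processes:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        printf("child: fork() returned 0\n");
    else if (pid > 0)
        printf("parent: fork() returned child pid %d\n", (int) pid);
    else
        perror("fork");    /* fork failed: no child was created */
    return 0;
}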

Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork();
    printf("Hello World!");
    return 0;
}
Answer:
Hello World!Hello World!
Explanation:
fork() creates a child that is a duplicate of the parent process. The child begins executing from the fork(), so all the statements after the call to fork() are executed twice (once by the parent process and once by the child). Statements before fork() are executed only by the parent process.

Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork(); fork(); fork();
    printf("Hello World!");
    return 0;
}
Answer:
“Hello World” will be printed 8 times.
Explanation:
It prints 2^n times, where n is the number of calls to fork(): each fork() doubles the number of processes, so three calls give 2³ = 8 processes, each printing once.

List the system calls used for process management:
System call – Description
fork() – To create a new process
exec() – To execute a new program in a process
wait() – To wait until a created process completes its execution
exit() – To exit from a process execution
getpid() – To get the process identifier of the current process
getppid() – To get the parent process identifier
nice() – To bias the existing priority of a process
brk() – To increase/decrease the data segment size of a process

How can you get/set an environment variable from a program?
Getting the value of an environment variable is done using getenv().
Setting the value of an environment variable is done using putenv().
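A minimal sketch (HOME and the hypothetical variable MYVAR are only illustrations):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *home = getenv("HOME");          /* NULL if the variable is not set */
    if (home != NULL)
        printf("HOME=%s\n", home);

    /* putenv keeps a pointer to its argument; a string literal is fine here */
    putenv("MYVAR=hello");
    printf("MYVAR=%s\n", getenv("MYVAR"));
    return 0;
}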

How can a parent and child process communicate?
A parent and child can communicate through any of the normal inter-process communication schemes (pipes, sockets, message queues, shared memory), but also have some special ways to communicate that take advantage of their relationship as a parent and child. One of the most obvious is that the parent can get the exit status of the child.

What is a zombie?
When a program forks and the child finishes before the parent, the kernel still keeps some of its information about the child in case the parent might need it – for example, the parent may need to check the child’s exit status. To get this information, the parent calls wait(). In the interval between the child terminating and the parent calling wait(), the child is said to be a ‘zombie’. (If you do ps, the child will have a Z in its status field to indicate this.)
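A minimal sketch of the parent reaping its child with wait() (the exit status 42 is arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        exit(42);            /* child terminates; it is a zombie until reaped */
    } else if (pid > 0) {
        int status;
        wait(&status);       /* collects the child's exit status, removing the zombie */
        if (WIFEXITED(status))
            printf("child exit status: %d\n", WEXITSTATUS(status));
    }
    return 0;
}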

What are the process states in Unix?
As a process executes it changes state according to its circumstances. Unix processes have the following states:
Running: The process is either running or it is ready to run.
Waiting: The process is waiting for an event or for a resource.
Stopped: The process has been stopped, usually by receiving a signal.
Zombie: The process is dead but has not yet been removed from the process table.

What Happens when you execute a program?
When you execute a program on your UNIX system, the system creates a special environment for that program. This environment contains everything needed for the system to run the program as if no other program were running on the system. Each process has process context, which is everything that is unique about the state of the program you are currently running. Every time you execute a program the UNIX system does a fork, which performs a series of operations to create a process context and then execute your program in that context. The steps include the following:
• Allocate a slot in the process table, a list of currently running programs kept by UNIX.
• Assign a unique process identifier (PID) to the process.
• Copy the context of the parent, the process that requested the spawning of the new process.
• Return the new PID to the parent process. This enables the parent process to examine or control the process directly.
After the fork is complete, UNIX runs your program.

What Happens when you execute a command?
When you enter the ‘ls’ command to look at the contents of your current working directory, UNIX does a series of things to create an environment for ls and then run it:

The shell has UNIX perform a fork. This creates a new process that the shell will use to run the ls program.
The shell has UNIX perform an exec of the ls program. This replaces the shell program and data with the program and data for ls and then starts running that new program.

The ls program is loaded into the new process context, replacing the text and data of the shell. The ls program performs its task, listing the contents of the current directory.

What is a Daemon?
A daemon is a process that detaches itself from the terminal and runs, disconnected, in the background, waiting for requests and responding to them. It can also be defined as the background process that does not belong to a terminal session. Many system functions are commonly performed by daemons, including the sendmail daemon, which handles mail, and the NNTP daemon, which handles USENET news. Many other daemons may exist. Some of the most common daemons are:
• init: Takes over the basic running of the system when the kernel has finished the boot process.
• inetd: Responsible for starting network services that do not have their own stand-alone daemons. For example, inetd usually takes care of incoming rlogin, telnet, and ftp connections.
• cron: Responsible for running repetitive tasks on a regular schedule.

What is ‘ps’ command for?
The ps command prints the process status for some or all of the running processes. The information given includes the process identification number (PID), the amount of CPU time the process has consumed so far, etc.

How would you kill a process?
The kill command takes the PID as one argument; this identifies which process to terminate. The PID of a process can be obtained using the ps command.

What is an advantage of executing a process in background?
The most common reason to put a process in the background is to allow you to do something else interactively without waiting for the process to complete. At the end of the command you add the special background symbol, &. This symbol tells your shell to execute the given command in the background.
Example: cp *.* ../backup & (cp is for copy)

How do you execute one program from within another?
The system calls used to execute another program from within a process are the exec family, for example execlp() and execvp(). The execlp call overlays the existing process image with the new program and runs it; the original program regains control only if an error occurs.
execlp(file_name, arg0, arguments...); // the argument list must end with NULL
A variant of execlp called execvp is used when the number of arguments is not known in advance.
execvp(file_name, argument_array); // the argument array must be terminated by NULL
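A minimal sketch combining fork() and execlp() (running ls -l is only an illustration):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* child: overlay this process with ls; the list ends with NULL */
        execlp("ls", "ls", "-l", (char *) NULL);
        perror("execlp");    /* reached only if exec failed */
        _exit(1);
    }
    wait(NULL);              /* parent waits for ls to finish */
    return 0;
}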

What is IPC? What are the various schemes available?
The term IPC (Inter-Process Communication) describes the various ways by which different processes running on an operating system communicate with each other. The schemes available are as follows:
Pipes:
A one-way communication scheme through which related processes can communicate. The limitation is that the two processes must have a common ancestor (a parent-child relationship). This limitation was removed with the introduction of named pipes (FIFOs).
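A minimal sketch of the parent-child pipe scheme described above:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    pipe(fd);                 /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {
        close(fd[0]);         /* child: producer, writes into the pipe */
        write(fd[1], "hello\n", 6);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);             /* parent: consumer, reads from the pipe */
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("got: %s", buf);
    }
    wait(NULL);
    return 0;
}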

Message Queues :
Message queues can be used between related and unrelated processes running on a machine.

Shared Memory:
This is the fastest of all IPC schemes. The memory to be shared is mapped into the address space of the processes (that are sharing). The speed achieved is attributed to the fact that there is no kernel involvement. But this scheme needs synchronization.

State and explain about features of UNIX?
The UNIX operating system was originally developed in 1969 at AT&T. It is widely used on workstations and servers. It is designed to be multi-tasking, multi-user and portable, and packs several components together.

Explain about sh?
sh is the command-line interpreter and the primary user interface; it forms the programmable command-line interpreter. Even after graphical windowing systems appeared, it retained its programmable characteristics.

Explain about system and user utilities?
There are two kinds of utilities: system and user utilities. System utilities contain administrative tools such as mkfs, fsck, etc., whereas user utilities contain features such as passwd, kill, etc., including commands for working with environment values.

Explain about document formatting?
UNIX systems were primarily used for typesetting and document formatting. Modern UNIX systems use packages such as TeX and Ghostscript, alongside programs such as nroff, tbl, troff, refer, eqn and pic. Document formatting was heavily used because it formed one of the earliest bases of UNIX.

Explain about communication features in UNIX?
Early UNIX systems used the mail and write commands for inter-user communication; they never contained fully embedded inter-user communication features. Systems with BSD included the TCP/IP protocols.

Explain about chmod options filename?
This command changes the read, write and execute permissions on your files. At times you need to change the permissions of a file; for example, a file must be made executable before it can be run as a program.

Explain about gzip filename?
gzip filename is used to compress files so that they take up less space. A file is often reduced to about half its size, though the actual ratio depends on the size and nature of the file. Files compressed with gzip end with .gz.

Explain about refer?
refer was written at Bell Laboratories and is implemented as a troff preprocessor. The program is used for managing bibliographic references and citing them in troff documents. It is offered in most UNIX packages, and works with a text file and a reference file.

Explain about lpr filename?
This command is used to print a file. If you want to print to something other than the default printer, choose a printer with the -P option; for example, for double-sided printing on a printer named valkyr you can use lpr -Pvalkyr -d. This is a very useful command present in many UNIX packages.

Explain about lprm job number?
This command is used to remove documents from the printer queue. The job number or queue number can be found using lpq. The printer name should be specified, but this is not necessary if you are using your default printer.

Brief about the command ff?
This command finds files anywhere on the system. It is used to locate a file when you have forgotten which directory you kept it in but remember its name. The command is not restricted to exact matches; it displays files and documents relevant to the name.

Brief about finger username?
This command gives information about a user: a profile including login information, email address, current login details, etc. It is very useful for administrators. finger also displays information such as the user’s phone number and name when they provide a file called .plan.

Explain about the command elm?
This command lets you send email messages from your system. It is not the only mail program; there are many other mailers that can send mail. The command behaves differently on different machines.

Brief about the command kill PID?
This command terminates the process with the given process ID. It cannot be used across machines in a network. The ID can be obtained with the ps command. kill sends a signal to the process regardless of the state the process is in.
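The command is built on the kill() system call; a minimal sketch of sending SIGTERM to a PID given on the command line:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s pid\n", argv[0]);
        return 1;
    }
    /* kill() sends a signal; SIGTERM politely asks the process to exit */
    if (kill((pid_t) atoi(argv[1]), SIGTERM) == -1)
        perror("kill");
    return 0;
}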

Explain about the command lynx?
This command lets you browse the web from an ordinary terminal: text can be seen, but not pictures. A URL can be given as an argument to the G command. The help section can be reached by pressing H, and Q quits the program.

Brief about the command nn?
This command allows you to read news. The nnl command lets you read local news, while nnr is used to read remote news. Manual and help information is available with many popular packages.

Brief about ftp hostname?
This command lets you download documents and other files from a remote FTP server. First an FTP connection must be configured for the process to begin. Some of the important commands for using FTP are get, put, mget, mput, etc. If you plan to transfer files other than plain ASCII text, it is imperative to use binary mode.

Explain about the case statement.
The case statement compares word to the patterns from top to bottom, and performs the commands associated with the first, and only the first, pattern that matches. The patterns are written using the shell’s pattern matching rules, slightly generalized.

Explain the basic forms of each loop?
There are three loop forms: for, while and until. The for loop is by far the most commonly used; it executes a given set of commands once for each value in a list. The while and until forms use the exit status from a command to control the execution of the commands in the body of the loop.

Describe about awk and sed?
The awk program processes the output to report the changes in an easier-to-understand format. sed’s output is always behind its input by one line: there is always a line of input that has been processed but not printed, and this would otherwise introduce an unwanted delay.

Explain about signal argument?
The sequence of commands is a single argument, so it must almost always be quoted. The signal numbers are small integers that identify the signal; for example, 2 is the signal generated by pressing the DEL key (interrupt), and 1 is generated by hanging up the phone (hangup). Unless a program has taken explicit action to deal with signals, the signal will terminate it.

Explain about exec?
The exec is just for efficiency; the command would run just as well without it. exec is a shell built-in that replaces the process running the shell by the named program, thereby saving one process: the shell that would normally wait for the program to complete. exec could be used at the end of the enhanced cal program when it invokes /usr/bin/cal.

Explain about trap command
The trap command sequence must explicitly invoke exit, or the shell program will continue to execute after the interrupt. The command sequence is read twice: once when the trap is set and once when it is invoked. trap is sometimes used interactively, most often to prevent a program from being killed by the hangup signal.

Explain about sort command?
The sort command has an option -o to overwrite a file:
$ sort file1 -o file2
is equivalent to
$ sort file1 > file2
If file1 and file2 are the same file, redirection with > will truncate the input file before it is sorted. The -o option works correctly because the input is sorted and saved in a temporary file before the output file is created. Many other commands could also use a -o option.

Explain about the command overwrite?
overwrite is committed to changing the original file. If the program providing input to overwrite gets an error, its output will be empty and overwrite will dutifully and reliably destroy the argument file. overwrite could ask for confirmation before replacing the file, but making overwrite interactive would negate its efficiency. overwrite could instead check that its input is not empty.

Explain about kill command?
The kill command only terminates processes specified by process-id. When a specific background process needs to be killed, you must usually run ps to find the process-id and then retype it as an argument to kill. Killing processes is dangerous, and care must be taken to kill the right ones.

Explain about the shell variable IFS?
The shell variable IFS (internal field separator) is a string of characters that separate words in argument lists such as backquote outputs and for statements. Normally IFS contains a blank, a tab and a newline, but we can change it to anything useful, such as just a newline.

Explain about the rules used in overwrite to preserve the arguments to the user’s command?
Some of the rules are
• $* and $@ expand into the arguments and are rescanned; blanks in arguments will result in multiple arguments.
• “$*” is a single word composed of all the arguments to the shell file joined together with spaces.
• “$@” is identical to the arguments received by the shell file: blanks in arguments are preserved, and the result is a list of words identical to the original arguments.

Explain about @@@ lines?
@@@ lines are counted (but not printed), and as long as the count is not greater than the desired version, the editing commands are passed through. Two ed commands are added after those from the history file: $d deletes the single @@@ line that sed left on the current version.

Explain about vis?
vis copies its standard input to its standard output, except that it makes all non-printing characters visible by printing them as \nnn, where nnn is the octal value of the character. vis is invaluable for detecting strange or unwanted characters that may have crept into files.
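A sketch of such a filter (a hypothetical reconstruction, not the published source; this version leaves newlines and tabs untouched):

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    int c;
    while ((c = getchar()) != EOF) {
        if (isprint(c) || c == '\n' || c == '\t')
            putchar(c);
        else
            printf("\\%03o", c);    /* non-printing: show the octal value */
    }
    return 0;
}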

Is the function call to exit at the end of vis necessary?
The call to exit at the end of vis is not necessary to make the program work properly, but it ensures that any caller of the program will see a normal exit status when it completes. An alternate way to return status is to leave main with return 0; the return value from main is the program’s exit status.

Explain about fgets?
fgets(buf, size, fp) fetches the next line of input from fp, up to and including a newline, into buf, and adds a terminating \0; at most size-1 characters are copied. NULL is returned at the end of the file.
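A minimal sketch that copies standard input to standard output line by line:

#include <stdio.h>

int main(void)
{
    char buf[256];

    /* read until fgets returns NULL at end of file */
    while (fgets(buf, sizeof buf, stdin) != NULL)
        fputs(buf, stdout);
    return 0;
}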

Explain about efopen page?
The routine efopen encapsulates a very common operation: try to open a file; if it’s not possible, print an error message and exit. To encourage error messages that identify the offending program, efopen refers to an external string progname containing the name of the program, which is set in main.
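A sketch of such a routine (the fallback filename input.txt is only an illustration):

#include <stdio.h>
#include <stdlib.h>

static const char *progname;    /* the program name, set in main() */

static FILE *efopen(const char *file, const char *mode)
{
    FILE *fp = fopen(file, mode);
    if (fp == NULL) {
        fprintf(stderr, "%s: can't open %s mode %s\n", progname, file, mode);
        exit(1);
    }
    return fp;
}

int main(int argc, char *argv[])
{
    progname = argv[0];
    FILE *fp = efopen(argc > 1 ? argv[1] : "input.txt", "r");
    fclose(fp);
    return 0;
}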

Explain about yacc parser generator?
yacc is a parser generator: a program for converting a grammatical specification of a language into a parser that will parse statements in that language.

What is $*?
$* expands to all the command-line arguments that are passed to the script.

Different types of shells?
Bourne Shell (sh) and Bourne Again Shell (bash)
Korn Shell (ksh)
C Shell (csh)

What is the difference between a wild-card and a regular expression?
A wild-card is expanded by the shell to match filenames (for example, *.c matches all C source files in a directory), whereas a regular expression is used by programs such as grep, sed and awk to match patterns of text. Regular expressions are more expressive, and the syntaxes differ: as a wild-card, * alone matches any string of characters, while in a regular expression * means zero or more repetitions of the preceding element.


Perforce Quick Facts – Perforce Quick Start Guide

perforce-quick-facts

Perforce Quick Facts

Clients
==============================================
P4V: Visual Client – (Included in the P4V Installer)
Provides access to versioned files through a graphical interface and also includes tools for merging and visualizing code evolution.
P4Merge: Visual Merge Tool – (Included in the P4V Installer)
Provides graphical three-way merging and side-by-side file comparisons
P4: Command-Line Client – (Included in the Perforce Server Windows Installer)
Provides access to versioned files and Perforce operations from the command line.
P4Web: Web Client – (Included in the P4Web Installer)
Provides convenient access to versioned files through popular web browsers
Server
================================================
P4D: Server – (Included in the Perforce Server Windows Installer)
Stores and manages access to versioned files, tracks user operations and records all activity in a centralized database.
P4P: Proxy Server – (Included in the Perforce Server Windows Installer)
A self-maintaining proxy server that caches versioned files remotely on distributed networks.
Plug-ins & Integrations
=========================================
P4WSAD: Plug-in for Eclipse and WebSphere Studio
Access Perforce from within the Eclipse IDE and the Rational/WebSphere Studio WorkBench family of products
P4SCC: SCC Plug-in – (Included in the P4V Installer)
Enables you to perform Perforce operations from within IDEs that support the Microsoft SCC API including Visual Studio.
P4EXP: Plug-in for Windows Explorer – (Included in the P4V Installer)
Allows Windows users direct access to Perforce.
P4DTG: Defect Tracking Gateway – (Included in the P4DTG Installer)
Allows information to be shared between Perforce’s basic defect tracking system and external defect tracking systems.
P4GT: Plug-in for Graphical Tools
Provides seamless access to version control for files from within Adobe Photoshop, SoftImage XSI, Autodesk’s 3ds max, and Maya
P4OFC: Plug-in for Microsoft Office
Allows documents to be easily stored and managed in Perforce directly from Microsoft Word, Excel, PowerPoint and Project.

Tools & Utilities
=============================================
P4Report: Reporting System
Supports leading tools such as Crystal Reports, Microsoft Access, and Microsoft Excel, or any reporting tool that interfaces with an ODBC data source.
P4Thumb: Thumbnail Generator
Creates thumbnails of graphics files managed by Perforce and stores the thumbnails in the server for presentation in P4V.
P4FTP: FTP Plug-in
Allows FTP clients like Dreamweaver, Netscape, and Internet Explorer to access files in Perforce depots.
Links to Download: http://www.perforce.com/perforce/downloads/platform.html


Maven Interview Questions and Answers – Maven Job Interview Kit

maven-interview-questions-answers

Maven Interview Questions and Answers

Contributed by Rajesh Kumar with the help of Google Search and www.scmGalaxy.com
Is there a way to use the current date in the POM?
Take a look at the buildnumber plugin. It can be used to generate a build date each time I do a build, as follows:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>maven-buildnumber-plugin</artifactId>
<version>0.9.4</version>
<configuration>
<format>{0,date,yyyy-MM-dd HH:mm:ss}</format>
<items>
<item>timestamp</item>
</items>
<doCheck>false</doCheck>
<doUpdate>false</doUpdate>
</configuration>
<executions>
<execution>
<phase>validate</phase>
<goals>
<goal>create</goal>
</goals>
</execution>
</executions>
</plugin>

pom.xml or settings.xml? What is the best practice configuration usage for these files?
The best practice guideline between settings.xml and pom.xml is that configurations in settings.xml must be specific to the current user and that pom.xml configurations are specific to the project.
For example, <repositories> in pom.xml would tell all users of the project to use the <repositories> specified in the pom.xml. However, some users may prefer to use a mirror instead, so they’ll put <mirrors> in their settings.xml so they can choose a faster repository server.
so there you go:
settings.xml -> user scope
pom.xml -> project scope

How do I indicate array types in a MOJO configuration?

<tags>
    <tag>value1</tag>
    <tag>value2</tag>
  </tags>

How should I point a path for maven 2 to use a certain version of JDK when I have different versions of JDK installed on my PC and my JAVA_HOME already set?
If you don’t want to change your system JAVA_HOME, set it in maven script instead.
How do I setup the classpath of my antrun plugin to use the classpath from maven?
The maven classpaths are available as ant references when running your ant script. The ant reference names and some examples can be found here: maven-antrun-plugin
Is it possible to use HashMap as configurable parameter in a plugin? How do I configure that in pom.xml?
Yes. Its possible to use a HashMap field as a parameter in your plugin. To use it, your pom configuration should look like this:

<myMap>
      <yourkey>yourvalue</yourkey>
      .....
   </myMap>

How do I filter which classes should be put inside the packaged jar?
All compiled classes are always put into the packaged jar. However, you can configure the compiler plugin to exclude compiling some of the java sources using the compiler parameter excludes as follows:

<project>
   ...
   <build>
     ...
     <plugins>
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-compiler-plugin</artifactId>
         <configuration>
           <excludes>
             <exclude>**/NotNeeded*.java</exclude>
           </excludes>
         </configuration>
       </plugin>
     </plugins>
     ...
   </build>
  </project>

How can I change the default location of the generated jar when I command “mvn package”?
By default, the location of the generated jar is in ${project.build.directory} or in your target directory.
We can change this by configuring the outputDirectory of maven-jar-plugin.

<plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-jar-plugin</artifactId>
              <configuration>
                  <outputDirectory>${project.build.directory}/<!-- directory --></outputDirectory>
              </configuration>
          </plugin>

How does maven 2 implement reproducibility?

  • Add the exact versions of plugins into your pluginDependencies (make use of the release plugin)
  • Make use of ibiblio for your libraries. This should always be the case for jars. (The group is working on stabilising metadata and techniques for locking it down even if it changes. An internal repository mirror that doesn’t fetch updates (only new) is recommended for true reproducibility.)

Why there are no dependency properties in Maven 2?
They were removed because they aren’t reliable in a transitive environment. It implies that the dependency knows something about the environment of the dependee, which is back to front. In most cases, granted, the value for war bundle will be the same for a particular dependency – but that relies on the dependency specifying it.
In the end, we give control to the actual POM doing the building, trying to use sensible defaults that minimise what needs to be specified, and allowing the use of artifact filters in the configuration of plugins.

What does aggregator mean in mojo?
When a Mojo has an @aggregator expression, it means that it can only build the parent project of your multi-module project, the one which has a packaging of pom. It can also give you values for the expression ${reactorProjects}, where reactorProjects are the MavenProject references to the parent POM’s modules.
Where is the plugin-registry.xml?
From the settings.xml, you may enable it by setting <usePluginRegistry/> to true
and the file will be in ~/.m2/plugin-registry.xml
How do I create a command line parameter (i.e., -Dname=value ) in my mojo?
In your mojo, put “expression=${<exp>}” in your parameter field

/**
   * @parameter expression="${expression.name}"
   */
  private String exp;

You can now pass the parameter value on the command line:
“mvn -Dexpression.name=value install”
How do I convert my <reports> from Maven 1 to Maven 2?
In m1, we declare reports in the pom like this:

<project>
    ...
    <reports>
      <report>maven-checkstyle-plugin</report>
      <report>maven-pmd-plugin</report>
    </reports>
  </project>

In m2, the <reports> tag is replaced with <reporting>

<project>
    ...
    <reporting>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <configuration>
             <!-- put your config here -->
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-pmd-plugin</artifactId>
          <configuration>
             <!-- put your config here -->
          </configuration>
        </plugin>
      </plugins>
    </reporting>
  </project>

What does the “You cannot have two plugin executions with the same (or missing) elements” message mean?
It means that you have executed a plugin multiple times with the same <id>. Provide each <execution> with a unique <id> then it would be ok.
How do I add my generated sources to the compile path of Maven, when using modello?
Modello generates the sources in the generate-sources phase and automatically adds the source directory for compilation in Maven, so you don’t have to copy the generated sources. You have to declare the modello-plugin in the build section of your plugin for source generation (that way the sources are generated each time).
What is Maven’s order of inheritance?

  1. parent pom
  2. project pom
  3. settings
  4. CLI parameters

where the last overrides the previous.
How do I execute the assembly plugin with different configurations?
Add this to your pom,

<build>
    ...
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-assembly-plugin</artifactId>
        <executions>
          <execution>
      <id>1</id>
            <phase>install</phase>
        <goals>
             <goal>assembly</goal>
       </goals>
          <configuration>
               <descriptor>src/main/descriptors/bin.xml</descriptor>
               <finalName>${project.build.finalName}-bin</finalName>
       </configuration>
          </execution>

          <execution>
            <id>2</id>
            <phase>install</phase>
            <configuration>
              <descriptor>src/main/descriptors/src.xml</descriptor>
              <finalName>${project.build.finalName}-src</finalName>
            </configuration>
            <goals>
              <goal>assembly</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

and run mvn install, this will execute the assembly plugin twice with different config.
How do I configure the equivalent of maven.war.src of war plugin in Maven 2.0?

<build>
    ...
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
           <warSourceDirectory><!-- put the path of the directory --></warSourceDirectory>
        </configuration>
      </plugin>
    </plugins>
    ...
  </build>

How do I add main class in a generated jar’s manifest?
Configure the maven-jar-plugin and add your main class.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <configuration>
      <archive>
        <manifest>
     <mainClass>com.mycompany.app.App</mainClass>
        </manifest>
      </archive>
    </configuration>
  </plugin>

What does the FATAL ERROR with the message “Class org.apache.commons.logging.impl.Jdk14Logger does not implement Log” when using the maven-checkstyle-plugin mean?
Checkstyle uses commons-logging, which has classloader problems when initialized within a Maven plugin’s container. This results in the above message – if you run with ‘-e’, you’ll see something like the following:

Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Jdk14Logger does not implement Log

buried deep in the stacktrace.
The only workaround we currently have for this problem is to include another commons-logging Log implementation in the plugin itself. So, you can solve the problem by adding the following to your plugin declaration in your POM:

<project>
    ...
    <build>
      ...
      <plugins>
        ...
        <plugin>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <dependencies>
            <dependency>
              <groupId>log4j</groupId>
              <artifactId>log4j</artifactId>
              <version>1.2.12</version>
            </dependency>
          </dependencies>
        </plugin>
      </plugins>
    </build>
    ...
    <reporting>
      ...
      <plugins>
        <!-- your checkstyle report is registered here, according to Maven documentation -->
      </plugins>
    </reporting>
  </project>

While this may seem a counter-intuitive way of configuring a report, it’s important to remember that Maven plugins can have a mix of reports and normal mojos. When a POM has to configure extra dependencies for a plugin, it should do so in the normal plugins section.
We will probably try to fix this problem before the next release of the checkstyle plugin.
UPDATE: This problem has been fixed in the SVN trunk version of the checkstyle plugin, which should be released very soon.
How do I determine the stale resources in a Mojo to avoid reprocessing them?
This can be done using the following piece of code:

// Imports needed
  import java.io.File;
  import java.util.Collections;
  import java.util.Set;
  import org.codehaus.plexus.compiler.util.scan.InclusionScanException;
  import org.codehaus.plexus.compiler.util.scan.StaleSourceScanner;
  import org.codehaus.plexus.compiler.util.scan.mapping.SuffixMapping;

// At some point of your code
StaleSourceScanner scanner = new StaleSourceScanner( 0, Collections.singleton( "**/*.xml" ), Collections.EMPTY_SET );
scanner.addSourceMapping( new SuffixMapping( ".xml", ".html" ) );
Set<File> staleFiles = (Set<File>) scanner.getIncludedSources( this.sourceDirectory, this.targetDirectory );

The second parameter to the StaleSourceScanner is the set of includes, while the third parameter is the set of excludes. You must add a source mapping to the scanner (second line). In this case we’re telling the scanner what is the extension of the result file (.html) for each source file extension (.xml). Finally we get the stale files as a Set<File> calling the getIncludedSources method, passing as parameters the source and target directories (of type File). The Maven API doesn’t support generics, but you may cast it that way if you’re using them.
In order to use this API you must include the following dependency in your pom:

<dependencies>
    <dependency>
      <groupId>org.codehaus.plexus</groupId>
      <artifactId>plexus-compiler-api</artifactId>
      <version>1.5.1</version>
    </dependency>
  </dependencies>

Is there a property file for plug-in configuration in Maven 2.0?
No. Maven 2.x no longer supports plug-in configuration via properties files. Instead, in Maven 2.0 you can configure plug-ins directly from the command line using the -D argument, or from the plug-in’s POM using the <configuration> element.
How do I determine which POM contains missing transitive dependency?
run “mvn -X”
How do I integrate static (x) html into my Maven site?
You can integrate your static pages in this several steps,

  • Put your static pages in the resources directory, ${basedir}/src/site/resources.
  • Create your site.xml and put it in ${basedir}/src/site. An example below:
<project name="Maven War Plugin">
    <bannerLeft>
      <name>Maven War Plugin</name>
      <src>http://maven.apache.org/images/apache-maven-project.png</src>
      <href>http://maven.apache.org/</href>
    </bannerLeft>
    <bannerRight>
      <src>http://maven.apache.org/images/maven-small.gif</src>
    </bannerRight>
    <body>
      <links>
        <item name="Maven 2" href="http://maven.apache.org/maven2/"/>
      </links>

      <menu name="Overview">
        <item name="Introduction" href="introduction.html"/>
        <item name="How to Use" href="howto.html"/>
      </menu>
      ${reports}
    </body>
</project>

Link the static pages by modifying the <menu> section: create items and map them to the filenames of the static pages.

<menu name="Overview">
    <item name="Introduction" href="introduction.html"/>
    <item name="How to Use" href="howto.html"/>
    <item name="<put-name-here>" href="<filename-of-the-static-page>"/>
  </menu>

How do I run an ant task twice, against two different phases?
You can specify multiple execution elements under the executions tag, giving each a different id and binding them at different phases.

<plugin>
         <artifactId>maven-antrun-plugin</artifactId>
         <executions>
           <execution>
             <id>one</id>
             <phase>generate-sources</phase>
             <configuration>
               <tasks>
                 <echo message="generate-sources!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"/>
               </tasks>
             </configuration>
             <goals>
               <goal>run</goal>
             </goals>
           </execution>

           <execution>
             <id>two</id>
             <phase>package</phase>
             <configuration>
               <tasks>
                 <echo message="package!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"/>
               </tasks>
             </configuration>
             <goals>
               <goal>run</goal>
             </goals>
           </execution>
         </executions>
</plugin>
</plugin>

Can a profile inherit the configuration of a “sibling” profile?
No. Profiles merge when their IDs match – so you can inherit them from a parent POM (but you can’t inherit profiles from the same POM).
How do I invoke the “maven dist” function from Maven 1.0, in Maven 2.0?
mvn assembly:assembly
See the Assembly Plugin documentation for more details.
How do I specify which output folders the Eclipse plugin puts into the .classpath file?

<build>
  ...
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-eclipse-plugin</artifactId>
          <configuration>
            <outputDirectory>target-eclipse</outputDirectory>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  ...
  </build>

What is a Mojo?
A mojo is a Maven plain Old Java Object. Each mojo is an executable goal in Maven, and a plugin is a distribution of one or more related mojos.
How to produce execution debug output or error messages?
You could call Maven with -X parameter or -e parameter. For more information, run:

mvn --help

Maven compiles my test classes but doesn’t run them?
Tests are run by the surefire plugin. The surefire plugin can be configured to run only certain test classes, and you may have unintentionally done so by specifying a value for ${test}. Check your settings.xml and pom.xml for a property named “test”, which would look like this:

  ...
    <properties>
      <property>
        <name>test</name>
        <value>some-value</value>
      </property>
   </properties>
    ...

Or

  ...
    <properties>
      <test>some-value</test>
   </properties>
    ...

How do I include tools.jar in my dependencies?
The following code includes tools.jar on Sun JDKs (it is already included in the runtime for Mac OS X and some free JDKs).

...
    <profiles>
      <profile>
        <id>default-tools.jar</id>
        <activation>
          <property>
            <name>java.vendor</name>
            <value>Sun Microsystems Inc.</value>
         </property>
       </activation>
        <dependencies>
          <dependency>
            <groupId>com.sun</groupId>
            <artifactId>tools</artifactId>
            <version>1.4.2</version>
            <scope>system</scope>
            <systemPath>${java.home}/../lib/tools.jar</systemPath>
         </dependency>
       </dependencies>
     </profile>
   </profiles>
    ...

I have a jar that I want to put into my local repository. How can I copy it in?
If you understand the layout of the maven repository, you can copy the jar directly into where it is meant to go. Maven will find this file next time it is run.
If you are not confident about the layout of the maven repository, then you can adapt the following command to load in your jar file, all on one line.

mvn install:install-file
    -Dfile=<path-to-file>
    -DgroupId=<group-id>
    -DartifactId=<artifact-id>
    -Dversion=<version>
    -Dpackaging=<packaging>
    -DgeneratePom=true

Where: <path-to-file>  the path to the file to load
<group-id>      the group that the file should be registered under
<artifact-id>   the artifact name for the file
<version>       the version of the file
<packaging>     the packaging of the file e.g. jar

This should load in the file into the maven repository, renaming it as needed.
How do I set up Maven so it will compile with a target and source JVM of my choice?
You must configure the source and target parameters in your pom. For example, to set the source and target JVM to 1.5, you should have in your pom :

...
    <build>
    ...
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.0.2</version>
          <configuration>
            <source>1.5</source>
            <target>1.5</target>
         </configuration>
       </plugin>
     </plugins>
    ...
   </build>
    ...

How can I use Ant tasks in Maven 2?

There are currently 2 alternatives: use the maven-antrun-plugin to run Ant tasks within a build phase (see the antrun examples above), or write your plugin’s mojos in Ant.

Maven 2.0 Eclipse Plug-in

Plugins are great at simplifying the life of programmers; they reduce the repetitive tasks involved in programming. In this article our experts will show you the steps required to download and install the Maven plugin in your Eclipse IDE.
Why Maven with Eclipse
Eclipse is an industry leader in the IDE market and is used very extensively in development projects all around the world. Similarly, Maven is a high-level, intelligent project management, build and deployment tool provided by the Apache Software Foundation. Maven deals with application development lifecycle management.

Maven–Eclipse Integration makes the development, testing, packaging and deployment process easy and fast. Maven Integration for Eclipse provides a tight integration for Maven into the IDE and avails the following features:
• It helps to launch Maven builds from within Eclipse
• It provides dependency management for the Eclipse build path based on Maven’s pom.xml
• It resolves Maven dependencies from the Eclipse workspace without installing them to the local Maven repository
• It automatically downloads the required dependencies from remote Maven repositories
• It provides wizards for creating new Maven projects and pom.xml files, and for enabling Maven support on a plain Java project
• It helps to search quickly for dependencies in remote Maven repositories
• It provides quick fixes in the Java editor for looking up required dependencies/jars by class or package name
What do you Need?
1. Get the Eclipse Development Environment :
In this tutorial we are using the eclipse-SDK-3.3-win32, which can be downloaded from http://www.eclipse.org/downloads/
2. Get Maven-eclipse-plugin-plugin :
It is available at http://mevenide.codehaus.org/maven-eclipse-plugin-plugin/

Download and Install Eclipse
First download and install Eclipse on your development machine, then proceed with the installation of the Maven-Eclipse plugin.

A Maven 2.0 Repository: An Introduction

Maven repository Types:

  • Public remote external repository: This public external repository exists at ibiblio.org and maven synchronizes with this repository.
  • Private remote internal repository: We set up this repository and make changes in the maven’s pom.xml or settings.xml file to use this repository.
  • Local repository: This repository is maintained by the developer and stays on the developer’s machine. It is synchronized with the maven repositories defined in the settings.xml file, which lives in the .m2 directory at its standard location, e.g. C:\Documents and Settings\Administrator. If no private internal repository is set up and listed in pom.xml or settings.xml, the local repository on the developer’s machine is synchronized with the public maven repository at ibiblio.org.

Advantages of having an internal private repository :

  • Reduces the likelihood of version conflicts.
  • Requires less manual intervention for the first-time build.
  • Rather than having several separate independent libraries it provides a single central reference repository for all the dependent software libraries.
  • Projects build more quickly with an internal repository, as maven artifacts are retrieved from an intranet server rather than from a server on the internet.

Use cases for maven repository:

  • It creates two sub-repositories inside the internal repository.
  • ibiblio-cache: downloads artifacts from ibiblio and makes them available internally. This synchronizes with the external repository at ibiblio.
  • internal-maven-repository: used for internal artifacts of an organization. It contains artifacts unique to the organization and is not synchronized with any repository.
  • Alternatively, another sub-repository can be created for artifacts that are not at ibiblio. This does not synchronize with any external repository.
  • Browse the remote repository by using a web browser.
  • Search the artifacts in the repository.
  • Download code from version control and make changes in settings.xml to point to the internal repository and build without any manual intervention.
  • Install new version of the artifacts.
  • Import artifacts into the repository in bulk.
  • Export artifacts from the repository in bulk.
  • Setup the task to backup the repository automatically.

Criteria for choosing a maven repository implementation: ideally, a maven repository implementation should be:

  • Free and open source
  • Provide admin tools
  • Easy to setup and use
  • Provide backup facility
  • Able to create, edit and delete sub repositories.
  • Anonymous read only access and also access control facility.
  • Deployable in any standard web server such as Tomcat or Apache.
  • Issue tracker, forums and other independent source of information.
  • Active community developers make the product enhanced and bugs fixed.
  • Bulk import/export facility to move groups of artifacts into the repository and out of the repository.
  • Provide a repository browser: should be a web browser instead of the desktop application.

Shifting from Apache Ant to Maven

Maven is an entirely different creature from Ant. Ant is simply a toolbox, whereas Maven is about the application of patterns in order to achieve an infrastructure which displays the characteristics of visibility, reusability, maintainability, and comprehensibility. It is wrong to consider Maven merely a build tool and a replacement for Ant.
Ant Vs Maven
There is nothing that Maven does that Ant cannot do. Ant gives the developer ultimate power and flexibility in build and deployment. But Maven adds a layer of abstraction above Ant (and uses Jelly). Maven can be used to build any Java application. Today JEE build and deployment has become quite standardized. Every enterprise has some variations, but in general it is all the same: deploying EARs, WARs, and EJB-JARs. Maven captures this intelligence and lets you achieve build and deployment in about 5-6 lines of Maven script, compared to dozens of lines in an Ant build script.
Ant lets you do any variations you want, but requires a lot of scripting. Maven on the other hand mandates certain directories and file names, but it provides plugins to make life easier. The restriction imposed by Maven is that only one artifact is generated per project (A project in Maven terminology is a folder with a project.xml file in it). A Maven project can have sub projects. Each sub project can build its own artifact. The topmost project can aggregate the artifacts into a larger one. This is synonymous to jars and wars put together to form an EAR. Maven also provides inheritance in projects.
Maven : Stealing the show
Maven simplifies builds enormously by imposing certain fixed file names and acceptable restrictions, like one artifact per project. Build scripts treat artifacts as files on your computer; Maven hides the fact that everything is a file and lets you think in terms of artifacts: an artifact can depend on a particular version of a third-party library residing in a shared remote (or local) enterprise repository, and you can publish your own library into the repository for others to use. Hence there are no more classpath issues and no more mismatched libraries. It also gives you the power to embed Ant scripts within Maven scripts if absolutely essential.

Maven 2.0: Features

Maven is a high-level, intelligent project management, build and deployment tool provided by the Apache Software Foundation. Maven deals with application development lifecycle management. Maven was originally developed to manage and to minimize the complexities of building the Jakarta Turbine project, but its powerful capabilities have made it a core entity of the Apache Software Foundation projects. For a long time there was a need for a standardized project development lifecycle management system, and Maven has emerged as a perfect option that meets that need. Maven has become the de facto build system in many open source initiatives and is rapidly being adopted by many software development organizations.
Maven was born of the very practical desire to make several projects at Apache work in a consistent manner, so that developers could freely move between these projects, knowing clearly how they all worked by understanding how one of them worked.

If a developer spent time understanding how one project was built, it was intended that they would not have to go through this process again when they moved on to the next project. The same idea extends to testing, generating documentation, generating metrics and reports, and deploying. All projects share enough of the same characteristics, an understanding of which Maven tries to harness in its general approach to project management.
On a very high level all projects need to be built, tested, packaged, documented and deployed. There is infinite variation in each of these steps, but the variations still occur within the confines of a well-defined path, and it is this path that Maven attempts to present to everyone in a clear way. The easiest way to make a path clear is to provide people with a set of patterns that can be shared by anyone involved in a project.

The key benefit of this approach is that developers can follow one consistent build lifecycle management process without having to reinvent such processes again. Ultimately this makes developers more productive, agile, disciplined, and focused on the work at hand rather than spending time and effort doing grunt work understanding, developing, and configuring yet another non-standard build system.
Maven: Features

  1. Portable: Maven is portable in nature because:
    • build configurations using maven are portable to another machine, developer or architecture without any effort;
    • Maven is non-trivial because all file references are relative, the environment is completely controlled, and it is independent of any specific file system.
  2. Technology: Maven is a simple core concept that is activated through an IoC container (Plexus). Everything in maven is done through plugins, and every plugin works in isolation (its own ClassLoader). Plugins are downloaded from a plugin repository on demand.

Maven’s Objectives:
The primary goal of maven is to allow developers to comprehend the complete state of a project in the shortest time, by means of an easy build process, a uniform building system, quality project management information (such as change logs, cross-references, mailing lists, dependencies, unit test reports, test coverage reports and many more), guidelines for best practices, and transparent migration to new features. To achieve this goal Maven attempts to deal with several areas:

  • It makes the build process easy
  • Provides a uniform building system
  • Provides quality related project information
  • Provides guidelines for best-practice development
  • Allows transparent migration to new features.

Introduction to Maven 2.0

Maven2 is an Open Source build tool that revolutionized the way projects are built. Unlike build systems such as make and Ant, it is not a language for combining build components; it is a build lifecycle framework. A development team does not require much time to automate a project’s build infrastructure, since maven uses a standard directory layout and a default build lifecycle. Different development teams under a common roof can standardize the way they work in a very short time, resulting in a more stable automated build infrastructure. Moreover, since most of the setups are simple and immediately reusable in all projects using maven, many important reports, checks, and build and test automations can be added to all projects; this was not possible without maven, because of the heavy cost of setting up every project.

Maven 2.0 was first released on 19 October 2005, and it is not backward compatible with the plugins and projects of Maven 1. In December 2005 a lot of plugins were added to maven, but not all plugins that exist for Maven 1 have been ported yet. Maven 2 is expected to stabilize quickly with most of the open source technologies. Developers are introduced to maven as the core build system for Java development, both in a single project and in a multi-project environment. With a little knowledge of maven, developers can set up a new project and become familiar with the default maven project structure. Developers can easily configure maven and its plugins for a project, enable common settings for maven and its plugins over multiple projects, generate, distribute and deploy products and reports with maven, and use repositories to set up a company repository. Developers can also learn about the most important plugins: how to install, configure and use them, and how to evaluate other plugins for integration into their work environment.

Maven is the standard way to build projects, and it also provides various other capabilities, such as a clear definition of the project, ways to share JARs across projects, and an easy way to publish project information.

Originally maven was designed to simplify the build processes in the Jakarta Turbine project, where several projects each had their own slightly different Ant build files and JARs were checked into CVS. The Apache group wanted a tool that could build the projects, publish project information, define what a project consists of, and share JARs across several projects. The result of all these requirements was the maven tool, which builds and manages Java-based projects.

Why is maven a great build tool? How does it differ from other build tools?
Tell me more about Profiles and Nodes in Maven?
Tell me more about local repositories?
How did you configured local repositories in different environment (Development, Testing , Production etc)?
What are Transitive Dependencies in Maven 2?
Did you write plugins in maven? if so what are they?
Why is a matrix report required during a new release? How does this benefit the QA team?
What are pre-scripts and post-scripts in maven? Illustrate with an example?
What are the checklists for artifacts ? and what are the checklists for source code artifact?
Tell me the experience about Static Analysis Code?

Reference:
http://www.javabeat.net


Know About scmGalaxy – Introduction


scmGalaxy is a community initiative based on software configuration management that helps community members optimize their software development process and Software Development Life Cycle, adopt Agile methodologies, and improve productivity across all aspects of Java development, including Build Scripts, Testing, Issue Tracking, Continuous Integration, Code Quality and more!

The scmGalaxy group also helps organisations optimize their software development process. We provide consulting, training and mentoring services in Agile development practices such as Version Management, Continuous Integration, Build Management, Test-Driven Development, Acceptance-Test-Driven Development, Build Automation, Code Quality Practices and Automated Testing.

We provide job-oriented training in the areas of configuration management and build and release engineering. Candidates with an engineering or software background who are looking either to start or to change their career to build and release engineering will benefit most from this training. Instructor-led courses are offered in India, in Bangalore, Delhi, Pune, Mumbai and Hyderabad. The instructor is an expert in software configuration management and build and release engineering with more than 15 years of industry experience in India. The goal of the course is to equip attendees with all the concepts of build and release engineering.

Course Objectives
The course brings your team up to speed with agile development. We can also run the course, covering continuous integration through to automated continuous delivery, within your premises.

Course Schedule
This course is an intensive one- or two-day workshop with a mixture of teaching and lab exercises. Currently, this course is offered exclusively as an on-site course. Please contact us for more details.

Audience
This is a hands-on, practical course designed to teach specialised skills for real-world development situations. It is thus primarily aimed at an audience of SCM engineers, build/release engineers and developers.

Approach
The course is modular and flexible, depending on specific student needs and requests. Through our trainings, you benefit from the wide experience and architectural expertise of our team. We bring that experience to you in a highly interactive, intensely hands-on setting.

Assumptions
We assume participants have a reasonable understanding of development in at least one programming language, as well as a basic understanding of the Software Development Life Cycle.

Lab Work
All our courses are above all practical in nature. We believe that the best way to learn is by doing. So the course contains approximately 80% lab work.

Learning Resources
Each registrant will receive a copy of the student notes and lab solutions, a certificate of completion, and a CD containing all the tools covered in the course.

Contact Us
This course is provided on-site and can be tailored to your particular requirements. If you would like our trainings delivered at your premises, or if you need any additional information, please contact us by email at info@scmGalaxy.com.

Authors and Contributors

Rajesh Kumar, India (Bangalore), has over 8 years of extensive experience in the SCM domain, with in-depth knowledge of Configuration Management, Build Management, Packaging, Release Management and Application Maintenance. He has expertise in a wide range of CM tools (Perforce, MKS, CVS, SVN (Subversion) and VSS), packaging tools (Wise Studio/InstallAnywhere), build management tools (Ant, CruiseControl, AnthillPro, Maven, Bamboo, Hudson and OpenMake) and quality-related tools (Sonar, PMD, CheckStyle, Clover and FindBugs). He writes blogs on http://www.scmGalaxy.com
His primary areas of involvement are object-oriented development, agile methods, enterprise application architecture, workflow management systems, and automated build and release, continuous integration, build automation, test-driven development and code quality, using open source tools such as Maven, Hudson, and Nexus.

Praveen Marakkoor is currently working as a Sr. SCM Engineer for Blue Shield of California in San Francisco, CA, USA. Previously he held development and SCM positions at Kyocera Wireless, 24 Hour Fitness, Experience, Macys, Cisco and Intuit. He has extensive experience in software configuration management and release engineering and their tools, such as Maven, Ant, Perforce, SVN, CVS, ClearCase, PVCS, Git, Hudson and Bamboo, along with Perl and shell scripting in heterogeneous environments and QA engineering.
Praveen holds a Bachelor's degree in Computer Science and Engineering from Visweswaraya Technological University, Belgaum, Karnataka, India. He is highly detail-oriented, has strong leadership skills and is a team player. He writes blogs on http://www.scmGalaxy.com

Tushar Patil is currently working with S1 Services, Pune, as a Sr. SCM Engineer. He has around 8 years of extensive experience in configuration management and release engineering, with expertise in tools such as SVN, Ant, Maven, IzPack, Perl, Clover, shell scripting, Hudson and Bamboo. He also has very good experience in OS installations on VMware ESX servers (AIX, Red Hat, Solaris) and in the installation, configuration and troubleshooting of software such as WebSphere ND, JBoss, DB2 and Oracle on AIX, Solaris and Red Hat operating systems.

Brajesh Kumar Rai has over 7 years of extensive experience in Configuration Management, Build Management, Release Management and Application Maintenance, with expertise in a wide range of CM tools (Perforce, ClearCase, CVS and SVN), packaging tools (InstallAnywhere) and build management tools (Ant, Maven, Make, CruiseControl, AnthillPro, Electric Cloud, Buildbot) on multisite projects. Apart from work, he is very adventurous, loves any kind of crazy adventure, is an expert in rock climbing and rafting, and loves music, ghazals preferred. He writes blogs on http://www.scmGalaxy.com

Michael Feighner, USA (San Francisco), has expertise in C/C++, CMMI, change management, ClearCase, ClearQuest, configuration/data management, database administration, delivery, DOORS, e-commerce, FORTRAN, GUIs, meeting facilitation, MS Excel, MS Word, MS PowerPoint, OOD, Perl scripting, process engineering, programming, quality control, requirements, scheduling, shell scripting, software development, software installation, software testing, SQL, Unix, Visual Basic, Visual Studio, VxWorks and web site production.

Praveen Thakur, India (Pune), has expertise in InstallShield and the application packaging domain. He writes blogs on http://www.scmGalaxy.com
