List of Default ports used in OpenStack components

OpenStack service | Default ports | Port type

Block Storage (cinder) | 8776 | publicurl and adminurl
Compute (nova) endpoints | 8774 | publicurl and adminurl
Compute API (nova-api) | 8773, 8775 |
Compute ports for access to virtual machine consoles | 5900-5999 |
Compute VNC proxy for browsers (openstack-nova-novncproxy) | 6080 |
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy) | 6081 |
Proxy port for HTML5 console used by Compute service | 6082 |
Identity service (keystone) administrative endpoint | 35357 | adminurl
Identity service public endpoint | 5000 | publicurl
Image Service (glance) API | 9292 | publicurl and adminurl
Image Service registry | 9191 |
Networking (neutron) | 9696 | publicurl and adminurl
Object Storage (swift) | 6000, 6001, 6002 |
Orchestration (heat) endpoint | 8004 | publicurl and adminurl
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn) | 8000 |
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch) | 8003 |
Telemetry (ceilometer) | 8777 | publicurl and adminurl

This table lists the ports that other OpenStack components use:

 

Service | Default port | Used by

HTTP | 80 | OpenStack dashboard (Horizon) when it is not configured to use secure access.
HTTP alternate | 8080 | OpenStack Object Storage (swift) service.
HTTPS | 443 | Any OpenStack service that is enabled for SSL, including the dashboard when configured for secure access.
rsync | 873 | OpenStack Object Storage. Required.
iSCSI target | 3260 | OpenStack Block Storage. Required.
MySQL database service | 3306 | Most OpenStack components.
Message Broker (AMQP traffic) | 5672 | OpenStack Block Storage, Networking, Orchestration, and Compute.
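
For example, on a RHEL/CentOS node running firewalld, opening two of the ports above might look like this (a sketch only; open exactly the ports for the services that the node actually hosts):

$ firewall-cmd --permanent --add-port=8774/tcp   # Compute (nova) endpoints
$ firewall-cmd --permanent --add-port=6080/tcp   # noVNC proxy for browsers
$ firewall-cmd --reload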

On some deployments, the default port used by a service may fall within the defined range of another service. In that case, check which ports are actually in use and reconfigure one of the services to use a different, non-conflicting port.


Ecosystem of Chef and its associated tools explained

Chef Apply
chef-apply is an executable program that runs a single recipe from the command line. It is part of the Chef Development Kit and a great way to explore resources.
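
For example, converging a single resource with no cookbook at all (the package name is arbitrary and the recipe file name is hypothetical):

$ chef-apply -e "package 'tree'"   # -e/--execute runs a string of recipe code
$ chef-apply hello.rb              # or run a single recipe file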

Chef
The chef executable is a command-line tool which generates applications, cookbooks, recipes, attributes, files, templates, and custom resources (LWRPs); ensures that RubyGems are downloaded properly for the chef-client development environment; and verifies that all components are installed and configured correctly.
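
For example, scaffolding a cookbook and an extra recipe inside it (the names are placeholders):

$ chef generate cookbook my_first_cookbook
$ chef generate recipe my_first_cookbook webserver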

Knife
knife is a command-line tool that provides an interface between a local chef-repo and the Chef server. knife helps users to manage:

  • Nodes
  • Cookbooks and recipes
  • Roles, environments, and data bags
  • Resources within various cloud environments
  • The installation of the chef-client onto nodes
  • Searching of indexed data on the Chef server
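
A few representative knife commands (the cookbook name and the search query are placeholders):

$ knife node list                           # list all nodes registered with the server
$ knife cookbook upload my_first_cookbook   # push a cookbook to the Chef server
$ knife search node 'platform:ubuntu'       # query the server's search index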

Chef Client
The Chef client works with the Chef server to bring nodes to their desired states with policies you provide as recipes. The chef-client executable can be run as a daemon. A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all of the steps that are required to bring the node into the expected state, including:

  • Registering and authenticating the node with the Chef server
  • Building the node object
  • Synchronizing cookbooks
  • Compiling the resource collection by loading each of the required cookbooks, including recipes, attributes, and all other dependencies
  • Taking the appropriate and required actions to configure the node
  • Looking for exceptions and notifications, handling each as required
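
For instance, a local-mode run performs these same steps against an in-memory chef-zero server instead of a real Chef server (the run-list below is a placeholder):

# Converge this machine from a local cookbook, no Chef server required
$ chef-client --local-mode --runlist 'recipe[my_first_cookbook]'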

Chef Development Kit
The Chef Development Kit contains all you need to develop and test your infrastructure, built by the awesome Chef community. It has the following components installed:

  • fauxhai
  • kitchen-vagrant
  • openssl
  • delivery-cli
  • test-kitchen
  • git
  • berkshelf
  • chefspec
  • knife-spork
  • inspec
  • tk-policyfile-provisioner
  • opscode-pushy-client
  • chef-dk
  • chef-sugar
  • chef-client
  • generated-cookbooks-pass-chefspec
  • chef-provisioning
  • package installation

Chef Server
The Chef server makes it easy to automate your infrastructure, manage scale and complexity, and safeguard your systems.

A Chef server installation includes the following services, all of which should be running:

  • bookshelf
  • nginx
  • oc_bifrost
  • oc_id
  • opscode-erchef
  • opscode-expander
  • opscode-solr4
  • postgresql
  • rabbitmq
  • redis_lb
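
On the Chef server host you can quickly confirm these services (a sketch, assuming a standard omnibus installation):

$ chef-server-ctl service-list   # list the services that make up the Chef server
$ chef-server-ctl status         # show run state, pid, and uptime for each one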

InSpec
InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security and policy requirements.
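
A minimal sketch of that language (the file name is hypothetical): the control below asserts that SSH is listening, and inspec exec runs it against the local machine.

$ cat > ssh_check.rb <<'EOF'
# InSpec control: the sshd port should be open locally
describe port(22) do
  it { should be_listening }
end
EOF
$ inspec exec ssh_check.rb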

Push Jobs Client
The Push Jobs client communicates with the Push Jobs server, which extends the Chef Server to allow you to execute commands across hundreds or even thousands of nodes in your Chef-managed infrastructure.

Push Jobs Server
The Push Jobs server add-on, along with its associated client, extends the Chef Server to allow you to execute commands across hundreds or even thousands of nodes in your Chef-managed infrastructure.

Supermarket
Supermarket is an artifact repository that makes it easy to browse, use, and share communal cookbooks and tools within your organization.

Chef Automate
One platform with a unified workflow, end-to-end visibility, and automated compliance over your entire Chef ecosystem.

Chef Compliance
Assess and monitor infrastructure compliance and use InSpec compliance profiles to validate that production servers are properly configured.

Chef Backend
Chef High Availability makes it easy to build high-availability Chef clusters on any infrastructure.

Chef Manage
Chef Manage is an Enterprise Chef add-on that enables a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks and role-based access control (RBAC).

Kitchen or Test Kitchen
kitchen is the command-line tool for Kitchen, an integration testing tool used by the chef-client. Kitchen runs tests against any combination of platforms using any combination of test suites. Each test, however, is run against a specific instance, which is composed of a single platform and a single set of testing criteria.

“Test Kitchen is an integration tool for developing and testing infrastructure code and software on isolated target platforms.” It creates test machines, converges them, and runs post-convergence tests against them to verify their state. Test Kitchen is written in Ruby. It has a plugin system for supporting machine creation through a variety of virtual machine technologies such as vagrant, EC2, docker, and several others. Test Kitchen makes it easy for Chef developers to test cookbooks on a variety of platforms. It uses busser to install post-convergence integration test tools such as Serverspec or BATS that actually perform the tests.
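
Day-to-day usage, assuming a cookbook with a .kitchen.yml already in place (the instance name below is hypothetical):

$ kitchen list                           # show instances (suite x platform) and their state
$ kitchen converge default-ubuntu-1604   # create the instance and apply the cookbook
$ kitchen verify default-ubuntu-1604     # run the post-convergence tests
$ kitchen test default-ubuntu-1604       # full cycle: destroy, create, converge, verify, destroy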

foodcritic
Foodcritic is a helpful lint tool you can use to check your Chef cookbooks for common problems.
http://www.foodcritic.io/
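
A minimal invocation runs it against a cookbook path (the path below is a placeholder); adding -f any makes it exit non-zero on any finding, which is handy in CI:

$ foodcritic my_first_cookbook/          # report FC-rule violations with file and line
$ foodcritic -f any my_first_cookbook/   # fail the build on any finding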

ChefSpec
ChefSpec is a framework that tests resources and recipes as part of a simulated chef-client run. ChefSpec tests execute very quickly. When used as part of the cookbook authoring workflow, ChefSpec tests are often the first indicator of problems that may exist within a cookbook.
ChefSpec is packaged as part of the Chef Development Kit. To run ChefSpec:
$ chef exec rspec
https://docs.chef.io/chefspec.html

RuboCop
RuboCop is a Ruby command-line tool that performs lint and style checks based on the community-driven Ruby Style Guide. It performs static analysis of any Ruby code, which includes Chef recipes, resources, library helpers, and so forth. RuboCop can be configured via .rubocop.yml to exclude certain rules, and it can be run with --lint to perform only lint checking, excluding all style checks. RuboCop is used in cookbooks across the Chef community to make contributions more consistent and easier to manage.
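
For example, a minimal .rubocop.yml (the cop name matches RuboCop versions of this era) followed by a lint-only run:

$ cat > .rubocop.yml <<'EOF'
AllCops:
  Exclude:
    - 'spec/**/*'      # skip the test files
Metrics/LineLength:
  Max: 120             # relax the default 80-character limit
EOF
$ rubocop --lint       # run only lint cops, skipping all style checks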

Serverspec
Serverspec is an “outside-in” integration test framework. It is platform and tool agnostic, and is used by other configuration management systems to verify systems are configured as desired. It checks the actual state of the target node by executing commands locally, via SSH, via WinRM, or other remote transports. Serverspec is implemented in RSpec, and uses RSpec test syntax.


How to install Chef Development Kit (ChefDK)

How to install Chef Development Kit (ChefDK)?

The Chef Development Kit contains all you need to develop and test your infrastructure, built by the awesome Chef community. It is Chef's developer toolkit.

Platform – Linux RHEL

Download Chef Development Kit (ChefDK)

# Download Package URL from https://downloads.chef.io/chefdk/2.5.3
$ sudo -s
$ cd /opt/
$ yum install wget -y
$ wget https://packages.chef.io/files/stable/chefdk/2.5.3/el/7/chefdk-2.5.3-1.el7.x86_64.rpm

Install Chef Development Kit (ChefDK)

$ rpm -Uvh chefdk-2.5.3-1.el7.x86_64.rpm

warning: chefdk-2.5.3-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:chefdk-2.5.3-1.el7               ################################# [100%]
Thank you for installing Chef Development Kit!

Verify Chef Development Kit (ChefDK) – quick way

$ chef -v
Chef Development Kit Version: 2.5.3
chef-client version: 13.8.5
delivery version: master (73ebb72a6c42b3d2ff5370c476be800fee7e5427)
berks version: 6.3.1
kitchen version: 1.20.0
inspec version: 1.51.21

Verify Chef Development Kit (ChefDK) – detailed way

$ chef verify
[WARN] This is an internal command used by the ChefDK development team. If you are a ChefDK user, please do not run it.
Running verification for component 'berkshelf'
Running verification for component 'test-kitchen'
Running verification for component 'tk-policyfile-provisioner'
Running verification for component 'chef-client'
Running verification for component 'chef-dk'
Running verification for component 'chef-provisioning'
Running verification for component 'chefspec'
Running verification for component 'generated-cookbooks-pass-chefspec'
Running verification for component 'fauxhai'
Running verification for component 'knife-spork'
Running verification for component 'kitchen-vagrant'
Running verification for component 'package installation'
Running verification for component 'openssl'
Running verification for component 'inspec'
Running verification for component 'delivery-cli'
Running verification for component 'git'
Running verification for component 'opscode-pushy-client'
Running verification for component 'chef-sugar'
.................................../opt/chefdk/embedded/lib/ruby/gems/2.4.0/gems/chef-provisioning-aws-3.0.2/lib/chef/resource/aws_route53_record_set.rb:48: warning: constant ::Fixnum is deprecated
cannot load such file -- fog
.......
---------------------------------------------
Verification of component 'fauxhai' succeeded.
Verification of component 'kitchen-vagrant' succeeded.
Verification of component 'openssl' succeeded.
Verification of component 'delivery-cli' succeeded.
Verification of component 'test-kitchen' succeeded.
Verification of component 'git' succeeded.
Verification of component 'berkshelf' succeeded.
Verification of component 'chefspec' succeeded.
Verification of component 'knife-spork' succeeded.
Verification of component 'inspec' succeeded.
Verification of component 'tk-policyfile-provisioner' succeeded.
Verification of component 'opscode-pushy-client' succeeded.
Verification of component 'chef-dk' succeeded.
Verification of component 'chef-sugar' succeeded.
Verification of component 'chef-client' succeeded.
Verification of component 'generated-cookbooks-pass-chefspec' succeeded.
Verification of component 'chef-provisioning' succeeded.
Verification of component 'package installation' succeeded.

 


Logstash explained in 5 mins

What is Logstash?
Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

Logstash Benefits

  • Logstash allows you to easily ingest unstructured data from a variety of data sources including system logs, website logs, and application server logs.
  • Logstash offers pre-built filters, so you can readily transform common data types, index them in Elasticsearch, and start querying without having to build custom data transformation pipelines.
  • With over 200 plugins already available on Github, it is likely that someone has already built the plugin you need to customize your data pipeline.

Logstash works in three phases:

Phase 1 – First, Logstash ingests data from a multitude of sources simultaneously, including files, S3, Beats, Kafka, and more. Data is often scattered or siloed across many systems in many formats. Logstash supports a variety of inputs that pull in events from a multitude of common sources, all at the same time.
The full list of input plugins Logstash can ingest data from is here:
https://www.elastic.co/guide/en/logstash/current/input-plugins.html

Phase 2 – Next, it parses and transforms your data on the fly. As data travels from source to store, Logstash filters parse each event, identify named fields to build structure, and transform them to converge on a common format for easier, accelerated analysis and business value. Logstash dynamically transforms and prepares your data regardless of format or complexity.

Phase 3 – Last, Logstash stores the parsed data in Elasticsearch, AWS services, Hadoop, MongoDB, or another go-to output, which opens up a world of search and analytics possibilities. Logstash has a variety of outputs that let you route data where you want, giving you the flexibility to unlock a slew of downstream use cases. Some of these are given below:
https://www.elastic.co/guide/en/logstash/current/output-plugins.html
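
Putting the three phases together, here is a minimal pipeline sketch (stdin and stdout stand in for real sources and stashes; a production pipeline would use beats/kafka inputs and an elasticsearch output):

$ cat > simple-pipeline.conf <<'EOF'
input  { stdin { } }                                          # phase 1: ingest events
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }   # phase 2: parse into named fields
}
output { stdout { codec => rubydebug } }                      # phase 3: send to a stash
EOF
$ bin/logstash -f simple-pipeline.conf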

Where can you use Logstash?

  1. Log Analytics – Ingest unstructured and semi-structured logs generated by servers, applications, mobile devices, and more, for a wide variety of applications such as digital marketing, application monitoring, fraud detection, ad tech, gaming, and IoT. Logstash provides plugins to quickly load data from a variety of data sources.
  2. IT Operations Monitoring – Capture server logs and push them into your Elasticsearch cluster using Logstash. Elasticsearch indexes the data and makes it available for analysis in near real-time (less than one second). You can then use Kibana to visualize the data and perform operational analyses like identifying network issues and disk I/O problems. Your on-call teams can perform statistical aggregations to identify root cause and fix issues.

What is Zabbix and use of it?

What is Zabbix and use of it?

Zabbix is the ultimate enterprise-level software designed for real-time monitoring of millions of metrics collected from tens of thousands of servers, virtual machines, and network devices. Zabbix is open source and comes at no cost. The tool is about 19 years old, with 300,000+ installations worldwide. It has the capability to monitor almost anything, such as:

  1. Network Monitoring
  2. Server Monitoring
  3. Cloud Monitoring
  4. Services Monitoring
  5. KPI/SLA monitoring

Zabbix uses MySQL, PostgreSQL, SQLite, Oracle or IBM DB2 to store data. Its backend is written in C and the web frontend is written in PHP. Zabbix offers several monitoring options such as:

Collect metrics from any devices, systems, applications

  • Multi-platform Zabbix agent
  • SNMP and IPMI agents
  • Agentless monitoring of user services
  • Custom methods
  • Calculation and aggregation
  • End user web monitoring

 

PROBLEM DETECTION – Define smart thresholds
Detect problem states within the incoming metric flow automatically. No need to peer at incoming metrics continuously.

  • Highly flexible definition options
  • Separate problem conditions and resolution conditions
  • Multiple severity levels
  • Root cause analysis
  • Anomaly detection
  • Trend prediction

VISUALIZATION – Single pane of glass
The native web interface provides multiple ways of presenting a visual overview of your IT environment:

  • Widget-based dashboards
  • Graphs
  • Network maps
  • Slideshows
  • Drill-down reports

NOTIFICATION AND REMEDIATION – Be notified in case of any issues, guaranteed

Inform the responsible people about events as they occur, using many different channels and options:

  • Send messages
  • Let Zabbix fix issues automatically
  • Escalate problems according to flexible user-defined Service Levels
  • Customize messages based on recipient’s role
  • Customize messages with runtime and inventory information
  • Save yourself from thousands of repetitive notifications and focus on the root causes of a problem with Zabbix’s event correlation mechanism

SECURITY AND AUTHENTICATION – Protect your data on all levels

  • Strong encryption between all Zabbix components
  • Multiple authentication methods: OpenLDAP, Active Directory
  • Flexible user permission schema
  • Zabbix code is open for security audits

EFFORTLESS DEPLOYMENT – Save your time by using out-of-the-box templates

  • Install Zabbix in minutes
  • Use out-of-the-box templates for the most popular platforms
  • Build custom templates
  • Use hundreds of templates built by Zabbix community
  • Apply for Template building service from Zabbix team
  • Monitor thousands of similar devices by using configuration templates

AUTO-DISCOVERY – Automate monitoring of large, dynamic environments

Take automatic actions upon adding/removing/changing elements.

  • Network discovery: periodically scans the network and discovers device type, IP, status, uptime/downtime, etc., and takes predefined actions.
  • Low-level discovery: automatically creates items, triggers, and graphs for different elements on a device.
  • Auto-registration of active agent: automatically starts monitoring new equipment with Zabbix agent.

DISTRIBUTED MONITORING – Scale without limits

Build distributed monitoring solution while keeping centralized control.

  • Collect data from thousands of monitored devices
  • Monitor behind the firewall, DMZ
  • Collect data even in case of network issues
  • Remotely run custom scripts on monitored hosts

ZABBIX API – Integrate Zabbix with any part of your IT environment

Get access to all Zabbix functionality from external applications through Zabbix API:

  • Automate Zabbix management via API
  • 200+ different methods available
  • Create new applications to work with Zabbix
  • Integrate Zabbix with third party software: Configuration management, ticketing systems
  • Retrieve and manage configuration and historical data
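
For example, authentication is a single JSON-RPC call (a sketch: the hostname is a placeholder, and Admin/zabbix are the well-known default credentials). The user.login method returns an auth token that all subsequent API calls must include.

$ curl -s -X POST -H 'Content-Type: application/json-rpc' \
    -d '{"jsonrpc":"2.0","method":"user.login","params":{"user":"Admin","password":"zabbix"},"id":1}' \
    http://zabbix.example.com/zabbix/api_jsonrpc.php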


Berkshelf in Chef explained?

Configuration management using Chef is implemented with the help of desired-state files, which are often called “cookbooks” in Chef. In practice, a separate cookbook is usually written for each module so that it is easy to maintain. There are also a good number of community cookbooks in use, available from the community portal supermarket.chef.io.

Once multiple cookbooks are used to bring servers to their desired state, these cookbooks often become large and highly interdependent, and it becomes necessary to manage the cookbooks themselves.

Berkshelf is the tool that makes this management, and the dependency management between cookbooks, easy. Berkshelf is a dependency manager for Chef cookbooks. With it, you can easily depend on community cookbooks and have them safely included in your workflow. Using Berkshelf, you do not need to package and bundle dependent cookbooks yourself; they are downloaded from the “source” defined in the Berksfile.

Berkshelf is included in the Chef Development Kit.

Quick Start
Running “chef generate cookbook” will, by default, create a Berksfile in the root of the cookbook, alongside the cookbook’s metadata.rb. As usual, add your cookbook’s dependencies to the metadata:

name 'my_first_cookbook'
version '0.1.0'
depends 'apt', '~> 5.0'

The default Berksfile will contain the following:

source 'https://supermarket.chef.io'
metadata

Now, when you run “berks install”, the apt cookbook will be downloaded from Supermarket into the cache:

$ berks install
Resolving cookbook dependencies...
Fetching 'my_first_cookbook' from source at .
Fetching cookbook index from https://supermarket.chef.io...
Installing apt (5.0.0)
Using my_first_cookbook (0.1.0) from source at .
Installing compat_resource (12.16.2)
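
From here, a typical workflow (a sketch, assuming Berkshelf is configured against your Chef server) is to upload the resolved cookbook set, or to re-resolve a single dependency after changing a version constraint:

$ berks upload       # upload all resolved cookbooks to the Chef server
$ berks update apt   # re-resolve only the apt cookbook within its constraints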

Example of Berksfile

source "https://supermarket.chef.io"

metadata
cookbook 'zabbix-agent', path: 'cookbooks/zabbix-agent'
cookbook 'hostnames', path: 'cookbooks/hostnames'
cookbook 'chef-client', path: 'cookbooks/chef-client'
cookbook 'rethinkdb', path: 'cookbooks/rethinkdb'
cookbook 'zookeeper', path: 'cookbooks/zookeeper'
cookbook 'logstash', path: 'cookbooks/logstash'
cookbook 'kafka', path: 'cookbooks/kafka'
cookbook 'elasticsearch', path: 'cookbooks/elasticsearch'
cookbook 'testbook', path: 'cookbooks/testbook'
cookbook 'base-ubuntu', path: 'cookbooks/base-ubuntu'

Important Notes
For new users, we strongly recommend using Policyfiles rather than Berkshelf. Policyfiles provide more predictability, since dependencies are only resolved once, and a much improved way of promoting cookbooks from dev to testing, and then to production. Note that Policyfile is not supported as part of a Chef Automate workflow.


List of AWS regions and availability zones

List of AWS Regions

This is the complete list of AWS regions currently available.

S.No Code Name
1 us-east-1 US East (N. Virginia)
2 us-west-2 US West (Oregon)
3 us-west-1 US West (N. California)
4 eu-west-1 EU (Ireland)
5 eu-central-1 EU (Frankfurt)
6 ap-southeast-1 Asia Pacific (Singapore)
7 ap-northeast-1 Asia Pacific (Tokyo)
8 ap-southeast-2 Asia Pacific (Sydney)
9 ap-northeast-2 Asia Pacific (Seoul)
10 sa-east-1 South America (São Paulo)
11 cn-north-1 China (Beijing)
12 ap-south-1 India (Mumbai)

AWS upcoming regions

 

S.No Code Name
1 N/A Ohio
2 N/A Montreal
3 N/A UK
4 N/A India
5 N/A Ningxia

List of AWS regions and their availability zones

S.No | AWS region code | AWS region name | Number of Availability Zones | Availability Zone names
1 | us-east-1 | Virginia | 4 | us-east-1a, us-east-1b, us-east-1c, us-east-1e
2 | us-west-2 | Oregon | 3 | us-west-2a, us-west-2b, us-west-2c
3 | us-west-1 | N. California | 3 | us-west-1a, us-west-1b
4 | eu-west-1 | Ireland | 3 | eu-west-1a, eu-west-1b, eu-west-1c
5 | eu-central-1 | Frankfurt | 2 | eu-central-1a, eu-central-1b
6 | ap-southeast-1 | Singapore | 2 | ap-southeast-1a, ap-southeast-1b
7 | ap-southeast-2 | Sydney | 3 | ap-southeast-2a, ap-southeast-2b, ap-southeast-2c
8 | ap-northeast-1 | Tokyo | 2 | ap-northeast-1a, ap-northeast-1c
9 | ap-northeast-2 | Seoul | N/A | N/A
10 | sa-east-1 | Sao Paulo | 3 | sa-east-1a, sa-east-1b, sa-east-1c
11 | cn-north-1 | China (Beijing) | N/A | N/A
12 | ap-south-1 | India (Mumbai) | 2 | ap-south-1a, ap-south-1b

If you are familiar with the AWS CLI, you can always check regions and availability zones using the following commands.

Find regions using AWS CLI

Command:  aws ec2 describe-regions
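
Find availability zones using AWS CLI

The region below is only an example; substitute any region code from the table above.

Command:  aws ec2 describe-availability-zones --region us-east-1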


How to run UI testing in Docker container using Selenium

Docker is one of the revolutionary technologies that has created a lot of buzz in software development practice. Docker has not only helped set up Continuous Integration and Delivery but also helps manage and replicate test environments and deploy at scale in no time. Here are some of the ways Docker benefits a testing team:

  • Docker helps with setting up Continuous Integration and Delivery, which enables the testing team to deploy and test the application in far less time compared with virtual machines.
  • Efficient software teams push code to production multiple times a day. But this only works with good processes in place: pull requests, code reviews, and good test coverage are essential for enabling a fast pace and high output of new code. Docker helps the QA team enable this in no time.
  • Docker Compose helps create an application stack for developers and QA in very little time.
  • Docker allows you to run your tests in containers as well as isolate your tests in development and deployment.
  • Docker lets you manage and replicate test environments in no time.

But there is one limitation. The major problem of a Docker container for UI testing is that it does not have a screen output. There are in general two solutions:

  1. Use a headless browser such as HTMLUnit that does not require a graphical user interface or
  2. Simulate a screen output.

The second option is recommended, because then you do not need to change your test code to use a WebDriver for a headless browser. Moreover, a headless browser may not have the full functionality of a real browser. What you need is a display server called Xvfb, or X virtual framebuffer. It performs all graphical operations in memory without showing any screen output.
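
Two sketches of that setup (the display number, resolution, and image tag are illustrative): either start Xvfb yourself and point the browser at the virtual display, or use one of the pre-built Selenium images from Docker Hub that already bundle a browser with Xvfb.

# Option A: run Xvfb manually inside the container
$ Xvfb :99 -screen 0 1920x1080x24 &
$ export DISPLAY=:99
# ... now launch the browser/Selenium tests; all rendering happens in memory

# Option B: run a pre-built image bundling Chrome, chromedriver, and Xvfb
$ docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome
# point the tests at the remote WebDriver endpoint http://localhost:4444/wd/hub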

You can find sample examples at the following URLs:

https://medium.com/@yiquanzhou/run-selenium-ui-tests-in-docker-container-78be98e1b52d

http://testnblog.com/ui-automation-framework-on-docker/


“Run in Preflight” and “Run in Preflight Only” explained in IBM UBuild!

The Preflight plugin provides steps that can be used for developer preflight builds.

Explanation

The preflight client first starts a simple file server.
First, the server is used to transfer source files from the user's desktop to a build agent, and to transfer build artifacts from the build agent back to the user's desktop.
Next, the preflight client requests that the Anthill server perform a build. It is expected that the build will have special preflight transfer steps already configured.

  1. One step will request source files from the preflight client that originated the build and lay them down in the workspace directory.
  2. The second step will send artifacts from the agent back to the preflight client.

The basic model for a preflight workflow is this.

  • Check out source.
  • Transfer modified source files from the preflight client. These files will overwrite the pristine copy.
  • Run build commands.
  • Transfer build artifacts to the preflight client.
  • Alternatively, the source checkout step could be skipped altogether and the preflight client will provide all source files.

Workflow

In normal usage, the preflight system depends on steps added to the normal build workflow. These steps are in the Preflight folder:

  • “Transfer Source Step” and
  • “Transfer Artifacts Step”.

These steps should be added at appropriate points in your build job.

All steps, including preflight steps, have preflight flags in the “Additional Options” section of the step configuration. The first, “Run in Preflight”, controls whether the step is executed in a preflight build. Most steps will leave this on the default, true. The main exception would be artifact-publishing steps: artifacts from preflight builds are often not preserved on the server.

The second option, “Run in Preflight Only”, controls whether the step is executed only in a preflight build. Normal builds do not execute steps with this option set to true. By default, this option is false, and most steps will use this setting. The exceptions are the two preflight steps: these should only be used in a preflight build, as they communicate with the preflight client. When configuring these steps, set “Run in Preflight Only” to true.

The preflight flags are not limited to any particular steps. This is useful, for example, for skipping packaging steps that are not useful to a preflight user, or for publishing preflight artifacts to special artifact sets.

Simple Build

The preflight client controls which files are sent and which files are received with command-line options. For sending source files, “-i” will add an include pattern and “-x” will add an exclude pattern. For receiving files, the options are similar: “-I” will add an artifact include pattern and “-X” will add an exclude pattern (note the difference in case).

Here’s a simple example:

C:\>preflight build -p petstore -w "build trunk" -i "src/**" -x "src/conf/**" -I "dist/*.war" -X "dist/util.war"

This command will run the “build trunk” workflow of the “petstore” project. All source files will be included except those in src/conf. All WAR files in dist will be returned, except for util.war.

 

Reference

ftp://ftp.software.ibm.com/software/rationalsdp/documentation/product_doc/UrbanCode/AnthillPro/AnthillProWiki/Pre-Flight_Tool.html

 
