Step by Step Instruction to Upgrade Perforce to 2014


The steps are as follows:

a) Check if the license is current.

p4 license -o

The expiry date must be later than the release date of the binary you are installing.

b) Verify server archives using the command 'p4 verify -q //... > verify.txt'

* Note – If the file 'verify.txt' contains a 'BAD!' or 'MISSING!' signature from the verify command, please contact Perforce Support prior to upgrading.
** If the depot is large and you get an error message "Request too large for server memory", you can split the verify to check chunks of the depot:

Example:
p4 verify -q //depot/project1/... > prj1-verify.txt
p4 verify -q //depot/project2/... > prj2-verify.txt
p4 verify -q //depot/project3/... > prj3-verify.txt

c) Stop the Perforce Server.

p4 admin stop

d) Make a checkpoint of the database.

p4d -r P4ROOT -J journal -jc

e) Copy the db.* files to a safe location.

f) Backup your archive files.

g) Read the release notes so you are aware of the latest changes.

http://www.perforce.com/perforce/doc.current/user/relnotes.txt

h) Download the latest P4D version.

ftp://ftp.perforce.com/perforce/

i) Verify that the newly downloaded binary reports the expected version. If you run Perforce as a Windows service, you may also need to copy p4d.exe to p4s.exe.

p4d.exe -V
p4s.exe -V

j) Install the upgraded version of the Perforce server.

k) Restore the Perforce Server using the new P4D binary and the checkpoint taken with the old P4D binary, with the following command: 'p4d -r root -jr checkpoint.xxx'.

l) If the server contains more than 1000 changelists, run the following command from the command line: 'p4d -r root -xu'.

m) Start the Perforce Server.

n) Verify server archive files.

p4 verify -q //... > upgrade-verify.txt

** Note – If the file 'upgrade-verify.txt' contains a 'BAD!' or 'MISSING!' signature from the verify command, please contact us.

o) Take a checkpoint.
p4d -r root -J journal -jc

This is also explained in the article below:

Upgrading to 2013.3 and beyond – http://answers.perforce.com/articles/KB_Article/Upgrading-to-2013-3-and-beyond

Windows To Linux
Moving from Windows to Linux is not a trivial undertaking; such a migration is also not supported and will most likely require consolidation of case-sensitivity conflicts using our P4 Migrate product. Please see the link below:

Cross-Server Platform Migration
http://answers.perforce.com/articles/KB_Article/Cross-Platform-Perforce-Server-Migration

Change of Server is a "self-serve" option: please fill out this form and Licensing will update accordingly.
P4 Support Change of Server License Request
http://www.perforce.com/support-services/change-server-request

Please pass the -z switch during the replay, as the checkpoint/journal you are replaying is compressed:
p4d -r P4ROOT -jr -z checkpoint.ckp.104.gz

You will likely need to request a background user to be able to run all the required commands. This can be done by filling out the form below, using one background user for all of your build machines.
http://www.perforce.com/support-services/request-background-user

Moving a Perforce Server
http://answers.perforce.com/articles/KB_Article/Moving-a-Perforce-Server

 By Chris Lesemann <support@perforce.com>

Our Knowledge Base has accurate and up to date information that will help you prepare your own procedure for upgrading. Because the database structures have changed, you will need to take an existing checkpoint and then replay that checkpoint with the 2014.1 version of the P4 Server.

It appears you are on a 32-bit Windows instance…will you be doing an in-place upgrade or moving to 64-bit hardware? If you are moving from 32-bit to 64-bit you have to recover from a checkpoint anyway.

Please review the two links below and if you have any further questions I would be happy to assist you:

 Upgrading to 2013.3 and beyond <-- specific procedure for moving from 2013.2 and earlier P4 to 2013.3 and later

 Upgrading a Perforce Server <-- in-place upgrade

 Moving a Perforce Server <-- if you have to move to new hardware; do NOT start a 64-bit P4D binary against 32-bit db.* databases, or vice versa.

 Upgrading the Perforce Proxy

General information on the new Btree structures and functionality:

 BTree Format Changed for Perforce Server Versions 2013.3 and Later

 Lockless Reads


Ways to Clean Up Perforce Server Disk Space and Manage Repo Size


DRAFT VERSION

  1. Cleaning up old checkpoints.
  2. Using a symlink (soft link) to redirect the ROOT folder to a drive with enough space.
  3. Deleting db.have and recreating it manually.
  4. Displaying disk space information on the server using "p4 diskspace" (see the sketch below).
  5. Displaying size information for files in the depot using "p4 sizes".
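
Items 4 and 5 are read-only and safe to run at any time. A minimal sketch (the depot path and checkpoint directory are assumptions; adjust for your installation):

p4 diskspace                             # free space on P4ROOT, P4JOURNAL, P4LOG and TEMP
p4 sizes -s //depot/...                  # summary size of all files under //depot
ls -lt /p4/checkpoints/checkpoint.*      # list checkpoints by age before archiving old ones (item 1)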

How to assign computer startup scripts?

computer-startup-scripts

1. Open the Group Policy snap-in.
2. In the console tree, click Scripts (Startup/Shutdown).

Where?

Policy name > Computer Configuration > Windows Settings > Scripts (Startup/Shutdown), or start the Local Group Policy editor: Start Menu > Run > type gpedit.msc

3. In the details pane, double-click Startup.
4. In the Startup Properties dialog box, click Add.
5. In the Add a Script dialog box, type the following information, and then click OK:

Script Name: Type the path to the script, or click Browse to search for the script file in the Netlogon share of the domain controller.

Script Parameters: Type any parameters that you want, the same way as you would type them on the command line. For example, if your script includes parameters called //logo (display banner) and //I (interactive mode), type the following: //logo //I

6. In the Startup Properties dialog box, specify the options that you want, as follows, and then click OK:

Startup Scripts for Group Policy object: Lists all the scripts that are currently assigned to the selected Group Policy object. If you assign multiple scripts, the scripts are processed in the order that you specify. To move a script up in the list, click it, and then click Up. To move a script down in the list, click it, and then click Down.

Add: Opens the Add a Script dialog box, where you can specify any additional scripts to use.

Edit: Opens the Edit Script dialog box, where you can modify script information, such as name and parameters.

Remove: Removes the selected script from the Startup Scripts list.

Show Files: Displays the script files that are stored in the selected Group Policy object.


How to Select the Right Software Version Control Product?


Original link: http://www.cmcrossroads.com/cm-articles/275-articles/14286-selecting-the-right-software-version-control-product

Selecting the Right Software Version Control Product
Written by Mike Feighner

For many years, I worked loyally for the same company where my expertise was restricted to one single software version control tool—the one that was already in place. I was not involved with determining whether the selected tool was appropriate for the company's needs; my role was to learn it well and use it within the accepted standards, guidelines, and internally approved processes. When I recently found myself back in the job market, I realized that there were many other version control tools in use in the industry. I then began to consult with other engineers in the field and gathered data from them on which version control tools their organizations were using and the selection criteria they used in choosing a version control tool for their organization. This experience opened up a whole new world for me, and soon I realized that selecting and implementing the right tools was essential for the success of any software or systems development effort.

From this experience, I have drawn up a checklist of version-control-tool criteria to aid an organization in selecting the best tool. Although there are many good descriptions available on the Internet about the many tools currently on the market, I do not recommend one specific tool over another, because some tools may fit great in one organization but fail in another. My goal here is to suggest a partial framework for deciding which tool best fits your needs while at the same time adhering to configuration management best practices.

First, you will need to define your organization's goals, which could include any or all of the following:

• Good documentation should accompany your tool purchase.
• The tool should be portable to more than one platform.
• If your organization is spread out over different time zones or even across international borders, your tool should support a multi-site rather than a single-site function.
• The cost should be affordable.
• The tool should be easy to use.
• The tool should easily support branching and merging.
• The tool should have the ability to lock a file when being edited to prevent two engineers from editing the same file simultaneously.
• The tool should be able to accurately merge changes between two or more files.
• The tool should log a history of changes with notes on who made the change and why the change was made.
• The tool should have the ability to establish immutable baselines.

Next, you will need to review your organization's requirements for software version control and carefully review the various software tools on the market to determine which one best matches your organization's needs. Some may be better suited for your project than others. Selecting the wrong tool can have devastating effects on your team's effort. Lack of a good version control tool can be a major cause for delays and problems during your organization's application development. The project will run over cost and behind schedule and can be prone to risks associated with the manual management and distribution of your project's dependencies. Unauthorized changes and results will occur, and the customer will not be satisfied. The tool you select should not impede the implementation of configuration management best practices, which serve as a means to add value to the process, improve quality, and increase productivity. The following are some, but not all, of the questions you will need to consider.

Documentation

Is the tool well documented? Is support documentation available on this tool's installation and for the tool's use by both administrators and developers? As a reference manual, the tool's documentation should adequately describe the tool's features—each dialog box, tab, field, button, etc. It should answer a user's questions about completing a specific task in a clear and concise way. Frequently, the user documentation is available in the tool's help pull-down menu or can be downloaded electronically from the vendor's website.

Portability

Is the tool portable? Can it be used on multiple platforms and operating systems? From a business perspective, your software team will be supporting a broad user market occupying various platforms, including Windows, Mac OS, or some flavor of UNIX. Your tool should be compatible with any of these platforms, as each release will simultaneously need a separate version to support each platform of your user base.

Multi-site vs. Single Site

Is your team located at one site, or is your team globally distributed across the nation and across different time zones throughout the world? A "single site" constitutes one physical location such as a single building or office. A "multi-site" constitutes an organization in more than one office or location: in a single time zone, in multiple time zones within one country, or in multiple countries. If your organization is spread out over various time zones or even across international borders, your tool should support a multi-site rather than a single-site function. A multi-site project established as a set of several individual independent single-site systems would be prone to added costs and risks associated with the manual management and distribution of the project's dependencies. Life for a multi-site organization is a lot easier with a single multi-site system that has access to one shared repository. A single-site organization would be wise to select a single-site system over the additional costs of a multi-site system.

Cost

What are the costs and terms of licensing of the tool? Is it something your organization can afford? Generally, a tool designed for a large organization will have a much larger cost than one designed for a smaller organization. No one should assume that paying a higher cost for your tool will solve all of your problems. Even a high-cost tool may fall short of satisfying your organization's needs. Your decision should consider overall cost and your organization's needs. Supporting your organization's needs should never be underfunded.

Ease of Use

How easy is it to install and deploy the system? Will the tool be dependent on other tools to conduct software builds, or are some of these capabilities already an integral part of the tool? Is this a tool that is easy to use from the first day of installation throughout the entire development lifecycle? Will training and customer support be available? Will the tool require training an in-house administrator dedicated to the administration of the tool? If you are not able to immediately pick up a source code management system and start building your site, odds are it is not entirely user friendly. Ultimately, the key to a successful deployment of a source code management system is reliability and ease of use. A tool that is difficult to use will likely conceal the functional benefits that it would otherwise provide. If training is an affordable option for your organization, consider whether it is available through the vendor or through an unbiased third party. For a lower-cost approach in the long term, consider sending a representative from your organization to a vendor training event who will later write up a training program for your own company-based organization.

Branching

Does the tool accommodate branch creation? The capability to create a branch should allow duplication of an object under revision control. Thus, code modifications can happen securely in parallel along both branches at the same time, whether the change involves adding a new requirement or attempting to resolve an issue from an earlier release. The tool you select should support your particular choice of branching strategy, be it by revision or release via a simple copy branch, or by creating a delta branch off of the main branch or trunk. Or the tool may go a step further by creating streams that include supporting metadata and workflow automation as an enhancement in managing multiple variants in the code. Branching is one of the most important features to consider in your choice of a good source control management system, as it allows you to support the same source code and a parallel subset of that code being modified securely at the same time, either for a new requirement or for a bugfix from an earlier release.

Merging

Does the tool allow merges of changes and assist in resolving conflicts between different edits to the same file? Can a merge be done via a graphical user interface or on a command line? Take into consideration that merging concurrent changes made by different developers can increase the amount of effort needed to resolve conflicts and achieve a merge safely. The best advice in such cases is to merge little and merge often.

File Locking

Does the system have a means of preventing concurrent access to the same file? Does it prevent more than one user at a time from writing to the file until the current user either checks in the file or cancels the checkout?

History of Changes

Does the tool maintain a log of the history of changes? Can the tool display the history of changes graphically, as in a version tree? Does the tool easily identify the current baselined version and list previous baselined versions? A history change log that maintains accurate records facilitates traceability through the code's lifecycle, from its beginning to its eventual release and beyond. A change history log can trace a change back to its origins, to who made the change, and to the authority who authorized the change. It plays a vital role in baselining the code. This is a basic requirement of any organization wishing to maintain proper controls and compliance.

Baselining

Baselining, also known as setting a control point for your code, lets you know the exact versions of all source code and other configuration items that were included in a release. Does the tool have a way of baselining the code (be it via labeling, tagging, or snapshots) to a particular version that would allow us to back out and return to the previous baselined version in the event of a problem? Is the baseline immutable? If the tool allows changes to a previously released baseline, it jeopardizes the integrity of the software. A baseline must be locked down to prevent modifications. Any subsequent release should be based on a previously released baseline.

Conclusion

No one software version control tool can possibly fit every organization's needs. There is no one size fits all. The right tool should help you safeguard your code and help your process improve productivity and quality. Remember that thorough evaluation and selection of the right tool will require funding, but this vetting must stay within your organization's budget. Careful selection of the appropriate version control tool will help your firm improve productivity and quality and ensure that you are adhering to industry standards regarding configuration management best practices.

About the Author

Mike Feighner has more than twenty years' experience in information technology in the aerospace industry, including more than ten years in configuration management. Mike graduated from San Jose State University with a bachelor's degree in German and has a master's degree in political science from the University of Tübingen in Germany and a master's degree in software engineering from National University.


How we reduced build time from 8 hours to 1 hour? – Complete Guide


Situation

  1. For one of our clients, the build was taking 8 hours and the nightly build was failing frequently.
  2. Test case execution was consuming more time than the compilation.
  3. Low confidence among developers in the nightly builds, and subsequently during the integration cycle.

Approach

  1. We reviewed the whole set of build scripts and the test case suite to check how we could refactor them.
  2. Verified the build infrastructure, i.e., build server capacity from a hardware point of view.
  3. Conducted a feasibility study to check how a CI environment could help in giving faster feedback on quality.


Solution

  1. Refactored the build scripts to make module compilation parallel wherever possible (see the sketch after this list).
  2. Selected some test cases (those that don't require external services like a database) to run as part of the build.
  3. Established a CI environment (using the open-source CruiseControl) to run builds per commit and execute only the above set of test cases as part of the build.
  4. Set up a low-cost remote agent machine to run the remaining test suite once per day.
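
The parallel-compilation refactoring in item 1 is the piece that saved the most time. A minimal sketch of the idea in shell form (the module names, Ant targets, and log directory are hypothetical; the real scripts were client-specific):

#!/bin/bash
# Kick off independent module builds concurrently, then wait for all of them.
mkdir -p logs
for module in moduleA moduleB moduleC; do        # hypothetical module names
    ant -f "$module/build.xml" compile > "logs/$module.log" 2>&1 &
done
wait                                             # block until every background build finishes
# Fail the overall build if any module's log reports a failure.
if grep -l "BUILD FAILED" logs/*.log; then
    exit 1
fi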

Results

  1. The build took only 1 hour and gave faster feedback to the dev team.
  2. The CI environment gave the dev team good confidence.
  3. The team was able to implement more best practices.
  4. Effective use of the existing build infrastructure (hardware) ensured no further cost was required.
  5. This helped the client ensure smooth release management.

 


How to read an XML file by using a shell script?


This was the first time I had to write something that could read data out of an XML file using a shell script. Usually I would pick Python or Perl as my favorite choice in such a scenario, but this time I really *had* to do it all within a shell script.

This is an example of the type of XML file I had to read:

<changelog>Fixed a new bug</changelog>
<name>Shoaib Mir</name>
<date>Sun, 02 May 2010</date>
<email>shoaibmir[@]gmail.com</email>

I ended up having a shell script like this:

#!/bin/bash

#Looking for four keywords in here
for key in changelog name date email
do
OUTPT=`grep "<$key>" log.xml | tr -d '\t' | sed "s|.*<$key>\(.*\)</$key>.*|\1|"`
eval ${key}=\""${OUTPT}"\"
done

# Getting the results in four specific arrays
changelogarr=( `echo ${changelog}` )
namearr=( `echo ${name}` )
datearr=( `echo ${date}` )
emailarr=( `echo ${email}` )

#Print all Arrays
echo ${changelogarr[@]}
echo ${namearr[@]}
echo ${datearr[@]}
echo ${emailarr[@]}

Which gives me an output:

shoaib@shoaib-desktop:~/Desktop$ ./readxml.sh
Fixed a new bug
Shoaib Mir
Sun, 02 May 2010
shoaibmir[@]gmail.com

 

Reference:

Reading XML file using shell script

 


Usage of ANT_OPTS in Ant Script | ANT_OPTS capabilities


Ant has three environment variables that you can use to set its default behavior:
• ANT_ARGS Set this variable to include those options you use frequently.
• ANT_OPTS is a list of arguments that you want to pass to the JVM that will run Ant.
• JAVACMD is the absolute path to the Java executable you want Ant to use. It defaults to %JAVA_HOME%\bin\java, typically invoking the JVM provided in Sun's Java Development Kit. Ant provides JAVACMD for those who wish to specify an alternate JVM: use it to invoke a different JVM than JAVA_HOME/bin/java(.exe).

These are useful environment variables that the Ant wrapper scripts use when
invoking Ant: ANT_OPTS and ANT_ARGS. Neither of these is typically set by users,
but each can provide value for certain situations.

The ANT_OPTS environment variable provides options to the JVM executing Ant, such as system properties and memory configuration: in other words, command-line arguments that should be passed to the JVM. For example, you can define system properties or set the maximum Java heap size here.

For authenticated proxy:
Set your ANT_OPTS environment variable to configure your proxy if you have one. For instance:
set ANT_OPTS=-Dhttp.proxyHost=myproxy -Dhttp.proxyPort=3128

You can set these properties by either modifying Ant’s startup script, or by
using the ANT_OPTS environment variable. The following example shows the Windows commands to specify these properties using ANT_OPTS, and then to invoke Ant:
set ANT_OPTS=-DproxySet=true -DproxyHost=localhost -DproxyPort=80
ant mytarget
The same trick works on Unix, although the syntax is slightly different depending on which
shell you use:
$ export ANT_OPTS="-DproxySet=true -DproxyHost=localhost -DproxyPort=80"
$ ant mytarget

set ANT_OPTS=-Dhttp.proxyHost=myproxyhost -Dhttp.proxyPort=8080 -Dhttp.proxyUserName=myproxyusername -Dhttp.proxyPassword=myproxypassword -Dhttps.proxyHost=myproxyhost -Dhttps.proxyPort=8080

 

Use ANT_OPTS to control Ant’s virtual machine settings.
Some tasks may require more memory, which you can set in the ANT_OPTS environment variable, using the appropriate mechanism for your platform:
set ANT_OPTS=-Xmx500M
export ANT_OPTS=-Xmx500M

Setting the maximum heap size is another common use of ANT_OPTS. Here is how we set the maximum size to 128 MB when using Sun’s JDK:
set ANT_OPTS=-Xmx128m

One environment variable you may wish to set is ANT_OPTS. The value of this variable is passed as a JVM argument. Specifying system properties is a common use. In this simple
example, we pass the log.dir system property to the JVM running Ant:
set ANT_OPTS=-Dlog.dir=C:\logs
ant run
Now this property is available within the buildfile, for instance:
<echo>Log directory is set to: ${log.dir}</echo>
If the buildfile runs a Java application, the property may be retrieved from within it as
follows:
String logDir = System.getProperty("log.dir");

 

Troubleshoot: Illegal Java options in the ANT_OPTS variable
The environment variable ANT_OPTS provides a means to pass options into Ant,
such as a permanent definition of some properties, or the memory parameters for
Java. The variable must contain only options the local JVM recognizes. Any invalid
parameter will generate an error message such as the following (where ANT_OPTS was
set to -3):
Unrecognized option: -3
Could not create the Java virtual machine.

If the variable contains a string that is mistaken for the name of the Java class to run
as the main class, then a different error appears:

Exception in thread "main" java.lang.NoClassDefFoundError: error-string
Test: Examine ANT_OPTS and verify that the variable is unset or contains valid
JVM options.
Fix: Correct or clear the variable.
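
A quick way to test the variable independently of Ant is to hand it straight to the JVM, since any option Ant's wrapper passes through must be one the local JVM accepts. A minimal sketch (Unix shell syntax assumed; the property and heap values are placeholders):

export ANT_OPTS="-Dlog.dir=/tmp/logs -Xmx128m"
java $ANT_OPTS -version    # if the JVM rejects any option here, Ant will fail the same way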


Power Point PPT: Using Ant To Build J2EE Application – Complete Guide


Power Point PPT: Using Ant To Build J2EE Application

 


How to Run/Deploy Java EE applications on Amazon EC2?


Running Java EE applications on Amazon EC2: deploying to 20 machines with no money down

Computer hardware has traditionally been a scarce, expensive resource. In the early days of computing developers had to share a single machine. Today each developer usually has their own machine, but it's rare for a developer to have more than one. This means that running performance tests often involves scavenging for machines. Likewise, replicating even just part of a production environment is a major undertaking. With Amazon's Elastic Compute Cloud (EC2), however, things are very different. A set of Linux servers is now just a web service call away. Depending on the type of server, you simply pay 10-80 cents per server per hour for up to 20 servers! No more upfront costs or waiting for machines to be purchased and configured.

To make it easier for enterprise Java developers to use EC2, I have created EC2Deploy.  It’s a Groovy framework for deploying an enterprise Java application on a set of Amazon EC2 servers. EC2Deploy provides a simple, easy to use API for launching a set of EC2 instances; configuring MySQL, Apache and one or more Tomcat servers; and deploying one or more web applications. In addition, it can also run JMeter and collect performance metrics.

Here is an example script that launches some EC2 instances; configures MySQL with one slave, Tomcat and Apache; deploys a single web application on the Tomcat server; and runs a JMeter test with first one thread and then two.

class ClusterTest extends GroovyTestCase {
  void testSomething() {
    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")

    def ec2 = new EC2(awsProperties)

    def explodedWar = '…/projecttrack/webapp/target/ptrack'

    ClusterSpec clusterSpec =
       new ClusterSpec()
            .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
            .slaves(1)
            .tomcats(1)
            .webApp(explodedWar, "ptrack")
            .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

    SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx",
        [1, 2])

    cluster.stop()
  }
}

Let’s look at each of the pieces.

First, we need to configure the framework as follows:

    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")
    def ec2 = new EC2(awsProperties)

The aws.properties file contains various properties including the Amazon WS security credentials and the EC2 AMI (i.e. OS image) to launch. All servers use my EC2 appliance AMI that has Java, MySQL, Apache, Tomcat, Jmeter and some other useful tools pre-installed.

Next we need to configure the servers:

     ClusterSpec clusterSpec =
        new ClusterSpec()
             .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
             .slaves(1)
             .tomcats(1)
             .webApp(explodedWar, "ptrack")
             .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

     SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

This code first creates a ClusterSpec, which defines the configuration of the machines and the applications:

  • schema() – specifies the name of the database schema to create; the names of the users and their passwords; and the DML scripts to execute once the database has been created
  • slaves() – specifies how many MySql slaves to create
  • tomcats() – specifies how many Tomcats to run.
  • webApp() – configures a web application. This method takes two parameters: the path to the exploded WAR directory (conveniently created by Maven) and the context to deploy the web application under.
  • catalinaOptsBuilder() – supplies a closure that takes a builder and the DNS name of the MySQL server as arguments and returns the CATALINA_OPTS used to launch Tomcat. Its primary purpose is to configure the web application(s) to use the correct database server.

It then creates a cluster with that specification.

We then start the cluster:

    cluster.start()

At this point EC2Deploy will:

  1. Launch the EC2 instances running my appliance AMI.
  2. Initialize the MySql master database
  3. Create the MySql slave
  4. Create the database schema and the users
  5. Run any DML scripts (these are cached on S3 in a bucket called “tmp–dml” for the reasons described next)
  6. Upload the web applications to Amazon S3 (Simple Storage Service) where they are cached in order to avoid time consuming uploads (over slow DSL connections, for example). EC2Deploy only uploads new and changed files, which means that the bulky 3rd party libraries are only uploaded once. Each web application is stored in an S3 bucket called -tmp-war. If this bucket does not exist you will see some warning messages and the bucket will be created.
  7. Deploy the web applications on each of the Tomcat servers
  8. Configure Apache to load balance across the Tomcat servers

Once the cluster is started we can run a JMeter load test:

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx", [1, 2])

The first argument specifies the test to run and the second argument is a list of JMeter thread counts. In this example, EC2deploy first runs the load test with one thread and then two threads. For each test run, it generates a report describing CPU utilization for each machine, average response time and throughput.

Finally, we stop the EC2 instances:

cluster.stop()

As you can see, EC2Deploy makes it pretty easy to deploy and test your enterprise Java application. I've used it to clone a production environment and run load tests. NOTE 1/28/08: The source code for EC2Deploy, along with a very cool Maven plugin, is now available!


Amazon EC2 key pairs and other stumbling blocks – Guide


While working with Cloud Tools and Cloud Foundry users, I have noticed that EC2 key pairs and security group configuration are common stumbling blocks for people who are new to Amazon EC2. When you sign up for an AWS account you get what can be, at first, a confusing set of credentials: an access key id, a secret access key, an X509 certificate and a corresponding private key. You authenticate an AWS request using either the access key id and secret access key or the X509 certificate and private key. Some APIs and tools support both options, whereas others support just one. And, to make matters worse, to launch an EC2 instance and access it via SSH you must use a (named) EC2 key pair. This EC2 key pair is not the same as the X509 certificate/private key given to you by AWS during sign up. But they are easily confused since they both consist of private and public keys.

You create an EC2 key pair by using one of the AWS tools: the command line tools, the ElasticFox plugin, or the rather nice AWS console. Under the covers these tools make an AWS request to create the key pair.

Here is a screenshot of the AWS Console showing how you create a key pair.

Creating a Key Pair

There are three steps:

  1. Select Key Pairs
  2. Click  Create Key Pair
  3. Enter the name of the Key Pair you want to create – you choose the name

The console will then create the key pair and prompt you to save the private key.

Saving a key pair

You specify the key pair name in the AWS request that launches the instances and specify the private key file as the -i argument to ssh when connecting to the instance. Just make sure you save the key pair in a safe place.
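
With the classic EC2 API command line tools, for example, the flow looks roughly like this (the AMI id, key pair name, and hostname are placeholders):

ec2-run-instances ami-12345678 -k my-keypair                               # launch using the named key pair
ssh -i ~/.ssh/my-keypair.pem root@ec2-67-202-0-0.compute-1.amazonaws.com   # connect with the saved private key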

Another stumbling block is that you need to enable SSH in the AWS firewall. Both Cloud Tools and Cloud Foundry use SSH to configure the instances and deploy the application. If SSH is blocked then they won’t work. Fortunately, the AWS firewall (a.k.a. security groups) is extremely easy to configure using the AWS tools – command line tools, ElasticFox plugin or the nice AWS console – by editing the default security group to allow SSH traffic.
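
With the same command line tools, opening SSH in the default security group is a one-liner (a sketch; the AWS console and ElasticFox offer the equivalent):

ec2-authorize default -p 22    # allow inbound TCP port 22 (SSH) on the default group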

The good news is that these are relatively minor hurdles to overcome. Once you have sorted out your EC2 key pair and edited the security groups to enable SSH, using Cloud Tools or Cloud Foundry to deploy your web application is very easy.
