
How to Run/Deploy Java EE applications on Amazon EC2?


Running Java EE applications on Amazon EC2: deploying to 20 machines with no money down

Computer hardware has traditionally been a scarce, expensive resource. In the early days of computing, developers had to share a single machine. Today each developer usually has their own machine, but it’s rare for a developer to have more than one. As a result, running performance tests often involves scavenging for machines, and replicating even part of a production environment is a major undertaking. With Amazon’s Elastic Compute Cloud (EC2), however, things are very different: a set of Linux servers is now just a web service call away. Depending on the server type, you pay just 10-80 cents per server per hour for up to 20 servers, with no upfront costs and no waiting for machines to be purchased and configured.

To make it easier for enterprise Java developers to use EC2, I have created EC2Deploy. It’s a Groovy framework for deploying an enterprise Java application on a set of Amazon EC2 servers. EC2Deploy provides a simple, easy-to-use API for launching a set of EC2 instances; configuring MySQL, Apache, and one or more Tomcat servers; and deploying one or more web applications. It can also run JMeter and collect performance metrics.

Here is an example script that launches some EC2 instances; configures MySQL with one slave, Tomcat, and Apache; deploys a single web application on the Tomcat server; and runs a JMeter test, first with one thread and then with two.

class ClusterTest extends GroovyTestCase {
  void testSomething() {
    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")

    def ec2 = new EC2(awsProperties)

    def explodedWar = '…/projecttrack/webapp/target/ptrack'

    ClusterSpec clusterSpec =
       new ClusterSpec()
            .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
            .slaves(1)
            .tomcats(1)
            .webApp(explodedWar, "ptrack")
            .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

    SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

    // Launch the instances, configure MySQL, Tomcat and Apache,
    // and deploy the web application
    cluster.start()

    // Run the JMeter test plan, first with one thread and then with two
    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx",
        [1, 2])

    // Terminate the EC2 instances
    cluster.stop()
  }
}

Let’s look at each of the pieces.

First, we need to configure the framework as follows:

    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")
    def ec2 = new EC2(awsProperties)

The aws.properties file contains various properties, including the Amazon Web Services security credentials and the EC2 AMI (i.e. OS image) to launch. All servers use my EC2 appliance AMI, which has Java, MySQL, Apache, Tomcat, JMeter and some other useful tools pre-installed.
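
As a rough illustration only, the file might look something like the following; the property names shown here are hypothetical placeholders, and the actual keys are defined by EC2Deploy:

    # Hypothetical example – the real property names are defined by EC2Deploy
    accessKeyId=YOUR_AWS_ACCESS_KEY_ID
    secretAccessKey=YOUR_AWS_SECRET_ACCESS_KEY
    ami=ami-xxxxxxxx        # the appliance AMI to launch
    keyName=my-ec2-keypair  # the EC2 key pair used to log in to the instances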

Next we need to configure the servers:

     ClusterSpec clusterSpec =
        new ClusterSpec()
             .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
             .slaves(1)
             .tomcats(1)
             .webApp(explodedWar, "ptrack")
             .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

     SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

This code first creates a ClusterSpec, which defines the configuration of the machines and the applications:

  • schema() – specifies the name of the database schema to create, the names of the users and their passwords, and the DML scripts to execute once the database has been created.
  • slaves() – specifies how many MySQL slaves to create.
  • tomcats() – specifies how many Tomcat servers to run.
  • webApp() – configures a web application. This method takes two parameters: the path to the exploded WAR directory (conveniently created by Maven) and the context to deploy the web application under.
  • catalinaOptsBuilder() – supplies a closure that takes a builder and the DNS name of the MySQL server as arguments and returns the CATALINA_OPTS used to launch Tomcat. Its primary purpose is to configure the web application(s) to use the correct database server.
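
Assuming that arg() contributes a literal JVM argument and prop() contributes a -D system property (an assumption about the builder API based on the example above), the CATALINA_OPTS produced by the closure would look roughly like this, with <database-private-dns-name> standing in for the actual MySQL host:

    CATALINA_OPTS="-Xmx500m \
      -Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=8091 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Dptrack.application.environment=ec2 \
      -Dlog4j.configuration=log4j-minimal.properties \
      -Djdbc.db.server=<database-private-dns-name>"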

The script then creates a cluster with that specification.

We then start the cluster:

    cluster.start()

At this point EC2Deploy will:

  1. Launch the EC2 instances running my appliance AMI.
  2. Initialize the MySQL master database.
  3. Create the MySQL slave.
  4. Create the database schema and the users.
  5. Run any DML scripts (these are cached on S3 in a bucket called “tmp–dml” for the reasons described next).
  6. Upload the web applications to Amazon S3 (Simple Storage Service), where they are cached in order to avoid time-consuming uploads (over slow DSL connections, for example). EC2Deploy only uploads new and changed files, which means that the bulky third-party libraries are only uploaded once. Each web application is stored in an S3 bucket called -tmp-war. If this bucket does not exist, you will see some warning messages and the bucket will be created.
  7. Deploy the web applications on each of the Tomcat servers.
  8. Configure Apache to load balance across the Tomcat servers (an illustrative configuration is sketched after this list).
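
The Apache configuration in step 8 is conceptually similar to a standard mod_proxy_balancer setup like the sketch below. This is only an illustration of the idea, not the configuration that EC2Deploy actually generates, and the Tomcat host names are placeholders:

    # Illustration only – hostnames are placeholders
    <Proxy balancer://tomcats>
        BalancerMember http://tomcat-host-1:8080
        BalancerMember http://tomcat-host-2:8080
    </Proxy>
    ProxyPass        /ptrack balancer://tomcats/ptrack
    ProxyPassReverse /ptrack balancer://tomcats/ptrack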

Once the cluster is started we can run a JMeter load test:

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx", [1, 2])

The first argument specifies the test to run and the second argument is a list of JMeter thread counts. In this example, EC2Deploy first runs the load test with one thread and then with two threads. For each test run, it generates a report describing the CPU utilization of each machine, the average response time and the throughput.
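
Conceptually, each run is similar to invoking JMeter in non-GUI mode with a different thread count, along the lines of the commands below. This is just an illustration; EC2Deploy drives JMeter on the EC2 instances itself, and the -Jthreads property assumes a test plan that reads its thread count from a property (e.g. ${__P(threads,1)}):

    # Illustration only – EC2Deploy runs JMeter for you
    jmeter -n -t SimpleTest.jmx -Jthreads=1 -l results-1-thread.jtl
    jmeter -n -t SimpleTest.jmx -Jthreads=2 -l results-2-threads.jtl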

Finally, we stop the EC2 instances:

    cluster.stop()

As you can see, EC2Deploy makes it pretty easy to deploy and test your enterprise Java application. I’ve used it to clone a production environment and run load tests. NOTE 1/28/08: The source code for EC2Deploy, along with a very cool Maven plugin, is now available!


EC2Deploy and the Cloud Tools Maven plugin are now available


I’m pleased to announce that EC2Deploy – a Groovy-based framework for deploying Java EE applications to Amazon EC2 – is now available as part of the Cloud Tools open source project.

There are three main parts to Cloud Tools:

  • The EC2Deploy framework
  • Amazon Machine Images (AMIs) that are configured to run Tomcat and work with EC2Deploy
  • A Maven plugin that uses EC2Deploy to deploy a web application to EC2

I’m especially excited about the Maven plugin. Once you have configured the plugin for your web application you can use the following goals:

  • cloudtools:deploy – launch the EC2 instances and deploy the web application
  • cloudtools:redeploy – redeploy the web application (upload the changes and restart Tomcat)
  • cloudtools:jmeter – run a JMeter test
  • cloudtools:stop – stop the EC2 instances
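
For example, once the plugin is configured for your project, a typical command-line session might look like this:

    mvn cloudtools:deploy     # launch the EC2 instances and deploy the web application
    mvn cloudtools:jmeter     # run a JMeter load test against the running cluster
    mvn cloudtools:redeploy   # upload code changes and restart Tomcat
    mvn cloudtools:stop       # shut the EC2 instances down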

Cloud Tools is still a work in progress, but it lets you deploy a web application on EC2 in just a few minutes. To learn more, go to Cloud Tools.


Cloud Tools now supports Amazon Elastic Block Store


One of the exciting new features of Amazon EC2 is Elastic Block Store (EBS), which provides truly durable storage for your instances. Prior to EBS, the contents of the file system disappeared once an instance was terminated, which meant that if you wanted to run a database server on EC2 you had to use MySQL master-slave replication with frequent backups to Amazon S3. With EBS, running a database on EC2 is a lot easier: you simply create an EBS volume, attach it to an instance, and create a filesystem that gives you long-lived disk storage for your database. Moreover, you can easily back up an EBS volume by creating a snapshot (stored in S3). And if you ever need to restore your data, you can create a volume from a snapshot.
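
The underlying EC2 operations look roughly like the following, shown here with the EC2 API command-line tools; the volume ID, instance ID, device name and mount point are placeholders:

    # Placeholders: vol-xxxxxxxx, i-xxxxxxxx, /dev/sdh, /var/lib/mysql
    ec2-create-volume --size 10 --availability-zone us-east-1a   # create a 10 GB volume
    ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdh     # attach it to an instance
    mkfs.ext3 /dev/sdh                                           # create a filesystem on it
    mount /dev/sdh /var/lib/mysql                                # mount it, e.g. as the MySQL data directory
    ec2-create-snapshot vol-xxxxxxxx                             # back the volume up to S3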

Cloud Tools now supports Amazon EBS. You can launch an application with a database stored on a brand-new volume, on an existing volume, or on a volume created from a snapshot. You can also convert an already running application to use Elastic Block Store. Finally, you can create an EBS snapshot of the database. Currently, only the Maven plugin supports this functionality, but I plan to update the Grails plugin shortly.

Please check out the project’s home page for more information and send me feedback.
