
Powerful New Amazon EC2 Boot Features – Introduction


Today a powerful new feature is available for our Amazon EC2 customers: the ability to boot their instances from Amazon EBS (Elastic Block Store).

Customers like the simplicity of the AMI (Amazon Machine Image) model, where they either choose a preconfigured AMI or upload their own AMI into Amazon S3. A wide variety of operating systems and software configurations is available for use. But customers have also asked us for more flexibility and control in the way Amazon EC2 instances are booted, such as finer-grained control over which software configurations and data sets are available to the instance at boot time.


The ability to boot from Amazon EBS gives customers very powerful control over the boot configuration of their Amazon EC2 instances. In the traditional boot process, the root partition of the image is the local disk, which is created and populated at boot time. In the new Amazon EBS boot process, the root partition is an Amazon EBS volume, which is created at boot time from an Amazon EBS snapshot. Other Amazon EBS volumes beyond the root disk can also be made part of the instance before it is booted. This allows for very fine-grained control over software and data configuration. An additional advantage of the Amazon EBS boot process is that root partitions are no longer constrained by the size of the local disk and can be up to 1 TB in size. And the new boot process is significantly faster, because a local disk no longer needs to be populated.
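
To make this concrete, here is a minimal Groovy sketch of launching an EBS-booted instance using the AWS SDK for Java (an SDK that postdates this announcement); the AMI id, snapshot id, and volume size are placeholder values, not a prescribed configuration:

    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
    import com.amazonaws.services.ec2.model.*

    def ec2 = AmazonEC2ClientBuilder.defaultClient()

    // Map the root device to an EBS volume created from a snapshot.
    // Unlike a local disk, the root volume can be sized up to 1 TB.
    def rootDevice = new BlockDeviceMapping(
            deviceName: "/dev/sda1",                 // root device of the AMI
            ebs: new EbsBlockDevice(
                    snapshotId: "snap-12345678",     // hypothetical snapshot id
                    volumeSize: 100))                // 100 GB root volume

    def result = ec2.runInstances(new RunInstancesRequest(
            imageId: "ami-12345678",                 // hypothetical EBS-backed AMI
            instanceType: "m1.small",
            minCount: 1, maxCount: 1,
            blockDeviceMappings: [rootDevice]))

    println "Launched: ${result.reservation.instances*.instanceId}"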

With this new boot process, another powerful feature becomes available to our Amazon EC2 customers: the ability to stop an instance and restart it at a later time with the disk configuration intact. When an instance is restarted, the customer can choose to use a different instance type (e.g., with more memory or CPU), use a different operating system (e.g., with new security patches installed), or add new user data. While the instance is stopped it does not accrue any usage hours, and customers are charged only for the storage associated with the Amazon EBS volume. The ability to stop and restart an instance is a very powerful mechanism that makes management of instances much easier; many scenarios related to adaptive instance sizing and software management have now become much simpler.
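
A sketch of that stop/resize/start cycle, under the same assumptions (AWS SDK for Java called from Groovy, placeholder instance id):

    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
    import com.amazonaws.services.ec2.model.*

    def ec2 = AmazonEC2ClientBuilder.defaultClient()
    def instanceId = "i-12345678"   // hypothetical EBS-booted instance

    // Stop the instance: the EBS root volume and its data are preserved,
    // and no usage hours accrue while it is stopped.
    ec2.stopInstances(new StopInstancesRequest(instanceIds: [instanceId]))

    // While stopped, switch to a larger instance type.
    ec2.modifyInstanceAttribute(new ModifyInstanceAttributeRequest(
            instanceId: instanceId,
            instanceType: "m1.large"))

    // Restart with the disk configuration intact.
    ec2.startInstances(new StartInstancesRequest(instanceIds: [instanceId]))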

The new boot from Amazon EBS feature is an important step in our continuing quest to remove more and more of the heavy lifting that comes with today’s computer environments.


How to Run/Deploy Java EE applications on Amazon EC2?


Running Java EE applications on Amazon EC2: deploying to 20 machines with no money down

Computer hardware has traditionally been a scarce, expensive resource. In the early days of computing, developers had to share a single machine. Today each developer usually has their own machine, but it's rare for a developer to have more than one. This means that running performance tests often involves scavenging for machines. Likewise, replicating even just part of a production environment is a major undertaking. With Amazon's Elastic Compute Cloud (EC2), however, things are very different. A set of Linux servers is now just a web service call away. Depending on the server type, you simply pay 10 to 80 cents per server per hour, for up to 20 servers! No more upfront costs or waiting for machines to be purchased and configured.

To make it easier for enterprise Java developers to use EC2, I have created EC2Deploy. It's a Groovy framework for deploying an enterprise Java application on a set of Amazon EC2 servers. EC2Deploy provides a simple, easy-to-use API for launching a set of EC2 instances; configuring MySQL, Apache, and one or more Tomcat servers; and deploying one or more web applications. In addition, it can also run JMeter and collect performance metrics.

Here is an example script that launches some EC2 instances; configures MySQL with one slave, Tomcat and Apache; deploys a single web application on the Tomcat server; and runs a JMeter test with first one thread and then two.

class ClusterTest extends GroovyTestCase {
  void testSomething() {
    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")

    def ec2 = new EC2(awsProperties)

    def explodedWar = '…/projecttrack/webapp/target/ptrack'

    ClusterSpec clusterSpec =
       new ClusterSpec()
            .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
            .slaves(1)
            .tomcats(1)
            .webApp(explodedWar, "ptrack")
            .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

    SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

    cluster.start()

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx",
        [1, 2])

    cluster.stop()
  }
}

Let’s look at each of the pieces.

First, we need to configure the framework as follows:

    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")
    def ec2 = new EC2(awsProperties)

The aws.properties file contains various properties, including the AWS security credentials and the EC2 AMI (i.e. OS image) to launch. All servers use my EC2 appliance AMI, which has Java, MySQL, Apache, Tomcat, JMeter and some other useful tools pre-installed.
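
The exact property names are specific to EC2Deploy, but a file of this sort might look roughly like the following sketch (the keys shown are illustrative, not the framework's actual ones):

    # hypothetical aws.properties - keys are illustrative only
    accessKeyId=AKIA...              # AWS access key id
    secretAccessKey=...              # AWS secret access key
    ami=ami-12345678                 # appliance AMI: Java, MySQL, Apache, Tomcat, JMeter
    keyPair=my-ec2-keypair           # named EC2 key pair used for SSH access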

Next we need to configure the servers:

     ClusterSpec clusterSpec =
        new ClusterSpec()
             .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
             .slaves(1)
             .tomcats(1)
             .webApp(explodedWar, "ptrack")
             .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

     SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

This code first creates a ClusterSpec, which defines the configuration of the machines and the applications:

  • schema() – specifies the name of the database schema to create, the names of the users and their passwords, and the DML scripts to execute once the database has been created.
  • slaves() – specifies how many MySQL slaves to create.
  • tomcats() – specifies how many Tomcat servers to run.
  • webApp() – configures a web application. This method takes two parameters: the path to the exploded WAR directory (conveniently created by Maven) and the context to deploy the web application under.
  • catalinaOptsBuilder() – supplies a closure that takes a builder and the DNS name of the MySQL server as arguments and returns the CATALINA_OPTS used to launch Tomcat. Its primary purpose is to configure the web application(s) to use the correct database server.

It then creates a cluster with that specification.

We then start the cluster:

    cluster.start()

At this point EC2Deploy will:

  1. Launch the EC2 instances running my appliance AMI.
  2. Initialize the MySQL master database.
  3. Create the MySQL slave.
  4. Create the database schema and the users.
  5. Run any DML scripts (these are cached on S3 in a bucket called “tmp-dml” for the reasons described next).
  6. Upload the web applications to Amazon S3 (Simple Storage Service), where they are cached in order to avoid time-consuming uploads (over slow DSL connections, for example). EC2Deploy only uploads new and changed files, which means that the bulky third-party libraries are only uploaded once. Each web application is stored in an S3 bucket called -tmp-war. If this bucket does not exist you will see some warning messages and the bucket will be created.
  7. Deploy the web applications on each of the Tomcat servers.
  8. Configure Apache to load balance across the Tomcat servers.

Once the cluster is started we can run a JMeter load test:

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx", [1, 2])

The first argument specifies the test to run, and the second argument is a list of JMeter thread counts. In this example, EC2Deploy first runs the load test with one thread and then with two threads. For each test run, it generates a report describing the CPU utilization of each machine, the average response time, and the throughput.

Finally, we stop the EC2 instances:

    cluster.stop()

As you can see, EC2Deploy makes it pretty easy to deploy and test your enterprise Java application. I've used it to clone a production environment and run load tests. NOTE 1/28/08: The source code for EC2Deploy, along with a very cool Maven plugin, is now available!


Cloud Tools now supports Amazon Elastic Block Store


One of the exciting new features of Amazon EC2 is Elastic Block Store (EBS), which provides truly durable storage for your instances. Prior to EBS, the contents of the file system disappeared once an instance was terminated. This meant that if you wanted to run a database server on EC2, you had to use MySQL master-slave replication with frequent backups to Amazon S3. With EBS, running a database on EC2 is a lot easier. You can simply create an EBS volume, attach it to an instance, and create a filesystem that gives you long-lived disk storage for your database. Moreover, you can easily back up an EBS volume by creating a snapshot (stored in S3). And, if you ever need to restore your data, you can create a volume from a snapshot.
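
That create/attach/snapshot/restore cycle looks roughly like the following Groovy sketch, using the AWS SDK for Java (a later API than the tools of the time); all ids and the availability zone are placeholders:

    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
    import com.amazonaws.services.ec2.model.*

    def ec2 = AmazonEC2ClientBuilder.defaultClient()

    // Create a volume in the same availability zone as the instance.
    def volume = ec2.createVolume(new CreateVolumeRequest(
            size: 50,                           // size in GB
            availabilityZone: "us-east-1a")).volume

    // Attach it to a running instance; you would then create a filesystem
    // (e.g. with mkfs) and mount it from within the instance.
    ec2.attachVolume(new AttachVolumeRequest(
            volumeId: volume.volumeId,
            instanceId: "i-12345678",           // hypothetical database instance
            device: "/dev/sdf"))

    // Back up the volume as a snapshot (stored in S3)...
    def snapshot = ec2.createSnapshot(new CreateSnapshotRequest(
            volumeId: volume.volumeId,
            description: "database backup")).snapshot

    // ...and, to restore, create a fresh volume from that snapshot.
    ec2.createVolume(new CreateVolumeRequest(
            snapshotId: snapshot.snapshotId,
            availabilityZone: "us-east-1a"))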

Cloud Tools now supports Amazon EBS. You can launch an application with a database stored on a brand new volume, on an existing volume, or on a volume created from a snapshot. You can also convert an already running application to use Elastic Block Store. Finally, you can create an EBS snapshot of the database. Currently, only the Maven plugin supports this functionality, but I plan to update the Grails plugin shortly.

Please check out the project’s home page for more information and send me feedback.


Amazon EC2 key pairs and other stumbling blocks – Guide


While working with Cloud Tools and Cloud Foundry users, I have noticed that EC2 key pairs and security group configuration are common stumbling blocks for people who are new to Amazon EC2. When you sign up for an AWS account you get what can be, at first, a confusing set of credentials: an access key id, a secret access key, an X.509 certificate, and a corresponding private key. You authenticate an AWS request using either the access key id and secret access key or the X.509 certificate and private key. Some APIs and tools support both options, whereas others support just one. And, to make matters worse, to launch an EC2 instance and access it via SSH you must use a (named) EC2 key pair. This EC2 key pair is not the same as the X.509 certificate/private key given to you by AWS during sign-up. But they are easily confused, since both consist of private and public keys.

You create an EC2 key pair by using one of the AWS tools: the command line tools, the ElasticFox plugin, or the rather nice AWS Console. Under the covers these tools make an AWS request to create the key pair.

Here is a screenshot of the AWS Console showing how you create a key pair.

Creating a Key Pair

There are three steps:

  1. Select Key Pairs
  2. Click Create Key Pair
  3. Enter the name of the key pair you want to create – you choose the name

The console will then create the key pair and prompt you to save the private key.
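
The same request can also be made programmatically. Here is a minimal Groovy sketch using the AWS SDK for Java (the key pair name and file path are placeholders):

    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
    import com.amazonaws.services.ec2.model.CreateKeyPairRequest

    def ec2 = AmazonEC2ClientBuilder.defaultClient()

    // Create a named key pair. AWS stores the public key and returns the
    // private key exactly once, so save it immediately.
    def keyPair = ec2.createKeyPair(
            new CreateKeyPairRequest(keyName: "my-ec2-keypair")).keyPair

    new File("my-ec2-keypair.pem").text = keyPair.keyMaterial
    // Then restrict permissions, or ssh will refuse to use the key:
    //   chmod 600 my-ec2-keypair.pem
    //   ssh -i my-ec2-keypair.pem root@<instance-public-dns>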

Saving a key pair

You specify the key pair name in the AWS request that launches the instances, and you specify the private key file as the -i argument to ssh when connecting to the instance. Just make sure you save the private key in a safe place.

Another stumbling block is that you need to enable SSH in the AWS firewall. Both Cloud Tools and Cloud Foundry use SSH to configure the instances and deploy the application. If SSH is blocked then they won’t work. Fortunately, the AWS firewall (a.k.a. security groups) is extremely easy to configure using the AWS tools – command line tools, ElasticFox plugin or the nice AWS console – by editing the default security group to allow SSH traffic.
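
For example, opening port 22 in the default group might look like the following Groovy sketch using the AWS SDK for Java (the wide-open CIDR range is for illustration; consider narrowing it to your own address range):

    import com.amazonaws.services.ec2.AmazonEC2ClientBuilder
    import com.amazonaws.services.ec2.model.AuthorizeSecurityGroupIngressRequest

    def ec2 = AmazonEC2ClientBuilder.defaultClient()

    // Allow inbound SSH (TCP port 22) in the default security group so that
    // Cloud Tools / Cloud Foundry can reach newly launched instances.
    ec2.authorizeSecurityGroupIngress(new AuthorizeSecurityGroupIngressRequest(
            groupName: "default",
            ipProtocol: "tcp",
            fromPort: 22,
            toPort: 22,
            cidrIp: "0.0.0.0/0"))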

The good news is that these are relatively minor hurdles to overcome. Once you have sorted out your EC2 key pair and edited the security groups to enable SSH, deploying your web application with Cloud Tools or Cloud Foundry is very easy.
