
Provision an AWS EC2 VM Using Chef | Step by Step Guide | AWS EC2 VM Tutorial


Step 1: Install chefdk

Step 2: Set up your AWS credentials

Set up your knife configuration (knife.rb); a minimal sketch follows the environment variables below.

Make sure the following are set and exported in your environment:

 

AWS_ACCESS_KEY_ID=secrets
AWS_SECRET_ACCESS_KEY=secrets
AWS_DEFAULT_REGION=us-east-1
AWS_SSH_KEY=your_ssh_key_name
AWS_ACCESS_KEY=secrets
AWS_SECRET_KEY=secrets
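
For completeness, here is a minimal knife.rb sketch for the knife-config step. The node name, organization URL, and key path are placeholders, and the knife[:aws_*] settings assume the knife-ec2 plugin is installed; adjust them to your own Chef server setup.

current_dir = File.dirname(__FILE__)

log_level                :info
log_location             STDOUT
node_name                "your_username"                                # placeholder
client_key               "#{current_dir}/your_username.pem"             # placeholder
chef_server_url          "https://api.chef.io/organizations/your_org"   # placeholder
cookbook_path            ["#{current_dir}/../cookbooks"]

# Reuse the AWS environment variables exported above (read by knife-ec2).
knife[:aws_access_key_id]     = ENV['AWS_ACCESS_KEY_ID']
knife[:aws_secret_access_key] = ENV['AWS_SECRET_ACCESS_KEY']
knife[:region]                = ENV['AWS_DEFAULT_REGION']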

 

Step 3: Generate a new repository using the chef generate command

> chef generate repo chefdk-provision-demo
> cd chefdk-provision-demo

Step 4: Generate a provision cookbook. This is the required name, and it must be in the current directory.
> chef generate cookbook provision

Step 5: Edit the default recipe, $EDITOR provision/recipes/default.rb, with the following code:

context = ChefDK::ProvisioningData.context

with_driver 'aws::us-west-2'

options = {
  ssh_username: 'admin',
  use_private_ip_for_ssh: false,
  bootstrap_options: {
    key_name: 'jtimberman',
    image_id: 'ami-0d5b6c3d',
    instance_type: 'm3.medium',
  },
  convergence_options: context.convergence_options,
}

machine context.node_name do
  machine_options options
  action context.action
  converge true
end

Understand the code:
> To break this down, first we get the ChefDK provisioning context that will pass options in to chef-provisioning.
> Then we tell chef-provisioning to use the AWS driver in the us-west-2 region.
> The options hash is used to set up the instance.
> We’re using Debian 8, which uses the admin user to log in, an SSH key that exists in the AWS region, the actual AMI, and finally the instance type.
> Then we set the convergence options automatically from ChefDK. This is the important part that ensures the node has the right run list.

Step 6: Generate a Policyfile.rb and edit its content, $EDITOR Policyfile.rb.
> chef generate policyfile
> vi Policyfile.rb

name            "chefdk-provision-demo"
default_source  :community
run_list        "recipe[libuuid-user]"
cookbook        "libuuid-user"

Here we’re simply getting the libuuid-user cookbook from Supermarket and applying the default recipe to the nodes that have this policy.

Step 7: The next step is to install the Policyfile. This generates the Policyfile.lock.json, and downloads the cookbooks to the cache, ~/.chefdk/cache/cookbooks. If this isn’t run, chef will complain, with a reminder to run it.

> chef install

Step 8: Finally, we can provision a testing system with this policy:

> chef provision testing --sync -n debian-libuuid

Reference:
http://jtimberman.housepub.org/blog/2015/05/15/quick-tip-chefdk-provision/


Powerful New Amazon EC2 Boot Features – Introduction


Today a powerful new feature is available for our Amazon EC2 customers: the ability to boot their instances from Amazon EBS (Elastic Block Store).

Customers like the simplicity of the AMI (Amazon Machine Image) model where they either choose a preconfigured AMI or upload their own AMI into Amazon S3. A wide variety of operating systems and software configurations is available for use. But customers have also asked us for more flexibility and control in the way that Amazon EC2 instances are booted, for example finer-grained control over which software configurations and data sets are available to the instance at boot time.


The ability to boot from Amazon EBS gives customers very powerful control over the boot configuration of their Amazon EC2 instances. In the traditional boot process, the root partition of the image is the local disk, which is created and populated at boot time. In the new Amazon EBS boot process, the root partition is an Amazon EBS volume, which is created at boot time from an Amazon EBS snapshot. Other Amazon EBS volumes beyond the root disk can also be made part of the instance before it is booted. This allows for very fine-grained control of software and data configuration. An additional advantage of using the Amazon EBS boot process is that root partitions are no longer constrained by the size of the local disk and can be up to 1TB in size. And the new boot process is significantly faster because a local disk no longer needs to be populated.

With this new boot process another powerful feature is available to our Amazon EC2 customers: the ability to stop an instance and restart it at a later time with the disk configuration intact. When an instance is restarted, the customer can choose to use a different instance type (e.g., with more memory or CPU), a different operating system (e.g., with new security patches installed), or add new user data. While the instance is stopped it does not accrue any usage hours and customers are only charged for the storage associated with the Amazon EBS volume. The ability to stop and restart an instance is a very powerful mechanism that makes management of instances much easier; many scenarios related to adaptive instance sizing and software management have now become much simpler.
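To make the two features above concrete, here is a rough sketch using the AWS SDK for Ruby (aws-sdk-ec2). The AMI ID, key pair name, and instance types are placeholders, and the SDK calls are a present-day assumption rather than anything from the original announcement: launching from an EBS-backed AMI with an extra EBS data volume, then stopping the instance, changing its type, and restarting it with its disks intact.

require 'aws-sdk-ec2'   # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Launch from an EBS-backed AMI (placeholder ID): the root partition is an
# EBS volume created from the AMI's snapshot, and the block device mapping
# attaches an extra 200 GiB EBS data volume at boot time.
resp = ec2.run_instances(
  image_id:      'ami-0123456789abcdef0',   # placeholder EBS-backed AMI
  instance_type: 'm3.medium',
  key_name:      'my-ec2-keypair',          # placeholder key pair
  min_count:     1,
  max_count:     1,
  block_device_mappings: [
    { device_name: '/dev/sdf', ebs: { volume_size: 200, delete_on_termination: false } }
  ]
)
instance_id = resp.instances.first.instance_id
ec2.wait_until(:instance_running, instance_ids: [instance_id])

# Stop the instance: the EBS volumes and their data are preserved, and only
# the EBS storage is billed while it is stopped.
ec2.stop_instances(instance_ids: [instance_id])
ec2.wait_until(:instance_stopped, instance_ids: [instance_id])

# While stopped, switch to a bigger instance type, then restart with the
# same disk configuration intact.
ec2.modify_instance_attribute(instance_id: instance_id,
                              instance_type: { value: 'm3.large' })
ec2.start_instances(instance_ids: [instance_id])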

The new boot from Amazon EBS feature is an important step in our continuing quest to remove more and more of the heavy lifting that comes with today’s computer environments.


How to Run/Deploy Java EE applications on Amazon EC2?


Running Java EE applications on Amazon EC2: deploying to 20 machines with no money down

Computer hardware has traditionally been a scarce, expensive resource. In the early days of computing, developers had to share a single machine. Today each developer usually has their own machine, but it’s rare for a developer to have more than one. This means that running performance tests often involves scavenging for machines. Likewise, replicating even just part of a production environment is a major undertaking. With Amazon’s Elastic Compute Cloud (EC2), however, things are very different. A set of Linux servers is now just a web service call away. Depending on the type of server, you simply pay 10 to 80 cents per server per hour, for up to 20 servers! No more upfront costs or waiting for machines to be purchased and configured.

To make it easier for enterprise Java developers to use EC2, I have created EC2Deploy.  It’s a Groovy framework for deploying an enterprise Java application on a set of Amazon EC2 servers. EC2Deploy provides a simple, easy to use API for launching a set of EC2 instances; configuring MySQL, Apache and one or more Tomcat servers; and deploying one or more web applications. In addition, it can also run JMeter and collect performance metrics.

Here is an example script that launches some EC2 instances; configures MySQL with one slave, Tomcat and Apache; deploys a single web application on the Tomcat server; and runs a JMeter test with first one thread and then two.

class ClusterTest extends GroovyTestCase {
  void testSomething() {
    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")

    def ec2 = new EC2(awsProperties)

    def explodedWar = '…/projecttrack/webapp/target/ptrack'

    ClusterSpec clusterSpec =
       new ClusterSpec()
            .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
            .slaves(1)
            .tomcats(1)
            .webApp(explodedWar, "ptrack")
            .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

    SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

    cluster.start()

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx",
        [1, 2])

    cluster.stop()
  }
}

Let’s look at each of the pieces.

First, we need to configure the framework as follows:

    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")
    def ec2 = new EC2(awsProperties)

The aws.properties file contains various properties including the AWS security credentials and the EC2 AMI (i.e. OS image) to launch. All servers use my EC2 appliance AMI that has Java, MySQL, Apache, Tomcat, JMeter and some other useful tools pre-installed.

Next we need to configure the servers:

     ClusterSpec clusterSpec =
        new ClusterSpec()
             .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
             .slaves(1)
             .tomcats(1)
             .webApp(explodedWar, "ptrack")
             .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

     SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

This code first creates a ClusterSpec, which defines the configuration of the machines and the applications:

  • schema() – specifies the name of the database schema to create, the names of the users and their passwords, and the DML scripts to execute once the database has been created
  • slaves() – specifies how many MySQL slaves to create
  • tomcats() – specifies how many Tomcat servers to run
  • webApp() – configures a web application. This method takes two parameters: the path to the exploded WAR directory (conveniently created by Maven) and the context to deploy the web application under.
  • catalinaOptsBuilder() – supplies a closure that takes a builder and the DNS name of the MySQL server as arguments and returns the CATALINA_OPTS used to launch Tomcat. Its primary purpose is to configure the web application(s) to use the correct database server.

It then creates a cluster with that specification.

We then start the cluster:

    cluster.start()

At this point EC2Deploy will:

  1. Launch the EC2 instances running my appliance AMI.
  2. Initialize the MySQL master database
  3. Create the MySQL slave
  4. Create the database schema and the users
  5. Run any DML scripts (these are cached on S3 in a bucket called “tmp–dml” for the reasons described next)
  6. Upload the web applications to Amazon S3 (Simple Storage Service) where they are cached in order to avoid time consuming uploads (over slow DSL connections, for example). EC2Deploy only uploads new and changed files, which means that the bulky 3rd party libraries are only uploaded once. Each web application is stored in an S3 bucket called -tmp-war. If this bucket does not exist you will see some warning messages and the bucket will be created.
  7. Deploy the web applications on each of the Tomcat servers
  8. Configure Apache to load balance across the Tomcat servers

Once the cluster is started we can run a JMeter load test:

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx", [1, 2])

The first argument specifies the test to run and the second argument is a list of JMeter thread counts. In this example, EC2Deploy first runs the load test with one thread and then two threads. For each test run, it generates a report describing CPU utilization for each machine, average response time and throughput.

Finally, we stop the EC2 instances:

cluster.stop()

As you can see, EC2Deploy makes it pretty easy to deploy and test your enterprise Java application. I’ve used it to clone a production environment and run load tests. NOTE 1/28/08: The source code for EC2Deploy, along with a very cool Maven plugin, is now available!


EC2Deploy and the Cloud Tools Maven plugin are now available


I’m pleased to announce that EC2Deploy – a Groovy-based framework for deploying Java EE applications to Amazon EC2 – is now available as part of the Cloud Tools open source project.

There are three main parts to Cloud Tools:

  • The EC2Deploy framework
  • Amazon Machine Images (AMIs) that are configured to run Tomcat and work with EC2Deploy
  • A Maven plugin that uses EC2Deploy to deploy a web application to EC2

I’m especially excited about the Maven plugin. Once you have configured the plugin for your web application you can use the following goals:

  • cloudtools:deploy – launch the EC2 instances and deploy the web application
  • cloudtools:redeploy – redeploy the web application (upload the changes and restart Tomcat)
  • cloudtools:jmeter – run a JMeter test
  • cloudtools:stop – stop the EC2 instances

Cloud Tools is still a work in progress, but it lets you deploy a web application on EC2 in just a few minutes. To learn more, go to Cloud Tools.


Cloud Tools now supports Amazon Elastic Block Store


One of the exciting new features of Amazon EC2 is Elastic Block Store, which provides truly durable storage for your instances. Prior to EBS, the contents of the file system disappeared once an instance was terminated. This meant that if you wanted to run a database server on EC2 you had to use MySQL master-slave replication with frequent backups to Amazon S3. With EBS, running a database on EC2 is a lot easier. You can simply create an EBS volume, attach it to an instance, and create a filesystem that gives you long-lived disk storage for your database. Moreover, you can easily back up an EBS volume by creating a snapshot (stored in S3). And, if you ever need to restore your data, you can create a volume from a snapshot.
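As a rough illustration of that volume/snapshot workflow, here is a sketch using the AWS SDK for Ruby (aws-sdk-ec2); the region, availability zone, instance ID, and device name are placeholders, and creating the filesystem itself (mkfs/mount) still happens on the instance:

require 'aws-sdk-ec2'   # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# 1. Create a 100 GiB volume in the same availability zone as the instance.
volume = ec2.create_volume(availability_zone: 'us-east-1a', size: 100)
ec2.wait_until(:volume_available, volume_ids: [volume.volume_id])

# 2. Attach it to the instance; the database's data directory lives on this device.
ec2.attach_volume(volume_id:   volume.volume_id,
                  instance_id: 'i-0123456789abcdef0',   # placeholder
                  device:      '/dev/sdf')

# 3. Back the volume up by taking a snapshot (stored in S3)...
snapshot = ec2.create_snapshot(volume_id: volume.volume_id,
                               description: 'nightly database backup')

# 4. ...and, if you ever need to restore, create a new volume from that snapshot.
restored = ec2.create_volume(availability_zone: 'us-east-1a',
                             snapshot_id: snapshot.snapshot_id)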

Cloud Tools now supports Amazon EBS. You can launch an application with a database stored on a brand new volume, on an existing volume, or on a volume created from a snapshot. You can also convert an already running application to use Elastic Block Store. Finally, you can create an EBS snapshot of the database. Currently, only the Maven plugin supports this functionality but I plan to update the Grails plugin shortly.

Please check out the project’s home page for more information and send me feedback.


Amazon EC2 key pairs and other stumbling blocks – Guide


While working with Cloud Tools and Cloud Foundry users, I have noticed that EC2 key pairs and security group configuration are common stumbling blocks for people who are new to Amazon EC2. When you sign up for an AWS account you get what can be, at first, a confusing set of credentials: an access key id, a secret access key, an X509 certificate and a corresponding private key. You authenticate an AWS request using either the access key id and secret access key or the X509 certificate and private key. Some APIs and tools support both options, whereas others support just one. And, to make matters worse, to launch an EC2 instance and access it via SSH you must use a (named) EC2 key pair. This EC2 key pair is not the same as the X509 certificate/private key given to you by AWS during sign up. But they are easily confused since they both consist of private and public keys.

You create an EC2 key pair by using one of the AWS tools: the command line tools, the ElasticFox plugin, or the rather nice AWS console. Under the covers these tools make an AWS request to create the key pair.

Here is a screenshot of the AWS Console showing how you create a key pair.

Creating a Key Pair

There are three steps:

  1. Select Key Pairs
  2. Click  Create Key Pair
  3. Enter the name of the Key Pair you want to create – you choose the name

The console will then create the key pair and prompt you to save the private key.

Saving a key pair

You specify the key pair name in the AWS request that launches the instances and specify the private key file as the -i argument to ssh when connecting to the instance. Just make sure you save the key pair in a safe place.

Another stumbling block is that you need to enable SSH in the AWS firewall. Both Cloud Tools and Cloud Foundry use SSH to configure the instances and deploy the application. If SSH is blocked then they won’t work. Fortunately, the AWS firewall (a.k.a. security groups) is extremely easy to configure using the AWS tools – command line tools, ElasticFox plugin or the nice AWS console – by editing the default security group to allow SSH traffic.
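For illustration, both steps (creating a named EC2 key pair and opening SSH in the default security group) can also be scripted; the sketch below uses the present-day AWS SDK for Ruby (aws-sdk-ec2), which is an assumption on my part, and the key pair name and CIDR range are placeholders:

require 'aws-sdk-ec2'   # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Create a named EC2 key pair; note this is separate from the X509
# certificate/private key that AWS gives you when you sign up.
key = ec2.create_key_pair(key_name: 'my-ec2-keypair')   # placeholder name
File.write('my-ec2-keypair.pem', key.key_material)      # save the private key somewhere safe
File.chmod(0600, 'my-ec2-keypair.pem')                  # ssh insists on restrictive permissions

# Allow inbound SSH (port 22) in the default security group so that
# Cloud Tools / Cloud Foundry can reach the instances.
ec2.authorize_security_group_ingress(
  group_name: 'default',
  ip_permissions: [{
    ip_protocol: 'tcp',
    from_port:   22,
    to_port:     22,
    ip_ranges:   [{ cidr_ip: '0.0.0.0/0' }]   # tighten this to your own address range in practice
  }]
)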

The good news is that these are relatively minor hurdles to overcome. Once you have sorted out your EC2 key pair and edited the security groups to enable SSH, using Cloud Tools or Cloud Foundry to deploy your web application is very easy.


Deployment Foundation Issues


Establish Key Roles/Charter for Deployment

The very first order of business is to firmly establish “who’s on first” for getting deployment done. Senior management is crucial at this point for making sure all their direct reports and managers are on board with this
and that it comes from the top. I mention this because at one place I worked, we immediately got into interdepartment squabbling due to a lack of senior management support and direction. If you hear a manager
say things like “do what you want — but don’t touch my area,” you will have deployment problems. I strongly recommend the formation of a process group as the focal point for all matters related to process and process deployment. This group has to have both the authorization and responsibility for process. If you have a distributed set of “process owners,” consolidate that responsibility and authority to this new group. My requirements for membership in this process group are:

  • Six to eight people. Larger process groups tend to be less efficient and more cumbersome. A smaller group tends to be ineffective. It is not necessary to have representatives from all corners of your organization. It is important that these domain experts get called in as necessary for process development and inspection. One company had a 15-person process group established by a non–process-oriented vice president. It was a disaster to get a repeatable quorum present for any meeting. We spent subsequent meetings repeating stuff from earlier meetings to accommodate a different set of participants at every meeting.

  • Process-group commitments. My most successful process group was when I insisted that members commit 5 percent of their workweek to process-group meetings. Group members and their managers had to sign the commitment. The 5 percent figure is doable — even for busy people. Two one-hour meetings per week reflect that percentage. I also had fixed time meetings both by time and day of week. It became automatic to show up. To make this really work, I was the process-group lead and I dedicated 100 percent to this effort. I had clerical support services available to me. The most effective process-group meetings are concentrated sessions with a time-stamped agenda and where my support staff and I do all extracurricular activities. You want to restrict extra time (beyond actual process-group meeting time) needed by your key process participants because they tend to be super busy.

  • Showing up on time. We could not tolerate people wandering in five or ten minutes late. We started promptly on the hour and stopped promptly on the hour. At one company, I removed a person for being late because it held everyone up. Promptness became so important at one commercial company that other process-group members would be “all over” tardy people. The tardiness stopped quickly when peers got involved in any discipline.

  • People who are process oriented. Do not have people in this group who don’t fit this requirement! At one company, a vice president insisted on naming people to the group (which became double the size I had wanted) who were almost completely ignorant about process. We spent almost all our precious process-group time just getting these people to understand the most fundamental aspects of process. It was painful. The VP wondered why progress was slow. Duh!

  • People who are opinionated — i.e., not afraid to speak up on issues. You cannot afford to have people just show up and suck air out of the room and not participate. The best processes I’ve developed came from sessions where it was not clear who would walk out alive after spirited process discussions.

  • People that others look up to. They may be leads or workers. Every organization has these types of people and they may not be in the management ranks. The reason for this requirement is to form an initial set of process champions right out of the box. These initial process champions will develop more champions.

  • People who are willing to have an enterprise perspective versus an organizational perspective. This could be a huge problem if process-group discussions degenerate into preservation of turf — no matter what. At one place, I actually went to a paint store, bought disposable painting hats, placed a big “E” for enterprise on the hats, and made process-group members wear the hats at our meetings to reinforce that enterprise focus. It got a few laughs and some grumbles but it worked.

  • People who are not “who” oriented. A process group avoids the “who” question and concentrates on the “whats.” Once the “what you have to do” is addressed, the “who” looks after itself. When process-group meetings degenerated into discussing “who does this” and “who does that,” I routinely stopped the meeting and reminded everyone that when you have a hole in the bottom of the boat, this is not the time to discuss whose hole it is! I got laughs but my point was taken.

This is your key group for process development and deployment. It’s obvious, but if you have this marvelous group put together without regard to an overall process architectural goal, you will fail. This is where this software process model will help you enormously. Ideally, the process-group lead has an in-depth knowledge of the targeted process architecture with an initial goal to get the process group up to speed on this aspect first — before any company processes are tackled. If you are under pressure to “just get on with it” (without getting all process members up on the target process architecture), you will fail. You will end up flailing around for a large amount of time. You will also end up with a hodgepodge of process elements and no encompassing architecture. You want to end up with a hierarchy of goals supported by tasks that are measurable for earned value and progress reporting by the process group itself. Essentially, you want to create a balanced scorecard for process progress. This makes your process group accountable for progress just like any other project team.
For deployment success, I will repeat an important division of labor within the process group itself. You absolutely need to develop advocates for the process framework architecture itself and make sure the integrity
of the process model is maintained. This book will be invaluable for that aspect. These people are very different from most process-group members, who should be domain experts. The process framework advocates are
the folks that put the “meat on the bone” for process and they will make sure that the process parts all fit within that framework architecture, whereas the domain folks make sure to develop process elements that are useful and make sense.

I make this point because uneducated management personnel may pressure you to “just get on with it” without considering the importance of making sure that all process elements fit within a framework architecture.
The worst thing you can do is crank out process into an ever larger pile of stuff that increasingly gets more and more useless for the organization. The main litmus test for process is that it is useful. I have run into
managers who seem to think that bigger piles mean success. In reality, you may have just the opposite result. Resist those who are pushing you in that direction for success. The most successful process group I led was when I was not only the lead but also the process architect and had management backing to do what was needed. I mention management backing because at another place, I had the exact same situation but had a boss who was so insecure that all my suggestions and recommendations were either ignored or rejected because they didn’t come from him! Anything from me was dead on arrival. If you’re ever in that position, run, don’t walk! You cannot succeed. There are people like that out there and (sadly) some are in senior management positions. I simply didn’t want to manipulate him to have him believe that all ideas were his ideas. That’s what it would take
to deal with this kind of person.

Ensure an Inspection Procedure Is in Place

When actually doing process deployment for the software process model, there is one how-to procedure that absolutely needs to be addressed early on: the inspection procedure. This particular procedure is fundamental to
all the activities within this software process model as a quality gate. If you have a lousy how-to procedure here, you will have an awful time in getting people to buy into this model. Conversely, a good how-to will
take off like wildfire and become engrained in an organization real fast. The software process model wants quality built in the “what you have to do” world by placing the quality responsibility on the producer’s back.
The inspection procedure is critical to this end goal. I worked at one place that had a “review” procedure in place. It was hardly used, did not work well, and the management protected it with
their lives. I had the gall to suggest a better way of doing things. I had to present this new way at three different hearings to this management group, finally receiving a disposition of “rejected.” They could not handle
the fact that this software process model allows for better mousetraps. Both methods could coexist in this model. I knew that once the better way was an option, the bad way would drop off for usage very naturally.
These managers had a personal and vested interest in preserving the status

quo — regardless of usefulness. They had invested time in the existing process element. They wanted no interlopers on their possessive world. This company was very closed in their thinking. Consequently, we had
no effective inspection procedure at this company and had a huge management barrier to ever getting a better way proposed or deployed. This same company has the same ineffectual review procedure in place
today that is really bad and is barely used. Go figure! In another job, I had the privilege of working for a section of a very large company and had incredible support from the head person. In that
environment, I was able to provide this part of the company with a slick, efficient, Web-based inspection procedure that was up to ten times faster than the existing inspection procedure. My new inspection procedure also
produced higher-quality inspections and had built-in defect prevention to boot. What happened was incredible. The word spread like wildfire within my own group about how great this procedure was. That worker enthusiasm spilled over to other organizational elements that clamored to get onboard with our solution. I was deluged with training requests and guest appearances to various “all-hands” meetings regarding this way of doing
things. I didn’t have to do a thing to sell this. It sold itself. I knew that the software process model approach encourages better ways of doing things and encourages variances in scale or location quite naturally.

Why is the inspection procedure so critical to this software process model?

  • Every activity at the “what you need to do” level has built-in inspections across the board (i.e., the inspection procedure is a how-to elaboration on all the “Inspect” verbs in all activities).
  • A bad inspection procedure can have a huge detrimental effect on all activities’ elapsed completion times. Conversely, an efficient inspection procedure can vastly improve activity execution times across the board.
  • A good inspection procedure increases work product quality and reduces rework. Rework is expensive and should be avoided at all costs.
  • A good inspection procedure gives you the basis for defect prevention — in addition to defect detection. With the software process model, you now have the ability to ask, “Where should this defect have been found?” This provides the mechanism to improve any earlier inspection checklist associated with any earlier work product. With this inspection procedure you have a built-in process-improvement mechanism in this software process model.
  • Finally, an efficient inspection procedure will be used and will become part of the company culture. A bad one will not be used.

Get at Pain Issues

To be successful with process deployment, you really want to keep coming back to pain issues for any organization. The big question is, how do you do that? And how do you do it so that the data is believable? This
is independent of the type of process model you’re using. You will achieve higher levels of buy-in from all levels of the company if the perception is that you’re solving real-world problems. If you separate
process initiatives from “pain” issues, you will get a lot of cold shoulders about this process stuff. An absolute killer is to tie process initiatives to a maturity model (like CMMI) in a vacuum. As I mentioned before, a
particular model or standard can be viewed as the flavor of the month. Some people may view all this with an “if I keep a low profile, this too shall pass” attitude. There’s nothing like solving real problems — especially
if people can reduce their 60-hour weeks to something more reasonable. I learned one big lesson when I got married — don’t discount the power of a spouse! As Dr. Phil has said repeatedly, “If Mom’s not happy, no one
is happy.” For most employees, you really have a shadow employee to deal with as well — the employee’s spouse. If the employee can get home earlier, play with the kids more, do family things more, etc., how
do you think that family unit is going to support you? Do you think you’ll get early support for your next process initiative? The people part of process improvement can be enormous as a huge positive factor or a
huge negative factor. The process group needs to come to grips with this aspect of deploying new processes in an organization. It is not enough to have a marvelous process framework architecture into which all the
process parts fit nicely. Personal interviews have mixed results for actually getting at pain issues. Can you be trusted as an interviewer? Will the person being interviewed be forthright or will he or she give you politically correct data? Will there be retribution if he or she dares to be totally honest? For these reasons, I would not get process problem data this way. Two companies where I worked tried the survey route. In my opinion,
surveys are best suited for getting simple check-off answers to specific questions. They are not suitable for open-ended responses. I still laugh at a British sitcom called “Yes, Prime Minister,” where you can organize
sets of questions and get a totally opposing poll result based on the question set — even by surveying the same people. My point here is that polls and surveys can be manipulated. Busy people tend to kick and
scream about surveys and certainly want to get them off their plates as fast as possible. This means that open-ended surveys don’t end up with a lot of useful data. For these reasons, surveys are not the way to go.
As an adjunct for getting at pain issues, always leave the door open for having process practitioners critique or suggest things directly or via

An Implementation Technique for Getting at Pain Issues

I have used two of the 7 M tools (modified somewhat) very successfully to get at both enterprise process pain issues and project pain issues (as a project postmortem). These two techniques have fancy names:

  • Infinity brainstorming
  • Interrelational digraphs

I don’t use these terms when I conduct these techniques — I just call them “focus groups,” “action groups,” or “postmortem.” Using fancy terms will turn people off. Don’t do it. A focus group is fast (it usually takes
less than two hours) and is totally anonymous (no retribution). This particular technique levels the playing field for quiet, introverted people versus loud, dominant people. That quiet, shy person may be the very
person with a lot to express anonymously. The most successful focus group in my experience was done with about 35 people in a single session of about an hour and a half. At this point, you’re probably thinking it’s impossible to have a successful session with 35 people. Conventional wisdom says the success of any meeting is inversely proportional to the number of attendees: more people, less success; fewer people, more success. This technique is just the opposite. You need at least 12 people to be successful. A small group simply won’t work for this technique.

Here are the supplies needed to conduct these sessions:

  • Large Post-it notes — enough for about 20 Post-its minimum per participant.
  • Butcher paper or flip-chart paper — these are taped to three walls of the conference room. Four or five charts are taped to one wall. Five to six charts are taped to the opposite wall. One chart is taped on a third wall (for infinity brainstorming rules). One chart will be used to capture the major impact analysis after we collect the data from the infinity brainstorming part of the session. The size of the room will affect how many walls are actually used. No matter what, you need two walls for charts.
  • Masking tape for the large paper sheets above.
  • Fine-point felt pens — enough for participants and facilitator.

You need a large conference room that will hold all the participants and has wall space onto which you can tape large paper charts on three walls. Reserve this room for about two and a half to three hours to allow time for the facilitator to set up, for the actual session, and for wrapping up. The participants show up about half an hour after the room’s reserved start time. At that point, all supplies should be out and the paper should be up on the walls. This is what you need to do ahead of time:
  • Write down the session rules on a single chart. The rules are:
    – One finding per Post-it
    – You can write as many Post-its as you want within the allotted time
    – Use only the supplied fine-point felt pen for writing
    – No handwriting — print your finding
    – No names (i.e., anonymous)
    – Don’t get personal — it’s process related
    – Be businesslike (not crude) in your remarks
    – Make the finding clear as to your intent: can another person understand your point?
    – Be quiet when writing findings

  • Take a few minutes to explain what you will be doing to the assembled group. Make sure the group knows about your expectations and desired results. I have even put this in written form and sent it to the group ahead of time to make sure that everyone is onboard with this technique. This sets the foundation. (5 minutes maximum)

  • Announce that participants are to write one finding per Post-it note on as many Post-it notes as they want — within a ten-minute time frame. This is a totally quiet part of the technique. After writing, participants take their individual Post-its and stick them onto one wall’s paper charts. Random placement is in order. This part actually creates all the pain issues as experienced by the participants in a nonretributional way because no names are used. (10 minutes maximum)

  • Explain that the findings should be placed into “like” groupings by placing Post-its from one wall into Post-it groupings on another wall. Like things should be clustered together; some adjustments may need to be made later. Also point out that there is a predetermined category called “orphans.” (When conducting a project postmortem, I add a “good” category for the things we did right on a project.) Forget trying to establish any category names. (About 1 minute)

  • Have everyone stand up, grab a pile of Post-its from one wall, and place them on another wall as Post-it clusters. Remind them that once a finding is placed, it can’t be removed. Some talk among people can happen at this point. If you do this correctly, you will try to limit the category clusters to about 10–12 groups at a maximum. Have orphaned Post-its be placed under “orphans.” (About 10–12 minutes)

  • Identify a “reader” from the group. This individual will read the Post-its to the entire group and possibly rearrange some Post-its. (About 1–2 minutes)

  • Have the reader stand up and read each Post-it finding in each cluster out loud. This accomplishes the following:
    – Everyone gets to hear all the findings.
    – Everyone gets a chance to persuade the reader to remove a Post-it if it is not in a “like” group.
    – Finally, the group establishes a mailbox name for each cluster of Post-its. Keep the name short if possible. (For project postmortems, I found that using the names from one project as predetermined names for subsequent postmortems was helpful for metrics data. However, one group disagreed with this and felt it was stifling to have a set of mostly predetermined names, especially when they disagreed with an earlier group over those names.)

  • The reader repeats this for all Post-it clusters until all cluster groups have category names. During this time frame, some Post-it notes may be moved from one group to another. Finally, an attempt is made to place any and all orphaned Post-it notes into a named category. If not, they stay as orphans. This part takes the findings and attempts to categorize them for the interrelational digraph part of this technique. (15–20 minutes)

  • The moderator takes a large blank matrix and writes all the category names down the left side of the matrix and then writes the same set across the top of the matrix. The moderator shades out where each category intersects with itself. You should end up with a diagonal line of shaded boxes from the top left down to the bottom right in that matrix. This is the foundation for the interrelationship digraph. We want to end up with some idea of what we need to work on first, second, third, etc., to get the biggest bang for the buck in process. (About 2 minutes)

  • The moderator reads each category name down the left side of the matrix, and asks for each, “For this category, what are the other categories that have a major impact on it?” The group participates in identifying other categories that have that major impact. The moderator simply places an “X” across the row for that targeted category. This gets repeated for each category name down the left until done. (10 minutes maximum)

  • The moderator tallies up the number of “X” marks per column and writes the totals at the bottom of each column. This provides a good idea of what categories should be attacked first that have the most impact on other things (see the short sketch after this list). (About 2 minutes)

  • Thank the group for their time and dismiss them.
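As a tiny illustration of that column tally, here is a sketch in Ruby with made-up category names; each row of the matrix records which other categories were marked as having a major impact on that row’s category, and the column totals are what get written at the bottom of the chart:

# Hypothetical categories produced by a focus-group session.
categories = ['Requirements churn', 'Build breakage', 'Late testing', 'Unclear ownership']

# impacts[i][j] = true means category j has a major impact on category i
# (an "X" placed in row i under column j during the session).
impacts = [
  [false, false, false, true ],   # Requirements churn
  [true,  false, false, true ],   # Build breakage
  [true,  true,  false, true ],   # Late testing
  [false, false, false, false],   # Unclear ownership
]

# Tally the "X" marks per column: how many other categories each one impacts.
totals = categories.each_index.map { |j| impacts.count { |row| row[j] } }

# Highest totals first: attack these categories first for the biggest payoff.
categories.zip(totals).sort_by { |_, n| -n }.each do |name, n|
  puts format('%-20s impacts %d categories', name, n)
end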

Is this a perfect technique? No. Is it fast? Yes. Does it get at process pain issues? You bet. By spending about one and a half to two hours on this, you will extract pain issues from everybody. There is no retribution
because there are no names involved. The quiet person can write stuff down anonymously just like the extroverted person can. The inputs come from the very people seeing and suffering from those pain issues.
What I have done after the session is to record all the findings by category into an Excel spreadsheet. This is a great application for counting things and coming up with percentages. This completed spreadsheet gets
sent back to all the participants immediately. I have cautioned this group to keep this data under wraps because it is confidential. The next step is to convene a senior management meeting to go over
the findings and categories. The senior staff needs an understanding of what went on and that this technique gathers data rapidly. As a moderator, take the top three categories in particular and concentrate on those for
this senior management group. This is done to:

  • Acquaint the senior management on pain issues “from the trenches” and in a written form (not sanitized)
  • Identify the top three categories that, if worked, should give the biggest bang for the buck in improving or removing pain issues
  • Have this top-level management group develop an initial plan to attack the top three categories (or a subset of them)

Finally, I arrange for a feedback meeting with all the participants, so that a member of senior staff:

  • Tells participants that management has heard their pain issues
  • Informs participants on the plan to attack pain issues

This feedback meeting can be powerful to all involved. It closes the loop with participants and makes them feel like they have not wasted their time. It involves senior management directly with unsanitized pain issues. They can’t say they didn’t know about this or that. There’s no place to hide. They have to do something about it. It does cause action. When any improvements are made, you will keep going back to these pain issues. You don’t tell the rank and file that you’ve now satisfied the first goal of some part of the CMMI! They will not relate to that at all. Tell them that these processes directly address the pain issues that were established. When regular folks get to see less pain, you will rapidly develop more and more champions to your cause. If upper management sees smoother operations, better quality, smaller time-to-market costs, better repeatability, etc., which all contribute to a healthier bottom line, you will get more champions at that level.

You can do this periodically to see how you’re doing. You can do this as part of a preappraisal drill for process maturity. You can do this as a preaudit drill. The periodic approach will give you some powerful metrics related to pain issues. There’s nothing like solid numbers to show your workforce that you are serious about reducing workforce pain.

Develop a Top-Level Life-Cycle Framework

This may be obvious but you really need to provide that top-level lifecycle framework into which to fit all the process pieces being developed. Without that top-level picture, there is no cohesive way of creating process
elements that “fit” into anything. One vice president I worked for insisted on forming various Process Action Teams (PATs) to get some deployment items done without this in place. I was even ordered to get these groups
going despite my strong objections. The results of this VP’s order were absolute chaos and a huge waste of time. I sure hope none of you will deal with some of the characters I’ve had to endure for process development
and improvement! People like that are out there. Some of them even get promoted! Hopefully, the top-level life cycle has been developed before insertion takes place. You can do a subset top-level life cycle if your initial
deployment efforts only deal with that part of the overall life cycle. For example, if you are attacking proposal-related processes, you can get away with just developing the proposal part of your life cycle. The bottom line is that you absolutely need a framework into which to fit any process elements, so that you develop once and don’t need rework.

With that top-level life cycle laid out with PADs per life-cycle phase, you now have the ability to tie your pain issues to activities and to associative procedures. You also have the ability to tie event-driven procedures to any and all life-cycle phases.

Reference: Defining and Deploying Software Processes
