Amazon EC2 key pairs and other stumbling blocks – Guide


While working with Cloud Tools and Cloud Foundry users, I have noticed that EC2 key pairs and security group configuration are common stumbling blocks for people who are new to Amazon EC2. When you sign up for an AWS account you get what can be, at first, a confusing set of credentials: an access key id, a secret access key, an X.509 certificate and a corresponding private key. You authenticate an AWS request using either the access key id and secret access key or the X.509 certificate and private key. Some APIs and tools support both options, whereas others support just one. And, to make matters worse, to launch an EC2 instance and access it via SSH you must use a (named) EC2 key pair. This EC2 key pair is not the same as the X.509 certificate/private key given to you by AWS during sign up, but the two are easily confused since both consist of a private and a public key.

You create an EC2 key pair by using one of the AWS tools: the command line tools, the ElasticFox plugin or the rather nice AWS Console. Under the covers these tools make an AWS request to create the key pair.

Here is a screenshot of the AWS Console showing how you create a key pair.

Creating a Key Pair

There are three steps:

  1. Select Key Pairs
  2. Click Create Key Pair
  3. Enter the name of the Key Pair you want to create – you choose the name

The console will then create the key pair and prompt you to save the private key.

Saving a key pair

You specify the key pair name in the AWS request that launches the instances and pass the private key file as the -i argument to ssh when connecting to the instance. Just make sure you save the private key file in a safe place.
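
For example, using the EC2 API command line tools the whole sequence looks roughly like this; the key pair name, AMI id and hostname below are illustrative placeholders:

# Create a named key pair; the private key is printed to stdout
ec2-add-keypair my-keypair
# Save the printed private key block (the BEGIN ... END lines) as my-keypair.pem
chmod 600 my-keypair.pem

# Launch an instance that uses the key pair (AMI id is a placeholder)
ec2-run-instances ami-12345678 -k my-keypair

# Once the instance is running, connect with the matching private key
ssh -i my-keypair.pem root@ec2-xx-xx-xx-xx.compute-1.amazonaws.com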

Another stumbling block is that you need to enable SSH in the AWS firewall. Both Cloud Tools and Cloud Foundry use SSH to configure the instances and deploy the application. If SSH is blocked then they won’t work. Fortunately, the AWS firewall (a.k.a. security groups) is extremely easy to configure using the AWS tools – command line tools, ElasticFox plugin or the nice AWS console – by editing the default security group to allow SSH traffic.
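
If you prefer the command line, opening up SSH in the default security group is a one-liner with the EC2 API tools (you can also restrict the source address range rather than opening it to everyone):

# Allow inbound SSH (TCP port 22) in the default security group
ec2-authorize default -p 22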

The good news is that these are relatively minor hurdles to overcome. Once you have sorted out your EC2 key pair and edited the security groups to enable SSH, using Cloud Tools or Cloud Foundry to deploy your web application is very easy.


Cloud Computing Trends | Cloud Adoption Analysis | Organizations


We just finished the first decade of this century/millennium. The early part of this decade saw great worry about the Year 2000 problem. Much gloom and doom was predicted, but things passed off smoothly. No apocalyptic upheaval.

As we usher in the next decade, the biggest buzzword is “Cloud Computing”, a convergence of ASP, SaaS, SOA, virtualization, grid computing, Enterprise 2.0, etc. All these buzzwords have been making the rounds over the past few years. Finally, computing as a “utility” seems practical and doable. Amazon took the lead in introducing AWS (Amazon Web Services) way back in 2003. It then brought in the Storage as a Service concept via S3 (Simple Storage Service). It also introduced EC2 (Elastic Compute Cloud), where Infrastructure as a Service became viable.

I just read a nice summary of this written by M.R. Rangaswamy of the Sand Hill Group. While the momentum is on, MR says large enterprises are going to be slow adopters. Much cloud adoption is in the SMB arena, where lower TCO and capex override any concern for security and scale. Older vendors like IBM will offer a hybrid model – in-house systems plus cloud. This is a no-brainer, as there is a huge legacy of production systems in Fortune 1000 companies running on premises. But “pure cloud” vendors like Google, Amazon, and SalesForce.com will push for a “cloud-only” approach.

Another area of interest is data management, with data volumes never seen before. There is the NoSQL movement to deal with unstructured data, and frameworks like Hadoop, combined with the MapReduce model, are being adopted quickly for fast search and large-scale analysis.

This decade will see a big landscape change in the computing arena – from the model of computing to how we store and manage data for access and analytics.

Welcome to 2010.


Workforce Management Software Helps Call Centers Save Money


One way to increase revenues in your inbound call center might be via workforce management software.

For call centers that realize revenue by answering calls (be they catalogues, reservation centers, what have you), workforce management automation can help reduce queue times and improve service, thereby reducing the number of abandoned calls and increasing revenue calls completed.

These call centers can increase revenues by tens of thousands of dollars per year in addition to the cost savings.

And since cost is a prime consideration, you’ll want to look at the SaaS-based model.

Do be careful, though: “Often vendors who sell on-premise software may offer a hosted model for on-demand options and often misleadingly call it SaaS-based software,” say officials of Monet Software, which offers cloud-based WFM. “However, sometimes it’s simply a hosted client server application on a server at the vendor’s site, providing an application that was not originally designed to be hosted and delivered, with a few changes, over the Web via a single, dedicated server.”

You’ll be able to spot such impostors because they’ll almost always lack a multi-tenant architecture and require separate servers and installations for each customer. In the end, Monet officials warn, they’re “much more costly and less scalable, and also usually require support for multiple releases, which is very resource intensive.”

Genuinely useful SaaS workforce management software, however, is a boon to users. A product such as Monet WFM Live uses a new multi-tenant architecture “designed to deliver Web-based applications at the lowest possible cost,” company officials say, focusing on “fast set up, low operating costs through shared services, highest security for Web-based deployment and high performance and scalability through the scaling of computer resources also called ‘elastic cloud computing.'”

This is nicely cost-effective, as it provides computing capacity only when you need it, at the lowest possible cost.

With SaaS there’s no large upfront investment in software and hardware either; it’s usually offered via a low monthly subscription fee that includes support, maintenance and upgrades.

And of course, with the SaaS provider managing the IT infrastructure, costs are lowered by avoiding the IT time spent on hardware and software issues, as well as the personnel resources required for IT management. These “hidden costs” for hardware replacements, upgrades, and IT operations are typical of premise-based software.


Running MSBuild 4.0 and MSBuild 3.5 on Continuous Integration


With Visual Studio 2010 RC released recently, we jumped on the release and began to code with VS2010. One issue that popped up was that all builds were now targeting MSBuild 4.0.

That doesn’t seem like a big problem until our CruiseControl CI server kicked in, downloaded our updated code and failed to build the upgraded projects.

Fortunately there is a very quick solution to this little problem.  There are a couple of requirements.

1. You need to have VS2010 RC installed somewhere
2. You need to download the .Net Framework 4.0 (I recommend the full version and not just the Client Profile; it ensures you don’t miss anything)

To fix, do the following:

1. Download and install the .Net Framework 4.0 on the CI server (then restart the server)
2. On the computer where VS2010 RC is installed, go to the following path:
%programfiles%\MSBuild\Microsoft\VisualStudio
3. Copy the v10.0 folder located in that directory to the CI server at the same path (or wherever your MSBuild path is on the CI server)
4. Once that is done, edit the ccnet.config file where the MSBuild executable path is defined and point it at the newly installed .Net 4.0 Framework (you should only need to change the “\v3.5\” part of the path to “\v4.0.xxxxx\”); see the sketch below
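
For reference, the change in ccnet.config typically amounts to updating the executable path of the MSBuild task, along these lines. The project file, working directory and targets below are illustrative and depend on your setup; use the actual v4.0 folder name from your CI server's disk in place of v4.0.xxxxx:

<msbuild>
  <!-- Before: <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable> -->
  <executable>C:\Windows\Microsoft.NET\Framework\v4.0.xxxxx\MSBuild.exe</executable>
  <workingDirectory>C:\Builds\MyProject</workingDirectory>
  <projectFile>MyProject.sln</projectFile>
  <targets>Build</targets>
</msbuild>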

Hope this helps


Issues Compiling VS2010 solutions (with web projects) from Nant | MSB4064 error


Recently I upgraded a project of mine (the Dimecasts code base) to use Visual Studio 2010. Everything worked just fine from the IDE, but when I tried to compile it from the command line I would get the following errors:

Error MSB4064: The “Retries” parameter is not supported by the “Copy” task.
Error MSB4063: The “Copy” task could not be initialized with its input parameters.

After a bit of googling I came across a post (which of course I cannot find now) that said if you open up your .proj files and change the line that points to the v10.0 web application build targets back to v9.0, everything will compile. And this did work… BUT when you open that project up again in VS 2010 it will simply revert your changes, so this is not a working solution.

Next I decided to switch my target framework in NAnt from 3.5 to 4.0, but of course my nant.exe.config file did not support 4.0 yet. So after a bit more googling I found this post that gives details on how to add the missing values to the config file.

When I added the config information to my Nant.exe.config file things were better, but still not great.  Now I was getting an error that said:

The “vendor” attribute does not exist, or has no value.

To resolve this I added the following attribute to the new framework node in my config:
vendor="Microsoft"

After this I got another error… this time it said that the .Net Framework 4.0 was not installed. But I knew this was not the case. After looking at the information for a few more seconds I realized the issue: the example config from the post above was built on an older version of the 4.0 framework (.20506), and I have .30128.

I changed every value in nant.exe.config that was v4.0.20506 to v4.0.30128 and NOW I am able to compile.
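
Putting that together, the framework section added to nant.exe.config ends up looking something like the sketch below. The attribute layout follows the standard NAnt framework node, but the directory expressions and exact version numbers are placeholders; copy the details from the existing net-3.5 entry and from the folder names under C:\Windows\Microsoft.NET\Framework on your machine:

<framework
    name="net-4.0"
    family="net"
    version="4.0"
    vendor="Microsoft"
    clrversion="4.0.30128"
    description="Microsoft .NET Framework 4.0"
    frameworkdirectory="${path::combine(installRoot, 'v4.0.30128')}"
    frameworkassemblydirectory="${path::combine(installRoot, 'v4.0.30128')}">
  <!-- task assemblies, tool paths, etc. copied from the referenced post / the net-3.5 entry -->
</framework>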

So long story short, if you are getting the MSB4064 error you need to do the following:

1. Point nant to use the 4.0 framework tools
2. Follow this post  and copy the framework section to your Nant.exe.config file
3. Add the missing ‘vendor’ attribute to the new framework section
4. Update the version in the new framework section to match the version you have on disk (check C:\Windows\Microsoft.NET\Framework for versions)
5. Compile again


JUnit 4 Test Logging Tips using SLF4J


When writing JUnit tests developers often add log statements that can help provide information on test failures. During the initial attempt to find a failure a simple System.out.println() statement is usually the first resort of most developers.

Replacing these System.out.println() statements with log statements is the first improvement on this technique. Using SLF4J (Simple Logging Facade for Java) provides some neat improvements using parameterized messages. Combining SLF4J with JUnit 4 rule implementations can provide more efficient test class logging techniques.

Some examples will help to illustrate how SLF4J and JUnit 4 rule implementations offer improved test logging techniques. As mentioned, the initial solution by developers is to use System.out.println() statements. The simple example code below shows this method.

import org.junit.Test;

public class LoggingTest {

  @Test
  public void testA() {
    System.out.println("testA being run...");
  }

  @Test
  public void testB() {
    System.out.println("testB being run...");
  }
}

The obvious improvement here is to use logging statements rather than the System.out.println() statements. Using SLF4J enables us to do this simply whilst allowing the end user to plug in their desired logging framework at deployment time. Replacing the System.out.println() statements with SLF4J log statements directly results in the following code.

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {
    logger.info("testA being run...");
  }

  @Test
  public void testB() {
    logger.info("testB being run...");
  }
}

Looking at the code, it feels that the hard coded method name in the log statements would be better obtained using JUnit 4’s TestName rule. This @Rule makes the test name available inside method blocks. Replacing the hard coded string value with the TestName rule implementation results in the following updated code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public TestName name = new TestName();

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {
    logger.info(name.getMethodName() + " being run...");
  }

  @Test
  public void testB() {
    logger.info(name.getMethodName() + " being run...");
  }
}

SLF4J offers an improvement over the log statements in the example above that provides faster logging. Parameterized messages let SLF4J decide whether or not to log the message at all; the message parameters are only resolved if the message will actually be logged. According to the SLF4J manual this can provide an improvement of a factor of at least 30 in the case of a disabled logging statement.

Updating the code to use SLF4J parameterized messages results in the following code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public TestName name = new TestName();

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {
    logger.info("{} being run...", name.getMethodName());
  }

  @Test
  public void testB() {
    logger.info("{} being run...", name.getMethodName());
  }
}

Quite clearly the logging statements in this code don’t conform to the DRY principle.

Another JUnit 4 rule implementation enables us to correct this issue. Using TestWatchman we are able to create an implementation that overrides the starting(FrameworkMethod method) method to provide the same functionality whilst maintaining the DRY principle. The TestWatchman rule also enables developers to override methods invoked when the test finishes, fails or succeeds.

Using the TestWatchman Rule results in the following code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public MethodRule watchman = new TestWatchman() {
    @Override
    public void starting(FrameworkMethod method) {
      logger.info("{} being run...", method.getName());
    }
  };

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {

  }

  @Test
  public void testB() {

  }
}

And there you have it. A nice test code logging technique using JUnit 4 rules taking advantage of SLF4J parameterized messages.

I would be interested to hear from anyone using this or similar techniques based on JUnit 4 rules and SLF4J.

Reference: http://www.catosplace.net/


Why and how to use Jetty in mission-critical production


This article is a summary of a seminar I had on the topic. If it seems like it’s a continuation of an existing discussion that’s because, to some extent, it is. If you haven’t been discussing exchanging your app server, this article probably isn’t very interesting to you.

By putting the application server inside my application instead of the other way around, I was able to leap tall buildings in a single bound.

The embedded application server

This is how I deploy my sample application to a new test environment (or to production):

  1. mvn install
  2. scp -server/target/-1.0.onejar.jar appuser@appserver:/home/appuser/test-env1/
  3. ssh appuser@appserver “cd /home/appuser/test-env1/ && java -jar -1.0.onejar.jar&”

This requires no software to be installed on the appserver beforehand (with the exception of the JVM). It requires no prior configuration. Rolling back is a matter of replacing one jar-file with another. Clustering is a matter of deploying the same application several times.

In order to make this work in a real environment, there are many details you as a developer need to take care of. As a matter of fact, you will have to take responsibility for your operational environment. The good news is that creating a good operational environment is not more time-consuming than trying to cope with the feeding and care of a big-A Application Server.

In this scheme every application comes with its own application server in the form of jetty’s jar-files embedded in the deployed jar-file.

The advantages

Why would you want to do something like this?

  • Independent application: If you’ve ever been told that you can’t use Java 1.5 because that would require an upgrade of the application server, you know the problem. And if we upgrade the application server, that could affect someone else adversely. So we need to start a huge undertaking to find out who could possibly be affected.
  • Developer managed libraries: Similar problems can occur with libraries, especially those that come with the application server. For example: Oracle OC4J helpfully places a preview version of JPA 1.0 first in your classpath. If you want to use Hibernate with JPA 1.0-FINAL, it will mostly work – until you try to use an annotation that was changed after the preview version (@Discriminator, for example). The general rule is: if an API comes with your app server, you’re better served by staying away from it. A rather bizarre state of affairs.
  • Deployment, configuration and upgrades: Each version of the application, including all its dependencies, is packaged into a single jar-file that can be deployed on several application servers, or several times on the same application server (with different ports). The configuration is read from a properties-file in the current working directory. On the minus side, there’s no fancy web UI where you can step through a wizard to deploy the application or change the configuration. On the plus side, there is no fancy web UI… If you’ve used one such web UI, you know what I mean.
  • Continuous deployment: As your maven-repository will contain standalone applications, creating a continuous deployment scheme is very easy. In my previous environment, a cron job running wget periodically was all that was needed to connect the dots (a rough sketch follows after this list). Having each server environment PULL the latest version gives a bit more flexibility if you want many test environments. (However, if you’re doing automated PUSH deployment, it’s probably just as practical for you.)
  • Same code in test and production: The fact that you can start Jetty inside a plain old JUnit test means that it is ideal for taking your automated tests one step further. However, if you test with Jetty and deploy on a different Application Server, the difference will occasionally trip you. It’s not a big deal. You have to test in the server environment anyway. But why not eliminate the extra source of pain if you can?
  • Licenses: Sure, you can afford to pay a few million $ for an application server. You probably don’t have any better use for that money, anyway, right? However, if you have to pay licenses for each test-server in addition, it will probably mean that you will test less. We don’t want that.
  • Operations: In my experience, operations people don’t like to mess around with the internals of an Application Server. An executable jar file plus a script that can be run with [start|status|stop] may be a much better match.
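
To make the continuous deployment bullet concrete, here is a rough sketch of the cron-plus-wget approach; the repository URL, artifact name and paths are made up:

# crontab entry: poll the Maven repository every five minutes and fetch the
# one-jar artifact if it has changed (wget -N only downloads newer files)
*/5 * * * * wget -N -P /home/appuser/test-env1 http://repo.example.com/releases/myapp/1.0/myapp-1.0.onejar.jar
# a companion script (not shown) would restart the application when the jar changes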

The missing bits

Taking control of the application server takes away a lot of complex technology. This simplifies and makes a lot of stuff cheaper. It also puts you back in control of the environment. However, it forces you to think about some things that might’ve been solved for you before:

  • Monitoring: The first step of monitoring is simple: Just make sure you write to a log file that is being monitored by your operations department. The second step requires some work: Create a servlet (or a Jetty Handler) that a monitoring tool can ping to check that everything is okay. Taking control of this means that you can improve it: Check if your data sources can connect, if your file share is visible, if that service answers. Maybe add application-calibrated load reporting. Beyond that, Jetty has good JMX support, but I’ve never needed it myself.
  • Load balancing: My setup supports no load balancing or failover out of the box. However, this is normally something that the web server or routers in front of the application server handle anyway. You might want to look into Jetty’s options for session affinity, if you need that.
  • Security: Jetty supports JAAS, of course. Also: In all the environments I’ve been working with (CA SiteMinder, Sun OpenSSO, Oracle SSO), the SSO server sends the user name of the currently logged in user as an HTTP header. You can get far by just using that.
  • Consistency: If you deploy more than one application as an embedded application server, the file structure used by an application (if any) should be standardized. As should the commands to start and stop the application. And the location of logs. Beyond that, reuse what you like, recreate what you don’t.

Taking control of your destiny

Using an embedded application server means using the application server as a library instead of a framework. It means taking control of your “main” method. There’s a surprisingly small number of things you need to work out yourself. In exchange, you get the control to do many things that are impossible with a big-A Application Server.
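
To make the idea concrete, here is a minimal sketch of what that “main” method can look like with embedded Jetty (Jetty 7-style API; the class names, port handling and the trivial status servlet for the monitoring ping mentioned above are illustrative assumptions, not the article’s actual code):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class WebServer {

  // Trivial servlet a monitoring tool can ping; real checks (data sources,
  // file shares, downstream services) would go here.
  public static class StatusServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
      resp.setContentType("text/plain");
      resp.getWriter().println("OK");
    }
  }

  public static void main(String[] args) throws Exception {
    // The port would normally come from the properties-file in the working directory
    int port = Integer.parseInt(System.getProperty("port", "8080"));
    Server server = new Server(port);

    ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
    context.setContextPath("/");
    context.addServlet(new ServletHolder(new StatusServlet()), "/status");
    // ... add the rest of the application's servlets here

    server.setHandler(context);
    server.start();   // the application now contains its own application server
    server.join();
  }
}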

Reference: http://www.javaworld.com/community/


Understand Cloud Computing in Simple Terms – Maximumbit Inc


Cloud Computing is an emerging computing technology that uses the internet and central remote servers to maintain data and applications. Cloud computing allows consumers and businesses to use applications without installation and access their personal files at any computer with internet access. This technology allows for much more efficient computing by centralizing storage, memory, processing and bandwidth. Cloud computing is broken down into three segments: “applications,” “platforms,” and “infrastructure.” Each segment serves a different purpose and offers different products for businesses and individuals around the world.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT’s existing capabilities.

In June 2009, a study conducted by Version One found that 41% of senior IT professionals actually don’t know what cloud computing is and two-thirds of senior finance professionals are confused by the concept, highlighting the young nature of the technology. In Sept 2009, an Aberdeen Group study found that disciplined companies achieved on average an 18% reduction in their IT budget from cloud computing and a 16% reduction in data center power costs.

Depending on who you are talking to, you will see different perceptions about what Cloud Computing actually is, from the simplest web-hosted solutions right through to virtualized processing environments with Web-Service initiated provisioning and decommissioning.

The main challenges for Cloud Computing before it is likely to enjoy wide-spread adoption are the following:

Persistence & Availability – The ability to continue working during outages or the ability to mitigate outages.
Privacy and National Security Concerns – The hosting of information outside of your country’s borders does concern public sector organizations. The US Patriot Act, for example, is a concern for some countries in adopting cloud services. It is thought that country-sited clouds may be able to address this.
Geo-Political Information Management Concerns – The Political risk a country takes on by housing information for another country.

Cloud Computing is all about:

1. SaaS (Software as a Service)


This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting.

2. Utility computing


The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera’s AppLogic and Cohesive Flexible Technologies’ Elastic Server on Demand. Liquid Computing LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.

3. Web services in the cloud


Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services to the full range of APIs and even conventional credit card processing services.

4. Platform as a service


Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider’s infrastructure and are delivered to your users via the Internet from the provider’s servers.

5. MSP (managed service providers)


One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.

6. Service commerce platforms


A hybrid of SaaS and MSP, this cloud computing service offers a service hub that users interact with. They’re most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform that then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.

7. Internet integration


The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central — which wanted to be a universal “bus in the cloud” to connect SaaS providers and provide integrated solutions to customers — flamed out in 2005.

 

Citrix Cloud Center

C3 is designed to give cloud providers a complete set of service delivery infrastructure building blocks for hosting, managing and delivering cloud-based computing services. C3 includes a reference architecture that combines the individual capabilities of several Citrix product lines to offer a powerful, dynamic, secure and highly available service-based infrastructure ideally suited to large-scale, on-demand delivery of both IT infrastructure and application services. This architecture consists of four key components:

Platform – Powered by Citrix XenServer™ Cloud Edition: The new XenServer Cloud Edition is a powerful virtual infrastructure solution optimized for service provider environments. It combines the cloud-proven scalability of the Xen® hypervisor, which powers most of the world’s largest clouds, with all the virtualization management and dynamic workload provisioning capabilities of the full Citrix XenServer product line, enabling cloud providers to host and manage any combination of Windows® and Linux environments. XenServer Cloud Edition also features an innovative consumption-based pricing model to meet the needs of service providers that charge their customers based on metered resource use.

Delivery – Powered by Citrix NetScaler®: Through its rich policy-based AppExpert engine, Citrix NetScaler delivers cloud-based resources to users over the Web, continually optimizing user application performance and security by dynamically scaling the number of virtual machines (VMs) or servers available in response to changing workload demands and infrastructure availability. This allows cloud providers to balance workloads across large distributed cloud environments and transparently redirect traffic to alternate capacity on or off premise in the event of network failures or datacenter outages. NetScaler can also dramatically reduce server requirements in large cloud centers by offloading protocol and transaction processing from backend server pools. NetScaler’s proven architecture is designed for highly scalable, multi-tenant Web applications and delivers Web services to an estimated 75 percent of all Internet users each day.

Bridge – Powered by Citrix WANScaler:  As larger enterprises begin experimenting with cloud-based services for parts of their own infrastructure and application hosting strategy, cloud providers will also need reliable and secure ways to provide a seamless bridge between hosted cloud services and premise-based enterprise services. Over time, C3 will incorporate a set of open interfaces that allow customers to easily move virtual machines and application resources into a cloud-based datacenter and back again as needed. WANScaler technology will play a critical role in this enterprise bridge by accelerating and optimizing application traffic between the cloud and the enterprise datacenter, even over long distances.

Orchestration – Powered by Citrix Workflow Studio™: Tying it all together, Citrix Workflow Studio provides a powerful orchestration and workflow capability that allows the products in the C3 portfolio to be dynamically controlled, automated, and integrated with customer business and IT policy. Workflow Studio allows customers to control their infrastructure dynamically, integrating previously disconnected processes and products into a single powerful, orchestrated and cohesive system. This unique capability will make it easier for cloud providers to enable highly efficient, burstable clouds that automatically scale resources up and down based on demand, shifting hardware resources to where they are most needed and powering them down for maximum power savings when not needed.

Today, with such cloud-based interconnection seldom in evidence, cloud computing might be more accurately described as “sky computing,” with many isolated clouds of services which IT customers must plug into individually. On the other hand, as virtualization and SOA permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should eventually make every enterprise a node in the cloud. It’s a long-running trend with a far-out horizon. But among big megatrends, cloud computing is the hardest one to argue with in the long term.


Cloud Computing: The Computer is out the Window!


Debates have been heating up about Cloud Computing (CC). The biggest challenge is security, and an even bigger challenge is ‘control’ of a company’s tech assets. The only limitation so far has been internet bandwidth, which is why it took CC a while to become mainstream. Futurists such as Nicholas Negroponte saw it coming a while back and evangelized about it repeatedly in his book ‘Being Digital’ (a masterpiece). Entrepreneurs like Marc Andreessen saw the opportunities early and started Loudcloud back in 1999 (now Opsware), and Amazon today generates millions in revenue because of Amazon Web Services (Amazon launched its Elastic Compute Cloud (EC2) for companies to use back in 2006: yes, commercially). What really triggered CC is none other than Web 2.0: all them browser-based enterprise applications! In summary: we’ve all contributed to Cloud Computing without realizing it. You’ve been using Cloud Computing.

Cloud Computing is fantastic for emerging economies and their speed in adopting ‘affordable’ new technology. Look at what’s happening in Africa, where mobile internet and new telecom infrastructures are making it possible to leap into internet adoption. So why a computer in the first place? Computers are becoming more of a luxury item than a need.

Conclusion: Cloud Computing is not a trend, but a major shift in how we ’smartly’ manage technology. For those who are still in denial and resisting change, they’re already lagging and need to catch up fast, cuz that computer is out of the Window!

Great reference here on the history of CC and how far it dates back (the 1960s), thanks to Computer Weekly: http://tinyurl.com/yj7rln3


Cloud Computing Selection: Cloud Infrastructure Service Providers


Here is a list of solutions that provide cloud infrastructure for Hardware as a Service (HaaS) or Software as a Service (SaaS).

AllenPort
AllenPort’s technology handles file management chores like backup, file sharing, disaster recovery, remote access and managing user requirements.

AppZero
AppZero offers OS-free Virtual Application Appliances that are self-contained, portable units, meaning enterprises can experiment with moving applications to the cloud while avoiding cloud lock-in.

Boomi
Boomi and its AtomSphere connect any combination of cloud and on-premise applications without software or appliances.

CA
With NetQoS’s monitoring prowess and Cassatt’s data center automation and policy-based optimization expertise, CA can boost the functionality of its Spectrum Automation Manager to let it manage network and systems traffic in both public and private cloud computing environments.

Cast Iron Systems

Cast Iron offers an option for integrating SaaS applications with the enterprise. That method, which involves configuration, not coding, can in some cases slash integration costs up to 80 percent.

Citrix
Citrix Cloud Center (C3) ties together virtualization and networking products, arming cloud providers with a virtual infrastructure platform for hosted cloud services. The service, which is available on a monthly, usage-based pricing model and support mode, is an architecture comprising five key components: a platform powered by Citrix XenServer; applications and desktop services via Citrix XenApp; delivery powered by Citrix NetScaler; a bridge using Citrix Repeater; and orchestration through Citrix Workflow Studio.

Elastra
Elastra makes software that enables enterprises to automate modeling, deployment and policy enforcement of the application infrastructure. Its products tie in with provisioning and virtualization tools. Elastra’s Enterprise Cloud Server software handles the management and provisioning of complex systems. Users can quickly model and provision application infrastructure; automate changes to the system deployment process; efficiently utilize internal, external and virtualized resources on demand and enforce IT policy rules. Elastra Cloud Server can also run on Amazon Web Services.

EMC
With its Atmos and Atmos onLine offerings, EMC is evangelizing its approach to the cloud to deliver scalability, elasticity and cost savings by building, virtualizing and deploying services and applications. Atmos onLine is a cloud storage service built on Atmos, EMC’s policy-based information management platform. EMC Atmos onLine provides Cloud Optimized Storage, or COS, capabilities for moving and managing large amounts of data with reliable service levels and in a secure fashion.

Informatica
Informatica basically pioneered cloud computing for data integration, offering a host of offerings for customers of various shapes and sizes. It offers fast and easy pay-as-you-go and pay-for-use options that let users move data into or out of the cloud, or manage data within the cloud or from one app to another.

NetApp
Call it IT-as-a-Service (ITaaS) or call it an enterprise cloud infrastructure. Data ONTAP 8, NetApp’s latest cloud computing infrastructure, ties together its two previously separate platforms: Data ONTAP 7G and Data ONTAP GX. It delivers improved data management functions and tighter integration with data center management systems. Ultimately, NetApp Data ONTAP 8 enables storage, server, network and applications layers to talk to each other.

New Relic
New Relic is running full throttle with its RPM offering, an on-demand performance management tool for Web applications. It takes only minutes to implement and offers visibility and code-level diagnostics for Web apps deployed in both private and public clouds, along with traditional and dedicated infrastructures, and any combination thereof. With RPM, New Relic delivers real-time metrics, unlocking the ability to monitor, troubleshoot and fine tune app performance in the cloud.

Novell
Novell is looking to the cloud to tie together all things IT. It is combining products like Moblin, a cloud-centric desktop OS developed by Novell and Intel; the SUSE Appliance Program, a program for ISVs to build software appliances and receive go-to-market support; Novell Cloud Security Service; and PlateSpin Workload Management Solutions for IT managers.

OpenNebula
This open-source toolkit fits snugly into existing data center environments to build any type of cloud deployment. OpenNebula can be used to manage virtual infrastructure in the data center or to manage a private cloud. It also supports hybrid clouds to combine local infrastructure with public cloud infrastructure for hosting environments. Additionally, it supports public clouds by offering cloud interfaces to expose its functionality for virtual machine, storage and network management.

OpSource
OpSource is all about cloud operations, offering everything from an enterprise-grade cloud infrastructure to fully managed hosting and apps management. Essentially, OpSource Cloud is a virtual private cloud within the public cloud, giving users control over their degree of Internet connectivity. Meanwhile, OpSource On-Demand combines technical operations, application operations and business operations into a Web operations offering that includes application management, compliance and business services. Lastly, OpSource Billing CLM is a self-service offering for SaaS and Web customer on-boarding, subscription management and payment processing.

Paglo
This IT search and management service startup recently launched its Log Management application to let IT managers capture and store their logs as well as search and analyze them in the cloud. Paglo compares it to a Google-like search for logs, collecting data from all network devices. Paglo has also recently launched a new application to monitor Amazon EC2 application instances, such as disk reads and writes, CPU utilization and network traffic. Users can access the cloud-based information from any Web browser.

RightScale
RightScale’s Cloud Management Platform eases deploying and managing apps in the cloud and enables automation, control and portability. The platform helps users get into the cloud quickly with cloud-ready ServerTemplates and best-practice deployment architectures. And users retain complete visibility into all levels of deployment by managing, monitoring and troubleshooting applications. Lastly, RightScale’s Cloud Management Platform helps users avoid lock-in by letting them choose their deployment language, environment, stack, data store and cloud for portability.

Stoneware
Stoneware’s mission is simple: to enable organizations to move from a client-centric to a Web-based, private cloud computing environment. With products aimed specifically at core verticals (education, healthcare, manufacturing, legal, financial and enterprise), Stoneware offers private cloud technology that is being used to create solutions that enable organizations to access applications, content, data and services from anywhere in a secure fashion.

VMware
Last August, VMware acquired SpringSource, which provides Web application development and management services. SpringSource speeds the delivery of applications in the cloud using a process that has become known as lean software. VMware also acquired Hyperic, an open-source monitoring and troubleshooting vendor. The VMware-SpringSource-Hyperic trifecta creates an amalgamation that ties together VMware’s virtualization vision, SpringSource’s strong development tools and application servers, and Hyperic’s monitoring.

Zeus Technology
Zeus gives users the ability to create, manage and deliver online services in cloud, physical or virtual environments, letting companies visualize and manipulate the flow of traffic to Web-enabled apps. And early this year, they will release the Zeus Cloud Traffic Manager so customers can monitor and control cloud usage, offering a single control point for distributed applications, reporting on datacenter usage and allowing for goals like cost, SLA, security and compliance to be applied.
