
Shelving operations in P4V

I found a very good feature in Perforce: shelving operations in P4V. Shelving lets you store pending changes in the depot without checking them in, which is handy when the code is not ready to be submitted, for example during a release or code freeze.
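
The same operations are also available from the command line; here is a minimal sketch (the changelist number 1234 is a placeholder):

  # Shelve the open files in pending changelist 1234 without submitting
  p4 shelve -c 1234

  # Restore the shelved files later, or into another workspace
  p4 unshelve -s 1234

  # Remove the shelved files once they are submitted or no longer needed
  p4 shelve -d -c 1234

In P4V itself, the same actions appear when you right-click a pending changelist.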


Hook Implementation in Subversion

Let's discuss the best way to implement hooks with SVN.

This is very useful content for learning about hooks in SVN:

svnbook.red-bean.com/en/1.1/ch05s02.html

For a pre-commit hook, what is the best way to implement it using Active Directory accounts, and what is the right way to test it before putting it into production?
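
As a starting point, here is a minimal pre-commit hook sketch (assuming a Unix server; Subversion invokes hooks/pre-commit with the repository path and a transaction id). This one only rejects empty log messages. Active Directory authentication is normally handled in front of the repository (e.g. Apache with an LDAP auth module) rather than inside the hook itself; the hook can read the committing user with svnlook author if needed.

  #!/bin/sh
  # pre-commit: Subversion calls this as "pre-commit REPOS TXN"
  REPOS="$1"
  TXN="$2"
  SVNLOOK=/usr/bin/svnlook

  # Reject commits whose log message is empty or whitespace-only
  MSG=$("$SVNLOOK" log -t "$TXN" "$REPOS" | tr -d '[:space:]')
  if [ -z "$MSG" ]; then
      echo "Commit blocked: please supply a log message." >&2
      exit 1
  fi
  exit 0

To test it safely, create a scratch repository with svnadmin create, copy the script into its hooks directory, make it executable, and try a few commits there before installing it on the real repository.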


Advantages of Git over SVN and Perforce

What are the advantages of Git over Subversion and Perforce?

Code development has its negative and positive sides, but anything that brings more relief and gains time in a project is the developer’s best friend. CVS was for a long time the best solution for version control, adopted by all programmers in major projects. While CVS has slowly evolved into other more successful concurrent version systems like Subversion, the latest trend that manifests across the development world is the usage of DCVSs (distributed version control systems) as the main project tracking managers.

From its launch in the mid '80s until 2000, CVS was the only real alternative as a revision control system for programmers. Not a very good one, but better than anything else. Built in 2000 by CollabNet Inc., Subversion was marketed from the start as "a better CVS" and "CVS done right." It was truly better than CVS; unfortunately, it didn't bring in that many new features, and under the hood nothing changed at its core.

Because of that, there was a surge in revision control software development at the start of the 2000s, which led to the birth and development of many DCVS projects. GNU arch and Monotone were among the first to be launched, followed by Darcs and BitKeeper.

A serious step in DVCS development took place after the quarrel between Linux developers and BitKeeper's management, after BitKeeper accused one of the Linux developers of reverse engineering one of their commercially licensed features. Soon after, Linux programmers led by Linus Torvalds himself released Git as an improved open source alternative to BitKeeper's software. Besides Git, Mercurial and Bazaar were also very successful alternatives to SVN, sharing many of Git's features but lagging in performance.

SVN (Subversion) was extremely popular at the beginning, establishing itself as the premier solution in code development. But as DVCSs were developed and released, users migrated to them, abandoning SVN for looser and faster solutions.

Nevertheless, old CVS users still tend to linger around SVN due to its familiar interface and old coding habits established over the years. Many of them question to this day Git’s efficiency in quickly putting versions together, while Git supporters, on the other hand, criticize SVN’s stiffness regarding offline work.

In many cases, the debate between Git and SVN supporters starts with the following features, or the lack thereof.

Git features

Offline work: Git permits any developer to branch a project and store it in a local folder as a standalone repository. After doing all the work offline, away from the central server or storage unit, they can then simply merge it with the central repository, committing all the changes already made. This lets developers who don't have Internet access all the time work on a project with ease.
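
A rough sketch of that offline workflow (the URL and branch names are placeholders):

  # Clone once while online; the clone is a complete standalone repository
  git clone ssh://server/project.git
  cd project

  # Work entirely offline: branch and commit locally
  git checkout -b feature
  git commit -am "Implement feature while offline"

  # Back online: merge and push the changes to the central repository
  git checkout master
  git merge feature
  git push origin master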

Central project information: Unlike SVN, which uses a .svn folder in each directory, Git uses a single central .git folder in the checkout root for all project data and logs. This makes it easier to track changes to specific folders, on specific dates or by certain development teams, so project code management is much simpler. It also allows easy renaming of files or folders, with Git automatically carrying these changes through its history.

Easy branching: Branches and tags are much easier to create and switch in Git.
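
A quick comparison sketch (server URL and names are placeholders):

  # Git: branching and tagging are instant, local operations
  git checkout -b bugfix        # create and switch to a branch
  git tag v1.0                  # tag the current commit

  # SVN: a branch is a server-side copy into a conventional directory
  svn copy http://server/repo/trunk \
           http://server/repo/branches/bugfix -m "Create bugfix branch"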

Merge history: While SVN has a branch merge history, it records all the events as coming from one user (the merger). If a user has previously worked on a file and not merged it, and that file is later merged by another user, all the changes will be attributed to the user who merged the branch. Git, unlike SVN, remembers file changes beyond a merging point and tracks every user's actions beyond project commits. Git also automatically starts the next merge at the last point it recorded.
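
A small sketch of how to check this attribution in Git:

  # After a merge, each commit still shows its original author
  git merge feature
  git log --format='%h %an %s'
  # SVN, by contrast, records the merged changes under the user
  # who performed the merge.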

Disk space: When the Mozilla project was ported from SVN to Mercurial (very similar to Git in performance), disk space usage went down from 12GB to 420MB, 30 times smaller than the original size. Git is supposed to use the same storage algorithms, so file size should be around the same value.

Speed: This is a no-brainer. Because all operations except push and fetch are performed locally, Git is overwhelmingly faster than anything SVN could ever supply.

Better access and administration: SVN relies on an authentication module and access lists to permit users to push and merge branches. Git cuts out the time spent granting commit access to users and simply lets the administration team decide what to merge and from whom.

Synchronization: It can occur over various types of media like an SSH channel, over FTP, HTTP, WebDAV, or by emails holding attached patches.
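
For instance (host names are placeholders), the same repository can be synchronized over several transports, or exchanged as emailed patches:

  git clone ssh://user@host/srv/project.git    # SSH
  git clone https://host/project.git           # HTTP(S) / WebDAV
  git clone git://host/project.git             # native git protocol

  # Or exchange work as emailed patches
  git format-patch origin/master               # write local commits as patch files
  git send-email *.patch                       # mail them to the maintainer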

SVN features

GUI: From the beginning, SVN users have had a GUI for managing their repositories. Git long lacked this entirely, with a port of TortoiseSVN still in the works. Besides the lack of a GUI, Git, coming from a UNIX environment, has a rather complicated CLI with many options and arguments for its base commands.

Single major repository: Others may say this is not a good thing, but it's a strictly SVN-specific feature. While Git allows each individual user to copy the entire repository to their computer and work on the project's code, Subversion has always relied on permissions granted to users to work on one single repository stored online. This lets a developer always download the latest version of the project without waiting for everyone working on it to come online, upload, and merge their latest branches. This is very useful in fast development environments or application troubleshooting.

Partial checkouts: Git doesn't offer the possibility of downloading only a single folder from the repository. This may be a disadvantage for developers without much bandwidth or speed.

Version numbers: Another keeper from the UNIX fathers of Git is the complicated revision numbers. While SVN uses a simple incremented decimal system, Git uses an SHA-1 hash to produce a 40-character hexadecimal string as the version number. This can get very tiring with hundreds or more versions.
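
In practice the pain is smaller than it sounds, because Git accepts any unambiguous prefix of the hash; a quick sketch:

  git rev-parse HEAD           # full 40-character SHA-1
  git rev-parse --short HEAD   # abbreviated form, e.g. 2f7c3a9
  git show 2f7c3a9             # a unique prefix works wherever a commit is expected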

The abovementioned features only scratch the surface of the version control system war between CVS/SVN supporters and DVCS users. More on the topic can be read here, in a comparison that puts all the major platforms' features next to one another.

Let’s now take a look at the parties involved and analyze their impact on the current programming and development world.

Subversion, as of 2009, was included in the Apache Incubator project, mainly because of its long usage history in most of the Apache Foundation's projects. More details are available in one of my past articles on this topic. Other famous open source or commercial projects with a weakness for SVN include market players like Ruby, Mono, Free Pascal, ExtJS, Tigris, PHP, MediaWiki, GCC, Django and FreeBSD, all of which have their source code administered from an SVN repository.

On the other hand, the community’s favorite project, Git, is currently expanding its horizon every day, recently taking GNOME and The Perl Foundation away from SVN, placing them alongside other notable Git-powered projects like Android, Linux, openSUSE, Yahoo User Interface, x264, Digg, jQuery, X.org, Samba, Ruby on Rails, CakePHP, Fedora, Merb, Freenet, GIMP, Parrot, Qt, rsync, Wine and VLC.

Other Git-like products like Mercurial (originally designed to replace BitKeeper for Linux development) have also been adopted by major corporations and programs like Mozilla, OpenJDK, OpenSolaris, Netbeans, OpenOffice, Vim, SAGE, Growl, Wget, Symbian OS and Adblock Plus. In 2010, The Python Foundation is going to join this list, migrating from SVN.

Another Git-like DVCS platform, generally regarded by the community as slower than Git but much easier to learn, is Bazaar. It was developed especially for Ubuntu, but it has been adopted by many other projects around the web, such as Squid, APT, MySQL, GNU Emacs, Gnash and Inkscape.

The trend is easy to see. DVCSs are being adopted in more and more projects, while SVN is headed for the history books alongside its precursor, CVS. The final battle in this version control war is being waged on the grounds of project hosting platforms like SourceForge, Google Code and CodePlex. The winner of this confrontation will surely decide whether SVN keeps being used in the coming future or fades from our minds like the early PC consoles.

Currently, SourceForge and GNU Savannah have the biggest and widest hosting platforms, providing version control systems like CVS, SVN, Git, Bazaar and Mercurial to all of their users free of charge. On SourceForge, a project is hosted on Subversion by default. The same happens on Google Code, where SVN is the default, but the Mountain View-based crew also provides Mercurial as a DVCS alternative. The renowned hosting platform CodePlex provides SVN, Mercurial and Microsoft TFS hosting, while the smaller Project Kenai offers SVN, Git and Mercurial.

Lately, due to high technical costs, services have also started to opt for only one version control system, adding heat to the discussions between the VCS communities. The list is as follows: Mercurial has exclusivity on Bitbucket, Bazaar on Launchpad, Git on GitHub, Codaset and Gitorious, and SVN on BountySource, Freepository, GridyZone and Origo. A slim crop for SVN, but being the default service on Google Code and SourceForge might give it a fighting chance against the up-and-coming GitHub.

Some of you might not agree with the previous claim that this is a “battle” and should not be at all compared with the browser wars, but the community deeply rooted in the tech world already knows this is more of a fact than a myth. As proof of concept, we bring you this video from a conference at Google back in 2007, where Linus Torvalds, the inventor of Linux and Git, made some outrageous statements regarding SVN, and especially its users.

If you don’t have the time to view this one-hour long video, we’ve listened to the conference and taken some interesting quotes from Mr. Torvalds: “Subversion has been the most pointless project ever started,” continuing with “Subversion used to say CVS done right: with that slogan there is nowhere you can go. There is no way to do CVS right” and ending with “If you like using CVS, you should be in some kind of mental institution or somewhere else.”

Not very heart-warming comments from a public person like Linus Torvalds, especially when made at the headquarters of one of the companies that ignored Git and failed to include it in the Google Code project. Nevertheless, Mr. Torvalds might also be under the influence of an inner demon, common to most UNIX users, to prove that any project developed in a Linux environment is better than anything else.

True or not, Git has seen a rise in usage, as shown by the 2008 and 2009 Kernel.org surveys, which ranked it above every other version control platform, including SVN, Bazaar and Mercurial, with a crushing 94.6% rate of overall satisfaction with the Git user experience. In conclusion, it is generally acknowledged that future versions of Git will be adopted in more and more environments, and Git will have to fight only other DVCSs for supremacy in the programming world.

One more good link on the advantages of Git over SVN (Subversion):

http://markmcb.com/2008/10/18/3-reasons-to-switch-to-git-from-subversion/


Integration of JBoss, Apache2 and SSL

My application (.ear) is running in JBoss without any issues on port 7001. I have the following requirements.

Task 1: Integrate JBoss with Apache2 so that all requests come through Apache instead of JBoss.

Task 2: Implement SSL with Apache2 so the application opens over HTTPS instead of HTTP.

For task 1, I carefully followed community.jboss.org/wiki/UsingModjk12WithJBoss, but ran into some issues: 1. The application comes up and runs without problems, but logout has some issues. 2. I want to shut off direct access to JBoss, but I have no clue how.
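
For reference, a typical mod_jk wiring looks roughly like the sketch below (paths, the /myapp context and ports are assumptions; JBoss's AJP connector conventionally listens on 8009). One common way to stop direct access to JBoss is to bind its connectors to 127.0.0.1 (e.g. run.sh -b 127.0.0.1) so that only Apache on the same host can reach it:

  # conf/workers.properties
  worker.list=node1
  worker.node1.type=ajp13
  worker.node1.host=localhost
  worker.node1.port=8009

  # httpd.conf
  LoadModule jk_module modules/mod_jk.so
  JkWorkersFile conf/workers.properties
  JkLogFile    logs/mod_jk.log
  JkMount      /myapp/* node1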

For task 2, once this is up and running, I will have to implement SSL with Apache so the application can only be opened over HTTPS instead of HTTP. Any help on this front, links, or references would be appreciated.
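
A minimal mod_ssl virtual host sketch for task 2 (server name, certificate paths and the /myapp context are placeholders):

  Listen 443
  <VirtualHost *:443>
      ServerName www.example.com
      SSLEngine on
      SSLCertificateFile    /etc/apache2/ssl/server.crt
      SSLCertificateKeyFile /etc/apache2/ssl/server.key
      JkMount /myapp/* node1
  </VirtualHost>

  # Optionally redirect plain HTTP to HTTPS
  <VirtualHost *:80>
      ServerName www.example.com
      Redirect permanent / https://www.example.com/
  </VirtualHost>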

To follow these issues properly, you can find my work updates at this link. I will keep posting updates on the issues.


Everything – fast file/folder search in Windows

 

It is hard and time consuming to search for a file or folder in Windows 7 and 8. "Everything" is a great tool to solve this problem.

Here are some of the benefits of Everything search engine:

  1. Small installation file
  2. Clean and simple user interface
  3. Quick file indexing
  4. Quick searching
  5. Minimal resource usage
  6. Share files with others easily
  7. Real-time updating

 

Details:

http://www.voidtools.com/

 

1.1 What is “Everything”?

“Everything” is an administrative tool that locates files and folders by filename instantly for Windows.
Unlike Windows search, “Everything” initially displays every file and folder on your computer (hence the name “Everything”).
You type in a search filter to limit what files and folders are displayed.

1.2 How long will it take to index my files?

“Everything” only uses file and folder names and generally takes a few seconds to build its database.
A fresh install of Windows XP SP2 (about 20,000 files) will take about 1 second to index.
1,000,000 files will take about 1 minute.

1.3 Does Everything search file contents?

No, “Everything” does not search file contents, only file and folder names.

1.4 Does “Everything” hog my system resources?

No, “Everything” uses very little system resources.
A fresh install of Windows XP SP2 (about 20,000 files) will use about 3-5 MB of RAM and less than 1 MB of disk space.
1,000,000 files will use about 45 MB of RAM and 5 MB of disk space.

1.5 Does “Everything” monitor file system changes?

Yes, “Everything” does monitor file system changes.
Your search windows will reflect changes made to the file system.

1.6 Is “Everything” free?

Yes, “Everything” is Freeware.
If you use “Everything” in a commercial environment and find it useful a donation would be appreciated.

1.7 Does “Everything” miss changes made to the file system if it is not running?

No, “Everything” can be closed and restarted without missing changes made to the file system (even across system restarts).
“Everything” updates the database when it is started.

1.8 What are the system requirements for “Everything”?

“Everything” will run on Windows 2000, XP, 2003, Vista and Windows 7.
“Everything” will only locate files and folders on local NTFS volumes.
“Everything” requires administrative privileges for low level read access to volumes.

1.9 How do I convert a volume to NTFS?

http://support.microsoft.com/kb/307881

1.10 Can “Everything” index a mapped network drive?

No, “Everything” only indexes local or removable NTFS volumes.

To search a networked computer you will need to run Everything on both computers.
One computer will need to Start an ETP server.
The other computer will need to connect to that ETP server.

To start an ETP server:
1. In Everything, On the Tools menu, click Start ETP server.

To connect to an ETP server:
1. In Everything, On the Tools menu, click Connect to ETP Server….
2. Type in the ETP server name.
3. Type in the ETP server port.
4. Type in the ETP server user.
5. Type in the ETP server password.
6. Click OK.

1.11 How do I install the language pack?

Download the language pack Everything.lng.zip
Unzip the language pack into the folder where “Everything” is installed.
Restart Everything.
In “Everything”, On the Tools menu, click Options.
Click the General tab.
Select your language from the Language dropdown list.
Click OK.
In the “language change” popup, Click OK.
Restart Everything.

1.12 How do I bypass the UAC to run “Everything” with administrative privileges on system startup?

Disable run on system startup in “Everything”.
Follow the Make Vista launch UAC restricted programs at startup with Task Scheduler guide at
http://blogs.techrepublic.com/window-on-windows/?p=616
Make sure you use -startup in the Add Arguments box

1.13 How do I bypass the UAC to run “Everything” with administrative privileges when I start it from a shortcut ?

http://blogs.techrepublic.com/window-on-windows/?p=730

2 Searching

2.1 How do I search for a file or folder?

Type the partial file or folder name into the search edit, the results will appear instantly.

2.2 How do I use boolean operators?

AND is the default boolean operator.
For example, here is how to search for foo and bar: foo bar
To search for either of two search terms, add a | between the terms.
For example, here is how to search for .jpg or .bmp: .jpg | .bmp
To exclude something from the search include a ! at the front of the term.
For example, here is how to search for abc and not 123: abc !123

2.3 How do I use wildcards?

Using a * in your search will match any number of any type of character.
For example, here is how to search for files and folders that start with e and end with g: e*g
Using a ? in your search will match one character.
For example, here is how to search for files that have a 2 letter file extension: *.??

2.4 How do I use regex?

| A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
() Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" and "grey".
? The question mark indicates there is zero or one of the preceding element. For example, colou?r matches both "color" and "colour".
* The asterisk indicates there are zero or more of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
+ The plus sign indicates that there is one or more of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
. Matches any single character except newlines (exactly which characters are considered newlines is flavor, character encoding, and platform specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".
[ ] A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", and "z", as does [a-cx-z].
[^ ] Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". As above, literal characters and ranges can be mixed.
^ Matches the starting position within the string. In line-based tools, it matches the starting position of any line.
$ Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.
{m,n} Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few, older instances of regular expressions.

2.5 How do I include spaces in my search?

To include spaces in your search enclose your search in double quotes.
For example, here is how to search for foo<space>bar: "foo bar"

2.6 How do I search for a file type?

To search for a file type, type the file extension into the search edit;
e.g. to search for the mp3 file type, type *.mp3 into the search edit.
To search for more than one file type, use a | to separate the file types;
e.g. *.bmp|*.jpg will search for files with the extension bmp or jpg.

2.7 How do I search for files and folders in a specific location?

To search for files and folders in a specific location include a \ in your search string.
For example, here is how to search for all your avis in a downloads folder: downloads\ .avi
You could alternately enable Match Path in the Search menu and include the location in your search string.
For example, here is how to search for all your avis in a downloads folder with Match Path enabled: downloads .avi

3 Results

3.1 How do I jump to a file or folder in the result list?

Make sure the result list has focus by tabbing to it with the keyboard or clicking in it with the mouse.
Type in the partial or full name of the file or folder you want to jump to.
For example, to jump to files or folders beginning with "New", type New into the result list.

4 Customizing

4.1 How can I change the “Everything” icon?

Requires “Everything” 1.2.0 beta or later.
Copy your icon file into “Everything”’s installation folder and rename it to “Everything.ico”.
Restart “Everything”.

4.2 How can I set “Everything” to use an external file manager?

Requires “Everything” 1.2.0 beta or later.
Exit Everything.
Open Everything.ini in “Everything”’s installation folder.
Add the following 2 lines to the bottom of the ini:
open_folder_command=$exec("ExternalFileManager.exe" "%1")
open_folder_path_command=$exec("ExternalFileManager.exe" "$parent(%1)")
Replace the text ExternalFileManager.exe with the full path and file name of your file manager executable.
Check your external file manager help for any required command line parameters.
Restart “Everything”.

5 Troubleshooting

5.1 Everything requests administrator privileges in Windows Vista SP1

“Everything” requires administrator privileges because it needs raw read access to your hard drives.
Click accept to allow “Everything” to continue running.

5.2 The result list is empty

Make sure you have at least one local NTFS volume.
See How do I convert a volume to NTFS.

Make sure “Everything” has administrator privileges.

To manually enable all local NTFS volumes for indexing:
1. In Everything, On the Tools menu, click Options.
2. Click the Volumes tab.
3. For each volume in the Local NTFS volumes list:
4. Check Check Media.
5. Check Enable USN Journal logging.
6. Check Include in database.
7. Check Monitor changes.
8. Repeat for each volume.
9. Click OK.

5.3 Right clicking on a file or folder crashes

Please replace your Everything.exe with the following beta to work around the problem:
http://www.voidtools.com/Everything-1.2.1.375b.zip.

 


Continuous Delivery – Vision vs. Reality

Source – http://www.datical.com/continuous-delivery-vision-vs-reality/

SEE FIRST. UNDERSTAND FIRST. ACT FIRST. FINISH DECISIVELY.

Those were the words I saw on the presentation screen during my first introduction to the Army’s concept for the Stryker brigade. Some two-star general program manager was explaining to us why the Stryker brigade was the Army’s newest whiz-bang gizmo.

It sounded like malarkey to me. I was more interested in actually seeing a Stryker vehicle for the first time – we hadn’t received any of them yet. Then, when we finally received our first shipment of Strykers to the unit, I was disappointed – it was basically a shorter, meaner-looking Winnebago, painted some hideous shade of green, with a paltry-looking .50 caliber machine gun mounted on top. WHAT?!? Where are the phaser cannons?!? What exactly do you want us to “FINISH DECISIVELY” riding around in that sardine can?!?

Fast forward two and a half years, and I finally got it. I wasn’t able to understand all that “SEE FIRST” malarkey until I saw Stryker units operating in context amongst other, “regular” Army units. Stryker brigades are FAST – we were able to identify opportunities, process intel, plan on the fly, and reach the objective before more regular units even knew something was going on.

What was the secret to all this speed and agility? It certainly wasn’t the sardine can, and I only wish I could attribute it to phaser cannons…

It was the IT infrastructure – the system of systems that allowed each “department” of the brigade to communicate with each other, to share relevant information with each other at lightning speed. You see, the Army, back then at the turn of the century, understood what today’s enterprises are starting to grasp – IT is a strategic asset, and if employed correctly, IT is a key enabler of corporate strategy. The ability to identify business opportunities and make sense of what’s going on faster than the competition allows enterprises to FINISH DECISIVELY in the market.

I know, I know. What does the Army know about trying to get a release of changes into production during the assigned maintenance window? It sounds bizarre, but imagine yourself in the middle of a maintenance window where the satellite system goes down during an Iraqi sandstorm right before the unit is to depart on mission. Pre-combat checks have occurred and units are staged, ready to depart. All we’re waiting on is for all vehicles to digitally receive the final version of the plans, which is now held up because the satellite dish went down in this stupid sandstorm. No pressure, IT guys and gals – you are now the lynchpin to the entire operation.

As enterprises grasp the strategic importance of IT they’ve begun to explore and launch initiatives to accelerate the delivery of services. One of these practices is Continuous Delivery, synonymous with the underlying principles of DevOps. Continuous Delivery is all about ensuring production-ready code at all times, and shortening the feedback cycles from the market back to the business. In most cases, ensuring production-ready code at all times requires automation of the delivery pipeline, reducing the risk of human error in deployment processes and cutting down the time it takes to complete manual processes.

The enterprise has a vision of Continuous Delivery that will enable it to ACT FIRST in the marketplace, but the reality of the situation is that there are still manual processes in place which prevent the enterprise from achieving that vision. Companies have invested in release automation to help them automate their delivery pipelines, orchestrating a deployment as a system of systems. This is a terrific step in the right direction toward Continuous Delivery, but there are still some application components which are relegated to manual processes that hamper the overall pipeline – database changes being one of those quirky components.

Datical DB was architected to enable initiatives like Continuous Delivery for the database component of application releases, and to “snap” into your existing automation frameworks so you can leverage your investment in automation. If you’re investigating Continuous Delivery or have already invested in release automation, I invite you to join us for an upcoming webcast hosted by our partners at Serena Deployment Automation on the topic of Automating Database Deployments in Your Continuous Delivery Pipeline.

SEE FIRST. UNDERSTAND FIRST. ACT FIRST. FINISH DECISIVELY.

IT is the key.


Checklist for Validating A DevOps Architecture

Source – http://blog.flux7.com/blogs/devops/checklist-for-validating-a-devops-architecture-part-1

Author – Ali Hussain

Checklist: Validate DevOps Architecture

Understand business needs

An organization moving to the cloud truly realizes the cloud's benefits only when it sets up good DevOps methodologies and cloud automation to meet its needs. The process is replete with tool choices at every stage, and the overall goal is to understand and meet the organization's needs.

From our experience setting up DevOps infrastructure many times, the business needs of an organization can be summed up as follows:

Business Continuity And Disaster Recovery

Disasters are inevitable, and an organization must be prepared to handle them. The disaster recovery method depends on:

  • The size of the organization and what's at stake

  • The cost of downtime

  • The cost of preventing downtime

It should be noted that there are diminishing returns on implementing good disaster recovery and availability. In the same vein, the cost of an outage increases super-linearly with its duration. So even if your organization is small, there is a huge incentive to pick the low-hanging fruit and have a rudimentary disaster recovery plan in place.

Meeting Customer Demands

The goal of any service is to meet varying customer demands. Questions to consider for the varying demands:

  • Would there be surges in demand?

  • To what level does our system scale?

No system can scale indefinitely. An investment in the architecture from the ground up is required to attain higher levels of scaling. These solutions are inevitably more expensive if not used to their full capabilities.

Security

It is critical to protect business IP and customer data, not only for competitive advantage and customer privacy but also because of legal requirements on various kinds of data. The role of a DevOps architecture is to ensure the required security constraints are not compromised in the transition to a DevOps workflow, which means strict access rules for resources. For instance, the fact that existing entities have access to a certain resource does not mean a newly added entity should be granted the same access.

Reducing Time To Market

An organization needs to run like a well-oiled machine. This encompasses using the right tools to enable rapid turnaround on application development, setting up a good dev workflow, improving software QA, and improving operations turnaround time.

Minimizing Cost

Minimizing cost, in terms of machines or manpower, is always a significant need. The cloud forces a rethinking of operational vs. capital costs and of how to handle cost variability during budgeting.

Several other sub-points could be added to the list above, including latency, quality of service, and bug rate. However, these are just different aspects of the points above, not orthogonal ideas. Understanding these business needs is necessary for your DevOps strategy to make a meaningful impact on your organization.

Next Monday we will discuss how these goals translate into questions you can use to validate your DevOps architecture.

In the previous part, we explored how business goals should inform every good DevOps strategy. This week we'll discuss how to use those goals to validate your DevOps architecture. From our experience at Flux7, the best way to do this is to define the workflows of key users.

To ensure that an architecture will meet a client’s business goals, we ask ourselves the following questions:

  1. What is the developer workflow and how will we enable it?

  2. How will we handle mirroring environments for disasters?

  3. How will we handle scaling up and down?

  4. How will we update the environment?

  5. How will we update the code?

  6. How will we keep the code and environment aligned?

  7. How will we make changes to the infrastructure?

To illustrate how these questions inform our work, we’ll walk you through them using our setup from the previous post, “The Best Way To Deploy Ruby On Rails in AWS”, which was as follows:

  • Chef used to deploy and bake the environment.

  • Capistrano used to handle code deployments.

  • Git repository on GitHub used to store code.

We used CloudFormation templates for infrastructure deployment.
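
A stripped-down sketch of such a template (resource names and the instance type are assumptions, not the client's actual setup):

  {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: one app server from a pre-baked AMI",
    "Parameters": {
      "AmiId": { "Type": "String", "Description": "Pre-baked AMI with the Rails environment" }
    },
    "Resources": {
      "AppServer": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
          "ImageId": { "Ref": "AmiId" },
          "InstanceType": "m1.small"
        }
      }
    },
    "Outputs": {
      "PublicDns": { "Value": { "Fn::GetAtt": [ "AppServer", "PublicDnsName" ] } }
    }
  }

Deploying then reduces to launching the stack with the latest AMI id passed as the AmiId parameter.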

Now let’s examine how this setup addressed the seven questions above.

What was the developer workflow and how did we enable it?

Using CloudFormation templates to orchestrate infrastructure deployment, the developers selected a pre-baked AMI with the correct environment setup. Even though we deployed the code with Capistrano, we also created a Chef recipe for deployment.

How did we handle mirroring environments for disasters?

Our Ruby on Rails deployment was a real-time project for a startup client. They could afford a cold DR provided the right alerts were set up for monitoring the website. It's a good idea to make regular production-AMI backups to S3 and to copy them to the DR region. In case of disaster, the environment can be rebuilt by running the CloudFormation template with the latest AMI in the new region and then updating Route 53 to point to the new region.
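
The cross-region copy can itself be scripted; a sketch with the AWS CLI (region names and the AMI id are placeholders):

  # Copy the latest production AMI into the DR region
  aws ec2 copy-image \
      --source-region us-east-1 \
      --source-image-id ami-0123456789abcdef0 \
      --region us-west-2 \
      --name "prod-dr-backup"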

How did we handle scaling up and down?

We implemented autoscaling. It's important that an app server come up "hot" when it goes online, without manual intervention. This may require scripting, because the same AMI needs to work in several different environments.

How did we update the environment?

We edited the Chef recipe, checked for proper functioning and then baked the AMI. To speed up Chef recipe debug loops, we experimented with running recipes inside a Docker container. This approach ensured a rapid revert to a previous state in case of failure.

How did we update the code?

We pushed the code from the dev branch to the master branch and ran the Capistrano recipe. Capistrano connected to the GitHub account and checked out the latest copy of the required code revision. Since the code was pulled at deployment rather than baked into the AMI, there was no need to bake a new AMI for each code update. This approach is particularly suitable for hotfixes.
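
For context, a deploy configuration in the Capistrano 2 style looks roughly like this sketch (application name, repository and hosts are placeholders, not the client's actual setup):

  # config/deploy.rb
  set :application, "myapp"
  set :scm,         :git
  set :repository,  "git@github.com:example/myapp.git"
  set :branch,      "master"
  set :deploy_to,   "/var/www/myapp"

  role :web, "app1.example.com"
  role :app, "app1.example.com"
  role :db,  "app1.example.com", :primary => true

  # Deploy with: cap deploy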

How did we keep the code and environment aligned?

Manual checks ensured that deployed code worked in its respective environment. Docker may come in handy here, since it versions both code and environment, but we haven't yet tried this approach.

How did we make changes to the infrastructure?

We updated the CloudFormation template, deployed the environment and code, checked for proper functioning, and qualified the template changes. We assessed the outage the template update would cause and, depending on it, either updated the previous stack or created a new stack, and transitioned to S3 when completed.

Given the wide variety of needs for various organizations, there’s no right or wrong approach to developing your DevOps architecture. But it’s always best to make small iterative-but-real improvements because a huge project that tries to accomplish everything is far more likely to fail. The key to success is not to prevent failure, but rather to maintain a low failure cost.


Apache Ant

Ant is a Java library and command-line tool. Ant's mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is building Java applications. Ant supplies a number of built-in tasks that allow you to compile, assemble, test and run Java applications. Ant can also be used effectively to build non-Java applications, for instance C or C++ applications. More generally, Ant can be used to pilot any type of process which can be described in terms of targets and tasks.

Ant is written in Java. Users of Ant can develop their own “antlibs” containing Ant tasks and types, and are offered a large number of ready-made commercial or open-source “antlibs”.

Ant is extremely flexible and does not impose coding conventions or directory layouts on the Java projects that adopt it as a build tool.

Software development projects looking for a solution combining build tool and dependency management can use Ant in combination with Ivy.


Introduction of Apache Ant

Apache Ant is a software tool for automating software build processes. It is similar to Make but is implemented using the Java language, requires the Java platform, and is best suited to building Java projects.

The most immediately noticeable difference between Ant and Make is that Ant uses XML to describe the build process and its dependencies, whereas Make uses its own Makefile format. By default the XML file is named build.xml.
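
A minimal build.xml sketch (project and directory names are assumptions):

  <project name="myapp" default="jar" basedir=".">
    <target name="compile">
      <mkdir dir="build/classes"/>
      <javac srcdir="src" destdir="build/classes"/>
    </target>
    <target name="jar" depends="compile">
      <jar destfile="build/myapp.jar" basedir="build/classes"/>
    </target>
    <target name="clean">
      <delete dir="build"/>
    </target>
  </project>

A target is then run from the command line, e.g. ant jar.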

Ant is an Apache project. It is open source software, and is released under the Apache Software License.


Apache Ant: A Build Tool

Apache Ant (or simply Ant) is an XML-based build scripting language used heavily by the Open Source community. Ant automates tasks such as compiling source code, building deployment packages and automatically checking which dependencies in a build set need to be updated.
