General SCM Interview Questions

The previous chapters outlined the state of CM technology from the standpoint of a spectrum of concepts underlying automated CM, and from the standpoint of the reflection of some of these concepts in commercial CM products. Clearly, no CM product supports all CM concepts; similarly, not all CM concepts are necessary to support all possible end-user requirements. That is, different CM tools (and the concepts that underlie them) may be required by different organizations or projects, or within projects at different phases of the software development life cycle. This observation, coupled with the observed, continuing industry effort to adopt computer-aided software engineering (CASE) tools, leads us to conclude that integration is key to providing automated CM support in software development environments.
In this chapter we define what we mean by integration by way of a three-level model of integration and illustrate where CM integration fits into this model. We then describe the advantages and disadvantages of current approaches to achieving integration in software development environments. We close with a brief discussion of the relationship between future integration technology and the three levels of integration.

CM Services in Software Environments: A Question of Integration

There is no consensus regarding where CM services should reside in software environment architectures, although a variety of approaches have been explored. For example, CM services have been offered via:

· Tools such as RCS, SCCS, CCC.
· Operating system extensions at the file-system level such as DSEE and NSE.
· Shared data models such as in the CIS specifications [18] and the PCTE PACT [53] environment.

A further complication is the emergence of a robust CASE tool industry, wherein many popular CASE tools provide their own tool-specific repository and CM services. As a result, CM functions are increasingly provided by, and distributed across, several CASE tools in an environment.
We have found it useful to think of integration in terms of a three-level model. This model, illustrated in Figure 5-1, corresponds to the ANSI/SPARC [48] three-schema approach used to describe database architectures. A useful intuition is that this correspondence is more than accidental. The bottom level of integration, called “mechanism” integration, corresponds to the ANSI/SPARC physical schema level. Mechanism integration addresses the implementation aspects of software integration, including, but not limited to: software interfaces provided by the environment infrastructure, e.g., operating system or environment framework interfaces; software interfaces provided by individual tools in the environment; and architectural aspects of the tools, such as process structure (e.g., client/server) and data management structure (derivers, data dictionary, database). In the case of CM, mechanism integration can refer to integration with CM systems such as SCCS, RCS, CCC and DSEE, and to CM implementation aspects such as transparent repositories and other operating-system-level CM services.

The middle level of integration, called “services” integration, corresponds to the ANSI/SPARC logical schema level. Services refers to the high-level functions provided by tools, and integration at this level can be regarded as the specification of how services can be related in a coherent fashion. In the case of CM, these services refer to elements of the spectrum of concepts discussed in chapter 3, e.g., workspaces and transactions, and services integration constitutes a kind of unified model of CM services.

The top level of integration, called “process” integration, corresponds to the ANSI/SPARC external schema (also called “end-user”) level. Process integration can be regarded as a kind of process specification for how software will be developed; this specification can define a view of the process from many perspectives, spanning individual roles through larger organizational perspectives. In the case of CM, process integration refers to policies and procedures for carrying out CM activities.

Integration occurs within each of these levels of integration; thus, mechanisms are integrated with mechanisms, services with services, and process elements with process elements. There are also relationships that span the levels. The relationship between the mechanism level and the services level is an implementation relationship: a CM concept in the services layer may be implemented by different tools in the mechanism level, and conversely, a single mechanism may implement more than one CM concept. The relationship between the services level and the process level is a process adaptation relationship: different CM services may be combined, and tuned, to support different process requirements.

[Figure 5-1: The three-level model of integration]

This three-level model provides a working context for understanding integration. For the moment, however, existing integration technology does not match exactly this somewhat idealized model of integration. For example, many services provided by CASE tools (including CM) embed process constraints that should logically be separate, i.e., reside in the process level. Similarly, tool services are often closely coupled to particular implementation techniques.

The level of adaptability required of integrating CM, both in terms of adaptability to project-specific requirements and adaptability to multiple underlying CM implementations, pushes the limits of available environment integration techniques. The following sections describe the current state of integration technology and its limitations. The next chapter discusses how future-generation integration technology can address these shortcomings.

Reference: 
The State of Automated Configuration Management.
A. Brown, S. Dart, P. Feiler, K. Wallnau


Definition of Configuration Management

Software CM is a discipline for controlling the evolution of software systems. Classic discussions about CM are given in texts such as [6] and [8]. A standard definition taken from IEEE standard 729-1983 [42] highlights the following operational aspects of CM:

  • Identification: an identification scheme that reflects the structure of the product and identifies components and their types, making them unique and accessible in some form.
  • Control: controlling the release of a product and changes to it throughout the lifecycle by having controls in place that ensure consistent software via the creation of a baseline product.
  • Status Accounting: recording and reporting the status of components and change requests, and gathering vital statistics about components in the product.
  • Audit and review: validating the completeness of a product and maintaining consistency among the components by ensuring that the product is a well-defined collection of components.

The definition includes terminology such as configuration item, baseline, release and version. When analyzing CM systems (automated tools that provide CM), it becomes evident that they incorporate functionality to varying degrees to support the above definition. Some CM systems, though, provide functionality that goes beyond this definition. This is due, among other reasons, to the recognition of different user roles, disparate operating environments such as heterogeneous platforms, and programming-in-the-large support, such as enabling teams of software programmers to work synergistically on large projects. To capture this extra functionality, it is necessary to broaden the definition of CM to include:

  • Manufacture: managing the construction and building of the product in an effective and optimal manner.
  • Process management: ensuring the carrying out of the organization’s procedures, policies and lifecycle model.
  • Team work: controlling the work and interactions between multiple users on a product.

In summary, the capabilities provided by existing CM systems encompass identification, control, status accounting, audit and review, manufacture, process management and team work.


Roles of CM Users and Views of CM

Different views of CM emanate from the different roles and concerns of people in an organization.

A simple, typical CM user scenario of an organization is described in order to present the various roles and the subsequent views of CM support. The scenario involves various people with different responsibilities: a project manager who is in charge of a software group, a configuration manager who is in charge of the CM procedures and policies, the programmers who are responsible for developing and maintaining the software product, the tester who validates the correctness of the product, the quality assurance (QA) manager who ensures the high quality of the product, and the customer who uses the product. Each role comes with its own goals and tasks. For the project manager, the major goals are to ensure that the product is developed within a certain time frame and meets the customer’s requirements. Hence, the manager monitors the progress of development and recognizes and reacts to problems. This is done by generating and analyzing reports about the status of the software system and by performing reviews on the system.

The goals of the configuration manager are to ensure that procedures and policies for creating, changing, and testing code are defined and followed, as well as to make information about the project accessible. To implement techniques for maintaining control over code changes, this manager introduces mechanisms for making official requests for changes, for evaluating changes (via a Change Control Board (CCB) that is responsible for approving changes to the software system), and for authorizing changes. The manager creates and disseminates task lists for the programmers and essentially creates the project context. The manager also collects statistics about components in the software system, such as information determining which components in the system are problematic.

For the programmers, the goal is to work effectively in creating the product. This means programmers do not unnecessarily interfere with each other in the creation and testing of code and in the production of supporting documents, but at the same time they communicate and coordinate efficiently. They use tools that help build a consistent software product, and they communicate and coordinate by notifying one another about tasks required and tasks completed. Changes are propagated between each other’s work by merging those changes and resolving any conflicts. A history is kept of the evolution of all components in the product, along with a log of what actually changed and the reasons for those changes. The programmers have their own work areas for creating, changing, testing, and integrating code. At a certain point, the code is made into a baseline from which further development continues and from which parallel development for variants for other target machines emerges.

The tester’s goal is to make sure every component of the product is tested and found satisfactory. This involves testing a particular version of the product and keeping a record of which tests apply to which version of the product, along with the results of the tests. Any errors are reported back to the appropriate people and tracked, and fixes are put through regression testing.

The Quality Assurance (QA) manager’s goal is to ensure the high quality of the product. This means that certain procedures and policies must be fulfilled with the appropriate approval. Bugs must be fixed and fixes propagated to the appropriate variants of the product with the correct amount of testing applied to each variant. Customer complaints about the product must be followed up.

The customers use the product — most likely, different customers use different versions and variants of it. Customers follow certain procedures for requesting changes and for indicating bugs and improvements for the product.

As a result of the various roles, an organization ends up with several views of CM. These views range from corporate CM (CCM) and project CM (PCM) to developer CM (DCM) and application CM (ACM). The former two views treat CM as a management and control discipline emphasizing the process aspect of CM, while the latter two views treat CM as a developer support function emphasizing tool support for CM. CCM focuses on maintaining CM information for a product, whose data is the basis for strategic impact analysis and process improvement. PCM focuses on managing change to a product through formal control processes such as change authorization via CCBs and configuration managers guarding CM repositories. DCM focuses on supporting developers (programmers) performing actual changes, maintaining history of actual changes, and providing stability and consistency through system build and team support. ACM focuses on configuration of applications as they are deployed, addressing issues of installation management, as well as dynamic configuration and reconfiguration. Ideally, a CM system suited to the views should support all these goals, roles and tasks.

Reference: 
The State of Automated Configuration Management
A. Brown, S. Dart, P. Feiler, K. Wallnau


Configuration Management

Hey everyone – we are having a discussion about getting started in CM – in the CM Crossroads forums http://bit.ly/awKvKA. Please feel free to respond here, in the forums or contact me directly at bob.aiello@ieee.org

Bob Aiello
Editor in Chief
CM Crossroads
http://www.linkedin.com/in/BobAiello

______________________________________________________________

Hi Rajesh – did you actually “test” the link that I posted before saying that it was “illegal”? It resolves just fine and it points to an active discussion on Getting Started in Configuration Management.

Hi bangaloreorbit – I DID SAY “Please feel free to respond here…” – didn’t I???? Gentlemen – I have been promoting Configuration Management in virtual communities for many, many years. CM professionals are true corporate citizens who share their knowledge and expertise. Let’s model that corporate citizenship. Feel free to send me an email directly at Bob.Aiello@ieee.org or on Skype: BobAiello.

 

  • Rajesh Kumar


    Hi Robert,

    The link which you have given for CM Crossroads is incorrect. I would request you to correct the link as soon as possible. It’s violating the community rules, as you are providing a false link.

    The CM Crossroads link is: www.cmcrossroads.com

  • bangaloreorbit


    This is a short URL to CM Crossroads.

    Robert, I guess we can discuss the same topics here instead of moving across multiple platforms 🙁


5 Keys to Automating Configuration Management for Application Infrastructure

One of the trends being discussed in business, among vendors and in the analyst community is the importance of automating the functions performed by IT. Growing demands by the business, tight budgets and compliance pressures together accentuate the need for IT to be more agile, efficient and responsive to business stakeholders.

Naturally, vendors rush into this environment, each touting the unique benefits of its solution set and the urgency to move forward immediately. A key area targeted for IT automation is ‘configuration management.’ As it relates to automating day-to-day IT functions, configuration management can mean many different things: patch management, server and network management, or others.


Top 25 Chef configuration management interview questions and answers


Source – learn.chef.io

What is a resource?
Answer- A resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.
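
For instance, a minimal sketch of declaring a resource and its desired state (the ntp package name here is only an illustrative assumption, not something from the lesson):

# a resource names a piece of infrastructure and the state it should be in
package 'ntp' do
  action :install   # desired state: the ntp package is installed
end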

Question: What is a recipe?
Answer- A recipe is a collection of resources that describes a particular configuration or policy. A recipe describes everything that is required to configure part of a system. Recipes do things such as:

install and configure software components.
manage files.
deploy applications.
execute other recipes.
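
For example, a minimal sketch of a recipe that groups several resources together (the package name, file path and content below are illustrative assumptions, not files from the lesson):

# install a web server, drop in a home page, then enable and start the service
package 'httpd'

file '/var/www/html/index.html' do
  content '<h1>hello world</h1>'
  mode '0644'
end

service 'httpd' do
  action [:enable, :start]
end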

Question: What happens when you don’t specify a resource’s action?
Answer- When you don’t specify a resource’s action, Chef applies the default action.
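
As a sketch, these two declarations behave identically, because :install is the package resource's default action:

package 'httpd'

# equivalent, with the default action written out explicitly
package 'httpd' do
  action :install
end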

Question: Are these two recipes the same?

package 'httpd'
service 'httpd' do
  action [:enable, :start]
end

&&

service 'httpd' do
  action [:enable, :start]
end
package 'httpd'

Answer-
No, they are not. Remember that Chef applies resources in the order they appear. So the first recipe ensures that the httpd package is installed and then configures the service. The second recipe configures the service and then ensures the package is installed. The second recipe may not work as you’d expect, because the service resource will fail if the package is not yet installed.

Question: Are these two recipes the same?

package 'httpd'
service 'httpd' do
  action [:enable, :start]
end

package 'httpd'
service 'httpd' do
  action [:start, :enable]
end

Answer-
No, they are not. Although both recipes ensure that the httpd package is installed before configuring its service, the first recipe enables the service when the system boots and then starts it. The second recipe starts the service and then enables it to start on reboot.

Are these two recipes the same?

file '/etc/motd' do
  owner 'root'
  group 'root'
  mode '0755'
  action :create
end

file '/etc/motd' do
  action :create
  mode '0755'
  group 'root'
  owner 'root'
end

Answer-
Yes, they are! Order matters with a lot of things in Chef, but you can order resource attributes any way you want.

Question –
Write a service resource that stops and then disables the httpd service from starting when the system boots.

Answer –
service 'httpd' do
  action [:stop, :disable]
end

How does a cookbook differ from a recipe?
A recipe is a collection of resources, and typically configures a software package or some piece of infrastructure. A cookbook groups together recipes and other information in a way that is more manageable than having just recipes alone.

For example, in this lesson you used a template resource to manage your HTML home page from an external file. The recipe stated the configuration policy for your web site, and the template file contained the data. You used a cookbook to package both parts up into a single unit that you can later deploy.
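
As an illustrative sketch of that split (the cookbook name, paths and variable below are assumptions for illustration, not the exact files from the lesson), the recipe states the policy and the template file carries the data:

# cookbooks/web/recipes/default.rb  (hypothetical cookbook layout)
template '/var/www/html/index.html' do
  source 'index.html.erb'                        # looked up in cookbooks/web/templates/
  variables(greeting: 'hello from my cookbook')  # available in the ERB file as @greeting
  mode '0644'
end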

How does chef-apply differ from chef-client?

chef-apply applies a single recipe; chef-client applies a cookbook.

For learning purposes, we had you start off with chef-apply because it helps you understand the basics quickly. In practice, chef-apply is useful when you want to quickly test something out. But for production purposes, you typically run chef-client to apply one or more cookbooks.

You’ll learn in the next module how to run chef-client remotely from your workstation.

What’s the run-list?

The run-list lets you specify which recipes to run, and the order in which to run them. The run-list is important for when you have multiple cookbooks, and the order in which they run matters.
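
One common place a run-list lives is in a role; a minimal sketch in Chef's Ruby role DSL (the role and recipe names are hypothetical):

# roles/web.rb -- the run-list order is the order Chef runs the recipes
name 'web'
description 'Set up Apache before the message of the day'
run_list 'recipe[apache]', 'recipe[motd]'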

What are the two ways to set up a Chef server?

Install an instance on your own infrastructure.
Use hosted Chef.

What’s the role of the Starter Kit?
The Starter Kit provides certificates and other files that enable you to securely communicate with the Chef server.

Where can you get reusable cookbooks that are written and maintained by the Chef community?
Chef Supermarket, https://supermarket.chef.io.

What’s the command that enables you to interact with the Chef server?
knife

What is a node?
A node represents a server and is typically a virtual machine, container instance, or physical server – basically any compute resource in your infrastructure that’s managed by Chef.

What information do you need in order to bootstrap?
You need:

your node’s host name or public IP address.
a user name and password you can log on to your node with.
Alternatively, you can use key-based authentication instead of providing a user name and password.

What happens during the bootstrap process?
During the bootstrap process, the node downloads and installs chef-client, registers itself with the Chef server, and does an initial checkin. During this checkin, the node applies any cookbooks that are part of its run-list.

Which of the following lets you verify that your node has successfully bootstrapped?

The Chef management console.
knife node list
knife node show
You can use all three of these methods.

What is the command you use to upload a cookbook to the Chef server?
knife cookbook upload

How do you apply an updated cookbook to your node?
We mentioned two ways.

Run knife ssh from your workstation.
SSH directly into your server and run chef-client.
You can also run chef-client as a daemon, or service, to check in with the Chef server on a regular interval, say every 15 or 30 minutes.
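
As a sketch of the daemon approach (interval and splay are standard chef-client configuration settings, given in seconds; the values simply mirror the 30-minute example above):

# /etc/chef/client.rb (excerpt)
interval 1800   # check in with the Chef server every 30 minutes
splay 300       # add up to 5 minutes of random delay so nodes don't all check in at once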

Update your Apache cookbook to display your node’s host name, platform, total installed memory, and number of CPUs in addition to its FQDN on the home page.

Update index.html.erb like this.
<html>
<body>
<h1>hello from <%= node['fqdn'] %></h1>

<pre>
<%= node['hostname'] %>
<%= node['platform'] %> - <%= node['platform_version'] %>
<%= node['memory']['total'] %> RAM
<%= node['cpu']['total'] %> CPUs
</pre>
</body>
</html>

Then upload your cookbook and run it on your node.

What would you set your cookbook’s version to once it’s ready to use in production?

According to Semantic Versioning, you should set your cookbook’s version number to 1.0.0 at the point it’s ready to use in production.
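
The version is declared in the cookbook's metadata.rb; a minimal sketch (reusing the awesome_customers cookbook name from the later questions):

# metadata.rb
name    'awesome_customers'
version '1.0.0'   # ready for production, per Semantic Versioning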

What is the latest version of the haproxy community cookbook?

To know the latest version of any cookbook on Chef Supermarket, browse to its page and view the latest version from the version selection box.

Or, get the info from the knife cookbook site command, like this.

knife cookbook site show haproxy | grep latest_version
latest_version: http://cookbooks.opscode.com/api/v1/cookbooks/haproxy/versions/1.6.6

Create a second node and apply the awesome_customers cookbook to it. How long does it take?

You already accomplished the majority of the tasks that you need. You wrote the awesome_customers cookbook, uploaded it and its dependent cookbooks to the Chef server, applied the awesome_customers cookbook to your node, and verified that everything’s working.

All you need to do now is:

Bring up a second Red Hat Enterprise Linux or CentOS node.
Copy your secret key file to your second node.
Bootstrap your node the same way as before. Because you include the awesome_customers cookbook in your run-list, your node will apply that cookbook during the bootstrap process.
The result is a second node that’s configured identically to the first one. The process should take far less time because you already did most of the work.

Now when you fix an issue or add a new feature, you’ll be able to deploy and verify your update much more quickly!

What’s the value of local development using Test Kitchen?

Local development with Test Kitchen:

enables you to use a variety of virtualization providers that create virtual machine or container instances locally on your workstation or in the cloud.
enables you to run your cookbooks on servers that resemble those that you use in production.
speeds up the development cycle by automatically provisioning and tearing down temporary instances, resolving cookbook dependencies, and applying your cookbooks to your instances.

What is VirtualBox? What is Vagrant?

VirtualBox is the software that manages your virtual machine instances.

Vagrant helps Test Kitchen communicate with VirtualBox and configures things like available memory and network settings.

Verify that your motd cookbook runs on both CentOS 6.6 and CentOS 6.5.

Your motd cookbook is already configured to work on CentOS 6.6 as well as CentOS 6.5, so you don’t need to modify it.

To run it on CentOS 6.5, add an entry to the platforms section of your .kitchen.yml file like this.

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: centos-6.6
    driver:
      box: opscode-centos-6.6
      box_url: http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-6.6_chef-provisionerless.box
  - name: centos-6.5
    driver:
      box: opscode-centos-6.5
      box_url: http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-6.5_chef-provisionerless.box

suites:
  - name: default
    run_list:
      - recipe[motd::default]
    attributes:

In many cases, Test Kitchen can infer the box and box_url parameters, which specify the name and location of the base image, or box. We specify them here to show you how to use them.

Run kitchen list to see the matrix of test instances that are available. Here, we have two platforms – CentOS 6.5 and CentOS 6.6 – multiplied by one suite – default.

$ kitchen list

Instance           Driver   Provisioner  Verifier  Transport  Last Action
default-centos-66  Vagrant  ChefZero     Busser    Ssh        <Not Created>
default-centos-65  Vagrant  ChefZero     Busser    Ssh        <Not Created>

Run kitchen converge to create the instances and apply the motd cookbook.

$ kitchen converge
-----> Starting Kitchen (v1.4.0)
-----> Creating <default-centos-66>...
       Bringing machine 'default' up with 'virtualbox' provider...
       [...]
       Running handlers:
       Running handlers complete
       Chef Client finished, 1/1 resources updated in 10.372334751 seconds
       Finished converging <default-centos-66> (3m52.59s).
-----> Creating <default-centos-65>...
       Bringing machine 'default' up with 'virtualbox' provider...
       [...]
       Running handlers:
       Running handlers complete
       Chef Client finished, 1/1 resources updated in 5.32753132 seconds
       Finished converging <default-centos-65> (10m12.63s).
-----> Kitchen is finished. (19m47.71s)

Now to confirm that everything’s working, run kitchen login. But this time, you need to provide the instance name so that Test Kitchen knows which instance to connect to.

$ kitchen login default-centos-66
Last login: Wed May 13 20:15:00 2015 from 10.0.2.2

hostname:  default-centos-66
fqdn:      default-centos-66
memory:    469392kB
cpu count: 1
[vagrant@default-centos-66 ~]$ logout
Connection to 127.0.0.1 closed.

$ kitchen login default-centos-65
Last login: Wed May 13 20:28:18 2015 from 10.0.2.2

hostname:  default-centos-65
fqdn:      default-centos-65
memory:    469452kB
cpu count: 1
[vagrant@default-centos-65 ~]$ logout
Connection to 127.0.0.1 closed.

Top 5 Configuration Management Tools


Puppet

In computing, Puppet is an open-source configuration management utility. It runs on many Unix-like systems as well as on Microsoft Windows, and includes its own declarative language to describe system configuration.

Puppet is produced by Puppet Labs, founded by Luke Kanies in 2005. It is written in Ruby and released as free software under the GNU General Public License (GPL) until version 2.7.0 and the Apache License 2.0 after that.

What started out as a popular DevOps tool has quickly become a movement. Written in Ruby, like Chef, Puppet also comes in both an open source and enterprise version. However, where Chef has a healthy offering of features across both open source and enterprise versions, Puppet has placed the majority of its feature set into enterprise status. Features that the open source version comes with include provisioning (Amazon EC2, Google Compute Engine), configuration management (operating systems and applications) plus 2,000+ pre-built configurations on Puppet Forge. Considerably more features are available for the enterprise version, including the open source features plus graphical user interface, event inspector (visualize infrastructure changes), supported modules, and provisioning (VMware VMs). Configuration management (discovery, user accounts), orchestration, task automation, role-based access control (with external authentication support) are also included. Puppet enterprise has a unified cross-platform installer of all components and support.

Chef

Offered as both an open source and enterprise product, Chef is a powerful tool for full IT infrastructure configuration management. With open source Chef at the heart of both offerings, shared features include a flexible and scalable automation platform, access to 800+ reusable cookbooks and integration with leading cloud providers. Chef also offers enterprise platform support, including Windows and Solaris, and allows you to create, bootstrap and manage OpenStack clouds. It has easy installation with the ‘one-click’ Omnibus Installer, automatic system discovery with Ohai, text-based search capabilities and multiple environment support. Other notable features include the “Knife” command line interface, “Dry Run” mode for testing potential changes, and the ability to manage 10,000+ nodes on a single Chef server. Features only available in the enterprise version of Chef include availability as a hosted service, an enhanced management console, centralized activity and resource reporting, as well as “Push” command and control client runs. Multi-tenancy, role-based access control (RBAC), high availability installation support and verification, along with centralized authentication using LDAP or Active Directory, are included with Chef enterprise.

Ansible

Ansible is a model-driven configuration management tool that leverages SSH to improve security and simplify management. In addition to configuration management, it is capable of automating app deployment (even multi-tier deployment), workflow orchestration and cloud provisioning, hence the company likes the tool to be categorized as an “orchestration engine.” Ansible is built on five design principles: ease of use (it doesn’t require writing scripts or custom code), a low learning curve (both for sysadmins and developers), comprehensive automation (allowing you to automate almost anything in your environment), efficiency (since it runs over OpenSSH, it doesn’t rely on agents consuming memory or CPU resources on managed nodes), and security (it is inherently more secure because it doesn’t require an agent, additional ports or root-level daemons). As with many other open source projects, Ansible has a paid product that comes in the form of a web UI called Ansible Tower.

CFEngine

One of the earliest full-featured configuration management systems out there, CFEngine has gone through several iterations and maintained relevance as operating systems have moved from the local data center to the cloud. At the heart of the infrastructure automation framework, CFEngine is also a modeling and monitoring compliance engine, capable of running with a small footprint. As recommended by CFEngine, steps toward identifying an initial desired state include: 1) model the desired state of your environment; 2) simulate configuration changes before committing them; 3) confirm the desired state and set for automatic self-healing; 4) collect reports on the differences between actual and desired states. CFEngine has a library of reusable data-driven models that will help users model their desired states. These infrastructure patterns are designed to be reusable across the enterprise.

Salt

As part of a larger, enterprise-ready application, the configuration management piece of Salt is as robust and feature-full as would be expected. Built upon the remote execution core, execution occurs on “minions”, which receive commands from the central Salt master and reply with the results of those commands. Salt supports simultaneous configuration of tens of thousands of hosts. Configuration is based upon host “states”: no programming is required to write the configuration files, which are small, easy to understand, and describe the desired state of each host. Additionally, for those who do program, or admins who want more control and familiarity with their configuration files, any language can be used to render the configurations.


SCM Process and smartBuild Overview, What is SCM Process and smartBuild?


SCM Process and smartBuild


Software Configuration Management in Pakistan | SCM Practices in Pakistan

Mature CM is cross-business functionality, NOT functionality solely within engineering. Software Configuration Management facilitates timely communications and enforces development policies and technical standards, along with efficiently managing hand-offs between environments and teams.

Software Configuration Management comprises the practices and procedures for administering source code, producing software development builds, controlling change, and managing software configurations. Specifically, it ensures the integrity, reliability and reproducibility of developing software products from conception to release.

I should say CM in Pakistan has been catching up slowly but very steadily. There are not many senior people in Pakistan who have dedicated their careers to Configuration Management. Many people moved on to various management roles out of CM after some years of experience, due to the lack of management opportunities in this area. But times have changed for the good and are getting better. With the growing maturity of CM across the industry, organizations are giving utmost importance to this area and are hiring people at very senior levels.

I have around 3 years of experience interacting with CM professionals at the international and national level, and have found very strong maturity and awareness. From my experience, I would say Pakistani IT firms still need broader thinking from a CM perspective. In many organizations, people think that CM is just about making builds and controlling source code for projects. We need to educate people about the real importance of Software Configuration Management in any organization. There is a need for a clear understanding of the various aspects of Configuration Management, such as change management, incident/problem management, build and release management, code control/version control, automated build management, parallel development and continuous integration, and the introduction of new tools for software and hardware configurations.

CM is much more than tool administration or build management. It comprises the entire life cycle of a project as well as the post-production analysis that improves the process for upcoming projects. It is a continuous process that gives an IT organization the confidence to deliver a quality product from a stable code line.

NetSol is vigilant and moving steadily toward innovation and automation through R&D in the field of Configuration Management. As a CMMI Level 5 organization, we have implemented an excellent Release Management process that supports CM services throughout the organization. We are experimenting with modern tools and discovering best practices to make our process easier and technically stronger. NetSol’s Configuration Management department welcomes newly formed organizations, and especially existing ones that want to set up or improve their Configuration Management process and are looking for a technical improvement mentor; please come forward and consult NetSol with confidence.
