Top Questions on Server Configuration Management Tools Chef, Puppet, and Ansible

 


Source – http://hub.scalr.com/blog/top-questions-on-server-configuration-management-tools-chef-puppet-and-ansible-2

As a quick recap, configuration management tools enable companies to standardize and automate their infrastructure. Through standardization, you can build systems that are platform independent (i.e. not reliant on AMIs or provider-specific toolsets). These tools also make it easy to reproduce servers for scaling or testing, and to recover from disaster quickly by defining a desired application state: if a server has drifted from that state when it is checked, it is restored to its proper state. In addition, this standardization makes it easy to onboard new developers.

While the language across configuration management tools is different, the concepts are the same. At the fundamental level in each configuration tool, a resource represents a part of the system and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.

In Chef, a recipe is a collection of resources that describes a particular configuration or policy. These collections are called playbooks in Ansible, and manifests in Puppet. These collections describe everything that is required to configure part of a system. Collections install and configure software components, manage files, deploy applications, and execute other recipes. We go into more detail in our blog post here.
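To make the shared concept concrete, here is a minimal sketch of the same "install and start Apache" policy in each tool (the httpd package and the webservers group are illustrative assumptions, not from the original post):

# Chef recipe (Ruby)
package 'httpd'
service 'httpd' do
  action [:enable, :start]
end

# Puppet manifest
package { 'httpd': ensure => installed }
service { 'httpd': ensure => running, enable => true }

# Ansible playbook (YAML)
- hosts: webservers
  tasks:
    - yum: name=httpd state=present
    - service: name=httpd state=started enabled=yes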

Here are the top questions we got from the community:

How is the concept of master/agent configuration better (or not) than agentless, when it comes to infrastructure as code?

Chef and Puppet are master/agent configuration systems, while Ansible is agentless. The historic argument is that the agent-based installation process is difficult: you have to set up the master, and then set up agents on your nodes so that they know about the master. If you’ve got servers with diverse Linux distros, on different versions of Windows, etc., installation can get tricky. On the other hand, because agents check in every few minutes, agent-based systems are powerful for advanced monitoring. At the end of the day this really comes down to personal preference and what your company requires. If your infrastructure is large and heavily standardized, installation on nodes isn’t complicated, so agent-based systems are a good fit. If you have servers that run Python, try agentless.

Are these configuration management systems like Microsoft System Center Configuration Manager (SCCM), but used for local and cloud?

They are like SCCM, but open-source and priced per node. For those who haven’t used it, Microsoft System Center Configuration Manager (SCCM) is used for infrastructure provisioning, monitoring, and automating workflow processes (usually sysadmin tasks). SCCM is a powerhouse in the enterprise space. While it can manage end clients on non-Windows servers, the server console portion of SCCM must be hosted and run on a Windows Server machine. The reason other orchestration/configuration systems win here is that you pay on a per-node basis and you’re not tied into Windows Server’s licensing agreements; in other words, open-source vs. proprietary. And with Chef/Puppet/Ansible the thinking is in terms of resources, as opposed to SCCM, which thinks more in files and terminal commands.

An attendee commented on using SCCM:

“We really like Ansible because of the non-agent requirement. For Windows patching we utilize System Center Configuration Manager, and even though System Center can provide patching to Linux, we have run into issues with the SCCM agent staying healthy and running on our Linux systems. We have also run into cases where changes made by the SCCM admins broke the SCCM agent on a majority of our Linux servers. Our Linux patching process has been highly manual up to this point, but we are seeking to automate it to free up staff time for other support tasks, which is why we are reviewing several solutions. The non-agent aspect is highly desirable in our situation because of past experience with the SCCM agent. I just wanted to provide that feedback so others that have not experienced agent issues with other deployment solutions may want to keep that in mind.”

If we have to pick a tool dependent on whether we deploy on cloud or on-premise – which of these tools would be a better choice?

We would recommend looking into the network access requirements for each tool. An agent that checks in periodically with a central master is likely to work better than SSH, which requires a direct path or a path through lots of proxies.

One attendee mentioned in the comments: “[In regards to] SSH vs Agent – Agent is more secure where SSH is not an option.”

What happened to cfengine? This tool used to be mentioned alongside Chef and Puppet. 
Version 3 of CFEngine is a complete revamp, but compared to other configuration management tools its brand and community outreach aren’t strong, and it does little the others don’t do better.

How does StackStorm compare to the other orchestrators being reviewed?
StackStorm positions itself more as an automation platform, or a DevOps workflow tool: it handles provisioning and configuring servers, but it also leans on automated, event-driven services that plug into Jenkins and other CI/CD workflows.

From one attendee that had used all three: “For us, getting the Server engineers to adopt Chef has been very difficult. It grew organically on the Dev side of the house. Ansible appears to be something that guys without Dev skills could pick up more easily. Just [my] perception.”

Can I run tasks in parallel with Ansible rather than running it serially (say 50 servers being updated with a patch)?

The default method is to run each task across all servers in parallel, meaning that Ansible runs the first task (e.g. installing Git) on all servers in a group, and once all servers respond with a success, failure, or unchanged response, it moves to the next task on all servers. It doesn’t run on a server, wait, and move on to the next; it runs on all servers at once over SSH. If you want to deploy updates in batches, you can run against a percentage of servers in a group (e.g. 50%, by listing it in the playbook as serial: 50%), as in the sketch below.
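A minimal sketch of a rolling-update playbook (the webservers group and the git package are illustrative assumptions):

---
- hosts: webservers
  serial: "50%"          # patch half the group at a time
  tasks:
    - name: apply the latest git update
      yum:
        name: git
        state: latest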

An attendee made this comment as we mentioned Ansible:

“I have attended a presentation from RedHat regarding Ansible that states [that Ansible scales well]. They have large-scale hosting companies that spin up servers on the fly and perform patching for their servers via Ansible. The one mentioned had over 50,000 servers, and it seemed to handle the volume/scale fine. I of course don’t know everything about Puppet or Chef or Salt, but one thing I find really nice about Ansible is the ability to perform rolling updates/tasks. So if you had, say, 1,000 servers, you can say you want to run 10% or 100 at a time and keep it rolling until all 1,000 are done. It can be stated by percentage or defined number… I am sure I sound a bit biased, but one of the main reasons Ansible is high on our list right now is the fact that it is agentless and does not really consume resources.”

With Ansible, how can we handle the security implications of allowing passwordless SSH to a root account on all systems? What mechanisms are there for access control and auditing?

There are definite security implications if you are going to allow passwordless SSH, so it’s on the company to ensure that security groups or NSGs are well defined. We should also mention that passwordless SSH is only enabled from the machine you run Ansible commands and playbooks from, so if anything, consider that workstation your weak point. Make sure SSH access to it is permitted only from your IP. As an alternative to connecting via SSH, if you use Docker, Ansible allows you to deploy playbooks directly into Docker containers using the local Docker client; all you need is a user inside that container. A sketch of that setup follows.
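A minimal sketch of the Docker connection (assumes Ansible 2.0+ with the docker connection plugin; the container name web01 and file names are illustrative):

# inventory: target a running container instead of an SSH host
web01 ansible_connection=docker

# run a playbook against it via the local Docker client
ansible-playbook -i inventory site.yml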

Does Ansible run single-threaded, or does it address multiple servers in a group asynchronously?
Ansible runs against hosts in parallel: it attempts to run each task on all servers defined at the top of the playbook before moving on to the next task. The number of simultaneous connections is controlled by the forks setting, as in the sketch below.
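For example (the group name and fork count are illustrative), this ad-hoc command raises the default of five parallel connections to twenty:

ansible webservers -m ping -f 20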

One user said in regards to all three tools: “Ansible seems better for ‘orchestration’ and Puppet/Chef are really good for ‘configuration management’. Ansible can be used to stop applications and databases, then run Puppet, and then start applications and databases.”

Lastly, we got a surprise question from the audience on Jenkins, a CI/CD pipeline tool that can be used in conjunction with tools like Chef to completely automate the infrastructure behind your applications.

What are the alternatives to Jenkins?
While we recommend Jenkins, if you’re a Ruby shop, Capistrano is geared toward your deployments. If you live in the AWS world, you can try the CodeCommit/CodeDeploy/CodePipeline toolset. If you’re looking for a provider-agnostic solution, CircleCI is great. If your workflows revolve around Atlassian, try Bamboo.

If you are unsure of what CI/CD pipeline tool to use, or how they work, we will be hosting a webinar on Jenkins as part of our on-going series on infrastructure-as-code.


Top 25 TFS Interview Questions and Answers


TFS Interview Questions

1) What is Team Foundation Server? What does it cover – version control? Build processes? Bug tracking? Task management?

Team Foundation Server is defined in the documentation as:

Team Foundation is a collection of collaborative technologies that support a team effort to deliver a product. While the Team Foundation technologies are typically employed by a software team to build a software product, they can also be used on other types of projects.

As the customer already noted, three of the core deliverables of Team Foundation Server are:

1. Build Process

2. List/Work item Tracking

3. Source Control

This leaves off probably the two most important features of Team Foundation Server. By integrating the build process, source control, policy, and work item tracking, you gain deep insight into what teams are doing, plus analytics for future trends, which leads to the 4th core deliverable of Team Foundation Server:

4. Reporting

Having insight into how a team is tracking is really only half the answer; there also needs to be a mechanism to share this information, which brings us to the last feature of Team Foundation Server:

5. Collaboration (Typically enabled through the Team Portal, Team Project and Process Guidance)

Interestingly, it is these two missing categories that set Team Foundation Server apart from other offerings.

2) List out the functionalities provided by Team Foundation Server.

– Project Management

– Tracking work items

– Version Control

– Test case management

– Build Automation

– Reporting

– Virtual Lab Management

3) Explain TFS with respect to Git.

TFS traditionally uses Team Foundation Version Control (TFVC), a centralized system in which the full history lives on the server. Git is a distributed version control system in which every developer has a complete local copy of the repository. Since TFS 2013, TFS can also host Git repositories, so teams can choose either model per team project.

4) Explain how you can create a Git-TFS project in Visual Studio 2013 Express.

To create a Git-TFS project in Visual Studio 2013 Express:

– Create an account with the MS TFS service if you don’t have an in-house TFS server

– After that, you will be directed to the TFS page, where you will see two options for creating a project: a new team project, or a new team project + Git

– The account URL will be found right below “Getting Started.”

– Click on create Git project, and it will take you to a new window where you specify details about the project, like the project name, description, process template, version control, etc.; once completed, click on create project

– Now you can create a local project in Team Foundation Server by creating a new project in Visual Studio, and do not forget to mark the check box that says “Add to source control”

– In the next window, select Git as your version control and click OK, and you will be able to see the changes made in the source code

– After that, commit your code; right-click a file in Team Explorer and you can compare version differences

5) Mention whether all of the Team Foundation Service features are included in Team Foundation Server.

The TFS service is updated every 3 weeks, while the “on-premise” Team Foundation Server is updated every 3 months, so the on-premise version will always remain a little behind. However, TFS on-premise has got some things that the TFS service does not:

– You can use TFS Lab

– Customize work items/process templates

6) Explain what kind of report server you can add in TFS.

TFS uses SQL for its data storage, so you have to add SQL server reporting services to provide a report server for TFS.

7) How would one know whether a report is updated in TFS?

For each report, there is a “Date Last Updated” option in the lower right corner; when you click or select that option, it gives details about when the report was last updated.

8) Explain how you can restore hidden debugger commands in Visual Studio 2013?

To restore a debugger command that is hidden, you have to add the command back to the menu:

– Open your project, click on the Tools menu, and then click Customize

– Select the Commands tab in the Customize dialog box

– In the menu bar drop-down, choose the Debug menu that should contain the restored command

– Click the Add Command button

– In the Add Command box, choose the command you want to add and click OK

– Repeat the steps to add another command

9) Explain how you can track your code by customizing the scroll bar in Visual Studio 2013?

You can customize the scroll bar to display code changes, breakpoints, bookmarks, and errors. To show the annotations on the scroll bar:

– Open the scroll bar options page

– Choose the option “Show annotations over vertical scroll bar”, and then choose the annotations you want to see

– You can then spot and fix anything that appears frequently in the file but is not meant to be there

10) Can I install the TFS 2010 Build Service on my TFS 2008 build machine? 

Yes, you can. Even though they both default to the same port (9191), they can share that port without any problems.

11) Can we disable the “Override CheckIn Policy Failure” checkbox? Can that be customized based on User Login, Policy Type of File type?

No. It is designed to be fully auditable by including policy compliance data in the changeset details and in the check-in mail that is delivered, but it is left up to the developer to determine whether they have a good reason for overriding.

12) What are the different events available in the event model and is there any documentation on them?

There is really only one SCC event, the one that is raised on check-in. Subscription is via the general event model that is discussed in the extensibility kit.

13) Are Deletes you make in TFS 2010 Source Control physical or logical? Can accidental deletes be recovered?

Deletes are logical, and they are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore, because that would roll back every change made to TFS since the file was deleted.

14) Can different CheckIn Policies be applied on different branches? E.g. Can they have QA specific policies applied on CheckIn in a QA branch?

No.

15) How do I redisplay source control explorer?

Selecting View > Other Windows > Source Control Explorer will display the Source Control Explorer window within the IDE.

16) Why doesn’t source control detect that I have deleted a file/folder on my local disk?

The main scenario here is deleting a file (by mistake or intentionally) outside of Team Foundation and then trying to get that file back from source control. If the file version has not changed, the server thinks the user already has the file and does not copy it over. This is because the server keeps a list of files that the user already has, and when changes are made outside of source control this list becomes out of date. Team Foundation Version Control does have a force-get option that provides the functionality needed to obtain the desired version, but it is currently partially hidden as a check box in the Get Specific Version dialog.

17) Can I compare directory structures in TFS Source Control?

No, you cannot compare directory structures in TFS Source Control.

18) Can we configure SCC to not check-in the binary files? Where are such configurations done?

Team Foundation Version Control provides a way to limit check-ins by setting up check-in policies that are evaluated before a check-in can take effect. The easiest way to do this is by authoring a policy that checks whether the user is trying to check in a binary file from a given folder structure and rejects or accepts it accordingly.

19) How can I add non-solution items to source control?

This can be achieved by either clicking the Add icon or by going to File > Source Control and selecting the Add To Source Control menu item.

20) When a user “edits” a file in a “source controlled” project, it gets checked out automatically. Is this configurable? Can we change this behavior?

Yes. Going to Tools > Options > Source Control > Environment provides an option where a user can change the settings so that files are not checked out automatically on edit.

21) What plugin / extensibility API does it expose?

The Team Foundation Server component model for modifying the Process Template and creating plugins is built to be entirely open (in many cases the entry points are defined in XML configuration files). In addition, the development team and community are quite active in supplying samples of this:

Brian Harry

Buck Hodges

Rob Caron

This open platform has also enabled an ecosystem of add-ons like Teamlook, Teamprise, Teamplain, Teamword, and TFS Permission Manager.

22)  How does it integrate with other non-MS platforms?

Team Foundation Server uses web services for cross-machine communication, so Team Foundation Server functionality can be made available to any computer (see the MSDN Team System article on how to use these web services). This is exactly how companies like Teamprise and Teamplain have built their clients to run on non-Windows computers.

23) How does it integrate with other software (eg custom task management software etc)?

In addition to the integration methods mentioned above, Team Foundation is also a popular platform for other software manufacturers to host themselves in. Examples of this are Borland with their Together and Caliber products, and Compuware Testing with DevPartner.

24) How does the version control compare to Perforce? Branching, merging, change lists etc?

Team Foundation Server supports all normally expected source control features such as branching, merging, exclusive locking, remote/disconnected scenarios, labeling, searching on various properties, and high-fidelity reporting (how much code churn per person per project per iteration, etc.), plus a couple of newer paradigms like shelving and optimization for branching scenarios (many version control systems do a full copy for branches). I would include some performance comparisons, but most systems don’t allow publishing them.

25)  Automated build system?

Yes, Team Foundation Server includes an automated build system. It is based on MSBuild and offers the additional functionality of automatically running tests, profiling, performing code analysis, verifying policies, and collating the changesets and work items for reporting.

26) Any support for distributed build tools? Eg integrating our custom data build tools into the system throughout a network?

MSBuild was written to be extensible and to integrate with existing tools through easy-to-use XML configuration files. Many of the commercial build utilities, such as CruiseControl.NET, already use and/or integrate with MSBuild. In addition to making these actions part of the build script, I have found that generic tests set to run as part of the build do just as good a job, with a rich user interface and support for managing/filtering, etc.

27) Documentation support – eg integrating documentation with code check-ins etc?

This would typically be done through an entry in a work item (to be either associated or resolved) at the time of check-in and linked with that work item.

The links to the documentation can exist in a couple of ways.

1. Checked in as files (i.e. DOC, HTML, etc.). Team Foundation Server makes it trivial to link all objects checked in (as well as other work items).

2. Process guidance files that exist on the Windows SharePoint site – again making them easy to link.

3. External files, once again linked in a work item entry.

28) Does it send data compressed over the network?

Team Foundation uses web services for cross-machine communication and by default automatically configures IIS to use compression.

29) Working from home / remote location?

Since cross-machine communication is accomplished through web services, remote access is vastly simplified.

30) Working offline? If the server is offline?

Yes. You need to change the file property to offline via a command-line utility called TFPT and save changes in your local workspace. Any subsequent check-in does a get-latest, which resolves any conflicts that need to be merged.


Top 25 Chef configuration management interview questions and answers


Source – learn.chef.io

What is a resource?
Answer- A resource represents a piece of infrastructure and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.

Question: What is a recipe?
Answer- A recipe is a collection of resources that describes a particular configuration or policy. A recipe describes everything that is required to configure part of a system. Recipes do things such as:

install and configure software components.
manage files.
deploy applications.
execute other recipes.

Question: What happens when you don’t specify a resource’s action?
Answer- When you don’t specify a resource’s action, Chef applies the default action.

Question: Are these two recipes the same?

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

&&

service 'httpd' do
  action [:enable, :start]
end

package 'httpd'

Answer-
No, they are not. Remember that Chef applies resources in the order they appear. So the first recipe ensures that the httpd package is installed and then configures the service. The second recipe configures the service and then ensures the package is installed.

Note: the second recipe may not work as you’d expect, because the service resource will fail if the package is not yet installed.

Are these two recipes the same?

package 'httpd'

service 'httpd' do
  action [:enable, :start]
end

&&

package 'httpd'

service 'httpd' do
  action [:start, :enable]
end

Answer-
No, they are not. Although both recipes ensure that the httpd package is installed before configuring its service, the first recipe enables the service when the system boots and then starts it. The second recipe starts the service and then enables it to start on reboot.

Are these two recipes the same?

file '/etc/motd' do
  owner 'root'
  group 'root'
  mode '0755'
  action :create
end

file '/etc/motd' do
  action :create
  mode '0755'
  group 'root'
  owner 'root'
end

Answer-
Yes, they are! Order matters with a lot of things in Chef, but you can order resource attributes any way you want.

Question –
Write a service resource that stops and then disables the httpd service from starting when the system boots.

Answer –
service 'httpd' do
  action [:stop, :disable]
end

How does a cookbook differ from a recipe?
A recipe is a collection of resources, and typically configures a software package or some piece of infrastructure. A cookbook groups together recipes and other information in a way that is more manageable than having just recipes alone.

For example, in this lesson you used a template resource to manage your HTML home page from an external file. The recipe stated the configuration policy for your web site, and the template file contained the data. You used a cookbook to package both parts up into a single unit that you can later deploy.

How does chef-apply differ from chef-client?

chef-apply applies a single recipe; chef-client applies a cookbook.

For learning purposes, we had you start off with chef-apply because it helps you understand the basics quickly. In practice, chef-apply is useful when you want to quickly test something out. But for production purposes, you typically run chef-client to apply one or more cookbooks.

You’ll learn in the next module how to run chef-client remotely from your workstation.

What’s the run-list?

The run-list lets you specify which recipes to run, and the order in which to run them. The run-list is important for when you have multiple cookbooks, and the order in which they run matters.

What are the two ways to set up a Chef server?

Install an instance on your own infrastructure.
Use hosted Chef.

What’s the role of the Starter Kit?
The Starter Kit provides certificates and other files that enable you to securely communicate with the Chef server.

Where can you get reusable cookbooks that are written and maintained by the Chef community?
Chef Supermarket, https://supermarket.chef.io.

What’s the command that enables you to interact with the Chef server?
knife

What is a node?
A node represents a server and is typically a virtual machine, container instance, or physical server – basically any compute resource in your infrastructure that’s managed by Chef.

What information do you need in order to bootstrap?
You need:

your node’s host name or public IP address.
a user name and password you can log on to your node with.
Alternatively, you can use key-based authentication instead of providing a user name and password.

What happens during the bootstrap process?
During the bootstrap process, the node downloads and installs chef-client, registers itself with the Chef server, and does an initial checkin. During this checkin, the node applies any cookbooks that are part of its run-list.
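As a sketch of a typical bootstrap command (the IP address, credentials, node name, and run-list here are illustrative assumptions, not from the original lesson):

knife bootstrap 203.0.113.10 -x admin -P 'password' --sudo -N web-node-1 --run-list 'recipe[apache]'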

Which of the following lets you verify that your node has successfully bootstrapped?

The Chef management console.
knife node list
knife node show
You can use all three of these methods.

What is the command you use to upload a cookbook to the Chef server?
knife cookbook upload

How do you apply an updated cookbook to your node?
We mentioned two ways.

Run knife ssh from your workstation.
SSH directly into your server and run chef-client.
You can also run chef-client as a daemon, or service, to check in with the Chef server on a regular interval, say every 15 or 30 minutes.
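For example, the daemon mode just mentioned, with the 30-minute interval expressed in seconds (a sketch):

chef-client --daemonize --interval 1800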

Update your Apache cookbook to display your node’s host name, platform, total installed memory, and number of CPUs in addition to its FQDN on the home page.

Update index.html.erb like this.
<html>
  <body>
    <h1>hello from <%= node['fqdn'] %></h1>

    <pre>
    <%= node['hostname'] %>
    <%= node['platform'] %> – <%= node['platform_version'] %>
    <%= node['memory']['total'] %> RAM
    <%= node['cpu']['total'] %> CPUs
    </pre>
  </body>
</html>

Then upload your cookbook and run it on your node.

What would you set your cookbook’s version to once it’s ready to use in production?

According to Semantic Versioning, you should set your cookbook’s version number to 1.0.0 at the point it’s ready to use in production.

What is the latest version of the haproxy community cookbook?

To know the latest version of any cookbook on Chef Supermarket, browse to its page and view the latest version from the version selection box.

Or, get the info from the knife cookbook site command, like this.

knife cookbook site show haproxy | grep latest_version
latest_version: http://cookbooks.opscode.com/api/v1/cookbooks/haproxy/versions/1.6.6

Create a second node and apply the awesome_customers cookbook to it. How long does it take?

You already accomplished the majority of the tasks that you need. You wrote the awesome_customers cookbook, uploaded it and its dependent cookbooks to the Chef server, applied the awesome_customers cookbook to your node, and verified that everything’s working.

All you need to do now is:

Bring up a second Red Hat Enterprise Linux or CentOS node.
Copy your secret key file to your second node.
Bootstrap your node the same way as before. Because you include the awesome_customers cookbook in your run-list, your node will apply that cookbook during the bootstrap process.
The result is a second node that’s configured identically to the first one. The process should take far less time because you already did most of the work.

Now when you fix an issue or add a new feature, you’ll be able to deploy and verify your update much more quickly!

What’s the value of local development using Test Kitchen?

Local development with Test Kitchen:

enables you to use a variety of virtualization providers that create virtual machine or container instances locally on your workstation or in the cloud.
enables you to run your cookbooks on servers that resemble those that you use in production.
speeds up the development cycle by automatically provisioning and tearing down temporary instances, resolving cookbook dependencies, and applying your cookbooks to your instances.

What is VirtualBox? What is Vagrant?

VirtualBox is the software that manages your virtual machine instances.

Vagrant helps Test Kitchen communicate with VirtualBox and configures things like available memory and network settings.

Verify that your motd cookbook runs on both CentOS 6.6 and CentOS 6.5.

Your motd cookbook is already configured to work on CentOS 6.6 as well as CentOS 6.5, so you don’t need to modify it.

To run it on CentOS 6.5, add an entry to the platforms section of your .kitchen.yml file like this.

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: centos-6.6
    driver:
      box: opscode-centos-6.6
      box_url: http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-6.6_chef-provisionerless.box
  - name: centos-6.5
    driver:
      box: opscode-centos-6.5
      box_url: http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_centos-6.5_chef-provisionerless.box

suites:
  - name: default
    run_list:
      - recipe[motd::default]
    attributes:

In many cases, Test Kitchen can infer the box and box_url parameters, which specify the name and location of the base image, or box. We specify them here to show you how to use them.

Run kitchen list to see the matrix of test instances that are available. Here, we have two platforms – CentOS 6.5 and CentOS 6.6 – multiplied by one suite – default.

$ kitchen list

Instance           Driver   Provisioner  Verifier  Transport  Last Action
default-centos-66  Vagrant  ChefZero     Busser    Ssh        <Not Created>
default-centos-65  Vagrant  ChefZero     Busser    Ssh        <Not Created>

Run kitchen converge to create the instances and apply the motd cookbook.

$ kitchen converge
-----> Starting Kitchen (v1.4.0)
-----> Creating <default-centos-66>...
       Bringing machine 'default' up with 'virtualbox' provider...
       [...]
       Running handlers:
       Running handlers complete
       Chef Client finished, 1/1 resources updated in 10.372334751 seconds
       Finished converging <default-centos-66> (3m52.59s).
-----> Creating <default-centos-65>...
       Bringing machine 'default' up with 'virtualbox' provider...
       [...]
       Running handlers:
       Running handlers complete
       Chef Client finished, 1/1 resources updated in 5.32753132 seconds
       Finished converging <default-centos-65> (10m12.63s).
-----> Kitchen is finished. (19m47.71s)

Now to confirm that everything’s working, run kitchen login. But this time, you need to provide the instance name so that Test Kitchen knows which instance to connect to.

$ kitchen login default-centos-66
Last login: Wed May 13 20:15:00 2015 from 10.0.2.2

hostname:  default-centos-66
fqdn:      default-centos-66
memory:    469392kB
cpu count: 1
[vagrant@default-centos-66 ~]$ logout
Connection to 127.0.0.1 closed.

$ kitchen login default-centos-65
Last login: Wed May 13 20:28:18 2015 from 10.0.2.2

hostname:  default-centos-65
fqdn:      default-centos-65
memory:    469452kB
cpu count: 1
[vagrant@default-centos-65 ~]$ logout
Connection to 127.0.0.1 closed.

Top 10 Interview Questions and Answers in SVN (Subversion)


  • What is SVN?

  • What is “branch”, “tag” and “trunk” in SVN?

  • What do you mean by “Synchronizing with Repository”? How is it different from “Update”?

  • What is the difference between Update and Commit?

  • How do you apply a patch in SVN?

  • What if SVN Update gives merge conflicts and you just want your local files to be overridden with the repository versions?

  • Trunk vs branch vs tag in Subversion (SVN)

  • What is the process to take a backup and restore it in SVN?

  • How to set up SVN?

  • How to set up authentication in SVN?


Top Interview Questions and Answers of Jenkins


  • What is continuous integration?

  • Jenkins Continuous integration API features?

  • Advantages of Jenkins?

  • Jenkins plugins?

  • Requirements for using Jenkins?

  • Installing Jenkins on Ubuntu and RHEL?

  • Process to take a Jenkins backup and copy files?

  • Top 20 Jenkins and Useful Plugins?


Interview Questions Sets : Shell Script Descriptive


Interview Questions Sets : Shell Script Descriptive Questions Sets

What is shell scripting?
Shell scripting is programming the command line of an operating system, i.e. the shell, which is the base interface of the operating system. Shell scripts most often refer to UNIX programming, but shell scripting is used on Windows, UNIX, macOS, and other operating systems. Companies also use such scripts to customize their systems with their own features.

Advantages of Shell scripting?
There are many advantages of shell scripting. For one, a team can develop its own tooling with the features best suited to its organization rather than relying on costly off-the-shelf software, and software applications can be tailored to their platform.

What are the disadvantages of shell scripting?
There are many disadvantages of shell scripting they are

  • Design flaws can destroy the entire process and could prove a costly error.
  • Typing errors during creation can delete the entire data as well as partition data.
  • Execution is initially slow, though it can be improved.
  • Portability between different operating systems is a prime concern, as it is very difficult to port scripts.


Explain about the slow execution speed of shells?
The major disadvantage of shell scripting is slow execution of the scripts, because every command starts a new process. This slowdown can be reduced by using pipeline and filter commands, as in the sketch below. A complex script takes much longer than a simple one.
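A minimal sketch of a pipeline of filters (the log file name is illustrative): data streams through the stages rather than launching a fresh process per line of input:

grep 'ERROR' app.log | sort | uniq -c | sort -rn | head -5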

Give some situations where typing error can destroy a program?
There are many situations where typing errors can prove to be really costly. For example, a single extra space can change a command from deleting subdirectories to deleting files. cp, cn, and cd all look similar, but their actual functions are different. A misdirected > can wipe out your data.
Coding Related Shell Scripting Interview Questions …

Explain about return code?
Return codes are a common feature in shell programming. A return code indicates whether a particular program or application succeeded or failed during its process. && can be used with return codes so that one command executes only if the one before it succeeded, as sketched below.
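For example (the commands are illustrative):

tar czf backup.tar.gz /home && echo "backup succeeded"
echo $?    # exit status of the last command: 0 on success, non-zero on failure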

What are the different variables present in Linux shell?
Variables specify the location of a particular value in memory and can be defined by the programmer or developer. There are two types of variables: system variables and user-defined variables. System variables are defined by the system (usually in capital letters), and user-defined variables are defined by the user (small letters).
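For example:

echo $HOME $SHELL     # system variables, set by the system
name="devops"         # user-defined variable (name is illustrative)
echo $name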

Explain about GUI scripting?
Graphical user interface provided the much needed thrust for controlling a computer and its applications. This form of language simplified repetitive actions. Support for different applications mostly depends upon the operating system. These interact with menus, buttons, etc.

Shell Scripting Command Interview Questions …

Explain about echo command?
The echo command is used to display text or the value of a variable. Different options give different outputs: -e enables interpretation of backslash escapes, after which \c suppresses the trailing newline, \n starts a new line, and \r returns the carriage.
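For example:

echo -e "first line\nsecond line"    # \n prints a newline
echo -e "no trailing newline\c"      # \c suppresses the trailing newline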

Explain about Stdin, Stdout and Stderr?
These are known as standard input, output and error. These are categorized as 0, 1 and 2. Each of these functions has a particular role and should accordingly functions for efficient output. Any mismatch among these three could result in a major failure of the shell.

Explain about sourcing commands?
Sourcing commands help you execute scripts within scripts. For example, running a script with the sh command makes your program run as a separate shell, while the . (dot) command makes your program run within the current shell. This is an important distinction for beginners and for special purposes.
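A sketch of the difference (setup.sh is an illustrative script that sets a variable):

sh ./setup.sh    # runs in a child shell; its variables vanish when it exits
. ./setup.sh     # sourced into the current shell; its variables persist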

Explain about debugging?
The shell can make your debugging process easier because it has several options for this. For example, sh -n script reads the shell script and checks it for syntax errors without executing it. Similarly, sh -x script displays commands and their arguments as they are executed.
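For example (myscript.sh is illustrative):

sh -n myscript.sh    # syntax check only; nothing is executed
sh -x myscript.sh    # trace mode; prints each command as it runs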

Explain about Login shell?
The login shell is very useful, as it creates the environment and sets the default parameters. It involves two kinds of files, profile files and shell rc files, which initialize login and non-login shells respectively. Environment variables are created by the login shell.

Explain about non-login shell files?
The non-login shell files are initialized at shell start-up and are run to set up variables; setting parameters and the path are some of their important functions. These files can be changed, so you can set up your own environment. They run each time you start a new shell.

Explain about shebang?
A shebang is a # sign immediately followed by an exclamation mark, visible at the top of a script. It tells the system which interpreter should execute the script, so developers don’t have to specify it each time the script is run.
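For example, the first line of a Bash script:

#!/bin/bash
echo "this script is interpreted by /bin/bash"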

Explain about the Exit command?
Every program, whether on UNIX or Linux, ends at a certain point, and successful completion of a program is denoted by the exit status 0. If the program gives an exit status other than 0, there has been some problem with the execution or termination of the program. The exit command ends a script and hands that status back to the caller.
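A sketch (the file check is illustrative):

if [ ! -f /etc/passwd ]; then
  echo "missing file" >&2
  exit 1    # non-zero status signals failure to the caller
fi
exit 0      # success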

Explore about Environment variables?
Environment variables are set at login time, and every shell that starts from that shell gets a copy of them. When we export a variable, it changes from a shell variable to an environment variable, and these variables are initialized at the start of the shell.
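For example (the variable name is illustrative):

APP_ENV=staging           # shell variable: visible only in this shell
export APP_ENV            # now an environment variable: inherited by children
bash -c 'echo $APP_ENV'   # a child shell sees it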

How can you tell what shell you are running on a UNIX system?
Answer :
You can do echo $RANDOM. It will return an undefined-variable error if you are in the C shell, just a return prompt if you are in the Bourne shell, and a 5-digit random number if you are in the Korn shell.

You could also do a ps -l and look for the shell with the highest PID.

What are conditions on which deadlock can occur while swapping the processes?

All processes in main memory are asleep, and all ‘ready-to-run’ processes are swapped out.
There is no space in the swap device for the new incoming processes that are swapped out of main memory, and there is no space in main memory for the new incoming process.

How do you change File Access Permissions?

Answer :

Every file has following attributes:
owner’s user ID ( 16 bit integer )
owner’s group ID ( 16 bit integer )
File access mode word

‘r w x -r w x- r w x’
(user permission-group permission-others permission)

r-read, w-write, x-execute

To change the access mode, we use chmod(filename,mode).
Example 1:
To change mode of myfile to 'rw-rw-r--' (ie. read, write permission for user – read, write permission for group – only read permission for others) we give the args as:
chmod(myfile,0664) .

Each operation is represented by discrete values
‘r’ is 4
‘w’ is 2
‘x’ is 1

Therefore, for ‘rw’ the value is 6(4+2).

Example 2:
To change mode of myfile to 'rwxr--r--' we give the args as:
chmod(myfile,0744).

List the system calls used for process management.
Answer :

System calls Description
fork() To create a new process
exec() To execute a new program in a process
wait() To wait until a created process completes its execution
exit() To exit from a process execution
getpid() To get a process identifier of the current process
getppid() To get parent process identifier
nice() To bias the existing priority of a process
brk() To increase/decrease the data segment size of a process

What is the difference between Swapping and Paging?
Answer:

Swapping:
The whole process is moved from the swap device to main memory for execution, so the process size must be less than or equal to the available main memory. It is easier to implement but adds overhead to the system, and swapping systems do not handle memory as flexibly as paging systems.

Paging:
Only the required memory pages are moved to main memory from the swap device for execution. Process size does not matter. Gives the concept of the virtual memory.

It provides greater flexibility in mapping the virtual address space into the physical memory of the machine. Allows more number of processes to fit in the main memory simultaneously. Allows the greater process size than the available physical memory. Demand paging systems handle the memory more flexibly.

What is the difference between cmp and diff commands?
Answer :

cmp – Compares two files byte by byte and displays the first mismatch
diff – tells the changes to be made to make the files identical
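For example (the file names are illustrative):

cmp file1.txt file2.txt     # reports the first differing byte/line
diff file1.txt file2.txt    # lists the edits needed to make the files identical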

What is meant by the nice value?
Answer :

The nice value is the value that controls (increments or decrements) the priority of a process. It is the value returned by the nice() system call. The equation for using the nice value is:
Priority = (“recent CPU usage”/constant) + (base priority) + (nice value)
Only the administrator can supply a negative nice value. The nice() system call works for the running process only; the nice value of one process cannot affect the nice value of another process.
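From the shell, the same idea is exposed by the nice and renice commands (the command and PID below are illustrative):

nice -n 10 tar czf backup.tar.gz /home   # start a command at lower priority
renice -n 5 -p 1234                      # adjust the priority of a running process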

What is a daemon?
Answer :
A daemon is a process that detaches itself from the terminal and runs, disconnected, in the background, waiting for requests and responding to them. It can also be defined as the background process that does not belong to a terminal session. Many system functions are commonly performed by daemons, including the sendmail daemon, which handles mail, and the NNTP daemon, which handles USENET news. Many other daemons may exist. Some of the most common daemons are:
init: Takes over the basic running of the system when the kernel has finished the boot process.
inetd: Responsible for starting network services that do not have their own stand-alone daemons. For example, inetd usually takes care of incoming rlogin, telnet, and ftp connections.
cron: Responsible for running repetitive tasks on a regular schedule.

What are the process states in UNIX?

Answer :
As a process executes it changes state according to its circumstances. Unix processes have the following states:
Running : The process is either running or it is ready to run .
Waiting : The process is waiting for an event or for a resource.
Stopped : The process has been stopped, usually by receiving a signal.
Zombie : The process is dead but has not been removed from the process table.
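On Linux, a sketch of observing these states via the STAT column of ps:

ps -eo pid,stat,comm | head    # R running, S sleeping, T stopped, Z zombie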

How are devices represented in UNIX?
All devices are represented by files called special files that are located in the /dev directory. Thus, device files and other files are named and accessed in the same way. A ‘regular file’ is just an ordinary data file on the disk. A ‘block special file’ represents a device with characteristics similar to a disk (data transfer in terms of blocks). A ‘character special file’ represents a device with characteristics similar to a keyboard (data transfer is by stream of bits in sequential order).

What is ‘inode’?
Every UNIX file has its description stored in a structure called an ‘inode’. The inode contains info about the file size, its location, time of last access, time of last modification, permissions, and so on. Directories are also represented as files and have an associated inode. In addition to descriptions of the file, the inode contains pointers to the data blocks of the file. If the file is large, the inode has an indirect pointer to a block of pointers to additional data blocks (this aggregates further for larger files). A block is typically 8k.
Inode consists of the following fields:
• File owner identifier
• File type
• File access permissions
• File access times
• Number of links
• File size
• Location of the file data
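
Most of these fields can be read from a program through the stat() system call; a hedged sketch (my addition; the filename "myfile" is illustrative):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat sb;
    if (stat("myfile", &sb) == -1) {
        perror("stat");
        return 1;
    }
    printf("inode number : %lu\n", (unsigned long) sb.st_ino);
    printf("owner uid    : %lu\n", (unsigned long) sb.st_uid);
    printf("links        : %lu\n", (unsigned long) sb.st_nlink);
    printf("size (bytes) : %lld\n", (long long) sb.st_size);
    return 0;
}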

Brief about the directory representation in UNIX
A Unix directory is a file containing a correspondence between filenames and inodes. A directory is a special file that the kernel maintains. Only the kernel modifies directories, but processes can read them. The contents of a directory are a list of filename and inode number pairs. When a new directory is created, the kernel makes two entries named '.' (refers to the directory itself) and '..' (refers to the parent directory).
System call for creating directory is mkdir (pathname, mode).

What are the Unix system calls for I/O?
• open(pathname,flag,mode) – open file
• creat(pathname,mode) – create file
• close(filedes) – close an open file
• read(filedes,buffer,bytes) – read data from an open file
• write(filedes,buffer,bytes) – write data to an open file
• lseek(filedes,offset,from) – position an open file
• dup(filedes) – duplicate an existing file descriptor
• dup2(oldfd,newfd) – duplicate to a desired file descriptor
• fcntl(filedes,cmd,arg) – change properties of an open file
• ioctl(filedes,request,arg) – change the behaviour of an open file
The difference between fcntl and ioctl is that the former is intended for any open file, while the latter is for device-specific operations.
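
A short sketch tying several of these calls together – copying one file to another with open/creat/read/write/close (my addition; the filenames are illustrative):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;
    int in  = open("source.txt", O_RDONLY);
    int out = creat("copy.txt", 0644);
    if (in == -1 || out == -1)
        return 1;
    while ((n = read(in, buf, sizeof buf)) > 0)   /* read a chunk... */
        write(out, buf, n);                       /* ...and write it out */
    close(in);
    close(out);
    return 0;
}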

How do you change File Access Permissions?
Every file has the following attributes:
• owner's user ID (16-bit integer)
• owner's group ID (16-bit integer)
• file access mode word:
'rwx rwx rwx'
(user permissions – group permissions – others' permissions)
r – read, w – write, x – execute
To change the access mode, we use chmod(filename, mode).
Example 1:
To change the mode of myfile to 'rw-rw-r--' (i.e. read and write permission for user, read and write permission for group, only read permission for others) we give the arguments as:
chmod(myfile, 0664).
Each permission is represented by a discrete value:
'r' is 4
'w' is 2
'x' is 1
Therefore, for 'rw' the value is 6 (4+2).
Example 2:
To change the mode of myfile to 'rwxr--r--' we give the arguments as:
chmod(myfile, 0744).
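
The same call as a complete C program (a sketch of the example above; "myfile" is illustrative):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    if (chmod("myfile", 0664) == -1) {   /* rw-rw-r-- */
        perror("chmod");
        return 1;
    }
    return 0;
}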

What are links and symbolic links in UNIX file system?
A link is a second name (not a file) for a file. Links can be used to assign more than one name to a file, but cannot be used to assign a directory more than one name or to link filenames on different computers.
A symbolic link is a file that only contains the name of another file. An operation on the symbolic link is directed to the file it points to. Both limitations of hard links are eliminated in symbolic links.
Commands for linking files are:
Hard link: ln filename1 filename2
Symbolic link: ln -s filename1 filename2

What is a FIFO?
FIFOs are otherwise known as 'named pipes'. A FIFO (first-in-first-out) is a special file whose data is transient: once data is read from a named pipe, it cannot be read again. Also, data can be read only in the order written. It is used in interprocess communication, where one process writes to one end of the pipe (the producer) and another reads from the other end (the consumer).

How do you create special files like named pipes and device files?
The system call mknod creates special files in the following sequence:
the kernel assigns a new inode,
sets the file type to indicate that the file is a pipe, directory or special file,
and, if it is a device file, records the other entries such as the major and minor device numbers.
For example:
if the device is a disk, the major device number refers to the disk controller and the minor device number identifies the disk.
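
A hedged sketch of creating and using a named pipe from C (my addition; mkfifo() is the usual library wrapper over the mknod mechanism described above, and the path is illustrative):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    if (mkfifo("/tmp/myfifo", 0666) == -1)
        perror("mkfifo");                        /* may already exist */
    int fd = open("/tmp/myfifo", O_WRONLY);      /* blocks until a reader opens it */
    if (fd != -1) {
        write(fd, "hello\n", 6);
        close(fd);
    }
    return 0;
}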

Discuss the mount and unmount system calls
The privileged mount system call is used to attach a file system to a directory of another file system; the umount system call detaches a file system. When you mount another file system onto a directory, you are essentially splicing one directory tree onto a branch of another directory tree. The first argument to the mount call is the mount point, that is, a directory in the current file naming system. The second argument is the file system to mount at that point. When you insert a cdrom into your unix system's drive, the file system on the cdrom is automatically mounted at a mount point such as /mnt/cdrom.

How does the inode map to data block of a file?
Inode has 13 block addresses. The first 10 are direct block addresses of the first 10 data blocks in the file. The 11th address points to a one-level index block. The 12th address points to a two-level (double in-direction) index block. The 13th address points to a three-level(triple in-direction)index block. This provides a very large maximum file size with efficient access to large files, but also small files are accessed directly in one disk read.
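
A rough worked example (my assumptions: 8K blocks and 4-byte block addresses, so an index block holds 2048 addresses): the 10 direct addresses cover 10 × 8KB = 80KB; the single-indirect block adds 2048 × 8KB = 16MB; the double-indirect block adds 2048² × 8KB = 32GB; and the triple-indirect block adds 2048³ × 8KB = 64TB – while a small file is still reachable with a single disk read per data block.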

What is a shell?
A shell is an interactive user interface to the operating system's services that allows a user to enter commands as character strings or through a graphical user interface. The shell converts them into system calls to the OS or forks off a process to execute the command. System call results and other information from the OS are presented to the user through this interactive interface. Commonly used shells are sh, csh, ksh, etc.

Brief about the initial process sequence while the system boots up.
While booting, a special process called the 'swapper' or 'scheduler' is created with Process-ID 0. The swapper manages memory allocation for processes and influences CPU allocation. The swapper in turn creates 3 children:
• the process dispatcher,
• vhand and
• dbflush
with IDs 1,2 and 3 respectively.
This is done by executing the file /etc/init. The process dispatcher gives birth to the shell. Unix keeps track of all the processes in an internal data structure called the Process Table (the listing command is ps -el).

What are various IDs associated with a process?
Unix identifies each process with a unique integer called ProcessID. The process that executes the request for creation of a process is called the ‘parent process’ whose PID is ‘Parent Process ID’. Every process is associated with a particular user called the ‘owner’ who has privileges over the process. The identification for the user is ‘UserID’. Owner is the user who executes the process. Process also has ‘Effective User ID’ which determines the access privileges for accessing resources like files.
getpid() -process id
getppid() -parent process id
getuid() -user id
geteuid() -effective user id

Explain fork() system call.
The `fork()' call is used to create a new process from an existing process. The new process is called the child process, and the existing process is called the parent. We can tell which is which by checking the return value from `fork()': the parent gets the child's PID as the return value, while the child gets 0.

Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork();
    printf("Hello World!");
    return 0;
}
Answer:
Hello World!Hello World!
Explanation:
The fork creates a child that is a duplicate of the parent process. The child begins execution from the fork(). All the statements after the call to fork() are executed twice (once by the parent process and once by the child). The statement before fork() is executed only by the parent process.

Predict the output of the following program code
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    fork(); fork(); fork();
    printf("Hello World!");
    return 0;
}
Answer:
“Hello World” will be printed 8 times.
Explanation:
2^n times where n is the number of calls to fork()

List the system calls used for process management:
fork() – To create a new process
exec() – To execute a new program in a process
wait() – To wait until a created process completes its execution
exit() – To exit from a process execution
getpid() – To get the process identifier of the current process
getppid() – To get the parent process identifier
nice() – To bias the existing priority of a process
brk() – To increase/decrease the data segment size of a process
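
A small sketch combining several of these calls (my addition, assuming POSIX): the parent forks a child, the child exits with a status, and the parent collects it with wait():

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                        /* child */
        printf("child pid %d, parent %d\n", getpid(), getppid());
        exit(42);
    }
    int status;
    wait(&status);                         /* parent blocks until the child exits */
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));
    return 0;
}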

How can you get/set an environment variable from a program?
Getting the value of an environment variable is done by using `getenv()’.
Setting the value of an environment variable is done by using `putenv()’.
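
For example (a sketch; the variable name MYVAR is illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    putenv("MYVAR=hello");            /* set: the string must remain valid */
    char *val = getenv("MYVAR");      /* get */
    printf("MYVAR=%s\n", val ? val : "(unset)");
    return 0;
}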

How can a parent and child process communicate?
A parent and child can communicate through any of the normal inter-process communication schemes (pipes, sockets, message queues, shared memory), but also have some special ways to communicate that take advantage of their relationship as a parent and child. One of the most obvious is that the parent can get the exit status of the child.
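
A minimal pipe sketch between parent and child (my addition, assuming POSIX):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                     /* child writes to the parent */
        close(fd[0]);
        write(fd[1], "hi from child\n", 14);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                          /* parent reads from the child */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd[0]);
    return 0;
}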

What is a zombie?
When a program forks and the child finishes before the parent, the kernel still keeps some of its information about the child in case the parent might need it – for example, the parent may need to check the child's exit status. To get this information, the parent calls `wait()'. In the interval between the child terminating and the parent calling `wait()', the child is said to be a `zombie' (if you run `ps', the child will have a `Z' in its status field to indicate this).

What Happens when you execute a program?
When you execute a program on your UNIX system, the system creates a special environment for that program. This environment contains everything needed for the system to run the program as if no other program were running on the system. Each process has process context, which is everything that is unique about the state of the program you are currently running. Every time you execute a program the UNIX system does a fork, which performs a series of operations to create a process context and then execute your program in that context. The steps include the following:
• Allocate a slot in the process table, a list of currently running programs kept by UNIX.
• Assign a unique process identifier (PID) to the process.
• Copy the context of the parent, the process that requested the spawning of the new process.
• Return the new PID to the parent process. This enables the parent process to examine or control the process directly.
After the fork is complete, UNIX runs your program.

What Happens when you execute a command?
When you enter the 'ls' command to look at the contents of your current working directory, UNIX does a series of things to create an environment for ls and then run it:

The shell has UNIX perform a fork. This creates a new process that the shell will use to run the ls program.
The shell has UNIX perform an exec of the ls program. This replaces the shell program and data with the program and data for ls and then starts running that new program.

The ls program is loaded into the new process context, replacing the text and data of the shell. The ls program performs its task, listing the contents of the current directory.

What is ‘ps’ command for?
The ps command prints the process status for some or all of the running processes. The information given includes the process identification number (PID), the amount of time that the process has taken to execute so far, etc.

How would you kill a process?
The kill command takes the PID as one argument; this identifies which process to terminate. The PID of a process can be obtained using the 'ps' command.

What is an advantage of executing a process in background?
The most common reason to put a process in the background is to allow you to do something else interactively without waiting for the process to complete. At the end of the command you add the special background symbol, &. This symbol tells your shell to execute the given command in the background.
Example: cp *.* ../backup& (cp is for copy)

How do you execute one program from within another?
The system calls used to execute one program from within another are execlp() and execvp(). The execlp call overlays the existing process image with the new program and runs it; the original program regains control only if an error occurs.
execlp(path, file_name, arguments..); //last argument must be NULL
A variant of execlp called execvp is used when the number of arguments is not known in advance.
execvp(path, argument_array); //argument array should be terminated by NULL
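
In practice exec is usually combined with fork() so the caller survives; a hedged sketch running ls -l (my addition):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char *args[] = { "ls", "-l", NULL };   /* NULL-terminated argument array */
    if (fork() == 0) {
        execvp("ls", args);                /* the child becomes ls */
        perror("execvp");                  /* reached only if exec fails */
        _exit(1);
    }
    wait(NULL);                            /* the parent waits for ls to finish */
    return 0;
}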

What is IPC? What are the various schemes available?
The term IPC (Inter-Process Communication) describes the various ways by which different processes running on an operating system communicate with each other. The schemes available are as follows:
Pipes:
A one-way communication scheme through which different processes can communicate. The problem is that the two processes must have a common ancestor (a parent-child relationship). This limitation was removed with the introduction of named pipes (FIFOs).

Message Queues :
Message queues can be used between related and unrelated processes running on a machine.

Shared Memory:
This is the fastest of all IPC schemes. The memory to be shared is mapped into the address space of the processes (that are sharing). The speed achieved is attributed to the fact that there is no kernel involvement. But this scheme needs synchronization.
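
A minimal shared-memory sketch (my addition) using mmap with MAP_ANONYMOUS | MAP_SHARED, which is available on Linux and the BSDs; the older System V shmget/shmat route is the portable alternative. The wait() here is the crude synchronization the answer mentions:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    /* one page shared between parent and child */
    char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED)
        return 1;
    if (fork() == 0) {
        strcpy(shm, "written by child");   /* child writes into the shared page */
        _exit(0);
    }
    wait(NULL);                            /* wait so the write is visible */
    printf("parent reads: %s\n", shm);
    return 0;
}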

State and explain about features of UNIX?
The UNIX operating system was originally developed in 1969 at AT&T Bell Labs, and many open-source variants of it exist today. It is widely used on workstations and servers. It is designed to be multi-tasking, multi-user and portable, and it packs many components together.

Explain about sh?
sh is the command-line interpreter and the primary user interface. It forms the programmable command-line interpreter; even after windowing systems appeared, it retained its programmable character.

Explain about system and user utilities?
There are two kinds of utilities: system and user utilities. System utilities contain administrative tools such as mkfs, fsck, etc., whereas user utilities contain features such as passwd, kill, etc., and deal with things like environment values.

Explain about document formatting?
UNIX systems were among the first used for typesetting and document formatting. Modern UNIX systems use packages such as TeX and Ghostscript, along with programs such as nroff, tbl, troff, refer, eqn and pic. Document formatting is widely used because it formed part of the base of UNIX.

Explain about communication features in UNIX?
Early UNIX systems used the mail and write commands as inter-user communication programs; they never contained fully integrated inter-user communication features. Systems with BSD included the TCP/IP protocols.

Explain about chmod options filename?
This command allows you to change the read, write and execute permissions on your file. Permissions apply throughout the file system, but at times you need to change them for particular files – for instance, a file must be executable before it can be run.

Explain about gzip filename?
gzip filename is used to compress a file so that it takes up less space. A file typically shrinks to around half its size, though the ratio depends on the size and nature of the file. Files compressed with gzip end with the .gz extension.

Explain about refer?
refer was written at Bell Laboratories and is implemented as a troff preprocessor. The program is used for managing bibliographic references and for citing them in troff documents. It is offered in most UNIX packages and works with a text file and a reference file.

Explain about lpr filename?
This command is used to print a file. If you want a printer other than the default, you can select it with the -P option; for double-sided printing you can use lpr -Pvalkyr-d. This is a very useful command present in many UNIX packages.

Explain about lprm job number?
This command is used to remove documents from the printer queue. The job number (queue number) can be found using lpq. A printer name should be specified, but this is not necessary if you are using your default printer.

Brief about the command ff?
This command finds files anywhere on the system. It is used to locate a file when you have forgotten the directory you kept it in but remember its name. The command is not restricted to exact matches; it displays files and documents relevant to the name.

Brief about finger username?
This command gives information about a user – a profile, in effect. It is very useful for administrators as it shows login information, email address, current login status, etc. finger also displays information such as phone number and name when the user has a .plan file.

Explain about the command elm?
This command lets you send email messages from your system. It is not the only mailer – there are many other programs that can send mail – and it behaves differently on different machines.

Brief about the command kill PID?
This command terminates the process with the given PID. It cannot act on processes on other systems in the network. The PID can be obtained with the ps command. kill completely ignores the state the process is in; it simply terminates it.

Explain about the command lynx?
This command lets you browse the web from an ordinary terminal. Text can be seen but not pictures. A URL can be given as an argument to the G command. The help section can be reached by pressing H, and Q makes the program quit.

Brief about the command nn?
This command allows you to read the news. The nnl command reads local news and the nnr command reads remote news. Manual and help information is available with many popular packages.

Brief about ftp hostname?
This command lets you download files and documents from a remote FTP server. First configure FTP access for the process to begin. Some important commands for using FTP are get, put, mget, mput, etc. If you plan to transfer files other than ASCII text, use binary mode.

Explain about the case statement.
The case statement compares a word to the patterns from top to bottom, and performs the commands associated with the first, and only the first, pattern that matches. The patterns are written using the shell's pattern-matching rules, slightly generalized.

Explain the basic forms of each loop?
There are three loops: for, while and until. The for loop is by far the most commonly used form; like loops in other languages, it executes a given set of commands for each item in a list. The while and until forms use the exit status of a command to control the execution of the commands in the body of the loop.

Describe about awk and sed?
The awk program processes its input to report changes in an easier-to-understand format. Sed's output is always one line behind its input: there is always a line of input that has been processed but not yet printed, and printing it immediately would introduce an unwanted delay.

Explain about signal argument?
The sequence of commands is a single argument, so it must almost always be quoted. The signal numbers are small integers that identify the signal. For example, 2 is the signal generated by pressing the DEL key, and 1 is generated by hanging up the phone. Unless a program has taken explicit action to deal with signals, the signal will terminate it.
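
A small sketch of catching a signal in C (my addition; SIGINT is the interrupt signal that the DEL/Ctrl-C key generates):

#include <signal.h>
#include <unistd.h>

static void handler(int sig)
{
    (void) sig;
    /* only async-signal-safe calls here: just note the signal */
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void)
{
    signal(SIGINT, handler);   /* without this, SIGINT would terminate the program */
    for (;;)
        pause();               /* sleep until a signal arrives */
}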

Explain about exec?
exec is used just for efficiency; the command would run just as well without it. exec is a shell built-in that replaces the process running the shell with the named program, thereby saving one process – the shell that would normally wait for the program to complete. exec could be used at the end of the enhanced cal program when it invokes /usr/bin/cal.

Explain about trap command
The trap command sequence must explicitly invoke exit, or the shell program will continue to execute after the interrupt. The command sequence will be read twice: once when the trap is set and once when it is invoked. Trap is used sometimes interactively, most often to prevent a program from being killed by the hangup signal.

Explain about sort command?
The sort command has an option -o to overwrite a file:
$ sort file1 -o file2
is equivalent to
$ sort file1 > file2
If file1 and file2 are the same file, redirection with > will truncate the input file before it is sorted. The -o option works correctly because the input is sorted and saved in a temporary file before the output file is created. Many other commands could also use a -o option.

Explain about the command overwrite?
overwrite is committed to changing the original file. If the program providing input to overwrite gets an error, its output will be empty and overwrite will dutifully and reliably destroy the argument file. overwrite could ask for confirmation before replacing the file, but making overwrite interactive would negate its efficiency. overwrite could, however, check that its input is non-empty.

Explain about kill command?
The kill command only terminates processes specified by process-id. When a specific background process needs to be killed, you must usually run ps to find the process-id and then retype it as an argument to kill. Killing processes is dangerous and care must be taken to kill the right ones.

Explain about the shell variable IFS?
The shell variable IFS (internal field separator) is a string of characters that separate words in argument lists, such as in backquote expansion and for statements. Normally IFS contains a blank, a tab, and a newline, but we can change it to anything useful, such as just a newline.

Explain about the rules used in overwrite to preserve the arguments to the users command?
Some of the rules are
• $* and $@ expand into the arguments and are rescanned; blanks in arguments will result in multiple arguments.
• “$*” is a single word composed of all the arguments to the shell file joined together with spaces.
• “$@” is identical to the arguments received by the shell file: blanks in arguments are ignored and the result is a list of words identical to the original arguments.

Explain about @@@ lines?
@@@ lines are counted (but not printed), and as long as the count is not greater than the desired version, the editing commands are passed through. Two ed commands are added after those from the history file: $d deletes the single @@@ line that sed left on the current version.

Explain about vis?
vis copies its standard input to its standard output, except that it makes all non-printing characters visible by printing them as \nnn, where nnn is the octal value of the character. vis is invaluable for detecting strange or unwanted characters that may have crept into files.

Is the function call to exit at the end of vis necessary?
The call to exit at the end of vis is not necessary to make the program work properly, but it ensures that any caller of the program will see a normal exit status when it completes. An alternate way to return status is to leave main with return 0; the return value from main is the program's exit status.

Explain about fgets?

fgets(buf, size, fp) fetches the next line of input from fp, up to and including a newline, into buf, and adds a terminating \0; at most size-1 characters are copied. NULL is returned at the end of the file.

Explain about efopen page?
The routine efopen encapsulates a very common operation: try to open a file; if it's not possible, print an error message and exit. To encourage error messages that identify the offending program, efopen refers to an external string containing the name of the program, which is set in main.

Explain about yacc parser generator?
Yacc is a parser generator that is a program for converting a grammatical specification of a language like the one above into a parser that will parse statements in the language.

What is $*?
It displays all the command-line arguments that are passed to the script.

Different types of shells?
Bourne Shell (sh) and Bourne-Again Shell (bash)
Korn Shell (ksh)
C Shell (csh)

What is the difference between a wild-card and a regular expression?


List of All Possible Maven Interview Questions & Answers

maven-interview-questions-answers

Is there a way to use the current date in the POM?
Take a look at the buildnumber plugin. It can be used to generate a build date each time I do a build, as follows:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>maven-buildnumber-plugin</artifactId>
  <version>0.9.4</version>
  <configuration>
    <format>{0,date,yyyy-MM-dd HH:mm:ss}</format>
    <items>
      <item>timestamp</item>
    </items>
    <doCheck>false</doCheck>
    <doUpdate>false</doUpdate>
  </configuration>
  <executions>
    <execution>
      <phase>validate</phase>
      <goals>
        <goal>create</goal>
      </goals>
    </execution>
  </executions>
</plugin>

pom.xml or settings.xml? What is the best practice configuration usage for these files?
The best practice guideline between settings.xml and pom.xml is that configurations in settings.xml must be specific to the current user and that pom.xml configurations are specific to the project.
For example, specifying <repositories> in pom.xml would tell all users of the project to use the <repositories> specified in the pom.xml. However, some users may prefer to use a mirror instead, so they'll put <mirrors> in their settings.xml so they can choose a faster repository server.
so there you go:
settings.xml -> user scope
pom.xml -> project scope

How do I indicate array types in a MOJO configuration?

For an array-typed parameter (say a String[] field named options – the element names here are illustrative), list the values as repeated child elements:
<options>
  <option>value1</option>
  <option>value2</option>
</options>

How should I point a path for maven 2 to use a certain version of JDK when I have different versions of JDK installed on my PC and my JAVA_HOME already set?
If you don’t want to change your system JAVA_HOME, set it in maven script instead.
How do I setup the classpath of my antrun plugin to use the classpath from maven?
The maven classpaths are available as ant references when running your ant script. The ant reference names and some examples can be found here: maven-antrun-plugin
Is it possible to use HashMap as configurable parameter in a plugin? How do I configure that in pom.xml?
Yes. It's possible to use a HashMap field as a parameter in your plugin. To use it, your pom configuration should look like this (the element names are illustrative; each child element name becomes a map key and its text becomes the value):

<yourMapProperty>
  <yourKey>yourvalue</yourKey>
  ...
</yourMapProperty>

How do I filter which classes should be put inside the packaged jar?
All compiled classes are always put into the packaged jar. However, you can configure the compiler plugin to exclude compiling some of the java sources using the compiler parameter excludes as follows:


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/NotNeeded*.java</exclude>
    </excludes>
  </configuration>
</plugin>


How can I change the default location of the generated jar when I command “mvn package”?
By default, the location of the generated jar is in ${project.build.directory} or in your target directory.
We can change this by configuring the outputDirectory of maven-jar-plugin.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <outputDirectory>${project.build.directory}/</outputDirectory>
  </configuration>
</plugin>
How does maven 2 implement reproducibility?

Add the exact versions of plugins into your pluginDependencies (make use of the release plugin)

Make use of ibiblio for your libraries. This should always be the case for jars. (The group is working on stabilising metadata and techniques for locking it down even if it changes. An internal repository mirror that doesn’t fetch updates (only new) is recommended for true reproducibility.)

Why there are no dependency properties in Maven 2?
They were removed because they aren’t reliable in a transitive environment. It implies that the dependency knows something about the
environment of the dependee, which is back to front. In most cases, granted, the value for war bundle will be the same for a particular
dependency – but that relies on the dependency specifying it.
In the end, we give control to the actual POM doing the building, trying to use sensible defaults that minimise what needs to be
specified, and allowing the use of artifact filters in the configuration of plugins.

What does aggregator mean in mojo?
When a Mojo has an @aggregator annotation, it means that it can only build the parent project of your multi-module project – the one with the packaging pom. It can also give you values for the expression ${reactorProjects}, where reactorProjects are the MavenProject references to the parent pom's modules.
Where is the plugin-registry.xml?
From the settings.xml, you may enable it by setting <usePluginRegistry>true</usePluginRegistry>, and the file will be in ~/.m2/plugin-registry.xml.
How do I create a command line parameter (i.e., -Dname=value ) in my mojo?
In your mojo, put expression="${expression.name}" in your @parameter annotation:

/**
* @parameter expression="${expression.name}"
*/
private String exp;

You may now pass parameter values on the command line:
mvn -Dexpression.name=value install
How do I convert my <reports> from Maven 1 to Maven 2?
In m1, we declare reports in the pom like this:

<reports>
  <report>maven-checkstyle-plugin</report>
  <report>maven-pmd-plugin</report>
</reports>

In m2, the <reports> tag is replaced with <reporting>:

<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
    </plugin>
  </plugins>
</reporting>

What does the "You cannot have two plugin executions with the same (or missing) <id> elements" message mean?
It means that you have executed a plugin multiple times with the same <id>. Provide each <execution> with a unique <id> and it will be OK.
How do I add my generated sources to the compile path of Maven, when using modello?
Modello generates the sources in the generate-sources phase and automatically adds the source directory for compilation in maven, so you don't have to copy the generated sources. You have to declare the modello-plugin in the build of your plugin for source generation (that way the sources are generated each time).
What is Maven’s order of inheritance?

parent pom

project pom

settings

CLI parameters

where the last overrides the previous.
How do I execute the assembly plugin with different configurations?
Add this to your pom,


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>1</id>
      <phase>install</phase>
      <goals>
        <goal>assembly</goal>
      </goals>
      <configuration>
        <descriptor>src/main/descriptors/bin.xml</descriptor>
        <finalName>${project.build.finalName}-bin</finalName>
      </configuration>
    </execution>
    <execution>
      <id>2</id>
      <phase>install</phase>
      <goals>
        <goal>assembly</goal>
      </goals>
      <configuration>
        <descriptor>src/main/descriptors/src.xml</descriptor>
        <finalName>${project.build.finalName}-src</finalName>
      </configuration>
    </execution>
  </executions>
</plugin>

and run mvn install; this will execute the assembly plugin twice with different configurations.
How do I configure the equivalent of maven.war.src of war plugin in Maven 2.0?


Configure the warSourceDirectory parameter of the maven-war-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <warSourceDirectory>...</warSourceDirectory>
  </configuration>
</plugin>
How do I add main class in a generated jar’s manifest?
Configure the maven-jar-plugin and add your main class.

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <mainClass>com.mycompany.app.App</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>

What does the FATAL ERROR with the message “Class org.apache.commons.logging.impl.Jdk14Logger does not implement Log” when using the maven-checkstyle-plugin mean?
Checkstyle uses commons-logging, which has classloader problems when initialized within a Maven plugin's container. This results in the above message – if you run with '-e', you'll see something like the following:

Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Jdk14Logger does not implement Log

buried deep in the stacktrace.
The only workaround we currently have for this problem is to include another commons-logging Log implementation in the plugin itself. So, you can solve the problem by adding the following to your plugin declaration in your POM:



<plugin>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.12</version>
    </dependency>
  </dependencies>
</plugin>

While this may seem a counter-intuitive way of configuring a report, it’s important to remember that Maven plugins can have a mix of reports and normal mojos. When a POM has to configure extra dependencies for a plugin, it should do so in the normal plugins section.
We will probably try to fix this problem before the next release of the checkstyle plugin.
UPDATE: This problem has been fixed in the SVN trunk version of the checkstyle plugin, which should be released very soon.
How do I determine the stale resources in a Mojo to avoid reprocessing them?
This can be done using the following piece of code:

// Imports needed
import org.codehaus.plexus.compiler.util.scan.InclusionScanException;
import org.codehaus.plexus.compiler.util.scan.StaleSourceScanner;
import org.codehaus.plexus.compiler.util.scan.mapping.SuffixMapping;

// At some point of your code
StaleSourceScanner scanner = new StaleSourceScanner( 0, Collections.singleton( "**/*.xml" ), Collections.EMPTY_SET );
scanner.addSourceMapping( new SuffixMapping( ".xml", ".html" ) );
Set staleFiles = (Set) scanner.getIncludedSources( this.sourceDirectory, this.targetDirectory );

The second parameter to the StaleSourceScanner is the set of includes, while the third parameter is the set of excludes. You must add a source mapping to the scanner (second line). In this case we're telling the scanner the extension of the result file (.html) for each source file extension (.xml). Finally we get the stale files as a Set by calling the getIncludedSources method, passing as parameters the source and target directories (of type File). The Maven API doesn't support generics, but you may cast the result if you're using them.
In order to use this API you must include the following dependency in your pom:

<dependency>
  <groupId>org.codehaus.plexus</groupId>
  <artifactId>plexus-compiler-api</artifactId>
  <version>1.5.1</version>
</dependency>
Is there a property file for plug-in configuration in Maven 2.0?
No. Maven 2.x no longer supports plug-in configuration via properties files. Instead, in Maven 2.0 you can configure plug-ins directly from the command line using the -D argument, or from the plug-in's POM using the <configuration> element.
How do I determine which POM contains missing transitive dependency?
run “mvn -X”
How do I integrate static (x) html into my Maven site?
You can integrate your static pages in several steps:

Put your static pages in the resources directory, ${basedir}/src/site/resources.

Create your site.xml and put it in ${basedir}/src/site. An example below:

<project name="Maven War Plugin">
  <bannerLeft>
    <name>Maven War Plugin</name>
    <src>http://maven.apache.org/images/apache-maven-project.png</src>
    <href>http://maven.apache.org/</href>
  </bannerLeft>
  <bannerRight>
    <src>http://maven.apache.org/images/maven-small.gif</src>
  </bannerRight>
  <body>
    <menu name="...">
      ...
    </menu>
    ${reports}
  </body>
</project>

Link the static pages by modifying the <menu> section of site.xml: create <item> entries and map each one to the filename of a static page.

How do I run an ant task twice, against two different phases?
You can specify multiple execution elements under the executions tag, giving each a different id and binding them at different phases.

<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <id>one</id>
      <phase>generate-sources</phase>
      <configuration>
        <tasks>
          <!-- ant tasks for the first run -->
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
    <execution>
      <id>two</id>
      <phase>package</phase>
      <configuration>
        <tasks>
          <!-- ant tasks for the second run -->
        </tasks>
      </configuration>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Can a profile inherit the configuration of a “sibling” profile?
No. Profiles merge when their IDs match – so you can inherit them from a parent POM (but you can't inherit profiles from the same POM).
How do I invoke the “maven dist” function from Maven 1.0, in Maven 2.0?
mvn assembly:assembly
See the Assembly Plugin documentation for more details.
How do I specify which output folders the Eclipse plugin puts into the .classpath file?


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-eclipse-plugin</artifactId>
  <configuration>
    <outputDirectory>target-eclipse</outputDirectory>
  </configuration>
</plugin>

What is a Mojo?
A mojo is a Maven plain Old Java Object. Each mojo is an executable goal in Maven, and a plugin is a distribution of one or more related mojos.
How to produce execution debug output or error messages?
You could call Maven with -X parameter or -e parameter. For more information, run:

mvn --help

Maven compiles my test classes but doesn’t run them?
Tests are run by the surefire plugin. The surefire plugin can be configured to run certain test classes, and you may have unintentionally done so by specifying a value for ${test}. Check your settings.xml and pom.xml for a property named "test", which would look like this:

<properties>
  <property>
    <name>test</name>
    <value>some-value</value>
  </property>
</properties>

or

<properties>
  <test>some-value</test>
</properties>

How do I include tools.jar in my dependencies?
The following code includes tools.jar on Sun JDKs (it is already included in the runtime for Mac OS X and some free JDKs).


<profiles>
  <profile>
    <id>default-tools.jar</id>
    <activation>
      <property>
        <name>java.vendor</name>
        <value>Sun Microsystems Inc.</value>
      </property>
    </activation>
    <dependencies>
      <dependency>
        <groupId>com.sun</groupId>
        <artifactId>tools</artifactId>
        <version>1.4.2</version>
        <scope>system</scope>
        <systemPath>${java.home}/../lib/tools.jar</systemPath>
      </dependency>
    </dependencies>
  </profile>
</profiles>
I have a jar that I want to put into my local repository. How can I copy it in?
If you understand the layout of the maven repository, you can copy the jar directly into where it is meant to go. Maven will find this file next time it is run.
If you are not confident about the layout of the maven repository, then you can adapt the following command to load in your jar file, all on one line.

mvn install:install-file
  -Dfile=<path-to-file>
  -DgroupId=<group-id>
  -DartifactId=<artifact-id>
  -Dversion=<version>
  -Dpackaging=<packaging> -DgeneratePom=true

Where: <path-to-file>  the path to the file to load
       <group-id>      the group that the file should be registered under
       <artifact-id>   the artifact name for the file
       <version>       the version of the file
       <packaging>     the packaging of the file e.g. jar

This should load in the file into the maven repository, renaming it as needed.
How do I set up Maven so it will compile with a target and source JVM of my choice?
You must configure the source and target parameters in your pom. For example, to set the source and target JVM to 1.5, you should have in your pom :


<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.0.2</version>
  <configuration>
    <source>1.5</source>
    <target>1.5</target>
  </configuration>
</plugin>

How can I use Ant tasks in Maven 2?

There are currently 2 alternatives:

For use in a plugin written in Java, Beanshell or other Java-like scripting language, you can construct the Ant tasks using the instructions given in the Ant documentation

If you have very small amounts of Ant script specific to your project, you can use the AntRun plugin.

Maven 2.0 Eclipse Plug-in

Plugins are great at simplifying the life of programmers; they reduce the repetitive tasks involved in programming. In this article our experts will show you the steps required to download and install the Maven plugin for your Eclipse IDE.
Why Maven with Eclipse
Eclipse is an industry leader in the IDE market and is used very extensively for developing projects all around the world. Maven, for its part, is a high-level, intelligent project management, build and deployment tool from the Apache Software Foundation that deals with application development lifecycle management.

Maven–Eclipse Integration makes the development, testing, packaging and deployment process easy and fast. Maven Integration for Eclipse provides a tight integration for Maven into the IDE and avails the following features:
· It helps to launch Maven builds from within Eclipse
· It avails the dependency management for Eclipse build path based on Maven’s pom.xml
· It resolves Maven dependencies from the Eclipse workspace without installing them to the local Maven repository
· It avails an automatic downloading of the required dependencies from the remote Maven repositories
· It provides wizards for creating new Maven projects, pom.xml or to enable Maven support on plain Java project
· It helps to search quickly for dependencies in Maven remote repositories
· It provides quick fixes in the Java editor for looking up required dependencies/jars by class or package name.
What do you Need?
1. Get the Eclipse Development Environment :
In this tutorial we are using eclipse-SDK-3.3-win32, which can be downloaded from http://www.eclipse.org/downloads/
2. Get Maven-eclipse-plugin-plugin :
It is available at http://mevenide.codehaus.org/maven-eclipse-plugin-plugin/

Download and Install Eclipse
First download and install Eclipse on your development machine, then proceed with the installation of the eclipse-maven plugin.

A Maven 2.0 Repository: An Introduction

Maven repository Types:

Public remote external repository: This public external repository exists at ibiblio.org and maven synchronizes with this repository.

Private remote internal repository: We set up this repository and make changes in the maven’s pom.xml or settings.xml file to use this repository.

Local repository: This repository is maintained by the developer and stays on the developer's machine. It is the repository defined in the settings.xml file that lives in the .m2 directory at its standard location, e.g. C:\Documents and Settings\Administrator on Windows. If no private internal repository is set up and listed in the pom.xml or the settings.xml, then the local repository on the developer's machine is synchronized with the public maven repository at ibiblio.org.

Advantages of having an internal private repository :

Reduces the likelihood of version conflicts.

Requires less manual intervention for first-time builds.

Rather than several separate independent libraries, it provides a single central reference repository for all the dependent software libraries.

It speeds up project builds, since maven artifacts are retrieved from the intranet server rather than from a server on the internet.

Use cases for maven repository:

Create two sub-repositories inside the internal repository:

ibiblio-cache: downloads artifacts from ibiblio and makes them available internally. This synchronizes with the external repository at ibiblio.

internal-maven-repository: used for internal artifacts of an organization. It contains artifacts unique to the organization and is not synchronized with any repository.

Alternatively, another sub-repository can be created for artifacts that are not at ibiblio. This does not synchronize with any external repository.

Browse the remote repository by using a web browser.

Search the artifacts in the repository.

Download code from version control and make changes in settings.xml to point to the internal repository and build without any manual intervention.

Install new version of the artifacts.

Import artifacts into the repository in bulk.

Export artifacts from the repository in bulk.

Setup the task to backup the repository automatically.

Criteria for choosing a maven repository implementation: ideally, a maven repository implementation should be:

Free and open source

Provide admin tools

Easy to setup and use

Provide backup facility

Able to create, edit and delete sub repositories.

Anonymous read only access and also access control facility.

Deployable in any standard web server such as Tomcat or Apache.

Issue tracker, forums and other independent source of information.

Backed by an active developer community, so the product keeps improving and bugs get fixed.

Bulk import/export facility to move groups of artifacts into the repository and out of the repository.

Provide a repository browser: this should be a web browser rather than a desktop application.

Shifting from Apache Ant to Maven

Maven is an entirely different creature from Ant. Ant is simply a toolbox, whereas Maven is about applying patterns in order to achieve an infrastructure with the characteristics of visibility, reusability, maintainability, and comprehensibility. It is wrong to consider Maven just a build tool and a replacement for Ant.
Ant Vs Maven
There is nothing that Maven does that Ant cannot do. Ant gives the ultimate power and flexibility in build and deployment to the developer. But Maven adds a layer of abstraction above Ant (and uses Jelly). Maven can be used to build any Java application. Today JEE build and deployment has become much standardized. Every enterprise has some variations, but in general it is all the same: deploying EARs, WARs, and EJB-JARs. Maven captures this intelligence and lets you achieve the build and deployment in about 5-6 lines of Maven script compared to dozens of lines in an Ant build script.
Ant lets you do any variations you want, but requires a lot of scripting. Maven on the other hand mandates certain directories and file names, but it provides plugins to make life easier. The restriction imposed by Maven is that only one artifact is generated per project (A project in Maven terminology is a folder with a project.xml file in it). A Maven project can have sub projects. Each sub project can build its own artifact. The topmost project can aggregate the artifacts into a larger one. This is synonymous to jars and wars put together to form an EAR. Maven also provides inheritance in projects.
Maven : Stealing the show
Maven simplifies builds enormously by imposing certain fixed file names and acceptable restrictions, like one artifact per project. Artifacts are treated as files on your computer by the build script. Maven hides the fact that everything is a file and lets you think and script in terms of creating a deployable artifact such as an EAR. An artifact can depend on a particular version of a third-party library residing in a shared remote (or local) enterprise repository, and you publish your own library into that repository as well for others to use. Hence there are no more classpath issues and no more mismatched libraries. It also gives you the power to embed Ant scripts within Maven scripts if absolutely essential.

Maven 2.0: Features

Maven is a high-level, intelligent project management, build and deployment tool provided by the Apache Software Foundation. Maven deals with application development lifecycle management. It was originally developed to manage and minimize the complexities of building the Jakarta Turbine project, but its powerful capabilities have made it a core entity of the Apache Software Foundation projects. For a long time there was a need for a standardized project development lifecycle management system, and Maven has emerged as a perfect option that meets that need. Maven has become the de facto build system in many open source initiatives and is rapidly being adopted by many software development organizations.
Maven was born of the very practical desire to make several projects at Apache work in a consistent manner, so that developers could freely move between these projects, knowing clearly how they all worked by understanding how one of them worked.

If a developer spent time understanding how one project was built, it was intended that they would not have to go through this process again when they moved on to the next project. The same idea extends to testing, generating documentation, generating metrics and reports, and deploying. All projects share enough of the same characteristics, an understanding of which Maven tries to harness in its general approach to project management.
On a very high level all projects need to be built, tested, packaged, documented and deployed. Infinite variation occurs in each of these steps, but the variations still occur within the confines of a well-defined path, and it is this path that Maven attempts to present to everyone clearly. The easiest way to make a path clear is to provide people with a set of patterns that can be shared by anyone involved in a project.

The key benefit of this approach is that developers can follow one consistent build lifecycle management process without having to reinvent such processes again. Ultimately this makes developers more productive, agile, disciplined, and focused on the work at hand rather than spending time and effort doing grunt work understanding, developing, and configuring yet another non-standard build system.
Maven: Features

Portable: Maven is portable in nature: build configurations using maven are portable to another machine, developer and architecture without any effort.

Non trivial: Maven is non trivial because all file references need to be relative, and the environment must be completely controlled and independent from any specific file system.

Technology: Maven is a simple core concept that is activated through an IoC container (Plexus). Everything is done in maven through plugins, and every plugin works in isolation (ClassLoader). Plugins are downloaded from a plugin repository on demand.

Maven’s Objectives:
The primary goal of maven is to allow developers to comprehend the complete state of a project in the shortest time, by means of an easy build process, a uniform building system, quality project management information (such as change logs, cross-references, mailing lists, dependencies, unit test reports, test coverage reports and more), guidelines for best practices and transparent migration to new features. To achieve this goal Maven attempts to deal with several areas:

It makes the build process easy

Provides a uniform building system

Provides quality related project information

Provides development guidelines that reflect best practices.

Allows transparent migration to new features.

Introduction to Maven 2.0

Maven2 is an Open Source build tool that revolutionized the way projects are built. Unlike build systems such as make and ant, it is not a language for combining build components; it is a build lifecycle framework. A development team does not need much time to automate a project's build infrastructure, since maven uses a standard directory layout and a default build lifecycle. Different development teams under a common roof can establish shared working standards in a very short time, which leaves the automated build infrastructure in a more stable state. And because most setups are simple and immediately reusable across all projects using maven, many important reports, checks, and build and test steps can be added to every project – something that previously was impractical because of the heavy cost of setting up each project.

Maven 2.0 was first released on 19 October 2005, and it is not backward compatible with the plugins and projects of maven1. In December 2005 many plugins were added to maven, but not all plugins that exist for maven1 have been ported yet. Maven 2 is expected to stabilize quickly alongside most of the Open Source technologies. People are introduced to maven as the core build system for Java development, both in single-project and multi-project environments. With a little knowledge of maven, developers can set up a new project and become familiar with the default maven project structure. Developers can easily configure maven and its plugins for a project, enable common settings for maven and its plugins across multiple projects, and generate, distribute and deploy products and reports with maven, using repositories to set up a company repository. Developers can also learn about the most important plugins: how to install, configure and use them, and how to evaluate other plugins for integration into their work environment.

Maven is the standard way to build projects, and it also provides other benefits such as a clear definition of what the project consists of, ways to share jars across projects, and an easy way to publish project information.
Originally maven was designed to simplify the build process of the Jakarta Turbine project. There were several projects, each with its own slightly different Ant build file, and JARs were checked into CVS. The Apache group wanted a tool that could build the projects, publish project information, define what a project consists of, and share JARs across several projects. The result of all these requirements was the maven tool, which builds and manages java-based projects.

Why is maven a great build tool? How does it differ from other build tools?
Tell me more about profiles and nodes in Maven?
Tell me more about local repositories?
How did you configure local repositories in different environments (Development, Testing, Production etc)?
What are transitive dependencies in maven 2?
Did you write plugins in maven? If so, what are they?
Why is a matrix report required during a new release? How does this benefit the QA team?
What are pre-scripts and post-scripts in maven? Illustrate with an example?
What are the checklists for artifacts? And what are the checklists for a source code artifact?
Tell me about your experience with static code analysis?

Reference:
http://www.javabeat.net

 
