Chef Server
All logs generated by the Chef server can be found in /var/log/opscode. Each service enabled on the system also has a sub-directory in which service-specific logs are located, typically found in /var/log/opscode/service_name.

The Chef server has built-in support for easily tailing the logs that are generated. To view all the logs being generated on the Chef server, enter the following command:
> chef-server-ctl tail

To view logs for a specific service:
> chef-server-ctl tail SERVICENAME
where SERVICENAME is replaced with the name of the service whose log files you want to view. SERVICENAME can be any service that is listed by the "> chef-server-ctl service-list" subcommand.
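For example, you can list the available services and then tail one of them (opscode-erchef is used here purely as an illustration; pick any service from your own service list):
> chef-server-ctl service-list
> chef-server-ctl tail opscode-erchef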

Another way to view log files is to use the system utility tail (the parent directory is /var/log/chef-server or /var/log/opscode, depending on the Chef server version):
> tail -50f /var/log/chef-server/opscode-chef/current
> tail -50f /var/log/opscode/opscode-chef/current

Supervisor Logs
Supervisor logs are created and managed directly by the service supervisor, and are automatically rotated when the current log file reaches 1,000,000 bytes; 10 log files are kept. The latest supervisor log is always located in /var/log/chef-server/service_name/current, and rotated logs have a filename starting with @ followed by a precise tai64n timestamp based on when the file was rotated.
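For example, to list the current and rotated supervisor logs for a service and view the latest one (opscode-erchef is shown here only as an illustration; substitute any service from the list below, and use /var/log/chef-server instead of /var/log/opscode if that is where your logs live):
> ls -l /var/log/opscode/opscode-erchef/
> less /var/log/opscode/opscode-erchef/current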

Supervisor logs are available for the following services:

  1. bifrost
  2. bookshelf
  3. nginx
  4. opscode-erchef
  5. opscode-expander
  6. opscode-expander-reindexer
  7. opscode-solr4
  8. postgresql
  9. rabbitmq
  10. redis

nginx

The nginx service creates both supervisor and administrator logs. The administrator logs contain the access and error logs for each virtual host used by the Chef server. Each of the following logs requires external log rotation.

Logs and descriptions:

/var/log/opscode/nginx/access.log - The Web UI and API HTTP access logs.
/var/log/opscode/nginx/error.log - The Web UI and API HTTP error logs.
/var/log/opscode/nginx/internal-account.access.log - The opscode-account internal load-balancer access logs.
/var/log/opscode/nginx/internal-account.error.log - The opscode-account internal load-balancer error logs.
/var/log/opscode/nginx/internal-authz.access.log - The opscode-authz internal load-balancer access logs.
/var/log/opscode/nginx/internal-authz.error.log - The opscode-authz internal load-balancer error logs.
/var/log/opscode/nginx/internal-chef.access.log - The opscode-chef and opscode-erchef internal load-balancer access logs.
/var/log/opscode/nginx/internal-chef.error.log - The opscode-chef and opscode-erchef internal load-balancer error logs.
/var/log/opscode/nginx/nagios.access.log - The Nagios access logs.
/var/log/opscode/nginx/nagios.error.log - The Nagios error logs.
/var/log/opscode/nginx/rewrite-port-80.log - The rewrite logs for traffic that uses HTTP instead of HTTPS.
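These administrator logs can also be followed individually with the system utility tail, for example:
> tail -50f /var/log/opscode/nginx/access.log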

To follow the logs for the service:

 $ chef-server-ctl tail nginx

Chef Client

The client.rb file might help you. The default value of log_location is STDOUT; you can set it to /path/to/log_location instead. The client.rb file is located at C:\chef\client.rb on Windows and /etc/chef/client.rb on Linux.
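A minimal sketch of the relevant client.rb settings, assuming you want the log written to /var/log/chef/client.log (adjust the path and level for your system):

# write :info-level output to a file instead of STDOUT
log_level     :info
log_location  '/var/log/chef/client.log'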

Use the verbose logging that is built into the chef-client:
-l LEVEL, --log_level LEVEL
The level of logging to be stored in a log file.
-L LOGLOCATION, --logfile LOGLOCATION
The location of the log file. This is recommended when starting any executable as a daemon. Default value: STDOUT.
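For example, the following command (with an illustrative log path) runs the chef-client with debug-level logging written to a file:
> chef-client -l debug -L /var/log/chef/chef-client.log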

Knife
Use the verbose logging that is built into knife:
-V, --verbose
Set for more verbose outputs. Use -VV for maximum verbosity.
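For example, to run a knife subcommand with maximum verbosity:
> knife node list -VV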

chef-solo
-l LEVEL, --log_level LEVEL
The level of logging to be stored in a log file.
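For example, assuming your solo configuration lives at /etc/chef/solo.rb, you can run chef-solo with debug-level logging:
> chef-solo -c /etc/chef/solo.rb -l debug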

The Chef file and folder locations are different on Linux and Windows machines. This article explains the purpose of each file and the location.

Summary

Cookbook location
  Linux:   /var/chef/cache/cookbooks
  Windows: C:\chef\cache\cookbooks

Chef Client run log
  Linux:   /var/log/chef.log
  Windows: C:\chef\chef-client.log (first run only), then C:\chef\log\client.log (subsequent Chef client runs)

Error log
  Linux:   /var/chef/cache/chef-stacktrace.out
  Windows: C:\chef\cache\chef-stacktrace.out

Ohai output
  Linux:   /var/chef/cache/failed-run-data.json
  Windows: C:\chef\cache\failed-run-data.json

Recommended location for custom log files
  Linux:   /tmp/cheflog.log
  Windows: C:\Logs\Chef\cheflog.log

Chef Client configuration
  Linux:   /etc/chef/client.rb
  Windows: C:\chef\client.rb

When you test your cookbook in Test Kitchen

The .kitchen.yml file contains the username used to execute the Chef cookbook. It is specified under platforms:, in the transport: section, as username:.

Use that value in place of USER-NAME-FROM-KITCHEN-YML below.

Cookbook location
  Linux:   /tmp/kitchen/cookbooks and /tmp/kitchen/cache/cookbooks
  Windows: C:\Users\USER-NAME-FROM-KITCHEN-YML\AppData\Local\Temp\kitchen\cookbooks

Error log
  Linux:   /tmp/kitchen/cache/chef-stacktrace.out
  Windows: C:\Users\USER-NAME-FROM-KITCHEN-YML\AppData\Local\Temp\kitchen\cache\chef-stacktrace.out

Ohai output
  Linux:   /tmp/kitchen/cache/failed-run-data.json
  Windows: C:\Users\USER-NAME-FROM-KITCHEN-YML\AppData\Local\Temp\kitchen\cache\failed-run-data.json

Data bags
  Linux:   /tmp/kitchen/data_bags
  Windows: C:\Users\USER-NAME-FROM-KITCHEN-YML\AppData\Local\Temp\kitchen\data_bags

 

Cookbook location

When Chef recipes are executed, all cookbooks are stored on the node. You can examine the code there to make sure your latest changes are reflected on the machine.
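For example, on a Linux node you can list the cached cookbooks (using the default path from the summary table above):
> ls /var/chef/cache/cookbooks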

The log of the Chef client run

The output of the Chef cookbook execution is written to the chef.log or chef-client.log file.

On Windows

The logs of the first Chef Client run and of subsequent runs are stored in different files. After the initial Chef Client run, the rest of the log entries are collected in the second file.

Stacktrace

Chef saves information on the hard drive when scripts are executed. If there is a failure, the stack trace of the last error is saved in the chef-stacktrace.out file.

Ohai output

All the information that Ohai collects on the instance is saved in the failed-run-data.json file, even if there is no error. It is a great resource for getting server-specific values.
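For example, on Linux both files can be inspected with standard tools (the paths are the defaults listed above):
> less /var/chef/cache/chef-stacktrace.out
> less /var/chef/cache/failed-run-data.json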

Reference

https://docs.chef.io/server_logs.html
https://docs.chef.io/debug.html

Chef notifies and subscribes explained with examples

 
A notification is a property on a resource that listens to other resources in the resource collection and then takes actions based on the notification type (notifies or subscribes).
 

Timers

A timer specifies the point during the chef-client run at which a notification is run. The following timers are available:
:before
Specifies that the action on a notified resource should be run before processing the resource block in which the notification is located.
 
:delayed
Default. Specifies that a notification should be queued up, and then executed at the very end of the chef-client run.
 
:immediate, :immediately
Specifies that a notification should be run immediately, per resource notified.
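As a brief illustrative sketch of the :before timer (hypothetical resource names), the notified service is stopped before the package resource that notifies it is upgraded:

service 'httpd' do
  action :nothing
end

package 'httpd' do
  action :upgrade
  # stop the service before the upgrade is converged
  notifies :stop, 'service[httpd]', :before
end

The :stop action fires only if the package actually needs upgrading, and it fires before the upgrade is applied.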
 

Notifies

A resource may notify another resource to take action when its state changes. Specify a 'resource[name]', the :action that resource should take, and then the :timer for that action. A resource may notify more than one resource; use a notifies statement for each resource to be notified.
The syntax for notifies is:
notifies :action, 'resource[name]', :timer
 
Example
The following examples show how to use the notifies notification in a recipe.
 
Delay notifications
template '/etc/nagios3/configures-nagios.conf' do
  # other parameters
  notifies :run, 'execute[test-nagios-config]', :delayed
end
 
Notify immediately
By default, notifications are :delayed; that is, they are queued up as they are triggered, and then executed at the very end of a chef-client run. To run an action immediately, use :immediately:
template '/etc/nagios3/configures-nagios.conf' do
  # other parameters
  notifies :run, 'execute[test-nagios-config]', :immediately
end
 
and then the chef-client would immediately run the following:
execute 'test-nagios-config' do
  command 'nagios3 --verify-config'
  action :nothing
end
 

Subscribes

A resource may listen to another resource, and then take action if the state of the resource being listened to changes. Specify a 'resource[name]', the :action to be taken, and then the :timer for that action.
 
Note that subscribes does not apply the specified action to the resource that it listens to - for example:
file '/etc/nginx/ssl/example.crt' do
   mode '0600'
   owner 'root'
end
 
service 'nginx' do
   subscribes :reload, 'file[/etc/nginx/ssl/example.crt]', :immediately
end
 
In this case the subscribes property reloads the nginx service whenever its certificate file, located under /etc/nginx/ssl/example.crt, is updated. subscribes does not make any changes to the certificate file itself; it merely listens for a change to the file, and executes the :reload action for its resource (in this example nginx) when a change is detected.
The syntax for subscribes is:
subscribes :action, 'resource[name]', :timer
 
Examples
The following examples show how to use the subscribes notification in a recipe.
 
Prevent restart and reconfigure if configuration is broken
 
Use the :nothing action (common to all resources) to prevent the test from starting automatically, and then use the subscribes notification to run a configuration test when a change to the template is detected:
 
execute 'test-nagios-config' do
  command 'nagios3 --verify-config'
  action :nothing
  subscribes :run, 'template[/etc/nagios3/configures-nagios.conf]', :immediately
end
 
 
Reference
https://docs.chef.io/resources.html#id4
 
Example
https://gist.github.com/nathanhoel/f5433a0d9cfe2e717c71

Source - http://hub.scalr.com/blog/top-questions-on-server-configuration-management-tools-chef-puppet-and-ansible-2

As a quick recap, configuration management tools enable companies to standardize and automate their infrastructure. Through standardization, you can build systems that are platform independent (i.e. instead of relying on AMIs or provider-specific toolsets). These tools also make it easy to reproduce servers for scaling or testing, and to recover from disaster quickly by defining a proper application state: if a server is found to have drifted from that defined state when it is checked, it is restored to its proper state. In addition, this standardization makes it easy to onboard new developers.

 

While the language across configuration management tools is different, the concepts are the same. At the fundamental level in each configuration tool, a resource represents a part of the system and its desired state, such as a package that should be installed, a service that should be running, or a file that should be generated.

In Chef, a recipe is a collection of resources that describes a particular configuration or policy. These collections are called playbooks in Ansible, and manifests in Puppet. These collections describe everything that is required to configure part of a system. Collections install and configure software components, manage files, deploy applications, and execute other recipes. We go into more detail in our blog post here.

 

Here are the top questions we got from the community:

 

How is the concept of master/agent configuration better (or not) than agentless, when it comes to infrastructure as code?

Chef and Puppet are master/agent configuration systems, while Ansible is an agentless system. The historic argument is that the agent-based installation process is difficult: you have to set up the master, and then set up the agents on your nodes so that they know about the master. If you have servers with diverse Linux distros, on different versions of Windows, etc., installation can get tricky. However, because agents check in and log every few minutes, agent-based systems are powerful for advanced monitoring. At the end of the day this really comes down to personal preference and what the company requires. If your infrastructure is beefy and heavily standardized, installation on nodes isn't complicated, so agent-based systems work well. If you have servers that run Python, try agentless.

 

Are these configuration management systems like Microsoft System Center Configuration Manager (SCCM) but used for local and cloud?

This is like MS SCCM, but open source and paid for on a per-node basis. For those who haven't used it, Microsoft System Center Configuration Manager (SCCM) is used for infrastructure provisioning, monitoring, and automating workflow processes (usually sysadmin stuff). SCCM is a powerhouse in the enterprise space. While it can manage end clients on non-Windows servers, the server console portion of SCCM must be hosted and run on a Windows server machine. The reason other orchestration/configuration systems win here is that you pay on a per-node basis and you're not totally tied into Windows Server's licensing agreements; in other words, open source vs. proprietary. And with Chef/Puppet/Ansible the thinking is more in terms of resources, as opposed to SCCM, which thinks more in terms of files and terminal commands.

 

An attendee commented on using SCCM:

“We really like Ansible because of the agentless requirement. For Windows patching we utilize System Center Configuration Manager, and even though System Center can provide patching to Linux, we have run into issues with the SCCM agent staying healthy and running on our Linux systems. We have also run into cases where changes made by the SCCM admins broke the SCCM agent on a majority of our Linux servers. Our Linux patching process has been highly manual up to this point, but we are seeking to automate this to free up staff time to be better directed at other support tasks, which is why we are reviewing several solutions. The agentless aspect is highly desirable in our situation because of past experience with the SCCM agent. I just wanted to provide that feedback so others that have not experienced agent issues with other deployment solutions can keep that in mind.”

 

If we have to pick a tool dependent on whether we deploy on cloud or on-premise - which of these tools would be a better choice?

We would recommend looking into the network access requirements for each tool. If you have an agent that checks in periodically with a central master management piece, that is likely to work better than SSH, which requires a direct path or a path through lots of proxies.

 

One attendee mentioned in the comments: "[In regards to] SSH vs Agent - Agent is more secure where SSH is not an option."

 

What happened to CFEngine? This tool used to be mentioned alongside Chef and Puppet.
Version 3 of CFEngine is a complete revamp, but compared to other configuration management tools its brand and community outreach aren't strong, and it does little that the others don't do better.

How does StackStorm compare to the other orchestrators being reviewed?
StackStorm labels itself more of an automation platform, or a DevOps workflow tool, that handles provisioning and configuring servers but also leans on automated, event-driven services that plug into Jenkins and other CI/CD workflows.

 

From one attendee that had used all three: “For us, getting the Server engineers to adopt Chef has been very difficult. It grew organically on the Dev side of the house. Ansible appears to be something that guys without Dev skills could pick up more easily. Just [my] perception.”

 

Can I run tasks in parallel with Ansible rather than running it serially (say 50 servers being updated with a patch)?

The default method is to run each task across all servers in parallel, meaning that it will run the first task (e.g. installing Git) on all servers in a group, and once all servers respond with a success, failure, or unchanged response, Ansible will move to the next task on all servers. It doesn't run on one server, wait, and then move on to the next; it runs on all servers at once over SSH. If you want to deploy updates in batches, you can run on a percentage of the servers in a group at a time (e.g. 50%, by setting serial: 50% in the playbook).

 

An attendee made this comment as we mentioned Ansible:

 

“I have attended a presentation from RedHat regarding Ansible that states [that Ansible scales well]. They have large-scale hosting companies that on the fly spin up servers and perform patching for their servers via Ansible. The one mentioned had over 50,000 servers and it seemed to handle the volume/scale fine. I of course don't know everything about Puppet or Chef or Salt, but one thing I find really nice about Ansible is the ability to perform rolling updates/tasks. So if you say had 1000 servers you can say you want to run 10% or 100 at a time and keep it rolling until all 1000 are done. It can be stated by percentage or defined number... I am sure I sound a bit biased, but one of the main reasons Ansible is high on our list right now is the fact that it is agentless and does not really consume resources.”

 

With Ansible, how can we handle the security implications about allowing passwordless ssh to a root account on all systems? What mechanisms are there for access control and auditing?

There are definitely security implications if you are going to allow passwordless SSH, so it's on the company to ensure that security groups or NSGs are well defined. We should also mention that passwordless SSH is only enabled on the machine you are running Ansible commands and playbooks from, so if anything, consider that workstation to be your weak point. Make sure SSH access to it is permitted only from your IP. As an alternative to connecting via SSH, if you use Docker, Ansible allows you to deploy playbooks directly into Docker containers using the local Docker client. All you need is a user inside that container.

 

Does Ansible run single-threaded, or does it address multiple servers in a group asynchronously?
Ansible runs against the hosts in parallel. This means that it attempts to run each task on all servers defined at the top of the playbook before moving on to the next task.

 

One user said in regards to all three tools: “Ansible seems better for "orchestration" and Puppet/Chef are really good for "Configuration Management". Ansible can be used to stop applications and databases, then run Puppet, and then start applications and databases.”

 

Lastly, we got a surprise question from the audience on Jenkins, a CI/CD pipeline tool that can be used in conjunction with tools like Chef to completely automate the infrastructure behind your applications.

 

What is the alternative to Jenkins?
While we recommend Jenkins, if you're a Ruby shop, Capistrano is geared towards your deployments. If you live in the AWS world, you can try the CodeCommit/CodeDeploy/CodePipeline toolset. If you're looking for a provider-agnostic solution, CircleCI is great. If your workflows revolve around Atlassian, try Bamboo.

 

If you are unsure of what CI/CD pipeline tool to use, or how they work, we will be hosting a webinar on Jenkins as part of our on-going series on infrastructure-as-code.
