Which is the best Ansible training institute in Hyderabad?

If you’re in IT, you have probably heard of Ansible. CIO calls it the DevOps “darling” for software automation, and in recent years Ansible has gone from nowhere to become the No. 1 choice for software automation in many organizations. StackShare lists more than 1,000 companies that use Ansible, including Intel, Evernote, and Hootsuite, and Apple and NASA have adopted it as well. So, what is Ansible, and why has it gained popularity so quickly?

What is Ansible?

Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intra-service orchestration, and provisioning. Automation is crucial these days, with IT environments that are too complex and often need to scale too quickly for system administrators and developers to keep up if they had to do everything manually.

In other words, it frees up time and increases efficiency, and it is rapidly rising to the top in the world of automation tools.

Let’s take a look at some of the features of Ansible:

  • Configuration Management
  • Application Deployment
  • Orchestration
  • Security and Compliance
  • Cloud Provisioning

Now that we have seen what Ansible is, let us look at the various benefits of Ansible.

Benefits of Ansible

  • Free: Ansible is an open-source tool.
  • Very simple to set up and use: No special coding skills are necessary to use Ansible’s playbooks (a minimal playbook sketch follows this list).
  • Powerful: Ansible lets you model even highly complex IT workflows.
  • Flexible: You can orchestrate the entire application environment no matter where it’s deployed. You can also customize it based on your needs.
  • Agentless: You don’t need to install any other software or firewall ports on the client systems you want to automate. You also don’t have to set up a separate management structure.
  • Efficient: Because you don’t need to install any extra software, there’s more room for application resources on your server.
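
As a quick taste of how simple Ansible’s playbooks are, here is a minimal sketch. It is only an illustration: the host group “web”, the inventory file name, and the choice of nginx are placeholder assumptions, and it presumes Ansible is already installed and can reach the target hosts.

# Write a tiny playbook that installs and starts nginx on hosts in the "web" group
cat > site.yml <<'EOF'
---
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
EOF

# Run it against your inventory
$ ansible-playbook -i inventory site.yml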

Advantages of Using Ansible especially with Docker

Ansible does a great job of automating Docker and operationalizing the process of building and deploying containers. If you come from a traditional IT environment, it can be hard to add container-tooling functionality, but Ansible removes the need to perform these processes manually; a short playbook sketch follows the list of advantages below.

There are four main advantages of using Ansible with Docker:

  • Portability/Flexibility
  • Auditability
  • Management of Entire Environments
  • Similar Syntax
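
To make these advantages concrete, here is a hedged sketch of driving Docker from an Ansible playbook using the docker_container module. The image, container name and port mapping are just examples, and the module needs the Docker SDK for Python installed on the managed host.

cat > docker-demo.yml <<'EOF'
---
- hosts: localhost
  tasks:
    - name: Run an nginx container via Ansible
      docker_container:
        name: web
        image: nginx:alpine
        state: started
        published_ports:
          - "8080:80"
EOF

$ ansible-playbook docker-demo.yml

Because the playbook is plain YAML, the same “similar syntax” applies whether you are provisioning hosts or managing containers.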

Conclusion

As discussed earlier, using Ansible with Docker simplifies your processes by letting you work with containers and automate all of that work, so it’s no wonder the Ansible-Docker combination is so popular. Learning how to use Ansible with Docker doesn’t just benefit your organization; it also benefits your pay. According to PayScale, the average salary of a developer with Ansible skills is $110,000 per year, and some developers earn even more. According to Dice, Ansible is the highest-paying DevOps skill.

If you found this article helpful and picked up some useful knowledge about Ansible, and you also want to learn Ansible yourself, I would be glad to suggest the best online institutes for Ansible training, where you get trainers and guidance with more than 10 years of experience in this field.

Recommended institutes

devOpsschool.com

scmgalaxy.com

bestdevOps.com

I hope this blog is helpful for you!


Top 10 Container(Docker) Monitoring Solutions and Tools in 2018

  1. Native Docker
  2. cAdvisor
  3. Scout
  4. Pingdom
  5. Datadog
  6. Sysdig
  7. Prometheus
  8. Heapster / Grafana
  9. ELK stack
  10. Sensu

Reference

https://rancher.com/comparing-10-container-monitoring-solutions-rancher/


Setup Docker service to use insecure(http) registry instead of https

By default, Docker uses HTTPS to connect to a Docker registry, but there are use cases for an insecure (HTTP) registry. Here are the steps to configure one.

In Ubuntu
edit the file /etc/default/docker and update DOCKER_OPTS, e.g.

DOCKER_OPTS='--insecure-registry 10.84.34.155:5000'

where 10.84.34.155 is the IP address of the registry and 5000 is the port on which the registry is configured.

In Centos
Edit the file /etc/docker/daemon.json e.g.

{
  "insecure-registries" : ["10.84.34.155:5000"]
}

where 10.84.34.155 is the IP address of the registry and 5000 is the port on which the registry is configured.

Restart docker
$ service docker restart
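
After the restart, you can sanity-check that the daemon picked up the setting; on recent Docker versions the configured insecure registries appear in the docker info output. The image name below is just an example.

$ docker info | grep -A1 'Insecure Registries'

# A pull over plain HTTP from the example registry should now work
$ docker pull 10.84.34.155:5000/myimage:latest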


Understanding the tools sets in Docker ecosystem

Docker Engine
Docker Engine is a lightweight and powerful open-source containerization technology combined with a workflow for building and containerizing your applications.

Docker Engine is a client-server application with these major components:

  • A server, which is a type of long-running program called a daemon process (the dockerd command).
  • A REST API, which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client (the docker command).
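
As a quick illustration of the client/daemon split, the daemon listens on a Unix socket by default, so you can hit the same REST API that the docker CLI uses (this assumes curl 7.40+ for --unix-socket support):

# Ask the daemon for its version over the REST API...
$ curl --unix-socket /var/run/docker.sock http://localhost/version

# ...which is essentially what the CLI does for:
$ docker version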

Docker Client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI.

Docker Server aka Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

https://docs.docker.com/engine/images/engine-components-flow.png

Docker registry
A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR). When you use the docker pull or docker run commands, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
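
For example, running your own private registry and pushing an image to it takes only a few commands. This is a minimal sketch; the image and tag names are just examples.

# Start a local registry container on port 5000
$ docker run -d -p 5000:5000 --name registry registry:2

# Tag an existing image for that registry and push it
$ docker pull alpine:3.5
$ docker tag alpine:3.5 localhost:5000/my-alpine
$ docker push localhost:5000/my-alpine

# Pull it back from your own registry
$ docker pull localhost:5000/my-alpine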

Docker Hub
Docker Hub is a registry service on the cloud that allows you to download Docker images that are built by other communities. You can also upload your own Docker built images to Docker hub.

Docker Store
Docker store allows you to buy and sell Docker images or distribute them for free. For instance, you can buy a Docker image containing an application or service from a software vendor and use the image to deploy the application into your testing, staging, and production environments. You can upgrade the application by pulling the new version of the image and redeploying the containers.

Docker Machine
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like Azure, AWS, or Digital Ocean. Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.
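
A typical Docker Machine workflow looks something like this (a sketch assuming the VirtualBox driver is available locally):

# Create a VirtualBox VM with Docker Engine installed
$ docker-machine create --driver virtualbox default

# Point your local docker CLI at the new host
$ eval "$(docker-machine env default)"

# Inspect, stop and restart the managed host
$ docker-machine ls
$ docker-machine stop default
$ docker-machine start default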

Docker Compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features. Compose works in all environments: production, staging, development, testing, as well as CI workflows.
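
A minimal Compose file and the single command that brings it up might look like this; the services and images chosen here are just examples.

cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF

# Create and start all services defined above
$ docker-compose up -d

# Tear everything down again
$ docker-compose down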

Kitematic
Kitematic’s one-click install gets Docker running on your Mac and lets you control your app containers from a graphical user interface (GUI). You can easily search for and pull your favorite images on Docker Hub from the Kitematic GUI to create and run your app containers, and seamlessly switch between the Kitematic GUI and the Docker CLI to run and manage your application containers. Kitematic automatically maps ports, and lets you visually change environment variables, configure volumes, stream logs, and get CLI access to containers.

Docker for Windows
Docker is a full development platform for creating containerized apps, and Docker for Windows is the best way to get started with Docker on Windows.

Docker for Mac
Docker for Mac is an easy-to-install desktop app for building, debugging and testing Dockerized apps on a Mac. Docker for Mac is a complete development environment deeply integrated with the MacOS Hypervisor framework, networking and filesystem. Docker for Mac is the fastest and most reliable way to run Docker on a Mac.

Docker Sync
Developing with Docker under OSX/Windows is a huge pain, since sharing your code into containers slows down code execution by about 60 times (depending on the solution). Testing and working with a lot of the alternatives led the docker-sync authors to pick the best of those for each platform and combine them into one single tool: docker-sync.
docker-sync features:

  • Support for OSX, Windows, Linux and FreeBSD
  • Runs on Docker for Mac, Docker for Windows and Docker Toolbox
  • Uses either native_osx, unison or rsync as possible strategies. The container performance is not influenced at all, see performance
  • Very efficient due to the native_osx concept
  • Without any dependencies on OSX when using (native_osx)
  • Backgrounds as a daemon
  • Supports multiple sync-end points and multiple projects at the same time
  • Supports user-remapping on sync to avoid permission problems on the container
  • Can be used with your docker-compose way or the integrated docker-compose way to start the stack and sync at the same time with one command
  • Using overlays to keep your production docker-compose.yml untouched and portable
  • Supports Linux* to use the same toolchain across all platforms, but maps on a native mount in linux (no sync)

Docker Toolbox
Docker Toolbox is an installer for quick setup and launch of a Docker environment on older Mac and Windows systems that do not meet the requirements of the new Docker for Mac and Docker for Windows apps.

Docker Community Edition (CE)
Docker Community Edition (CE) is ideal for developers and small teams looking to get started with Docker and experimenting with container-based apps. Available for many popular infrastructure platforms like desktop, cloud and open source operating systems, Docker CE provides an installer for a simple and quick install so you can start developing immediately. Docker CE is integrated and optimized to the infrastructure so you can maintain a native app experience while getting started with Docker. Build the first container, share with team members and automate the dev pipeline, all with Docker Community Edition.

Docker Enterprise Edition (EE)
Docker Enterprise Edition (EE) 2.0 is the only enterprise-ready container platform that enables IT leaders to choose how to cost-effectively build and manage their entire application portfolio at their own pace, without fear of architecture and infrastructure lock-in. Docker’s container platform enables organizations to accelerate digital and multi-cloud initiatives by automating the delivery of legacy and modern applications using an agile operating model with integrated security. Because Docker EE includes services, support and training, organizations have a complete containerization strategy for supporting an ever-changing business environment.

Docker Swarm
A swarm is a group of machines that are running Docker and joined into a cluster. After that has happened, you continue to run the Docker commands you’re used to, but now they are executed on a cluster by a swarm manager. The machines in a swarm can be physical or virtual. After joining a swarm, they are referred to as nodes.
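
Getting a small swarm going takes only a few commands. This is a sketch; the manager IP and join token below are placeholders printed by the init step.

# On the manager node
$ docker swarm init --advertise-addr <manager-ip>

# On each worker node, paste the join command printed by the manager, e.g.
# docker swarm join --token <worker-token> <manager-ip>:2377

# Back on the manager: run a replicated service across the cluster
$ docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
$ docker node ls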

Docker Cloud
Docker Cloud provides a hosted registry service with build and testing facilities for Dockerized application images; tools to help you set up and manage host infrastructure; and application lifecycle features to automate deploying (and redeploying) services created from images.

Docker Cloud uses the hosted Docker Cloud Registry, which allows you to publish Dockerized images on the internet either publicly or privately. Docker Cloud can also store pre-built images, or link to your source code so it can build the code into Docker images, and optionally test the resulting images before pushing them to a repository.


docker pause and unpause explained!

As of version 0.12.0, Docker supports the PAUSE and UNPAUSE commands to pause and resume containers using the cgroups freezer.

The docker pause command suspends all processes in the specified containers. On Linux, this uses the cgroups freezer. Traditionally, when suspending a process the SIGSTOP signal is used, which is observable by the process being suspended. With the cgroups freezer the process is unaware, and unable to capture, that it is being suspended, and subsequently resumed. On Windows, only Hyper-V containers can be paused.

This means that the processes in the container stop running, but they can be resumed later.

One possible use is to pause resource-intensive tasks that you can resume at a later time. Some people predict that “docker pause” could be used in the future to support “live” migration of containers between Docker Engines.

Check out more about the cgroups freezer here: https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt

[root@ip-172-31-80-30 ~]# docker run -d -p 8080:8080 -p 50000:50000 jenkins

[root@ip-172-31-80-30 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6be033bf1f69 jenkins "/bin/tini -- /usr/l…" About an hour ago Up About an hour (Paused) 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp laughing_bell

[root@ip-172-31-80-30 ~]# docker pause 6be033bf1f69
Error response from daemon: Container 6be033bf1f6917f3bfcccd5d770c00349c47576ab1cd77b14aa39ef1333ae90c is already paused

[root@ip-172-31-80-30 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6be033bf1f69 jenkins "/bin/tini -- /usr/l…" About an hour ago Up About an hour (Paused) 0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp laughing_bell

[root@ip-172-31-80-30 ~]# docker exec -it 6be033bf1f69 /bin/bash
Error response from daemon: Container 6be033bf1f69 is paused, unpause the container before exec

[root@ip-172-31-80-30 ~]# docker top 6be033bf1f69
UID PID PPID C STIME TTY TIME CMD
ec2-user 23703 23691 0 13:26 ? 00:00:00 /bin/tini -- /usr/local/bin/jenkins.sh
ec2-user 23724 23703 0 13:26 ? 00:00:23 java -jar /usr/share/jenkins/jenkins.war


Lifecycle of Docker Containers

We need to carefully understand the life cycle of Docker containers. The following images depict the phases of a Docker container.

Phases of Docker Containers

  • Create -> Destroy
  • Create -> Start -> Stopped -> Destroy
  • Create -> Start -> Pause -> Unpause
  • Create -> Start -> Restart

Image flow of Simple Docker Container Lifecycle

Image flow of Detailed Docker Container Lifecycle

 

Image Source and Credits: http://docker-saigon.github.io/post/Docker-Internals/

Create container
$ docker create --name ubuntu-cont ubuntu

Run docker container
$ docker run -itd ubuntu
$ docker run -itd --name ubuntu-cont ubuntu

Pause container
$ docker pause <container-id/name>

Unpause container
$ docker unpause <container-id/name>

Start container
$ docker start <container-id/name>

Stop container
$ docker stop <container-id/name>

Restart container
$ docker restart <container-id/name>

Kill container
$ docker kill <container-id/name>

Destroy container
$ docker rm <container-id/name>


Working with Ports in Docker Containers

Port exposing and publishing have to happen when a container is created. If you need to change them, just stop the existing container and create a new one in its place with the added expose and/or publish options.

By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers. Each outgoing connection will appear to originate from one of the host machine’s own IP addresses thanks to an iptables masquerading rule on the host machine that the Docker server creates when it starts:

$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0

The Docker server creates a masquerade rule that lets containers connect to IP addresses in the outside world. If you want containers to accept incoming connections, you will need to provide special options when invoking docker run. There are two approaches.

How to map ports to containers?
Approach 1
First, you can supply -P or --publish-all=true|false to docker run,
or
add an EXPOSE line in the image’s Dockerfile,
or
use the --expose <port> command-line flag. Docker then maps the exposed port to a host port somewhere within an ephemeral port range.

Approach 2
Mapping can be specified explicitly using the -p SPEC or --publish=SPEC option. It lets you specify exactly which port on the Docker host (which can be any port at all, not just one within the ephemeral port range) you want mapped to which port in the container.
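
A quick illustration of both approaches (nginx here is just an example image):

# Approach 1: publish all EXPOSEd ports to random ephemeral host ports
$ docker run -d -P --name web1 nginx
$ docker port web1        # shows which host port was picked for 80/tcp

# Approach 2: map an explicit host port to a container port
$ docker run -d -p 8080:80 --name web2 nginx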

How to EXPOSE a port on a running container?

Method 1 – Using docker commit
Commit your current container to a new image and then do a docker run specifying the new port range and the new image name.

$ docker stop containerID 
$ docker commit containerID newImageName:tag
$ docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 newImageName:tag

Method 2 – using iptables

HOST> iptables -t nat -A DOCKER -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
HOST> iptables -t nat -A POSTROUTING -j MASQUERADE -p tcp --source 172.17.0.2 --destination 172.17.0.2 --dport https
HOST> iptables -A DOCKER -j ACCEPT -p tcp --destination 172.17.0.2 --dport https

What is SELinux and how is SELinux used in Docker?

There are three popular solutions for implementing access control in Linux:

  1. SELinux
  2. AppArmor
  3. GrSecurity

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies. It is a security feature of the Linux kernel, designed to protect the server against misconfigurations and/or compromised daemons. By defining a security policy, it puts limits on server daemons and programs and instructs them as to what files they can access and what actions they can take.

SELinux is an implementation of a MAC security mechanism. MAC is an acronym for Mandatory Access Control. It is built into the Linux kernel and enabled by default on Fedora, CentOS, RHEL and a few other Linux distributions. SELinux allows server admins to define various permissions for all processes. It defines how all processes can interact with other parts of the server, such as:

  • Pipes
  • Files
  • Network ports
  • Sockets
  • Directories
  • Other processes

SELinux puts restrictions on each of the above objects according to a policy. For example, an Apache user with full permissions can only access the /var/www/html directory, but cannot touch other parts of the system such as the /etc directory without a policy modification. If an attacker managed to gain access to Sendmail, BIND DNS, or the Apache web server, they would only have access to the exploited server and the files it normally has access to, as defined in the policy for that server. The attacker cannot access other parts of the system or the internal LAN; in other words, damage is now restricted to the particular server and files. The cracker will not be able to get a shell on your server via common daemons such as Apache / BIND / Sendmail, as SELinux offers the following security features:

  • Protect users’ data from unauthorized access.
  • Protect other daemons or programs from unauthorized access.
  • Protect network ports / sockets / files from unauthorized access.
  • Protect server against exploits.
  • Avoid privilege escalation and much more.

Please note that SELinux is not a silver bullet for protecting the server. You must also follow other security practices such as:

  • Implementing firewalls policy.
  • Server monitoring.
  • Patching the system on time.
  • Writing and securing cgi/php/python/perl scripts.

The /etc/selinux/config configuration file controls whether SELinux is enabled or disabled, and if enabled, whether SELinux operates in permissive mode or enforcing mode.

SELinux modes
SELinux can be set to one of three modes.

Enforcing – SELinux security policy is enforced. If this is set, SELinux is enabled and will try to enforce the SELinux policies strictly.

Permissive – SELinux prints warnings instead of enforcing. This setting just gives a warning when any SELinux policy setting is breached.

Disabled – No SELinux policy is loaded. This will totally disable SELinux policies.

SELinux policies
SELinux allows for multiple policies to be installed on the system, but only one policy may be active at any given time. At present, two kinds of SELinux policy exist:

Targeted – The targeted policy is designed as a policy where most processes operate without restrictions, and only specific services are placed into distinct security domains that are confined by the policy.

Strict – The strict policy is designed as a policy where all processes are partitioned into fine-grained security domains and confined by policy.

To put SELinux into enforcing mode:

$ sudo setenforce 1

To query the SELinux status:

$ getenforce

To see the SELinux status in a simplified way, you can use sestatus:

$ sestatus

To get more elaborate info on the status of SELinux for different services, use the -b option along with sestatus:

$ sestatus -b

How to disable SElinux?

We can do it in two ways:
1) Permanent way: edit /etc/selinux/config
and change the status of SELINUX from enforcing to disabled:
SELINUX=enforcing
to
SELINUX=disabled
Save the file and exit.

2) Temporary way: execute the command below
echo 0 > /selinux/enforce
or
setenforce 0

How about enabling SELinux?

1) Permanent way: edit /etc/selinux/config
and change the status of SELINUX from disabled to enforcing:
SELINUX=disabled
to
SELINUX=enforcing
Save the file and exit.

2) Temporary way: execute the command below
echo 1 > /selinux/enforce
or
setenforce 1

Now let’s understand Docker with SELinux.
The interaction between SELinux policy and Docker is focused on two concerns: protection of the host, and protection of containers from one another.

SELinux labels consist of 4 parts:

User:Role:Type:level.

SELinux controls access to processes by Type and Level. Docker offers two forms of SELinux protection: type enforcement and multi-category security (MCS) separation.
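
In day-to-day Docker use, SELinux shows up most often around bind-mounted volumes. Here is a minimal sketch; the paths and images are just examples, and the exact flag syntax can vary slightly between Docker versions.

# Relabel the host directory so the container's SELinux context can use it:
# :z gives a shared label (usable by multiple containers), :Z a private one.
$ docker run -d -v /srv/webdata:/usr/share/nginx/html:Z nginx

# Disable SELinux separation for a single container (generally not recommended)
$ docker run --security-opt label=disable -it centos bash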

Docker has the --selinux-enabled flag set by default in CentOS 7.4.1708. However, in case your image or your configuration management tool is disabling it (as was the case for our Puppet module), you can verify this by running the following command:

$ docker info | grep 'Security Options'

[root@ip-172-31-80-30 ec2-user]# more /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

References
https://www.cyberciti.biz/faq/what-is-selinux/
https://en.wikipedia.org/wiki/Security-Enhanced_Linux
http://jaormx.github.io/2018/selinux-and-docker-notes/


How to run UI testing in a Docker container using Selenium

Docker is one of the revolutionary technologies that has created a lot of buzz in software development practices. Docker has not only helped teams set up Continuous Integration and Delivery but also lets them manage and replicate test environments and deploy at large scale in no time. Here are the advantages that benefit a testing team using Docker:

  • Docker helps in setting up Continuous Integration and Delivery, which enables the testing team to deploy and test applications in much less time compared to using virtual machines.
  • Efficient software teams push code to production multiple times a day, but this only works with good processes in place. Pull requests, code reviews, and good test coverage are essential for enabling a fast pace and high output of new code, and Docker helps the QA team enable this in no time.
  • Docker Compose helps create an application stack for developers and QA in very little time.
  • Docker allows you to run your tests in containers as well as isolate your tests in development and deployment.
  • Docker lets you manage and replicate test environments in no time.

But there is one limitation. The major problem of a Docker container for UI testing is that it does not have a screen output. There are in general two solutions:

  1. Use a headless browser such as HTMLUnit that does not require a graphical user interface or
  2. Simulate a screen output.

The second option is recommended, because in this case you do not need to change your testing code to use a WebDriver for a headless browser. Moreover, a headless browser may not have the full functionality of a real browser. What you need is a display server called Xvfb, or X virtual framebuffer, which performs all graphical operations in memory without showing any screen output.
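
One common way to put this into practice is to use a prebuilt image that bundles a browser, a WebDriver server and Xvfb, and to point your tests at it over the network. The sketch below is hedged: the image, the port and the way your test suite reads the remote URL are assumptions to adapt to your setup.

# Start a container with Chrome + WebDriver running under Xvfb,
# exposing the WebDriver endpoint on port 4444
$ docker run -d --name selenium-chrome -p 4444:4444 selenium/standalone-chrome

# Point your tests at the container using RemoteWebDriver, e.g.
# http://localhost:4444/wd/hub, then run your suite as usual
# (-Dselenium.remote.url is only an illustrative property name)
$ mvn test -Dselenium.remote.url=http://localhost:4444/wd/hub

# Clean up when done
$ docker rm -f selenium-chrome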

You can find sample examples at the following URLs:

https://medium.com/@yiquanzhou/run-selenium-ui-tests-in-docker-container-78be98e1b52d

http://testnblog.com/ui-automation-framework-on-docker/


docker-compose prepends current directory name to named volumes

docker-compose

Issues – docker-compose prepends current directory name to named volumes

What actually happens is that the named volume gets prepended with (a simplified version of) the directory name from which the docker-compose command was run.

For instance, if I run from the “dcompos-programs” directory, and I name the volume “my_named_vol”, then I end up with a volume named “dcompos-programs_my_named_vol”.

docker-compose-named-volume.yml:


version: '3'
services:
  solr:
    image: alpine:3.5
    container_name: foo
    volumes:
      - my_named_vol:/opt/foo
volumes:
  my_named_vol:

Result:


$ docker-compose -f docker-compose-named-volume.yml up -d && \
> echo "## named volume:" && \
> docker volume ls | grep my_named_vol && \
> echo "## stop" && \
> docker-compose -f docker-compose-named-volume.yml down && \
> echo "## rm volume" && \
> docker volume rm $(docker volume ls | grep my_named_vol | awk '{print $2}')
Creating network "deploymentroot_default" with the default driver
Creating volume "dcompos-programs_my_named_vol" with default driver
Creating foo

Answer
docker-compose uses a project name. By default it’s the directory name, which prevents collisions with existing containers, but you can use -p to change the prefix.

You can use the external volume setting to avoid the prefix.
https://docs.docker.com/compose/compose-file/#external
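
A minimal sketch of both workarounds (the project and volume names are the examples from above):

# Workaround 1: pin the compose project name so the prefix is predictable
$ docker-compose -p myproject -f docker-compose-named-volume.yml up -d
# -> the volume is created as "myproject_my_named_vol"

# Workaround 2: create the volume yourself and mark it as external in the
# compose file, so compose uses the exact name with no prefix:
#
#   volumes:
#     my_named_vol:
#       external: true
#
$ docker volume create my_named_vol
$ docker-compose -f docker-compose-named-volume.yml up -d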
