What is composer.json? How do I use it?

composer.json is the main file of a Composer-managed project, generated by the Composer tool; we call it the composer file. It is this composer.json that defines your project's requirements and metadata.

How to set up a new or existing package

In other words, how do you create a composer.json file in a project to make it a package? There are two ways:

  • Using the composer init command
  • Manually creating a composer.json file

Using the composer init command

composer init – It is used to set up a new or existing package. The init command creates a basic composer.json file in the current directory.
Every project is a package.
As soon as you have a composer.json file in a directory, that directory is a package.
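
For example, a non-interactive run of composer init might look like this (the package name, description, and license here are placeholders):

$ composer init --name devopsschool/dev --description "A demo package" --license MIT -n

This writes a basic composer.json into the current directory, which you can then edit by hand.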

composer.json

Package name – In order to make a package installable you need to give it a name. The name consists of a vendor name and a project name, separated by a slash (/). Names are case-insensitive; the convention is all lowercase with dashes for word separation. The name is required for published packages (libraries).

Syntax: vendorname/packagename

Example: devopsschool/dev

Description – A short description of the package. Usually, this is one line long. It is required for published packages (libraries).

Authors – The authors of the package. This is an array of objects.

Each author object can have the following properties:

  • name: The author’s name. Usually their real name.
  • email: The author’s email address.
  • homepage: A URL to the author’s website.
  • role: The author’s role in the project (e.g. developer or translator).

Minimum Stability – Composer accepts these flags as minimum-stability settings. The default setting for minimum-stability, if not provided, is stable, but you can define any of the flags down the hierarchy:

  • stable (most stable)
  • RC
  • beta
  • alpha
  • dev (least stable)

Package Type – Package types are used for custom installation logic. If you have a package that needs some special logic, you can define a custom type. It defaults to library.

  • library
  • project
  • metapackage
  • composer-plugin

License – The license of the package. This can be either a string or an array of strings.

Example: MIT
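
Putting these fields together, a minimal composer.json for the example package above might look like the following sketch (all values are illustrative):

{
    "name": "devopsschool/dev",
    "description": "A short one-line description of the package.",
    "type": "library",
    "license": "MIT",
    "authors": [
        {
            "name": "Jane Doe",
            "email": "jane@example.com",
            "homepage": "https://example.com",
            "role": "Developer"
        }
    ],
    "minimum-stability": "stable"
}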


An introduction to the JavaScript programming language

Do You Know?

  • HTML
  • CSS

Before you start JavaScript, you should already know some HTML and CSS. After that, you can pick up JavaScript.

What is JavaScript?

JavaScript is the programming language of HTML and the Web. It makes web pages dynamic. It is an interpreted programming language with object-oriented capabilities.
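
As a tiny illustration (the element id and function name are made up), the following snippet changes the text of a page element when a button is clicked, which is the kind of dynamic behaviour plain HTML cannot provide:

<p id="demo">Original text</p>
<button onclick="greet()">Click me</button>
<script>
  // Replace the paragraph's text when the button is clicked.
  function greet() {
    document.getElementById("demo").innerHTML = "Hello from JavaScript!";
  }
</script>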

JavaScript History

Created in 1995 by Brendan Eich at Netscape, the language has gone by several names:
• Mocha (the initial internal name)
• LiveScript
• JavaScript
• ECMAScript (the name of the standardized specification)

Tools

You can write JavaScript in any editor, for example:

  • Notepad
  • Notepad++
  • Any Text Editor

Are JavaScript and Java the Same?

No. Despite the similar names, JavaScript and Java are completely different languages: Java is a compiled, statically typed language, while JavaScript is an interpreted, dynamically typed scripting language.


Understanding the tool sets in the Kubernetes ecosystem

Kubernetes at Public Cloud

  1. Google Kubernetes Engine (GKE, formerly Google Container Engine) – A powerful cluster manager and orchestration system for running your Docker containers.
  2. ECS – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster.
  3. EKS – Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS.

Kubernetes cli tools

  1. kubectl – Main CLI tool for running commands and managing Kubernetes clusters.
  2. JSONPath – Syntax guide for using JSONPath expressions with kubectl (see the example after this list).
  3. kubeadm – CLI tool to easily provision a secure Kubernetes cluster.
  4. kubefed – CLI tool to help you administrate your federated clusters.
  5. Minikube – This is the simplest way to get a Kubernetes cluster on your Mac or Windows machine.
  6. Kops – kops helps you create, destroy, upgrade, and maintain production-grade, highly available Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE in beta support, VMware vSphere in alpha, and other platforms planned.
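
As a quick illustration of the first two items, kubectl can evaluate a JSONPath expression against cluster objects; the following command prints just the name of every pod in the current namespace:

$ kubectl get pods -o jsonpath='{.items[*].metadata.name}'

The same command without the -o jsonpath option prints the usual table output instead.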

Kubernetes config reference

  1. kubelet – The primary node agent that runs on each node. The kubelet takes a set of PodSpecs and ensures that the described containers are running and healthy.
  2. Container runtime – The container runtime (typically the Docker engine) that resides on each node and actually runs the containers.
  3. kube-proxy – Can do simple TCP/UDP stream forwarding or round-robin TCP/UDP forwarding across a set of back-ends.

Cluster control plane (AKA master)

  1. kube-apiserver – REST API that validates and configures data for API objects such as pods, services, replication controllers.
  2. Cluster state store – All persistent cluster state is stored in an instance of etcd. This provides a way to store configuration data reliably.
  3. kube-controller-manager – Daemon that embeds the core control loops shipped with Kubernetes.
  4. kube-scheduler – Scheduler that manages availability, performance, and capacity.
  5. Federation – Lets multiple Kubernetes clusters be managed together; while a single Kubernetes cluster may span multiple availability zones, federation goes beyond a single cluster.
  6. federation-apiserver – API server for federated clusters.
  7. federation-controller-manager – Daemon that embeds the core control loops shipped with Kubernetes federation

Kubernetes add-ons

  1. DNS
  2. Ingress controller
  3. Heapster (resource monitoring)
  4. Dashboard (GUI)

Let's understand the Ruby programming world in 5 minutes!

Ruby
Ruby is a dynamic, interpreted, reflective, object-oriented, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro “Matz” Matsumoto in Japan. The latest stable release at the time of writing is Ruby 2.5.

Gem – A Ruby Package
A Gem is a Ruby application package which can contain anything from a collection of code to libraries, along with the list of dependencies that the packaged code needs to run.

Gems are formed of a structure similar to the following:

/[package_name] # The main root directory of the Gem package.
|__ /bin # Location of the executable binaries if the package has any.
|__ /lib # Directory containing the main Ruby application code (inc. modules).
|__ /test # Location of test files.
|__ README # The package readme and documentation.
|__ Rakefile # The Rake-file for libraries which use Rake for builds.
|__ [name].gemspec # *.gemspec file, which has the name of the main directory, contains all package meta-data, e.g. name, version, directories etc.
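
A minimal [name].gemspec might look like the following sketch (the name, version, and author are placeholders):

# package_name.gemspec
Gem::Specification.new do |spec|
  spec.name    = "package_name"   # Conventionally matches the root directory name.
  spec.version = "0.1.0"
  spec.summary = "A one-line summary of the gem."
  spec.authors = ["Jane Doe"]
  spec.license = "MIT"
  spec.files   = Dir["lib/**/*.rb"]   # The Ruby code shipped with the gem.
end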

One of the tools we will be using for creating Gems is Bundler.

Bundler
Bundler is a gem to bundle gems. Bundler makes sure Ruby applications run the same code on every machine.

It does this by managing the gems that the application depends on. Given a list of gems, it can automatically download and install those gems, as well as any other gems needed by the gems that are listed. Before installing gems, it checks the versions of every gem to make sure that they are compatible, and can all be loaded at the same time. After the gems have been installed, Bundler can help you update some or all of them when new versions become available. Finally, it records the exact versions that have been installed, so that others can install the exact same gems.
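
In practice, the list of gems lives in a file called Gemfile at the root of the application; a minimal sketch (the gem choices are illustrative):

# Gemfile
source "https://rubygems.org"

gem "rake"
gem "rspec", "~> 3.0"   # Constrain to a compatible version range.

Running bundle install resolves and installs these gems, and the generated Gemfile.lock records the exact versions so that every machine installs the same set.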

RubyGems – A Package Manager
RubyGems is the default package manager for Ruby. It helps with the whole application package lifecycle, from downloading to distributing Ruby applications and the relevant binaries or libraries. RubyGems is a powerful package management tool which provides developers a standardised structure for packaging applications in archives called Ruby Gems. The website is https://rubygems.org/

Rake
Rake is a Make-like program implemented in Ruby. Tasks and dependencies are specified in standard Ruby syntax. You must write a “Rakefile” which contains the build rules.
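
A minimal Rakefile sketch (the task name and test path are made up):

# Rakefile
desc "Run the test suite"
task :test do
  sh "ruby test/run_tests.rb"   # Shell out to run the tests.
end

task :default => :test   # Running plain `rake` runs the :test task.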

Rails
Rails is a web application development framework written in the Ruby programming language. It is designed to make programming web applications easier by making assumptions about what every developer needs to get started. It allows you to write less code while accomplishing more than many other languages and frameworks. Experienced Rails developers also report that it makes web application development more fun.

Bower
Web sites are made of lots of things — frameworks, libraries, assets, and utilities. Bower manages all these things for you. Bower can manage components that contain HTML, CSS, JavaScript, fonts or even image files. Bower doesn’t concatenate or minify code or do anything else – it just installs the right versions of the packages you need and their dependencies.

Yarn
Yarn caches every package it has downloaded, so it never needs to download the same package again. It also does almost everything concurrently to maximize resource utilization. This means even faster installs.

If I have missed any tools that are important to the Ruby ecosystem, please mention them in the comments section.


Ecosystem of Chef and its associated tools explained

Chef Apply
chef-apply is an executable program that runs a single recipe from the command line. It is part of the Chef Development Kit, and a great way to explore resources.
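
For example, you could put a single resource in a file (the file and package names are illustrative) and run it directly:

# hello.rb: a one-resource recipe
package 'tree' do
  action :install
end

$ chef-apply hello.rb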

Chef
The chef executable is a command-line tool which generates applications, cookbooks, recipes, attributes, files, templates, and custom resources (LWRPs); ensures that RubyGems are downloaded properly for the chef-client development environment; and verifies that all components are installed and configured correctly.

Knife
knife is a command-line tool that provides an interface between a local chef-repo and the Chef server. knife helps users to manage:

  • Nodes
  • Cookbooks and recipes
  • Roles, environments, and data bags
  • Resources within various cloud environments
  • The installation of the chef-client onto nodes
  • Searching of indexed data on the Chef server

Chef Client
The Chef client works with the Chef server to bring nodes to their desired states with policies you provide as recipes. The chef-client executable can be run as a daemon. A chef-client is an agent that runs locally on every node that is under management by Chef. When a chef-client is run, it will perform all of the steps that are required to bring the node into the expected state, including:

  • Registering and authenticating the node with the Chef server
  • Building the node object
  • Synchronizing cookbooks
  • Compiling the resource collection by loading each of the required cookbooks, including recipes, attributes, and all other dependencies
  • Taking the appropriate and required actions to configure the node
  • Looking for exceptions and notifications, handling each as required

Chef Development Kit
The Chef Development Kit contains everything you need to develop and test your infrastructure, built by the awesome Chef community. The Chef Development Kit has the following components installed:

  • fauxhai
  • kitchen-vagrant
  • openssl
  • delivery-cli
  • test-kitchen
  • git
  • berkshelf
  • chefspec
  • knife-spork
  • inspec
  • tk-policyfile-provisioner
  • opscode-pushy-client
  • chef-dk
  • chef-sugar
  • chef-client
  • generated-cookbooks-pass-chefspec
  • chef-provisioning
  • package installation

Chef Server
The Chef server makes it easy to automate your infrastructure, manage scale and complexity, and safeguard your systems.

The Chef server has the following services, which should be running:

  • bookshelf
  • nginx
  • oc_bifrost
  • oc_id
  • opscode-erchef
  • opscode-expander
  • opscode-solr4
  • postgresql
  • rabbitmq
  • redis_lb

InSpec
InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security and policy requirements.

Push Jobs Client
The Push Jobs client communicates with the Push Jobs server, which extends the Chef Server to allow you to execute commands across hundreds or even thousands of nodes in your Chef-managed infrastructure.

Push Jobs Server
The Push Jobs server add-on, along with its associated client, extends the Chef Server to allow you to execute commands across hundreds or even thousands of nodes in your Chef-managed infrastructure.

Supermarket
Supermarket is an artifact repository that makes it easy to browse, use, and share communal cookbooks and tools within your organization.

Chef Automate
One platform with a unified workflow, end-to-end visibility, and automated compliance over your entire Chef ecosystem.

Chef Compliance
Assess and monitor infrastructure compliance and use InSpec compliance profiles to validate that production servers are properly configured.

Chef Backend
Chef High Availability makes it easy to build high-availability Chef clusters on any infrastructure.

Chef Manage
Chef Manage is an Enterprise Chef add-on that enables a web-based user interface for visualizing and managing nodes, data bags, roles, environments, cookbooks and role-based access control (RBAC).

Kitchen or Test Kitchen
kitchen is the command-line tool for Kitchen, an integration testing tool used by the chef-client. Kitchen runs tests against any combination of platforms using any combination of test suites. Each test, however, is done against a specific instance, which is comprised of a single platform and a single set of testing criteria.

“Test Kitchen is an integration tool for developing and testing infrastructure code and software on isolated target platforms.” It creates test machines, converges them, and runs post-convergence tests against them to verify their state. Test Kitchen is written in Ruby. It has a plugin system for supporting machine creation through a variety of virtual machine technologies such as vagrant, EC2, docker, and several others. Test Kitchen makes it easy for Chef developers to test cookbooks on a variety of platforms. It uses busser to install post-convergence integration test tools such as Serverspec or BATS that actually perform the tests.

foodcritic
Foodcritic is a helpful lint tool you can use to check your Chef cookbooks for common problems.
http://www.foodcritic.io/

ChefSpec
ChefSpec is a framework that tests resources and recipes as part of a simulated chef-client run. ChefSpec tests execute very quickly. When used as part of the cookbook authoring workflow, ChefSpec tests are often the first indicator of problems that may exist within a cookbook.
ChefSpec is packaged as part of the Chef Development Kit. To run ChefSpec:
$ chef exec rspec
https://docs.chef.io/chefspec.html
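
A minimal ChefSpec example might look like this sketch (the cookbook and package names are hypothetical):

# spec/unit/recipes/default_spec.rb
require 'chefspec'

describe 'my_cookbook::default' do
  # Converge the recipe in memory; no real node is changed.
  let(:chef_run) { ChefSpec::SoloRunner.new.converge(described_recipe) }

  it 'installs the tree package' do
    expect(chef_run).to install_package('tree')
  end
end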

RuboCop
RuboCop is a Ruby command-line tool that performs lint and style checks based on the community-driven Ruby Style Guide. It performs static analysis of any Ruby code, which includes Chef recipes, resources, library helpers, and so forth. RuboCop can be configured via .rubocop.yml to exclude certain rules, and it can be run with "--lint" to perform only lint checking, excluding all style checks. RuboCop is used in the Chef community in cookbooks to make contributions more consistent and easier to manage.

Serverspec
Serverspec is an “outside-in” integration test framework. It is platform and tool agnostic, and is used by other configuration management systems to verify systems are configured as desired. It checks the actual state of the target node by executing commands locally, via SSH, via WinRM, or other remote transports. Serverspec is implemented in RSpec, and uses RSpec test syntax.
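
A small Serverspec example, assuming an httpd package and service on the target node:

require 'serverspec'
set :backend, :exec   # Run the checks on the local machine.

describe package('httpd') do
  it { should be_installed }
end

describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end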


Tools for Counting Lines of Code in Source Code

USC CodeCount and USC COCOMO – $0

CodeCount automates the collection of source code sizing information. The CodeCount toolset utilizes one of two possible source lines of code (SLOC) definitions, physical or logical. COCOMO (COnstructive COst MOdel), is a tool which allows one to estimate the cost, effort, and schedule associated with a prospective software development project.

Languages: Ada, Assembly, C, C++, COBOL, FORTRAN, Java, JOVIAL, Pascal, PL1

SLOCCount – $0

SLOCCount is a set of tools for counting physical Source Lines of Code (SLOC) in a large number of languages of a potentially large set of programs. SLOCCount can automatically identify and measure many programming languages.

Languages: Ada, Assembly, awk, Bourne shell and variants, C, C++, C shell, COBOL, C#, Expect, Fortran, Haskell, Java, lex/flex, LISP/Scheme, Makefile, Modula-3, Objective-C, Pascal, Perl, PHP, Python, Ruby, sed, SQL, TCL, and Yacc/Bison.

SourceMonitor – $0

SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. Collects metrics in a fast, single pass through source files. Displays and prints metrics in tables and charts.

Languages: C++, C, C#, Java, Delphi, Visual Basic (VB6) or HTML

LOCC – $0

LOCC is an extensible system for producing hierarchical, incremental measurements of work product size that are useful for estimation, planning, and other software engineering activities. LOCC supports size measurement of grammar-based languages through integrated support for JavaCC. LOCC produces size data corresponding to the number of packages, the number of classes in each package, the number of methods in each class, and the number of lines of code in each method.

Languages: C++, Java

Code Counter Pro – $25

Code Counter Pro is perfect for those reports you need to send to your boss: count up all your programming lines (SLOC, KLOC) automatically, find out your team’s productivity, use it as a handy help for measuring Function Points through backfiring, measure comment percentages, and more.

Languages: ASM, COBOL, C, C++, C#, Fortran, Java, JSP, PHP, HTML, Delphi, Pascal, VB, XML

SLOC Metrics – $99

SLOC Metrics measures the size of your source code based on the Physical Source Lines of Code metric recommended by the Software Engineering Institute at Carnegie Mellon University (CMU/SEI-92-TR-019). Specifically, the source lines that are included in the count are the lines that contain executable statements, declarations, and/or compiler directives. Comments and blank lines are excluded from the count. When a line or statement contains more than one type, it is classified as the type with the highest precedence. The order of precedence for the types is: executable, declaration, compiler directive, comment, and lastly, white space.

Languages: ASP, C, C++, C#, Java, HTML, Perl, Visual Basic

Resource Standard Metrics – $200

Resource Standard Metrics, or RSM, is a source code metrics and quality analysis tool unlike any other on the market. The unique ability of RSM to support virtually any operating system provides your enterprise with the ability to standardize the measurement of source code quality and metrics throughout your organization. RSM provides the fastest, most flexible and easy-to-use tool to assist in the measurement of code quality and metrics.

Languages: C, C++, C#, Java

EZ-Metrix – $495

EZ-Metrix supports software development estimates, productivity measurement, schedule forecasting and quality analysis. With an easy Internet-based interface, multiple language support and flexible licensing features, you will be up and running in minutes with EZ-Metrix. Measure source code size from virtually all text-based languages and from any platform or operating system with the same utility. Size data may be stored in EZ-Metrix’s internal database or may be exported for further analysis.

Languages: Ada, ALGOL, antlr, asp, Assembly, awk, bash, BASIC, bison, C, C#, C++, ColdFusion, Delphi, Forth, FORTRAN, Haskell, HTML, Java, Javascript, JOVIAL, jsp, lex, lisp, Makefile, MUMPS, Pascal, Perl, PHP, PL/SQL, PL1, PowerBuilder, ps, Python, Ruby, sdl, sed, SGML, shell, SQL, Visual Basic, XML, Yacc

McCabe IQ – price unknown

McCabe IQ enables you to deliver better, more reliable software to your end-users, and is known worldwide as the gold standard for the analysis, comprehension, testing, and reengineering of new software and legacy systems. McCabe IQ uses advanced software metrics to identify, objectively measure, and report on the complexity and quality of your code at the application and enterprise level.

Languages: Ada, ASM86, C, C#, C++.NET, C++, COBOL, FORTRAN, Java, JSP, Perl, PL1, VB, VB.NET


General SCM Interview Questions

The previous chapters outlined the state of CM technology from the standpoint of a spectrum of concepts underlying automated CM, and from the standpoint of the reflection of some of these concepts in commercial CM products. Clearly, no CM product supports all CM concepts; similarly, not all CM concepts are necessary in the support of all possible end-user requirements. That is, different CM tools (and the concepts which underlie these tools) may be required by different organizations or projects, or within projects at different phases of the software development life cycle. This observation, coupled with the observed, continuing industry effort to adopt computer-aided software engineering (CASE) tools, leads us to conclude that integration is key to providing automated CM support in software development environments.
In this chapter we define what we mean by integration by way of a three-level model of integration. We illustrate where CM integration fits into this three-level model. We then describe the advantages and disadvantages of current approaches to achieving integration in software development environments. We close with a brief discussion on the relationship between future integration technology and the three levels of integration.

CM Services in Software Environments: A Question of Integration

There is no consensus regarding where CM services should reside in software environment architectures, despite the diversity of approaches that have been explored. For example, CM services have been offered via:

  • Tools such as RCS, SCCS, CCC.
  • Operating system extensions at the file-system level such as DSEE and NSE.
  • Shared data models such as in the CIS specifications [18] and the PCTE PACT [53] environment.

A further complication is the emergence of a robust CASE tool industry, wherein many popular CASE tools provide their own tool-specific repository and CM services. As a result, CM functions are increasingly provided by, and distributed across, several CASE tools in an environment.
We have found it useful to think of integration in terms of a three-level model. This model, illustrated in Figure 5-1, corresponds to the ANSI/SPARC [48] three-schema approach used to describe database architectures. A useful intuition is that this correspondence is more than accidental. The bottom level of integration, called “mechanism” integration, corresponds to the ANSI/SPARC physical schema level. Mechanism integration addresses the implementation aspects of software integration, including, but not limited to: software interfaces provided by the environment infrastructure, e.g., operating system or environment framework interfaces;

software interfaces provided by individual tools in the environment; and architectural aspects of the tools, such as process structure (e.g., client/server) and data management structure (derivers, data dictionary, database). In the case of CM, mechanism integration can refer to integration with CM systems such as SCCS, RCS, CCC and DSEE; and CM implementation aspects such as transparent repositories and other operating-systems level CM services.

The middle level of integration, called “services” integration, corresponds to the ANSI/SPARC logical schema level. Services refers to the high-level functions provided by tools, and integration at this level can be regarded as the specification of how services can be related in a coherent fashion. In the case of CM, these services refer to elements of the spectrum of concepts discussed in chapter 3, e.g., workspaces and transactions, and services integration constitutes a kind of unified model of CM services.

The top level of integration, called “process” integration, corresponds to the ANSI/SPARC external schema (also called “end-user”) level. Process integration can be regarded as a kind of process specification for how software will be developed; this specification can define a view of the process from many perspectives, spanning individual roles through larger organizational perspectives. In the case of CM, process integration refers to policies and procedures for carrying out CM activities.

Integration occurs within each of these levels; thus, mechanisms are integrated with mechanisms, services with services, and process elements with process elements. There are also relationships that span the levels. The relationship between the mechanism level and the services level is an implementation relationship: a CM concept in the services layer may be implemented by different tools in the mechanism level, and conversely, a single mechanism may implement more than one CM concept. The relationship between the services level and the process level is a process adaptation relationship: different CM services may be combined, and tuned, to support different process requirements.

[Figure 5-1: The three-level model of integration]

This three-level model provides a working context for understanding integration. For the moment, however, existing integration technology does not match exactly this somewhat idealized model of integration. For example, many services provided by CASE tools (including CM) embed process constraints that should logically be separate, i.e., reside in the process level. Similarly, tool services are often closely coupled to particular implementation techniques.

The level of adaptability required of integrating CM (both in terms of adaptability to project-specific requirements and adaptability to multiple underlying CM implementations) pushes the limits of available environment integration techniques. The following sections describe the current state of integration technology and its limitations. The next chapter discusses how future generation integration technology can address these shortcomings.

Reference: A. Brown, S. Dart, P. Feiler, K. Wallnau, The State of Automated Configuration Management.


Do I need to install the Developer tools in order to use Iceberg?

msiexpert created the topic: Do I need to install the Developer tools in order to use Iceberg?
Do I need to install the Developer tools in order to use Iceberg?

applicationPackaging replied the topic: Re: Do I need to install the Developer tools in order to use Iceberg?
With the introduction of Iceberg 1.2, it is no longer required to install the Developer tools. If you prefer to use the SplitForks tool instead of the goldin tool, you still need to install the Developer tools.


Application Packaging-Joining within 15days(Job Scheduling tools)

created the topic: Application Packaging-Joining within 15days(Job Scheduling tools)
FROM: dwilma@magna.in

This is Wilma from Magna InfoTech.
Greetings from Magna InfoTech!!!!!
We have a very exciting opportunity for your career growth. Please go through the following details regarding the opportunities Magna has for you.

About Magna Infotech:
MAGNA INFOTECH is a premier provider of IT services. We at Magna InfoTech provide IT services to clients worldwide, incorporated in Danbury, CT, USA since 1995. We are a 2000+ employee organization with skilled IT professionals. For more information, log on to www.magna.in

Kindly check the current opening:

Primary Skill : Application Packaging
Experience : 4+ yrs
Location : Bangalore
Education : Any

Skill : Application Packaging, Wise, VB Scripting

Only candidates who are willing to join within 15 days need apply.
If you possess the required skills and are confident working with our CMM Level 5 clientele across the globe, please send your updated profile in Word format with your contact details to dwilma@magna.in, highlighting your experience in the fields mentioned below.

Current CTC:
Expected CTC:
Notice period:
Alternate Contact NO:
Job type: Permanent/Contractual:
The opportunity also extends to your friends and colleagues who have the skill and potential to win as an excelling software engineer.
We look forward to hearing from you at the earliest.

Regards
Wilma


Excelling Opportunity in Open Tools Admin

created the topic: Excelling Opportunity in Open Tools Admin
FROM: skeerthi@magna.in

You have navigated to the job posting space of India’s leading, ISO-certified, 2000-plus IT contractual staffing company – Magna InfoTech! I bet you are aware that Magna has been recognized by key IT majors for revolutionizing the IT staffing business in the country.

As you scout for an interesting opening through the World Wide Web, we welcome you and would guide you through the appropriate, challenging and exciting career opportunity that matches your skill sets and exposure.

You may like to check the current opening that has emerged at our end, thanks to the rapid growth recorded.

Skill sets : CA Harvest, RequisitePro, HP Quality Center or HP Performance Center, Solaris, Linux, Windows, Networking
Role / Designation : Senior Analyst
Years of Experience : 4 Yrs – 10 Yrs
Job Location : Bangalore

Interested? Want to give it a try? If yes, then please mail your updated profile to skeerthi@magna.in

For more details on the Company, log in to www.magna.in

Good luck!

Regards,
Keerthi Shetty
______________________
Magna Infotech Pvt Ltd.
#13 and 14 8th cross|2nd Main
Indiranagar 1st Phase
Bangalore 38
Mailto: skeerthi@magna.in
www.magna.in
