Top 4 open source alternatives to Slack for team collaboration and chat

Of course, Slack is the market leader in team collaboration and chat, but here is a list of 4 open source alternatives for team collaboration and chat.
Slack 
Slack offers a lot of IRC-like features: persistent chat rooms (channels) organized by topic, as well as private groups and direct messaging (again, historically based on IRC).[14] All content inside Slack is searchable, including files, conversations, and people. Slack integrates with a large number of third-party services and supports community-built integrations. Major integrations include services such as Google Drive, Trello, Dropbox, Box, Heroku, Crashlytics, GitHub, Runscope and Zendesk. In December 2015, Slack announced their app directory, consisting of over 150 integrations that users can install.[21] Users can add emoji buttons to their messages, which other users can then click on to express their reactions to messages.
More info – https://slack.com/
IRC
Internet Relay Chat, or IRC, is a protocol which dates back to the late 1980s. Since it’s been around so long, there are numerous open source implementations on both the client and the server side.
Coming with its age, however, are numerous drawbacks. It lacks many features one might expect in a modern chat client, from security to identity management to even just being able to easily transmit non-text components, like images, files, or emoticons (the latter might be seen as a plus to some, however). Some features have been implemented after-the-fact through bot services, including nickname management, logging, and other features, but these vary from server to server.
IRC does still have some things going for it, though. It’s nearly universal, and clients are available for basically every platform out there. Though the command-driven interface isn’t necessarily intuitive for beginners, many clients re-implement commands through a GUI. And if you’re doing upstream open source development, there’s a good chance you’re already hanging out in IRC anyway, so adding a team server might be a path of least resistance.
Let’s Chat
Let’s Chat is a persistent messaging application that runs on Node.js and MongoDB. It’s designed to be easily deployable and fits well with small, intimate teams.
It’s free (MIT licensed) and ships with killer features such as LDAP/Kerberos authentication, a REST-like API and XMPP support.
Let’s Chat is a side-project of the development team at Security Compass. (A real life 10% time project!)
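Deployment follows the usual Node.js pattern; the sketch below assumes Node.js and MongoDB are already installed and running, and uses a placeholder for the repository URL:
$ git clone <lets-chat-repo-url> && cd lets-chat
$ npm install
$ npm start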
More –
Mattermost
As an alternative to proprietary SaaS messaging, Mattermost brings all your team communication into one place, making it searchable and accessible anywhere. It’s written in Golang and React and runs as a production-ready Linux binary under an MIT license with either MySQL or Postgres.
Rocket.Chat
Rocket.Chat describes itself as an incredible product backed by an incredible developer community.
Over 200 contributors have made the platform a dynamic and innovative toolkit, from group messages and video calls to helpdesk killer features.
Those contributors are the reason it is one of the best cross-platform open source chat solutions available today.

How to Set or Configure Proxy in Linux and Windows System? – scmGalaxy

Setting the proxy configuration in Linux and Windows
If you use a proxy server or firewall, you may need to set the http_proxy environment variable in order to access some URLs from the command line.
Windows Command line
set http_proxy=http://your_proxy:your_port
set http_proxy=http://username:password@your_proxy:your_port
set https_proxy=https://your_proxy:your_port
set https_proxy=https://username:password@your_proxy:your_port
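To confirm the variable is set in the current session, a quick check is to echo it back (only programs that honor http_proxy will use it):
echo %http_proxy%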
Windows GUI
1. Open the Control Panel and click the System icon. The System Properties dialog is displayed.
2. On the Advanced tab, click Environment Variables. The Environment Variables dialog is displayed.
3. Click New in the System variables panel. The New System Variable dialog is displayed.
4. Add http_proxy with the appropriate proxy information.
Windows Registry
Internet Explorer can store proxies with a username and password, so setting the proxy there and then importing it may work:
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d name:port
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyUser /t REG_SZ /d username
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyPass /t REG_SZ /d password
netsh winhttp import proxy source=ie
Command to enable proxy usage:
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
Command to disable proxy usage:
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 0 /f
Command to change the proxy address:
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d proxyserveraddress:proxyport /f
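To verify what is currently stored, you can query the same registry key (a read-only check using standard reg query syntax):
reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer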
Linux 
export http_proxy=http://your_proxy:your_port
export http_proxy=http://username:password@your_proxy:your_port
export https_proxy=https://your_proxy:your_port
export https_proxy=https://username:password@your_proxy:your_port
FAQ
1. How do I escape a password that contains an @ character?
Ans – Use %40 instead of @.
2. Which file stores the proxy settings in Ubuntu?
Ans – /etc/environment
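For example, a proxy entry in /etc/environment might look like the sketch below, assuming a proxy at 10.1.3.1:8080 (replace host and port with your own):
http_proxy="http://10.1.3.1:8080/"
https_proxy="http://10.1.3.1:8080/"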
3. How do I set proxy information in APT?
Ans – Adding the following line to /etc/apt/apt.conf solves the problem:
Acquire::http::proxy "http://10.1.3.1:8080/";
If the file does not exist, create it. Do not confuse it with the apt.conf.d directory.
4. How do I set proxy information in the Linux profile?
Ans – Add the export lines shown above to ~/.bashrc or /etc/profile so they are set at every login.
5. Why does a manual export fail to affect apt-get with the proxy info?
Ans – The reason a manual export fails to affect apt-get is that sudo ignores that environment variable by default (i.e., it doesn’t pass it on to the command). For one-off runs, you can do sudo env http_proxy=http://10.1.3.1:8080 apt-get update. Otherwise, you can configure sudo to allow http_proxy to fall through.
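One way to let http_proxy fall through is an env_keep entry in the sudoers file; a minimal sketch, assuming you edit the file with visudo:
Defaults env_keep += "http_proxy https_proxy"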

Java Installation Process in Linux – Complete guide


Download, Install and Configure JDK 8 & JRE 8

Platform – Debian & Ubuntu

#JRE8 - Package contains just the Java Runtime Environment 8
$ sudo apt-get install openjdk-8-jre

#JDK8 - Package contains the Java Development Kit 8
$ sudo apt-get install openjdk-8-jdk

Platform – Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.

#JRE8 - Package contains just the Java Runtime Environment 8
$ su -c "yum install java-1.8.0-openjdk"

#JDK8 - Package contains the Java Development Kit 8
$ su -c "yum install java-1.8.0-openjdk-devel"

$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"

$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm

$ curl -v -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm > jdk-8u131-linux-x64.rpm

Platform – All platforms (Linux, Windows, and Mac) in tarball format

$ wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz

$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz"

$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz

How to set Java environment variables in a Linux system?

$ export JAVA_HOME=/opt/jdk1.8.0_144/
$ export PATH=/opt/jdk1.8.0_144/bin:$PATH
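To make these settings survive a reboot, one option is to append them to your shell profile and then verify the active version; a sketch assuming the JDK was unpacked to /opt/jdk1.8.0_144 as above:
$ echo 'export JAVA_HOME=/opt/jdk1.8.0_144/' >> ~/.bashrc
$ echo 'export PATH=/opt/jdk1.8.0_144/bin:$PATH' >> ~/.bashrc
$ source ~/.bashrc && java -version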

Download, Install and Configure JDK 7 & JRE 7

Platform – Debian & Ubuntu

#JRE7 - Package contains just the Java Runtime Environment 7
$ sudo apt-get install openjdk-7-jre

#JDK7 - Package contains the Java Development Kit 7
$ sudo apt-get install openjdk-7-jdk

Platform – Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.

$ su -c "yum install java-1.7.0-openjdk"

$ su -c "yum install java-1.7.0-openjdk-devel"

Platform – All platforms (Linux, Windows, and Mac) in tarball format

$ wget --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" "http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz"

$ wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz

$ curl -v -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.rpm > jdk-7u79-linux-x64.rpm

JDK 6
Debian, Ubuntu, etc.
On the command line, type:
$ sudo apt-get install openjdk-6-jre
The openjdk-6-jre package contains just the Java Runtime Environment.
$ sudo apt-get install openjdk-6-jdk
If you want to develop Java programs then install the openjdk-6-jdk package.
Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.
On the command line, type:
$ su -c "yum install java-1.6.0-openjdk"
The java-1.6.0-openjdk package contains just the Java Runtime Environment.
$ su -c "yum install java-1.6.0-openjdk-devel"
If you want to develop Java programs then install the java-1.6.0-openjdk-devel package.



Configure the Knife Command – Chef


We now have to configure the knife command. This command is the central way of communicating with our server and the nodes that we will be configuring. We need to tell it how to authenticate and then generate a user to access the Chef server.

Luckily, we’ve been laying the groundwork for this step by acquiring the appropriate credential files. We can start the configuration by typing:

knife configure --initial 

This will ask you a series of questions. We will go through them one by one:

WARNING: No knife configuration file found
Where should I put the config file? [/home/your_user/.chef/knife.rb]

The values in the brackets ([]) are the default values that knife will use if we do not select a value.

We want to place our knife configuration file in the hidden directory we have been using:

/home/your_user/chef-repo/.chef/knife.rb

In the next question, type in the domain name or IP address you use to access the Chef server. This should begin with https:// and end with :443:

https://server_domain_or_IP:443

You will be asked for a name for the new user you will be creating. Choose something descriptive:

Please enter a name for the new user: [root] station1

It will then ask you for the admin name. This you can just press enter on to accept the default value (we didn’t change the admin name).

It will then ask you for the location of the existing administrator's key. This should be:

/home/your_user/chef-repo/.chef/admin.pem

It will ask a similar set of questions about the validator. We haven’t changed the validator’s name either, so we can keep that as chef-validator. Press enter to accept this value.

It will then ask you for the location of the validation key. It should be something like this:

/home/your_user/chef-repo/.chef/chef-validator.pem

Next, it will ask for the path to the repository. This is the chef-repo folder we have been operating in:

/home/your_user/chef-repo

Finally, it will ask you to select a password for your new user. Select anything you would like.

This should complete our knife configuration. If we look in our chef-repo/.chef directory, we should see a knife configuration file and the credentials of our new user:

ls ~/chef-repo/.chef 
admin.pem  chef-validator.pem  knife.rb  station1.pem
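To confirm that knife can authenticate with the new credentials, any read-only listing command will do, for example:
knife client list
knife user list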

Docker Command line Reference | Docker Tutorial | Docker Guide

One liner to stop all Docker containers:
> docker stop $(docker ps -a -q)
One liner to remove all Docker containers:
> docker rm $(docker ps -a -q)
One liner to remove all Docker images:
> docker rmi $(docker images -q)
In case of the error message Get http:///var/run/docker.sock/v1.14/containers/json?all=1: dial unix /var/run/docker.sock: permission denied, run the commands with sudo:
> sudo docker rm $(sudo docker ps -a -q)
To stop only exited containers and delete only non-tagged (dangling) images:
> docker ps --filter 'status=exited' -a -q | xargs docker stop
> docker images --filter "dangling=true" -q | xargs docker rmi
Remove all containers that aren’t currently running:
> docker rm $(docker ps -a -q -f status=exited)
Note that without the -v flag the associated volumes will not be deleted (e.g., if you are using a mysql docker image) and will be left orphaned. To remove containers together with their volumes:
> sudo docker rm -f -v $(sudo docker ps -a -q)
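On newer Docker releases (1.13 and later), much of this cleanup is wrapped in a single command; a convenient alternative, though note that it removes unused data aggressively:
> docker system prune
> docker system prune -a --volumes
The first form removes stopped containers, dangling images, and unused networks; the -a and --volumes flags additionally remove unused images and volumes.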

Docker Training | Docker Course | Agenda | scmGalaxy


Need to learn Docker? This is the training for you! This training provides a soup-to-nuts learning experience for core Docker technologies, including the Docker Engine, Images, Containers, Registries, Networking, Storage, and more. All of the behind-the-scenes theory is explained, and all concepts are clearly demonstrated on the command line. No prior knowledge of Docker or Linux is required.

Introduction

o Training Introduction
o What We’ll Learn
o Prerequisites
Introducing Containers
o The Rise of the Virtual Machine
o The Ugly Virtual Machine
o What Are Containers?
o Containers Under the Hood
o Docker
o The Future of Docker and Containers
Installing Ubuntu Linux and CentOS Linux
o Module Intro
o Downloading Ubuntu
o Creating a VM to Install Ubuntu
o Installing Ubuntu
o Downloading CentOS
o Creating a VM to Install CentOS
o Installing CentOS
Installing and Updating Docker
o Module Intro
o Docker on Ubuntu
o Installing Docker on CentOS
o Updating Docker
o Granting Docker Control to Non-root Users
o Configuring Docker to Communicate Over the Network
o Playing Around with Our First Docker Container
o Wrap-up
Major Docker Components
o Module Intro
o The High Level Picture
o The Docker Engine
o Docker Images
o Docker Containers
o Docker Hub
o Wrap-up
A Closer Look at Images and Containers
o Module Intro
o Image Layers
o Union Mounts
o Where Images Are Stored
o Copying Images to Other Hosts
o The Top Writeable Layer of Containers
o One Process per Container
o Commands for Working with Containers
o Wrap-up
Container Management
o Module Intro
o Starting and Stopping Containers
o PID 1 and Containers
o Deleting Containers
o Looking Inside of Containers
o Low-level Container Info
o Getting a Shell in a Container
o Wrap-up
Building from a Dockerfile
o Module Intro
o Introducing the Dockerfile
o Creating a Dockerfile
o Building an Image from a Dockerfile
o Inspecting a Dockerfile from Docker Hub
o Wrap-up
Working with Registries
o Module Intro
o Creating a Public Repo on Docker Hub
o Using Our Public Repo on Docker Hub
o Introduction to Private Registries
o Building a Private Registry
o Using a Private Registry
o Docker Hub Enterprise
o Wrap-up
Diving Deeper with Dockerfile
o Module Intro
o The Build Cache
o Dockerfile and Layers
o Dockerfile
o Launching the Web Server Container
o Reducing the Number of Layers in an Image
o The CMD Instruction
o The ENTRYPOINT Instruction
o The ENV Instruction
o Volumes and the VOLUME Instruction
o Module Recap
Docker Networking
o Module Intro
o The docker0 Bridge
o Virtual Ethernet Interfaces
o Network Configuration Files
o Exposing Ports
o Viewing Exposed Ports
o Linking Containers
Troubleshooting
o Module Intro
o Docker Daemon Logging
o Container Logging
o Planning Image Builds
o Intermediate Images
o The docker0 Bridge
o IPTables
o Wrap-up
Lightning Fast Recap
o Module Intro
o Recapping Some of What We’ve Learned

How to Setup Puppet Learning VM – Complete Process/Guide

Download the VM (Zip file here)


Minimum requirements

  • Internet-enabled Windows, OS X, or Linux computer with 10GB free space and a VT-x/AMD-V enabled processor.
  • Up to date virtualization software. See the setup instructions below for details.

Setting up the Learning VM

  1. Before beginning, you may want to use the MD5 sum provided at the VM download page to verify your download. On Mac OS X and *nix systems, you can use the command md5 learning_puppet_vm.zip and compare the output to the text contents of the learning_puppet_vm.zip.md5 file provided on the download page. On Windows systems, you will need to download and use a tool such as the Microsoft File Checksum Integrity Verifier.

  2. Get an up-to-date version of your virtualization software. We suggest using either VirtualBox or a VMware application appropriate for your platform. VirtualBox is free and available for Linux, OS X, and Windows. VMware has several desktop virtualization applications, including VMWare Fusion for Mac and VMware Workstation for Windows.

  3. The Learning VM’s Open Virtualization Archive format must be imported rather than opened directly. Launch your virtualization software and find an option for Import or Import Appliance. (This will usually be in a File menu. If you cannot locate an Import option, please refer to your virtualization software’s documentation.)

  4. Before starting the VM for the first time, you will need to adjust its settings. We recommend allocating 4GB of memory for the best performance. If you don’t have enough memory on your host machine, you may leave the allocation at 3GB or lower it to 2GB, though you may encounter stability and performance issues. Set the Network Adapter to Bridged. Use an Autodetect setting if available, or accept the default Network Adapter name. (If you started the VM before making these changes, you may need to restart the VM before the settings will be applied correctly.) If you are unable to use a bridged network, we suggest using the port-forwarding instructions provided in the troubleshooting guide.

  5. Start the VM. When it is started, make a note of the IP address and password displayed on the splash page. Rather than logging in directly, we highly recommend using SSH. On OS X, you can use the default Terminal application or a third-party application like iTerm. For Windows, we suggest the free SSH client PuTTY. Connect to the Learning VM with the login root and password you noted from the splash page. (e.g. ssh root@<IPADDRESS>) Be aware that it might take several minutes for the services in the PE stack to fully start after the VM boots. Once you’re connected to the VM, we suggest updating the clock with ntpdate pool.ntp.org.

  6. You can access this Quest Guide via a webserver running on the Learning VM itself. Open a web browser on your host and enter the Learning VM’s IP address in the address bar. (Be sure to use http://<ADDRESS> for the Quest Guide, as https://<ADDRESS> will take you to the PE console.)


Troubleshooting

For the most up-to-date version of this troubleshooting information, check the GitHub repository. If nothing here resolves your issue, feel free to email us at learningvm@puppetlabs.com and we’ll do our best to address your issue.

For issues with Puppet Enterprise that are not specific to the Learning VM, see the Puppet Enterprise Known Issues page.

The cowsay package won’t install

The Learning VM version 2.29 has an error in the instructions for this quest. The cowsay package declaration should include provider => 'gem', rather than ensure => 'gem'.

If you continue to get puppet run failures related to the gem, you can install the cached version manually: gem install /var/cache/rubygems/gems/cowsay-0.2.0.gem

I completed a task, but the quest tool doesn’t show it as complete

The quest tool uses a series of Serverspec tests for each quest to track task progress. Certain tasks simply check your bash history for an entered command. In some cases, the /root/.bash_history won’t be properly initialized, causing these tests to fail. Exiting the VM and logging in again will fix this issue.

It is also possible that we have written the test for a task in a way that is too restrictive and doesn’t correctly capture a valid syntactical variation in your Puppet code or another relevant file. You can check the specific matchers by looking at a quest’s spec file in the ~/.testing/spec/localhost/ directory. If you find an issue here, please let us know by sending an email to learningvm@puppetlabs.com.

Password Required for the Quest Guide

The Learning VM’s Quest Guide is accessible at http://<VM's IP Address>. Note that this is http and not https which is reserved for the PE console. The PE console will prompt you for a password, while no password is required for the Quest Guide. (The Quest Guide includes a password for the PE console in the Power of Puppet quest: admin/puppetlabs)

I can’t find the VM password

The password to log in to the VM is generated randomly and will be displayed on the splash page displayed on the terminal of your virtualization software when you start the VM.

If you are already logged in via your virtualization software’s terminal, you can use the following command to view the password: cat /var/local/password.

Does the Learning VM work on vSphere, ESXi, etc.?

Possibly, but we don’t currently have the resources to test or support the Learning VM on these platforms.

My puppet run fails and/or I cannot connect to the PE console

It may take some time after the VM is started before all the Puppet services are fully started. If you recently started or restarted the VM, please wait a few minutes and try to access the console or trigger your puppet run again.

Also, because the Learning VM’s puppet services are configured to run in an environment with restricted resources, they are more prone to crashes than a default installation with dedicated resources.

You can check the status of puppet services with the following command:

systemctl --all | grep pe- 

If you notice any stopped puppet-related services (e.g. pe-puppetdb), double check that you have sufficient memory allocated to the VM and available on your host before you try starting them (e.g. service pe-puppetdb start).

If you get an error along the lines of Error 400 on SERVER: Unknown function union... it is likely because the puppetlabs-stdlib module has not been installed. This module is a dependency for many modules, and provides a set of common functions. If you are running the Learning VM offline, you cannot rely on the Puppet Forge’s dependency resolution. We have this module and all other modules required for the Learning VM cached, with instructions to install them in the Power of Puppet quest. If that installation fails, you may try adding the --force flag after the --ignore-dependencies flag.
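For example, installing a cached module while skipping the Forge’s dependency resolution might look like this sketch (the tarball path is a placeholder for wherever the module is cached on your VM):
puppet module install /path/to/cached-module.tar.gz --ignore-dependencies --force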

I can’t import the OVA

First, ensure that you have an up-to-date version of your virtualization software installed. Note that the “check for updates” feature of VirtualBox may not always work as expected, so check the website for the most recent version.

The Learning VM has no IP address or the IP address will not respond.

If your network connection has changed since you loaded the VM, it’s possible that your IP address is different from that displayed on the Learning VM splash screen. Log in to the VM via the virtualization software directly (rather than SSH) and use the facter ipaddress command to check the current address.

Some network configurations may still prevent you from accessing the Learning VM. If this is the case, you can still access the Learning VM by configuring port forwarding.

Change your VM’s network adapter to NAT, and configure port forwarding as follows:

Name       Protocol   HostIP      HostPort   GuestIP   GuestPort
SSH        TCP        127.0.0.1   2222                 22
HTTP       TCP        127.0.0.1   8080                 80
HTTPS      TCP        127.0.0.1   8443                 443
GRAPHITE   TCP        127.0.0.1   8090                 90

Once you have set up port forwarding, you can use those ports to access the VM via ssh (ssh -p 2222 root@localhost) and access the Quest Guide and PE console by entering http://localhost:8080 and https://localhost:8443 in your browser address bar.

I can’t scroll up in my terminal

The Learning VM uses a tool called tmux to allow us to display the quest status. You can scroll in tmux by first hitting control-b, then [ (left bracket). You will then be able to use the arrow keys to scroll. Press q to exit scrolling.

Running the VM in VirtualBox, I encounter a series of “Rejecting I/O input from offline devices”

Reduce the VM’s processors to 1 and disable the “I/O APIC” option in the system section of the settings menu.

Still need help?

If your puppet runs still fail after trying the steps above, feel free to contact us at learningvm@puppetlabs.com or check the Puppet Enterprise Known Issues page.


MSBuild Tutorial Reference for Beginner | MSBuild Learning Resources | scmGalaxy


Walkthrough: Creating an MSBuild Project File from Scratch

How to: Write a Simple MSBuild Project

MSBuild Basics

Build Your Project File from Scratch using MSBuild


Extension used in DOTNET and MSBuild Projects

.proj
A popular convention for generic use. Commonly used by a main build script.
Examples:
build.proj
main.proj
company.product.build.proj
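Such a main build script is typically invoked directly with MSBuild; a hypothetical example, assuming build.proj defines a Build target and a Configuration property:
msbuild build.proj /t:Build /p:Configuration=Release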
.targets
.targets files are meant to be imported into other files using the Import element. Since these files are strictly re-usable, they don’t actually build anything; they typically omit the properties and item values needed to actually build.
Examples:
Microsoft.Common.targets
Microsoft.CSharp.targets
Microsoft.Data.Entity.targets
.*proj
Language-specific convention where * represents the language’s short acronym.
Well-known extensions:
.csproj    | C#
.vbproj    | VB.NET
.vcxproj   | Visual C++
.dbproj    | Database project
.fsproj    | F#
.pyproj    | IronPython
.rbproj    | IronRuby
.wixproj   | Windows Installer XML (WiX)
.vdproj    | Visual Studio Deployment Project
.isproj    | InstallShield
.pssproj   | PowerShell
.modelproj | Modeling project
.props
A project property sheet used by Visual C++ projects (.vcxproj).
Examples:
Microsoft.Cl.Common.props
Microsoft.Cpp.CoreWin.props
Microsoft.Cpp.props
Microsoft.Link.Common.props
.tasks
A common include file to be imported by a calling MSBuild project. Contains a list of <UsingTask> elements.
Examples:
Microsoft.Common.Tasks
MSBuild.ExtensionPack.tasks
.settings.targets
(This is a related convention if not strictly-speaking a file extension.)
A common include file to be imported by a calling MSBuild project. Contains “various properties related to shared utilities used during the build and deployment processes as well as any other common settings” (Sayed Ibrahim Hashimi, 2009).
Examples:
EntityFramework.settings.targets
Compiler.settings.targets
Library.Settings.targets

Top 25 TFS Interview Questions and Answers


TFS Interview Questions

1) What is Team Foundation Server? What does it cover – version control? Build processes? Bug tracking? Task management?

Team Foundation Server is defined in the documentation as:

Team Foundation is a collection of collaborative technologies that support a team effort to deliver a product. While the Team Foundation technologies are typically employed by a software team to build a software product, they can also be used on other types of projects.

As the question already notes, three of the core deliverables of Team Foundation Server are:

1. Build Process

2. List/Work item Tracking

3. Source Control

This leaves off probably the two most important features of Team Foundation Server. By integrating the build process, source control, policy, and work item tracking, you get deep insight into what teams are doing and some analytics for future trends, which leads to the 4th core deliverable of Team Foundation Server:

4. Reporting

Having insight into how a team is tracking is really only half the answer; there also needs to be a mechanism to share this information, which brings us to the last feature of Team Foundation Server:

5. Collaboration (Typically enabled through the Team Portal, Team Project and Process Guidance)

Interestingly, it is the two missing categories that set Team Foundation Server apart from other offerings.

2) List out the functionalities provided by team foundation server?

– Project Management

– Tracking work items

– Version Control

– Test case management

– Build Automation

– Reporting

– Virtual Lab Management

3) Explain TFS in respect to Git?

Ans – TFS’s traditional version control (TFVC) is centralized: there is a single server-side copy of the codebase that developers check in against. Git is a distributed version control system in which every developer clones a full copy of the repository, history included. Starting with TFS 2013, TFS can host Git repositories alongside TFVC, so teams can choose either model per project.

4) Explain how you can create a Git-TFS in Visual Studio 2013 express?

To create a Git-TFS in Visual Studio 2013 express

– Create an account with MS TFS service if you don’t have inhouse TFS server

– After that, you will be directed to the TFS page, where you will see two options for creating a project: one with a new team project and another with a new team project + Git

– The account URL will be found right below “Getting Started.”

– Click on create Git project and it will take you to a new window, where you specify details about the project like project name, description, the process template, version control, etc., and once completed, click on create project.

– Now you can create a local project in Team Foundation Server by creating a new project in Visual Studio, and do not forget to mark the check box that says “Add to source control”

– In the next window, select mark Git as your version control and click ok, and you will be able to see the alteration made in the source code

– After that, commit your code, right click a file in team explorer and you can compare version differences

5) Mention whether all of the team foundation service features are included into the Team foundation server?

TFS service is updated every 3 weeks while Team Foundation Server “on-premise” is updated every 3 months.  So, the on-premise version will always remain a little behind. However, TFS on-premise has got something that the TFS service does not.

– You can use TFS Lab

– Customize work items/process templates

6) Explain what kind or report server you can add in TFS?

TFS uses SQL for its data storage, so you have to add SQL server reporting services to provide a report server for TFS.

7) How one would know whether the report is updated in TFS?

For each report, there will be an option “Date Last Updated” in the lower right corner; when you click or select that option, it will give details about when it was last updated.

8) Explain how you can restore hidden debugger commands in Visual Studio 2013?

To restore a debugger command that is hidden, you have to add the command back to the menu

– Open your project, click on Tools menu and then click customize

– Tap the command tab in the customize dialog box

– In the menu bar, drop down, choose the debug menu for which you want to contain the restored command

– Tap on the Add command button

– In the Add command box, choose the command you want to add and click OK

– Repeat the step to add another command

9) Explain how you can track your code by customizing the scroll bar in Visual Studio 2013?

To show the annotations on the scroll bar

– You can customize the scroll bar to display code changes, breakpoints, bookmarks and errors

– Open the scroll bar options page

– Choose the option “show annotations over vertical scroll bar”, and then choose the annotations you want to see

– You can then spot anything in the code that appears repeatedly in the file where it is not meant to be

10) Can I install the TFS 2010 Build Service on my TFS 2008 build machine? 

Yes, you can. Even though they both default to the same port (9191), they can share that port without any problems.

11) Can we disable the “Override CheckIn Policy Failure” checkbox? Can that be customized based on User Login, Policy Type of File type?

No. It is designed to be fully auditable by including policy compliance data in the changeset details and in the check-in mail that is delivered, but it is left up to the developer to determine whether they have a good reason for overriding.

12) What are the different events available in the event model and is there any documentation on them?

There is really only one SCC event and that is the one that is raised on checkin. Subscription is via the general event model that is discussed in the extensibility kit.

13) Are Deletes you make in TFS 2010 Source Control physical or logical? Can accidental deletes be recovered?

Deletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore because that would roll back every change to the TFS in the time since the file was deleted.

14) Can different CheckIn Policies be applied on different branches? E.g. Can they have QA specific policies applied on CheckIn in a QA branch?

No.

15) How do I redisplay source control explorer?

Selecting View > Other Windows > Source Control Explorer will display the Source Control Explorer window within the IDE.

16) Why doesn’t source control detect that I have deleted a file/folder on my local disk?

The main scenario here is deleting a file (by mistake or intentionally) outside of Team Foundation and then trying to get that file back from source control. If the file version has not changed, the server thinks the user already has the file and does not copy it over. This is because the server keeps a list of files that the user already has, and when activities are made outside of source control this list becomes out of date. Team Foundation Version Control does have a force get option which provides the functionality needed to obtain the desired version, but it is currently partially hidden under the Get Specific Version dialog window as a check box item.

17) Can I compare directory structures in TFS Source Control?

No, you cannot compare Directory Structures in TFS Source Control

18) Can we configure SCC to not check-in the binary files? Where are such configurations done?

Team Foundation Version Control provides a way to limit check-ins by setting up check-in policies that are evaluated before a check-in can take effect. The easiest way to do this is by authoring a policy that checks if the user is trying to check-in a binary file from a given folder structure and reject or accept it in accordance.

19) How can I add non-solution items to source control?

This can be achieved by either clicking the Add icon or by going to File > Source Control and selecting the Add To Source Control menu item.

20) When a user “edits” a file in a “source controlled” project, it gets checked out automatically. Is this configurable? Can we change this behavior?

Yes, it can be done by configuring TFS: going to Tools > Options > Source Control > Environment provides an option where a user can change the settings to not check out files automatically on edit.

21) What plugin / extensibility API does it expose?

The Team Foundation Server component model for modifying the Process Template and creating plugins is built to be entirely open (in many cases the entry points are defined in XML configuration files). In addition, the development team and community are quite active in supplying samples of this:

Brian Harry

Buck Hodges

Rob Caron

This open platform has also enabled an ecosystem of add-ons like Teamlook, Teamprise, Teamplain, Teamword, and TFSPermission Manager.

22)  How does it integrate with other non-MS platforms?

Team Foundation Server uses Web Services for cross-machine communication, so the Team Foundation Server functionality can be made available to any computer (see the MSDN Team System article on how to use these web services). This is exactly how companies like Teamprise and Teamplain have built their clients to run on non-Windows computers.

23) How does it integrate with other software (eg custom task management software etc)?

In addition to the integration methods mentioned above, Team Foundation is also a popular platform for other software manufacturers to host themselves in. Examples of this are Borland, with their Together and Caliber products, and Compuware Testing with DevPartner.

24) How does the version control compare to Perforce? Branching, merging, change lists etc?

Team Foundation Server supports all normally expected source control features such as branching, merging, exclusive locking, remote disconnected scenarios, labeling, searching on various properties, and high-fidelity reporting (how much code churn per person per project per iteration, etc.), plus a couple of newer paradigms like shelving and optimization for things like branching scenarios (many version control systems do a full copy for branches). I would have some performance comparisons, but most systems don’t allow this.

25)  Automated build system?

Yes, Team Foundation Server includes an automated build system. This system is based on MSBuild and offers the additional functionality of automatically running tests, profiling, code analysis, verifying policies, and collating the changesets and work items for reporting.

26) Any support for distributed build tools? Eg integrating our custom data build tools into the system throughout a network?

MSBuild was written to be extensible and to integrate with existing tools through easy-to-use XML configuration files. Many of the commercial build utilities are already using and/or integrated with MSBuild, such as CruiseControl.NET. In addition to making these actions part of the build script, I have found the generic test set to run as part of the build to do just as good a job, with a rich user interface and support for managing/filtering, etc.

27) Documentation support – eg integrating documentation with code check-ins etc?

This would typically be done through an entry in a work item (to be either associated or resolved) at check-in time and linked with this work item.

The links to the documentation can exist in a couple of ways.

1. Checked in as files (i.e. doc, HTML, etc.). Team Foundation Server makes it trivial to link all objects checked in (as well as other work items).

2. Process guidance files that exist on the Windows SharePoint site – again making them easy to link.

3. External files, once again to be linked in a work item entry.

28) Does it send data compressed over the network?

Team Foundation uses Web Services for cross-machine communication and by default automatically configures IIS to use compression.

29) Working from home / remote location?

Since cross-machine communication is accomplished through web services, remote access is vastly simplified.

30) Working offline? If the server is offline?

Yes. You need to change the file property to offline via a command-line utility called TFPT and save changes in your local workspace. Any subsequent check-in does a get-latest, which will resolve any conflicts to be merged.
