Month: July 2016
How to Set or Configure Proxy in Linux and Windows System? – scmGalaxy
Java Installation Process in Linux – Complete guide
Download, Install and Configure JDK 8 & JRE 8
Platform – Debian & Ubuntu
#JRE8 - Package contains just the Java Runtime Environment 8
$ sudo apt-get install openjdk-8-jre

#JDK8 - Package contains the Java Development Kit 8
$ sudo apt-get install openjdk-8-jdk
Platform – Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.
#JRE8 - Package contains just the Java Runtime Environment 8
$ su -c "yum install java-1.8.0-openjdk"

#JDK8 - Package contains the Java Development Kit 8
$ su -c "yum install java-1.8.0-openjdk-devel"

# To download the Oracle JDK RPM directly (the cookie header accepts the Oracle license):
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm
$ curl -v -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.rpm > jdk-8u131-linux-x64.rpm
Platform – All platforms of Linux, Windows, and Mac in tarball format
$ wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz
$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz"
$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz
How to set Java in a Linux system?
$ export JAVA_HOME=/opt/jdk1.8.0_144
$ export PATH=/opt/jdk1.8.0_144/bin:$PATH
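Exported variables last only for the current shell session. A minimal sketch of setting and verifying them, assuming the JDK was unpacked to /opt/jdk1.8.0_144 as in the example above:

```shell
# Set the Java environment variables for the current session.
# /opt/jdk1.8.0_144 is the example install path used above; adjust as needed.
export JAVA_HOME=/opt/jdk1.8.0_144
export PATH="$JAVA_HOME/bin:$PATH"

# To make this permanent, append the same two lines to ~/.bashrc
# (or /etc/profile.d/java.sh for all users), then verify:
echo "$JAVA_HOME"
command -v java || echo "java not on PATH yet (expected until the JDK is installed)"
```

After sourcing the file (or opening a new shell), `java -version` should report the JDK you configured.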
Download, Install and Configure JDK 7 & JRE 7
Platform – Debian & Ubuntu
#JRE7 - Package contains just the Java Runtime Environment 7
$ sudo apt-get install openjdk-7-jre

#JDK7 - Package contains the Java Development Kit 7
$ sudo apt-get install openjdk-7-jdk
Platform – Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.
$ su -c "yum install java-1.7.0-openjdk"
$ su -c "yum install java-1.7.0-openjdk-devel"
Platform – All platforms of Linux, Windows, and Mac in tarball format
$ wget --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" "http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz"
$ wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz
$ curl -v -j -k -L -H "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.rpm > jdk-7u79-linux-x64.rpm
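After downloading a tarball, installation is just unpacking it and pointing your environment at the result. A sketch of the unpacking step follows; the archive name matches the wget commands above, but for illustration the example creates a stand-in archive so it is self-contained:

```shell
# Simulation only: create a stand-in archive so the example runs anywhere.
# In practice, skip these two lines and use the tarball you downloaded.
mkdir -p jdk1.7.0_79/bin && touch jdk1.7.0_79/bin/java
tar -czf jdk-7u79-linux-x64.tar.gz jdk1.7.0_79

# Unpack the JDK. On a real system you would extract to /opt with sudo:
#   sudo tar -xzf jdk-7u79-linux-x64.tar.gz -C /opt
mkdir -p opt
tar -xzf jdk-7u79-linux-x64.tar.gz -C opt
ls opt/jdk1.7.0_79/bin
```

Once extracted, set JAVA_HOME and PATH to the new directory as shown in the JDK 8 section.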
JDK 6
Debian, Ubuntu, etc.
On the command line, type:
$ sudo apt-get install openjdk-6-jre
The openjdk-6-jre package contains just the Java Runtime Environment.
$ sudo apt-get install openjdk-6-jdk
If you want to develop Java programs then install the openjdk-6-jdk package.
Fedora, Oracle Linux, Red Hat Enterprise Linux, etc.
On the command line, type:
$ su -c "yum install java-1.6.0-openjdk"
The java-1.6.0-openjdk package contains just the Java Runtime Environment.
$ su -c "yum install java-1.6.0-openjdk-devel"
If you want to develop Java programs then install the java-1.6.0-openjdk-devel package.
Configure the Knife Command – Chef
We now have to configure the knife command. This command is the central way of communicating with our server and the nodes that we will be configuring. We need to tell it how to authenticate and then generate a user to access the Chef server.
Luckily, we’ve been laying the groundwork for this step by acquiring the appropriate credential files. We can start the configuration by typing:
knife configure --initial
This will ask you a series of questions. We will go through them one by one:
WARNING: No knife configuration file found
Where should I put the config file? [/home/your_user/.chef/knife.rb]
The values in the brackets ([]) are the default values that knife will use if we do not select a value.
We want to place our knife configuration file in the hidden directory we have been using:
/home/your_user/chef-repo/.chef/knife.rb
In the next question, type in the domain name or IP address you use to access the Chef server. This should begin with https:// and end with :443:

https://server_domain_or_IP:443
You will be asked for a name for the new user you will be creating. Choose something descriptive:
Please enter a name for the new user: [root] station1
It will then ask you for the admin name. You can just press enter to accept the default value (we didn’t change the admin name).
It will then ask you for the location of the existing administrators key. This should be:
/home/your_user/chef-repo/.chef/admin.pem
It will ask a similar set of questions about the validator. We haven’t changed the validator’s name either, so we can keep that as chef-validator. Press enter to accept this value.
It will then ask you for the location of the validation key. It should be something like this:
/home/your_user/chef-repo/.chef/chef-validator.pem
Next, it will ask for the path to the repository. This is the chef-repo folder we have been operating in:
/home/your_user/chef-repo
Finally, it will ask you to select a password for your new user. Select anything you would like.
This should complete our knife configuration. If we look in our chef-repo/.chef directory, we should see a knife configuration file and the credentials of our new user:
ls ~/chef-repo/.chef
admin.pem chef-validator.pem knife.rb station1.pem
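For reference, the generated knife.rb will look roughly like the following. This is an illustrative sketch only: the user name, paths, and server URL are the example values from this walkthrough, not literal values to copy.

```ruby
# ~/chef-repo/.chef/knife.rb (illustrative sketch)
log_level                :info
log_location             STDOUT
node_name                'station1'
client_key               '/home/your_user/chef-repo/.chef/station1.pem'
validation_client_name   'chef-validator'
validation_key           '/home/your_user/chef-repo/.chef/chef-validator.pem'
chef_server_url          'https://server_domain_or_IP:443'
cookbook_path            ['/home/your_user/chef-repo/cookbooks']
```

You can sanity-check the configuration by running `knife client list` from inside chef-repo; it should return a list of clients without an authentication error.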
Docker Command line Reference | Docker Tutorial | Docker Guide
Docker Training | Docker Course | Agenda | scmGalaxy
Need to learn Docker? This is the training for you! This training provides a soup-to-nuts learning experience for core Docker technologies, including the Docker Engine, images, containers, registries, networking, storage, and more. All of the behind-the-scenes theory is explained, and all concepts are clearly demonstrated on the command line. No prior knowledge of Docker or Linux is required.
How to Setup Puppet Learning VM – Complete Process/Guide
Minimum requirements
- Internet-enabled Windows, OS X, or Linux computer with 10GB free space and a VT-x/AMD-V enabled processor.
- Up to date virtualization software. See the setup instructions below for details.
Setting up the Learning VM
- Before beginning, you may want to use the MD5 sum provided at the VM download page to verify your download. On Mac OS X and *nix systems, you can use the command md5 learning_puppet_vm.zip and compare the output to the text contents of the learning_puppet_vm.zip.md5 file provided on the download page. On Windows systems, you will need to download and use a tool such as the Microsoft File Checksum Integrity Verifier.
- Get an up-to-date version of your virtualization software. We suggest using either VirtualBox or a VMware application appropriate for your platform. VirtualBox is free and available for Linux, OS X, and Windows. VMware has several desktop virtualization applications, including VMware Fusion for Mac and VMware Workstation for Windows.
- The Learning VM’s Open Virtualization Archive format must be imported rather than opened directly. Launch your virtualization software and find an option for Import or Import Appliance. (This will usually be in a File menu. If you cannot locate an Import option, please refer to your virtualization software’s documentation.)
- Before starting the VM for the first time, you will need to adjust its settings. We recommend allocating 4GB of memory for the best performance. If you don’t have enough memory on your host machine, you may leave the allocation at 3GB or lower it to 2GB, though you may encounter stability and performance issues. Set the Network Adapter to Bridged. Use an Autodetect setting if available, or accept the default Network Adapter name. (If you started the VM before making these changes, you may need to restart the VM before the settings will be applied correctly.) If you are unable to use a bridged network, we suggest using the port-forwarding instructions provided in the troubleshooting guide.
- Start the VM. When it is started, make a note of the IP address and password displayed on the splash page. Rather than logging in directly, we highly recommend using SSH. On OS X, you can use the default Terminal application or a third-party application like iTerm. For Windows, we suggest the free SSH client PuTTY. Connect to the Learning VM with the login root and the password you noted from the splash page (e.g. ssh root@<IPADDRESS>). Be aware that it might take several minutes for the services in the PE stack to fully start after the VM boots. Once you’re connected to the VM, we suggest updating the clock with ntpdate pool.ntp.org.
- You can access this Quest Guide via a webserver running on the Learning VM itself. Open a web browser on your host and enter the Learning VM’s IP address in the address bar. (Be sure to use http://<ADDRESS> for the Quest Guide, as https://<ADDRESS> will take you to the PE console.)
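The checksum verification from the first step can be scripted on Linux with md5sum (on OS X, use md5 and compare the hash by hand). The file names below are the ones from the download page; to keep the example self-contained, it generates its own stand-in files:

```shell
# Stand-ins for the real downloads; skip these two lines when verifying
# the actual Learning VM zip you downloaded.
echo "vm payload" > learning_puppet_vm.zip
md5sum learning_puppet_vm.zip > learning_puppet_vm.zip.md5

# The actual check: md5sum -c reads "<hash>  <file>" lines and prints OK/FAILED.
md5sum -c learning_puppet_vm.zip.md5
```

If the download is corrupt, md5sum -c reports FAILED and exits nonzero, and you should re-download the archive before importing it.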
Troubleshooting
For the most up-to-date version of this troubleshooting information, check the GitHub repository. If nothing here resolves your issue, feel free to email us at learningvm@puppetlabs.com and we’ll do our best to address your issue.
For issues with Puppet Enterprise that are not specific to the Learning VM, see the Puppet Enterprise Known Issues page.
The cowsay package won’t install
The Learning VM version 2.29 has an error in the instructions for this quest. The cowsay package declaration should include provider => 'gem', rather than ensure => 'gem'.
If you continue to get puppet run failures related to the gem, you can install the cached version manually: gem install /var/cache/rubygems/gems/cowsay-0.2.0.gem
I completed a task, but the quest tool doesn’t show it as complete
The quest tool uses a series of Serverspec tests for each quest to track task progress. Certain tasks simply check your bash history for an entered command. In some cases, the /root/.bash_history won’t be properly initialized, causing these tests to fail. Exiting the VM and logging in again will fix this issue.
It is also possible that we have written the test for a task in a way that is too restrictive and doesn’t correctly capture a valid syntactical variation in your Puppet code or another relevant file. You can check the specific matchers by looking at a quest’s spec file in the ~/.testing/spec/localhost/ directory. If you find an issue here, please let us know by sending an email to learningvm@puppetlabs.com.
Password Required for the Quest Guide
The Learning VM’s Quest Guide is accessible at http://<VM's IP Address>. Note that this is http and not https, which is reserved for the PE console. The PE console will prompt you for a password, while no password is required for the Quest Guide. (The Quest Guide includes a password for the PE console in the Power of Puppet quest: admin/puppetlabs)
I can’t find the VM password
The password to log in to the VM is generated randomly and will be displayed on the splash page displayed on the terminal of your virtualization software when you start the VM.
If you are already logged in via your virtualization software’s terminal, you can use the following command to view the password: cat /var/local/password.
Does the Learning VM work on vSphere, ESXi, etc.?
Possibly, but we don’t currently have the resources to test or support the Learning VM on these platforms.
My puppet run fails and/or I cannot connect to the PE console
It may take some time after the VM is started before all the Puppet services are fully started. If you recently started or restarted the VM, please wait a few minutes and try to access the console or trigger your puppet run again.
Also, because the Learning VM’s puppet services are configured to run in an environment with restricted resources, they are more prone to crashes than a default installation with dedicated resources.
You can check the status of puppet services with the following command:
systemctl --all | grep pe-
If you notice any stopped puppet-related services (e.g. pe-puppetdb), double-check that you have sufficient memory allocated to the VM and available on your host before you try starting them (e.g. service pe-puppetdb start).
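A sketch of how the stopped services could be picked out of that listing automatically. The sample output is inlined so the example runs off the VM; on the Learning VM itself you would pipe the real systemctl --all output into the same filter and then start each result with service <name> start:

```shell
# Sample 'systemctl --all' style lines (columns: UNIT LOAD ACTIVE SUB DESCRIPTION).
# On the VM, replace this with the real output: systemctl --all | grep 'pe-'
sample='pe-puppetdb.service loaded inactive dead PuppetDB
pe-console-services.service loaded active running PE Console'

# Keep pe- units whose ACTIVE column (field 3) is "inactive".
stopped=$(printf '%s\n' "$sample" | awk '/^pe-/ && $3 == "inactive" {print $1}')
echo "$stopped"

# On the VM you would then start each one:
#   for svc in $stopped; do service "$svc" start; done
```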
If you get an error along the lines of Error 400 on SERVER: Unknown function union..., it is likely because the puppetlabs-stdlib module has not been installed. This module is a dependency for many modules and provides a set of common functions. If you are running the Learning VM offline, you cannot rely on the Puppet Forge’s dependency resolution. We have this module and all other modules required for the Learning VM cached, with instructions to install them in the Power of Puppet quest. If that installation fails, you may try adding the --force flag after the --ignore-dependencies flag.
I can’t import the OVA
First, ensure that you have an up-to-date version of your virtualization software installed. Note that the “check for updates” feature of VirtualBox may not always work as expected, so check the website for the most recent version.
The Learning VM has no IP address or the IP address will not respond.
If your network connection has changed since you loaded the VM, it’s possible that your IP address is different from that displayed on the Learning VM splash screen. Log in to the VM via the virtualization software directly (rather than SSH) and use the facter ipaddress command to check the current address.
Some network configurations may still prevent you from accessing the Learning VM. If this is the case, you can still access the Learning VM by configuring port forwarding.
Change your VM’s network adapter to NAT, and configure port forwarding as follows:
Name - Protocol - Host IP - Host Port - Guest Port
SSH - TCP - 127.0.0.1 - 2222 - 22
HTTP - TCP - 127.0.0.1 - 8080 - 80
HTTPS - TCP - 127.0.0.1 - 8443 - 443
GRAPHITE - TCP - 127.0.0.1 - 8090 - 90
(leave the Guest IP field blank)
Once you have set up port forwarding, you can use those ports to access the VM via SSH (ssh -p 2222 root@localhost) and access the Quest Guide and PE console by entering http://localhost:8080 and https://localhost:8443 in your browser address bar.
I can’t scroll up in my terminal
The Learning VM uses a tool called tmux to allow us to display the quest status. You can scroll in tmux by first hitting control-b, then [ (left bracket). You will then be able to use the arrow keys to scroll. Press q to exit scrolling.
Running the VM in VirtualBox, I encounter a series of “Rejecting I/O input from offline devices” errors
Reduce the VM’s processors to 1 and disable the “I/O APIC” option in the system section of the settings menu.
Still need help?
If your puppet runs still fail after trying the steps above, feel free to contact us at learningvm@puppetlabs.com or check the Puppet Enterprise Known Issues page.
MSBuild Tutorial Reference for Beginner | MSBuild Learning Resources | scmGalaxy
Extensions used in .NET and MSBuild projects
Top 25 TFS Interview Questions and Answers
TFS Interview Questions
1) What is Team Foundation Server?
Team Foundation Server is defined in the documentation as:
Team Foundation is a collection of collaborative technologies that support a team effort to deliver a product. While the Team Foundation technologies are typically employed by a software team to build a software product, they can also be used on other types of projects.
As the customer already noted three of the core deliverables of Team Foundation Server:
1. Build Process
2. List/Work item Tracking
3. Source Control
This leaves off probably the two most important features of Team Foundation Server. By integrating the build process, source control, policy, and work item tracking, you get deep insight into what teams are doing and some analytics for future trends, which leads to the 4th core deliverable of Team Foundation Server:
4. Reporting
Having insight into how a team is tracking is really only half the answer; there also needs to be a mechanism to share this information, which brings us to the last feature of Team Foundation Server:
5. Collaboration (Typically enabled through the Team Portal, Team Project and Process Guidance)
Interestingly, it is these two missing categories that set Team Foundation Server apart from other offerings.
2) List out the functionalities provided by team foundation server?
– Project Management
– Tracking work items
– Version Control
– Test case management
– Build Automation
– Reporting
– Virtual Lab Management
3) Explain TFS with respect to Git.
4) Explain how you can create a Git-TFS in Visual Studio 2013 express?
To create a Git-TFS in Visual Studio 2013 express
– Create an account with the MS TFS service if you don’t have an in-house TFS server
– After that, you will be directed to the TFS page, where you will see two options for creating a project: one with a new team project and another with a new team project + Git
– The account URL will be found right below “Getting Started.”
– Click on create Git project and it will take you to a new window, where you specify details about the project like project name, description, process template, version control, etc., and once completed click on create project
– Now you can create a local project in Team Foundation Server by creating a new project in Visual Studio; do not forget to mark the checkbox that says “Add to source control”
– In the next window, select Git as your version control and click OK, and you will be able to see the alterations made in the source code
– After that, commit your code; right-click a file in Team Explorer and you can compare version differences
5) Mention whether all of the team foundation service features are included into the Team foundation server?
The TFS service is updated every 3 weeks, while Team Foundation Server “on-premise” is updated every 3 months, so the on-premise version will always remain a little behind. However, TFS on-premise has some things that the TFS service does not:
– You can use TFS Lab
– Customize work items/process templates
6) Explain what kind of report server you can add in TFS?
TFS uses SQL for its data storage, so you have to add SQL server reporting services to provide a report server for TFS.
7) How would one know whether the report is updated in TFS?
For each report, there will be an option “Date Last Updated” in the lower right corner; when you click or select that option, it will give details about when it was last updated.
8) Explain how you can restore hidden debugger commands in Visual Studio 2013?
To restore a debugger command that is hidden, you have to add the command back to the menu:
– Open your project, click on Tools menu and then click customize
– Tap the command tab in the customize dialog box
– In the menu bar drop-down, choose the debug menu to which you want to add the restored command
– Tap on the Add command button
– In the Add command box, choose the command you want to add and click OK
– Repeat the step to add another command
9) Explain how you can track your code by customizing the scroll bar in Visual Studio 2013?
To show the annotations on the scroll bar
– You can customize the scroll bar to display code changes, breakpoints, bookmarks and errors
– Open the scroll bar options page
– Choose the option “show annotations over vertical scroll bar”, and then choose the annotations you want to see
– You can also spot and replace anything that appears frequently in the file but is not meant to be there
10) Can I install the TFS 2010 Build Service on my TFS 2008 build machine?
Yes, you can. Even though they both default to the same port (9191), they can share that port without any problems.
11) Can we disable the “Override CheckIn Policy Failure” checkbox? Can that be customized based on User Login, Policy Type of File type?
No. It is designed to be fully auditable by including policy compliance data in the changeset details and in the check-in mail that is delivered, but it is left up to the developer to determine whether they have a good reason for overriding.
12) What are the different events available in the event model and is there any documentation on them?
There is really only one SCC event and that is the one that is raised on checkin. Subscription is via the general event model that is discussed in the extensibility kit.
13) Are Deletes you make in TFS 2010 Source Control physical or logical? Can accidental deletes be recovered?
Deletes are fully recoverable with the “undelete” operation. You wouldn’t want to do a SQL restore, because that would roll back every change made to TFS since the file was deleted.
14) Can different CheckIn Policies be applied on different branches? E.g. Can they have QA specific policies applied on CheckIn in a QA branch?
No.
15) How do I redisplay source control explorer?
Selecting View > Other Windows > Source Control Explorer will display the Source Control Explorer window within the IDE.
16) Why doesn’t source control detect that I have deleted a file/folder on my local disk?
The main scenario here is deleting a file (by mistake or intentionally) outside of Team Foundation and then trying to get that file back from source control. If the file version has not changed, the server thinks the user already has the file and does not copy it over. This is because the server keeps a list of files that the user already has, and when activities happen outside of source control this list becomes out of date. Team Foundation Version Control does have a force-get option which provides the functionality needed to obtain the desired version, but it is currently partially hidden under the Get Specific Version dialog as a checkbox item.
17) Can I compare directory structures in TFS Source Control?
No, you cannot compare Directory Structures in TFS Source Control
18) Can we configure SCC to not check-in the binary files? Where are such configurations done?
Team Foundation Version Control provides a way to limit check-ins by setting up check-in policies that are evaluated before a check-in can take effect. The easiest way to do this is by authoring a policy that checks whether the user is trying to check in a binary file from a given folder structure and rejects or accepts it accordingly.
19) How can I add non-solution items to source control?
This can be achieved by either clicking the Add icon or by going to File > Source Control and selecting the Add To Source Control menu item.
20) When a user “edits” a file in a “source controlled” project, it gets checked out automatically. Is this configurable? Can we change this behavior?
Yes. TFS can be configured by going to Tools > Options > Source Control > Environment, which provides an option where a user can change the settings to not check out files automatically on edit.
21) What plugin / extensibility API does it expose?
The Team Foundation Server component model for modifying the Process Template and creating plugins is built to be entirely open (in many cases the entry points are defined in XML configuration files). In addition, the development team and community are quite active in supplying samples of this:
Brian Harry
Buck Hodges
Rob Caron
This open platform has also enabled an ecosystem of add-ons like Teamlook, Teamprise, Teamplain, Teamword, and TFS Permission Manager.
22) How does it integrate with other non-MS platforms?
Team Foundation Server uses Web Services for cross-machine communication, therefore the Team Foundation Server functionality can be made available to any computer (see the MSDN Team System article on how to use these web services). This is exactly how companies like Teamprise and Teamplain have built their clients to run on non-Windows computers.
23) How does it integrate with other software (eg custom task management software etc)?
In addition to the integration methods mentioned above, Team Foundation is also a popular platform for other software manufacturers to host themselves in. Examples of this are Borland with their Together and Caliber products, and Compuware with DevPartner testing.
24) How does the version control compare to Perforce? Branching, merging, change lists etc?
Team Foundation Server supports all normally expected source control features such as branching, merging, exclusive locking, remote/disconnected scenarios, labeling, searching on various properties, and high-fidelity reporting (how much code churn per person per project per iteration, etc.), plus a couple of newer paradigms like shelving and optimizations for things like branching scenarios (many version control systems do a full copy for branches). I would include some performance comparisons, but most systems don’t allow this.
25) Does Team Foundation Server include an automated build system?
Yes, Team Foundation Server includes an automated build system. This system is based on MSBuild and offers the additional functionality of automatically running tests, profiling, code analysis, verifying policies, and collating the changesets and work items for reporting.
26) Any support for distributed build tools? Eg integrating our custom data build tools into the system throughout a network?
MSBuild was written to be extensible and to integrate with existing tools through easy-to-use XML configuration files. Many of the commercial build utilities are already using and/or integrated with MSBuild, such as CruiseControl.NET. In addition to making these actions part of the build script, I have found that generic tests set to run as part of the build do just as good a job, with a rich user interface and support for managing/filtering, etc.
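As a sketch of that extensibility, a custom tool can be hooked into a build with a few lines of project XML. This is an illustrative fragment only; databuild.exe, the target name, and the paths are hypothetical placeholders, not part of any real project:

```xml
<!-- Illustrative only: invokes a hypothetical custom data-build tool
     as part of the Build target. -->
<Project DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Build">
    <Exec Command="databuild.exe --input data\raw --output data\built" />
  </Target>
</Project>
```

The Exec task runs any command line as a build step, which is the simplest way to fold an existing in-house tool into an MSBuild-driven build.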
27) Documentation support – eg integrating documentation with code check-ins etc?
This would typically be done through an entry in a work item (to be either associated or resolved) at the time of check-in and linked with this work item.
The links to the documentation can exist in a couple of ways.
1. Checked in as files (i.e. doc, HTML, etc.): Team Foundation Server makes it trivial to link all objects checked in (as well as other work items).
2. Process guidance files that exist on the Windows SharePoint site, again making them easy to link.
3. External files, once again linked in a work item entry.
28) Does it send data compressed over the network?
Team Foundation uses Web Services for cross-machine communication and by default automatically configures IIS to use compression.
29) Working from home / remote location?
Since cross-machine communication is accomplished through web services, remote access is vastly simplified.
30) Working offline? If the server is offline?
Yes. You need to change the file property to offline via a command-line utility called TFPT and save changes in your local workspace. Any subsequent check-in does a get-latest, which will surface any conflicts to be merged.