SQL Queries

👉 To give a table, or a column in a table, a temporary name, we can use an alias:

Syntax: SELECT "column_name" AS "column_Alias" FROM "table_name";

For example, we have a table with the columns "SNo", "Country" and "Capital", and we want to display them as "Serial Number", "State" and "Country_Capital". We will run the following command:
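A minimal sketch of that command, assuming the table is named countries (the table name is not given above):

SELECT SNo AS "Serial Number", Country AS "State", Capital AS "Country_Capital" FROM countries;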

👉 To count the number of rows in the table:

Syntax: SELECT COUNT(column_name) AS alias_name FROM table_name;

For example, we want to count the number of orders placed by a particular customer in customer_table:
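A sketch of such a query, assuming hypothetical Product, Order_ID and Customer_ID columns in customer_table:

SELECT Product, COUNT(Order_ID) AS Order_Count
FROM customer_table
WHERE Customer_ID = 'CG-1234'
GROUP BY Product;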

This will show the number of orders placed by the customer id "CG-1234" and the products ordered.

👉 To add the values in a column we use the SUM function:

Syntax: SELECT SUM (column_name) FROM table_name;

For example, we want to add up the profit amounts in the sales table:
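A sketch, assuming the table is named sales and the profit column is named Profit:

SELECT SUM(Profit) FROM sales;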

👉 To find the average of a column in a table:

Syntax: SELECT AVG (column_name) FROM table_name;

For example, we need to find the average age of the customers in the customer table:
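A sketch, assuming the age column is named Age:

SELECT AVG(Age) FROM customer_table;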

👉 To find the minimum and the maximum values in a column of a table:

  • Minimum
SELECT MIN (column_name) FROM table_name;

For example, we want to find the minimum sale amount for a product in the sales table:
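A sketch, assuming a hypothetical Sale_Amount column in the sales table:

SELECT MIN(Sale_Amount) FROM sales;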

  • Maximum
SELECT MAX (column_name) FROM table_name;

For example, we want to find the maximum sale amount for a product in the sales table:
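A sketch, using the same assumed Sale_Amount column:

SELECT MAX(Sale_Amount) FROM sales;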

👉 To group rows that have the same values into summary rows, like "find the number of customers in each region", we use the GROUP BY statement. The GROUP BY statement is often used with aggregate functions (COUNT(), MAX(), MIN(), SUM(), AVG()) to group the result set by one or more columns.

Syntax: SELECT column_name, AGGREGATE_FUNCTION(column_name) FROM table_name GROUP BY column_name;

For example, we need to find the number of orders placed in each region in the table:
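A sketch, assuming hypothetical Region and Order_ID columns in the sales table:

SELECT Region, COUNT(Order_ID) AS Order_Count
FROM sales
GROUP BY Region;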

👉 To apply conditions to aggregate functions we use the HAVING clause. The difference between the HAVING clause and the WHERE clause is that the WHERE clause cannot filter on aggregate functions, whereas the HAVING clause can.

Syntax: SELECT column_name, AGGREGATE_FUNCTION(column_name) FROM table_name GROUP BY column_name HAVING condition;

For example, we have a set of customers in four regions and we want to see in which regions the count of customers is more than 200. In this scenario we will use the HAVING clause and set a condition of count greater than 200.
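A sketch, assuming hypothetical Region and Customer_ID columns in customer_table:

SELECT Region, COUNT(Customer_ID) AS Customer_Count
FROM customer_table
GROUP BY Region
HAVING COUNT(Customer_ID) > 200;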

This command will show the count of customers in the regions where it is more than 200.

👉 To go through conditions and return a value when the first condition is met (like an if-then-else statement), the CASE statement is used. Once a condition is true, it stops reading and returns the result. If no conditions are true, it returns the value in the ELSE clause.

Syntax: SELECT column_name, CASE WHEN condition THEN result ELSE result END FROM table_name;
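A sketch, assuming hypothetical Order_ID and Quantity columns in the sales table, labelling each order by size:

SELECT Order_ID,
       CASE WHEN Quantity > 10 THEN 'Bulk order'
            ELSE 'Standard order'
       END AS Order_Type
FROM sales;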

Creator 2009 > Installation/Uninstallation/Update

InstallerExpert created the topic: Creator 2009 > Installation/Uninstallation/Update
I am a new user of Roxio products…

I am trying to install Roxio Creator 2009 on Windows XP. It displays an error "0012" at about 3/4 of the way through the install.
I have tried installing many times – all with the same result.

I have searched the KB and done the following per the instructions I found there:
– copied the CD/DVD to my hard drive (ALL files)
– run the Microsoft Installer Cleanup Utility
– cleared the registry settings manually

I still get the same error.

I am amazed at the apparent complexity of this install (take a look at the other posts on this forum). I work for a software company and I am quite comfortable with using a PC – so please spare us both and pause to think before posting any unhelpful comments.

msiexpert replied the topic: Re: Creator 2009 > Installation/Uninstallation/Update
Ok, let's try this:

1. Confirm your date and time are in sync.
2. Try creating a new user account with admin rights.
3. Copy the contents of the installation DVD to your hard drive, say into a folder on the desktop.
4. Temporarily disable AV and anything else that might interfere.
5. Try running setup.exe from the folder copied from the installation disc.

Step by Step Instruction to Upgrade Perforce to 2014

The steps are as follows:

a) Check if the license is current.

p4 license -o

The expiry date must be later than the release date of the binary you are installing.

b) Verify server archives using the command 'p4 verify -q //... > verify.txt'

* Note – If the file 'verify.txt' contains a 'BAD!' or 'MISSING!' signature from the verify command, please contact Perforce Support prior to upgrading.
** If the depot is large and you get an error message "Request too large for server memory", you can split the verify to check chunks of the depot:

Example:
p4 verify -q //depot/project1/... > prj1-verify.txt
p4 verify -q //depot/project2/... > prj2-verify.txt
p4 verify -q //depot/project3/... > prj3-verify.txt

c) Stop the Perforce Server.

p4 admin stop

d) Make a checkpoint of the database.

p4d -r P4ROOT -J journal -jc

e) Copy the db.* files to a safe location.

f) Backup your archive files.

g) Read the release notes so you are aware of the latest changes.

http://www.perforce.com/perforce/doc.current/user/relnotes.txt

h) Download the latest P4D version.

ftp://ftp.perforce.com/perforce/

i) Verify that the newly downloaded binary is the latest. You might need to copy p4d.exe to p4s.exe.

p4d.exe -V
p4s.exe -V

j) Install the upgraded version of the Perforce server.

k) Restore the Perforce Server using the new P4D binary and the checkpoint taken from the old P4D binary, with the following command 'p4d -r root -jr checkpoint.xxx'.

l) If the server contains more than 1000 changelists, please run the following command 'p4d -r root -xu' from the command line.

m) Start the Perforce Server.

n) Verify server archive files.

p4 verify -q //... > upgrade-verify.txt

** Note – If the file 'upgrade-verify.txt' contains a 'BAD!' or 'MISSING!' signature from the verify command, please contact us.

o) Take a checkpoint
p4d -r root -J journal -jc

This is also explained in the article below:

Upgrading to 2013.3 and beyond – http://answers.perforce.com/articles/KB_Article/Upgrading-to-2013-3-and-beyond

Linux to Windows
Moving from Linux to Windows is not a trivial issue; such a migration is not supported and will most likely require consolidation of case-sensitivity conflicts using our P4 Migrate product. Please see the links below:

Cross-Server Platform Migration
http://answers.perforce.com/articles/KB_Article/Cross-Platform-Perforce-Server-Migration/?q=linux+to+windows&l=en_US&fs=Search&pn=1
http://answers.perforce.com/articles/KB_Article/Cross-Platform-Perforce-Server-Migration

Change of Server is a "self-serve" option; please fill out this form and licensing will update accordingly.
P4 Support Change of Server License Request
http://www.perforce.com/support-services/change-server-request

Please pass the -z switch during the replay as the checkpoint/journal you are replaying is compressed:
p4d -r P4ROOT -jr -z checkpoint.ckp.104.gz

You will need to request a background user, I believe, to be able to run all the required commands. This can be done by filling in the form below and using one background user for all of your build machines.
http://www.perforce.com/support-services/request-background-user

Moving a Perforce Server
http://answers.perforce.com/articles/KB_Article/Moving-a-Perforce-Server

By Chris Lesemann <support@perforce.com>

Our Knowledge Base has accurate and up to date information that will help you prepare your own procedure for upgrading. Because the database structures have changed, you will need to take an existing checkpoint and then replay that checkpoint with the 2014.1 version of the P4 Server.

It appears you are on a 32-bit Windows instance… will you be doing an in-place upgrade or moving to 64-bit hardware? If you are moving from 32-bit to 64-bit you have to recover from a checkpoint anyway.

Please review the two links below and if you have any further questions I would be happy to assist you:

 Upgrading to 2013.3 and beyond <– specific procedure for moving from 2013.2 and earlier P4 to 2013.3 and later

 Upgrading a Perforce Server <– in-place upgrade

 Moving a Perforce Server <– if you have to move to new hardware – do NOT start a 64-bit P4D binary against 32-bit db.* databases, or vice versa.

 Upgrading the Perforce Proxy

General information on the new Btree structures and functionality:

 BTree Format Changed for Perforce Server Versions 2013.3 and Later

 Lockless Reads

Sonar PDF Report Plugin 2.1 – What is new in Sonar PDF Report Plugin 2.1?

Hi all,

I’m proud to announce the availability of a new release of Sonar
PDF Report Plugin (Commercial edition):
http://blog.klicap.es/products/sonarpdfreportplugin

Key features of this new release:

* Include information provided by other installed plugins
* Possibility of using SVG images in the front page and header

Currently, Sonar PDF Report Plugin 2.1 supports integration with:
* Technical Debt Plugin
* SIGMM Plugin
* Total Quality Plugin
* Quality Index Plugin
* Taglist Plugin

Regards,
Antonio.

Build Stability Plugin 1.1.1 Released by Sonar team – Overview


The Sonar team is pleased to announce the release of the Build
Stability Plugin version 1.1.1.

The new version fixes an issue with Bamboo support.

The documentation, change log and jar file are available on the plugin page [1].

Enjoy !

– The Sonar Team

Powerful New Amazon EC2 Boot Features – Introduction

Today a powerful new feature is available for our Amazon EC2 customers: the ability to boot their instances from Amazon EBS (Elastic Block Store).

Customers like the simplicity of the AMI (Amazon Machine Image) model, where they either choose a preconfigured AMI or upload their own AMI into Amazon S3. A wide variety of operating systems and software configurations is available for use. But customers have also asked us for more flexibility and control in the way Amazon EC2 instances are booted, so that they have finer-grained control over, for example, what software configurations and data sets are available to the instance at boot time.

The ability to boot from Amazon EBS gives customers very powerful control over the boot configuration of their Amazon EC2 instances. In the traditional boot process, the root partition of the image is the local disk, which is created and populated at boot time. In the new Amazon EBS boot process, the root partition is an Amazon EBS volume, which is created at boot time from an Amazon EBS snapshot. Other Amazon EBS volumes beyond the root disk can also be made part of the instance before it is booted. This allows for very fine-grained control of software and data configuration. An additional advantage of using the Amazon EBS boot process is that root partitions are no longer constrained by the size of the local disk and can be up to 1TB in size. And the new boot process is significantly faster because a local disk no longer needs to be populated.

With this new boot process another powerful feature is available to our Amazon EC2 customers: the ability to stop an instance and restart it at a later time with the disk configuration intact. When an instance is restarted, the customer can choose to use a different instance type (e.g., with more memory or CPU), a different operating system (e.g., with new security patches installed), or add new user data. While the instance is stopped it does not accrue any usage hours and customers are only charged for the storage associated with the Amazon EBS volume. The ability to stop and restart an instance is a very powerful mechanism that makes management of instances much easier; many scenarios related to adaptive instance sizing and software management have now become much simpler.

The new boot from Amazon EBS feature is an important step in our continuing quest to remove more and more of the heavy lifting that comes with today’s computer environments.

Upgrading Continuum – Continuum Upgradation Guide

This document will help you upgrade Continuum from 1.2.x to 1.3.3 and above.

When upgrading Continuum, there may be database model changes. Usually these changes will be migrated for you, but in some cases you may need to take a backup from the previous version and restore that data into the new version. The Data Management tool exports data from the old database model and imports it into the new database model.

If you had used the APP_BASE environment variable in Continuum 1.2 to differentiate your configuration from the installation, you should rename it to CONTINUUM_BASE in Continuum 1.3.

Note: The Jetty version in Continuum 1.3.4 and above has been upgraded to 6.1.19. When upgrading to Continuum 1.3.4 or higher, you need to update the library contents listed in $CONTINUUM_BASE/conf/wrapper.conf with the ones included in the new distribution, especially if the $CONTINUUM_BASE directory is separate from the installation.

Using Backup and Restore to upgrade

There are 2 databases that need to be considered: one for the builds and one for the users.

There were no changes in the users database from 1.2.x to 1.3.2, so you can simply point Continuum 1.3.2 at your existing user database.

The builds database has had model changes, and will need to be exported and imported.

First, download the Data Management tools you will need. The tool is a standalone JAR that you can download from the central repo.

You will need to download two versions of the tool, one for the export out of the old version and one for the import into the new version:

Note: The 1.2, 1.2.2 and 1.2.3 released versions of this tool have a bug. To export databases from 1.2.2 or 1.2.3, you will need to use version 1.2.3.1 of the tool. To export databases from 1.2, you may use the 1.1 version of the tool.

Next, follow these steps to export data from the old version

  • Stop the old version of Continuum
  • Execute this command to create the builds.xml export file
java -Xmx512m -jar data-management-cli-1.2.x-app.jar -buildsJdbcUrl jdbc:derby:${old.continuum.home}/data/databases/continuum -mode EXPORT -directory backups

Then, follow these steps to import the data to the new version

  • Start the new version of Continuum to create the new data model, but do not configure it.
  • Stop Continuum
  • Execute this command to import the builds data from the xml file you created earlier:
java -Xmx512m -jar data-management-cli-1.3.2-app.jar -buildsJdbcUrl jdbc:derby:${new.continuum.home}/data/databases/continuum -mode IMPORT -directory backups -strict

Note: Remove -strict when importing data from 1.3.1 to 1.3.x to ignore unrecognized tags due to model changes.

Finally, be aware that sometimes the NEXT_VAL values in the SEQUENCE_TABLE need to be adjusted.

  • Before starting Continuum for the first time after the import, connect to the db with a client like Squirrel SQL and check the values in the NEXT_VAL column of the SEQUENCE_TABLE.
  • Values must be greater than the max id value in each table.
  • For example, the next value of “org.apache.maven.continuum.model.Project” must be greater than the greatest id in Project table.
  • Here are some example SQL statements. You may need to add or remove lines depending on the contents of your database.
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(systemconfiguration_id)+1 from SYSTEMCONFIGURATION) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.system.SystemConfiguration';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDQUEUE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildQueue';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from SCHEDULE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.Schedule';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDDEFINITION) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildDefinition';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from LOCALREPOSITORY) WHERE SEQUENCE_NAME='org.apache.continuum.model.repository.LocalRepository';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from PROJECTGROUP) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.ProjectGroup';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(scmresult_id)+1 from SCMRESULT) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.scm.ScmResult';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(projectdependency_id)+1 from PROJECTDEPENDENCY) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.ProjectDependency';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDDEFINITIONTEMPLATE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildDefinitionTemplate';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from ABSTRACTPURGECONFIGURATION) WHERE SEQUENCE_NAME='org.apache.continuum.model.repository.AbstractPurgeConfiguration';

Now you can start your new version of Continuum.

Reference: http://continuum.apache.org/docs/1.3.6/installation/upgrade.html