Social Media Marketing – TIPS – Interconnect Everything & Jump on Trends Early

Interconnect all your marketing efforts: In this blog, we look at how to interconnect your social media with your other marketing channels.

Social media links in your newsletter: For starters, every newsletter you send out should include links to at least two of your favorite social media networks (I would recommend Facebook and Twitter).

Social media links on your website: Your website needs visible links to all your social media platforms. The header and footer are usually good places for those.

Coupon redemption through social platforms: If you plan on creating coupons, give customers the option to redeem them on your Facebook page.

Social media links in your content: If you do any kind of content marketing, include links to the relevant social platforms at the end of it. Also, put your Twitter handle on your business cards.

Mention social networks at outdoor events: Put all your major social handles on any outdoor promotions, events, and campaigns.

Try to create as many follow-up opportunities as possible to reach your audience in the future. At the end of the day, that’s what social media is all about: creating authentic connections.

According to science:

The mere-exposure effect (also called the familiarity effect): a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them.

Studies on this have been done since the 1960s, and more recent research from 2001 showed that the effect does not even operate at a conscious level. This means your brain doesn’t deliberate when it likes something it considers familiar.

Do you think your choice of a brand is random or on purpose?

Well, guess again. This knowledge has been used by advertising agencies for ages; that’s why you see the same ads over and over again. So remember this the next time you have doubts about mentioning your social media accounts elsewhere within your marketing channels. It can make the difference between people picking your product from the line-up and going for your biggest competitor.

Jump on trends early

As a business owner, it’s your responsibility to innovate and bring something new to the customer before your competitors do – so always jump on trends early.

Peter Drucker, the management expert, once said, “Business is nothing more than marketing and innovation.”

The reason you should be an early adopter is obvious: more traction, because there is less competition. It’s like having more channels on TV – the more of them available, the smaller the percentage of people that will watch any particular channel. Even if the content you produce is of impeccable quality, the sheer number of alternatives means few people will ever discover you. But if you are an early adopter, the chances of users stumbling onto your brand are much higher.

Thus, the chance for your business to grow on that particular platform will be higher. This effect is even more powerful for non-mega brands – anything smaller than a Coke, Ford, or GM. And if the network you join doesn’t grow and turns out to be all hype, the time lost there is outweighed by the alternative: that particular network exploding in attention and putting the spotlight on your brand as well.

Being an early adopter is more than a social media choice. It should be a life choice if your goal is to deliver groundbreaking work. In the past, the risks involved were higher. Think of all the lives lost and the accidents when the first airplanes, submarines and cars were designed and built. It took many iterations to make things safe for the masses. It will likewise take you a lot of effort to create and offer great products and services. But unlike in the past, when the risk was literally life-threatening, your risk is only time, and sometimes money, and usually not huge amounts of either.

So the next time you hear about a new social media network on the rise, give it a look. See if it makes sense for your business, jump on it early, and become a power player on that platform.



How can you become a successful DevOps engineer?


These days, one word is trending high in the software industry: “DevOps”. Industry experts define DevOps either as a “culture” or as a “methodology”. But when organizations and companies look for DevOps experts for their projects, they post job ads like “Hiring DevOps Engineer” or “Looking for DevOps Architect”. So we can say DevOps is a culture and a methodology, but it’s a “role” too.


Today DevOps is reshaping the software industry. DevOps integrates developers and operations teams in order to improve collaboration and productivity by automating infrastructure, automating workflows, and continuously measuring application performance. These days almost all IT organizations apply DevOps to their software development process, from initial product planning through security assurance and quality to user feedback. They implement it because it brings both technical and business benefits: it lets them deploy code more frequently with lower failure rates. By implementing DevOps, organizations can deliver software continuously, with less complex problems to fix and faster resolution of the problems that do occur. That ultimately means faster delivery of features, more stable operating environments, and more time to add value rather than spending time on fixing and maintaining. This is the reason organizations want to hire DevOps engineers without wasting any time.


So, let’s see who can become a DevOps engineer.


·         Anyone who is in software development or system operations can become a DevOps engineer.


Yes, you read that right: to become a DevOps engineer you do not need a formal educational or career track. Developers who are interested in network operations and deployments, or system admins who have an enthusiasm for scripting and coding and want to move into the development side where they can improve the planning of tests and deployments – they can all become DevOps engineers.

Now, let’s check the skills a DevOps engineer should have:

·         Knowledge of coding and scripting.

·         Experience with systems and IT operations.

·         Comfort with frequent, incremental code testing and deployment, and the ability to adapt to an ever-changing environment.

·         A strong grasp of automation tools.

·         Data management skills.

·         A strong emphasis on business results.

·         The ability to work in a team and to get everyone working together.

·         Comfort with collaboration, communication, and reaching beyond functional areas.

·         Linux, Windows, or hybrid command-line knowledge.

·         The ability to understand and utilize a wide variety of open source technologies and tools.

Now let’s look at the process of becoming a DevOps engineer.

Anyone who fulfils the thorough requirements to become a DevOps engineer can expect to be richly rewarded. There has never been a better or more profitable time to consider DevOps as a career path or a career change.


One thing is for sure: DevOps is here to stay, and if you want to become a DevOps engineer, you need to build the skills mentioned above. Remember that DevOps is less about doing things in a specific way, and more about advancing the business and giving it a stronger innovative advantage.


To start your career as a DevOps engineer, you will need a mentor or instructor to help you, and I would suggest scmGalaxy, a one-stop portal for DevOps learning where you can find DevOps tutorials, courses, certifications, trainers, study materials and much more, all in one place. They have well-designed DevOps courses and certification programs, and well-known, dedicated DevOps trainers who can help you become a successful DevOps engineer.



Tips to find qualified DevOps trainers, instructors and coaches

Bangalore is the Silicon Valley of India. There are many software organizations and companies currently working towards implementing automation throughout the software development life cycle. DevOps has become so popular nowadays that every project wants to implement it, so finding the right DevOps trainers in Bangalore and other cities is a challenge. This challenge has been solved by
Before exploring that, let’s first understand more about DevOps. DevOps is a new phenomenon in which software companies automate the release cycle, while preserving quality, by introducing many new practices into teams and projects. A qualified trainer or coach is very important for the execution of a DevOps implementation in software projects.
So the question arises: what is DevOps? DevOps is a combination of two words, “Dev” and “Ops”, meaning Development and Operations. DevOps is the practice of operations and development engineers participating together across the entire service lifecycle, from planning through the development process to production support. DevOps likewise means operations staff making use of many of the same techniques as developers.
Another way to define DevOps is as a product development strategy that highlights cooperation and open communication between teams. DevOps groups are made up of developers and operations experts cooperating to make sounder, higher-quality product releases within the shortest span of time. Groups that have adopted the DevOps ethos have a better handle on their IT incidents and endure less downtime.
In order to have continuous software delivery, less complex problems to settle, and faster resolution of issues, organizations need great employees who truly understand and know DevOps thoroughly. And to prepare them for such effective work, the organization needs a great mentor and coach, so that they can train the software engineers to handle the technical issues of the organization and grow the business effectively and faster.
Before selecting the right DevOps trainer and mentor, the organization must first evaluate the qualities and experience the DevOps instructor has, and the new capabilities they would bring to the project. These are the qualities a DevOps instructor must have in order to guide a project and company implementing the DevOps approach:
  • A good trainer makes delivering a class look simple and seamless.
  • The best DevOps trainers are sensitive to their own energy level and that of the class.
  • Since DevOps trainers are role models, they ought to be mature, confident and energetic.
  • The best DevOps trainers know the material, live it, breathe it, and can infuse their own experience into it.
  • Readiness to allow and encourage participants to learn from themselves and the class, in order to create as many organic learning moments as possible.

There are many cities in the world where the IT and software business is growing tremendously, so implementing DevOps in organizations and their projects is a breakthrough and a must-do. Bangalore in India and California in the USA are the silicon valleys of the world, and thus finding a qualified DevOps trainer and coach there is one of the challenges. has simplified this process and created a platform on which any software company can find experienced DevOps instructors and engage them in the easiest way to implement the DevOps culture in their projects, providing DevOps trainers and consultants for cities all over the world, including Hyderabad, Pune, Delhi, Chennai, London, Amsterdam, Singapore, San Francisco, etc.

Vagrant installation in CentOS, Ubuntu and Windows | Vagrant Tutorials

Vagrant installation in Ubuntu
1. Update your apt repository
> sudo apt-get update
2. Install VirtualBox.
> sudo apt-get install virtualbox
3. Install Vagrant.
> sudo apt-get install vagrant
Vagrant installation in CentOS
1. Update your system and install VirtualBox
> yum -y update
> cd /etc/yum.repos.d/
> yum update -y
> yum install binutils qt gcc make patch libgomp glibc-headers glibc-devel kernel-headers kernel-devel dkms
> yum install VirtualBox-5.0
2. Install Vagrant
> wget
> yum localinstall vagrant_1.8.1_x86_64.rpm
Vagrant installation in Windows
In this tutorial, we will use Vagrant to set up a bare-bones server with Ubuntu installed. Vagrant manages virtual machines that run under VirtualBox, so you will need to have VirtualBox installed. You will also need to have Putty installed in order to access your new Vagrant server via SSH. These instructions also apply to Windows 8.
A hard connection to the Internet
Putty needs to be installed. (putty-0.62-installer.exe)
VirtualBox needs to be installed.
Recommended: 8 GB RAM to run VirtualBox on Windows PCs
A. Installing Vagrant – bare bones server – Ubuntu only
1. Download and install the most recent VirtualBox for Windows from
Start up VirtualBox
2. Download and install the latest version of Vagrant from For this tutorial, we will use version 1.0.6. Windows users, download Vagrant.msi
Open Windows cmd prompt
For Windows 8, press Windows key and then press “R” key. This will open the RUN dialog box for you. Type “cmd” and press Enter.
Note: I typed the vagrant command and got an error message saying ‘vagrant’ is not a recognized command; it was not added to the Path during install. Restarting your computer may help to refresh the path.
3. Change directory to C:\vagrant\vagrant\bin
4. Then type the following commands:
C:\vagrant\vagrant\bin> vagrant box add lucid32
C:\vagrant\vagrant\bin> vagrant init lucid32
C:\vagrant\vagrant\bin> vagrant up
5. Open Putty and enter these credentials:
Port: 2222
Connection type: SSH
6. Login to Vagrant server
Enter username: vagrant
Password: vagrant
Type ls -lah at the prompt.
This is a bare bones server with Ubuntu installed.
vagrant@lucid32:~$ls -lah
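For reference, the `vagrant init lucid32` step above generates a Vagrantfile in the working directory. A minimal version, using the Vagrant 1.0.x configuration syntax that matches the version in this tutorial, looks roughly like this sketch (the box name simply follows the lucid32 example):

```ruby
# Minimal Vagrantfile sketch (Vagrant 1.0.x syntax; newer Vagrant
# versions use Vagrant.configure instead of Vagrant::Config.run)
Vagrant::Config.run do |config|
  # Name of the box added earlier with "vagrant box add lucid32 ..."
  config.vm.box = "lucid32"
  # Vagrant forwards guest SSH port 22 to host port 2222 by default,
  # which is why Putty connects on port 2222 above.
end
```

Running `vagrant up` in the directory containing this file boots the VM described by it.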

Useful Tips to Make Team Building Exercises Effective and Successful

Hi, I am Prabhakar. I have read an article about team building and I would like to share it with you. All successful businesses and organizations know that teamwork and team synergy are the key to success. A lot of effort and resources are dedicated to team building exercises and activities. A good team forms the foundation of any company, organization or community. Team building is not an easy task, because a team consists of various individuals with distinct personalities, schools of thought and dispositions. But effective team building steps can slowly help these individuals rise above petty differences, working together and complementing each other to ensure the overall success of the team as well as the personal growth of each individual. A discordant and inharmonious team will continue to experience failure even if it comprises brilliant individuals. Sports are a very good example of this: in team sports like football, hockey or basketball, the coordination, communication and synergy of the team are what set the best teams apart, rather than individual superstars.

Team building requires dedication and effective skills. Each member of the team must be made to feel an integral and indispensable part of the team, so that team strategy and goals can be placed above individual goals. There are various team building games and exercises available, but they are only effective with perfect execution and the participation of all members. It is also important to resolve any conflicts as soon as possible, as conflicts can be very harmful to team building. Negative feedback, especially in public, should be avoided at all costs as it may hamper the team’s performance.


Upgrading Continuum – Continuum Upgradation Guide


This document will help you upgrade Continuum from 1.2.x to 1.3.3 and above.

When upgrading Continuum, there may be database model changes. Usually these changes will be migrated for you, but in some cases you may need to take a backup from the previous version and restore that data into the new version. The Data Management tool exports data from the old database model and imports the data into the new database model.

If you had used the APP_BASE environment variable in Continuum 1.2 to differentiate your configuration from the installation, you should rename it to CONTINUUM_BASE in Continuum 1.3.

Note: The Jetty version in Continuum 1.3.4 and above has been upgraded to 6.1.19. When upgrading to Continuum 1.3.4 or higher, there is a need to update the library contents listed in $CONTINUUM_BASE/conf/wrapper.conf with the ones included in the new distribution especially if the $CONTINUUM_BASE directory is separate from the installation.

Using Backup and Restore to upgrade

There are 2 databases that need to be considered: one for the builds and one for the users.

There were no changes in the users database from 1.2.x to 1.3.2, so you can simply point Continuum 1.3.2 at your existing user database.

The builds database has had model changes, and will need to be exported and imported.

First, download the Data Management tools you will need. The tool is a standalone JAR that you can download from the central repo.

You will need to download two versions of the tool, one for the export out of the old version and one for the import into the new version:

Note: The 1.2, 1.2.2 and 1.2.3 released versions of this tool have a bug. To export databases from 1.2.2 or 1.2.3, you will need to use version of the tool. To export databases from 1.2, you may use the 1.1 version of the tool.

Next, follow these steps to export data from the old version

  • Stop the old version of Continuum
  • Execute this command to create the builds.xml export file
java -Xmx512m -jar data-management-cli-1.2.x-app.jar -buildsJdbcUrl jdbc:derby:${old.continuum.home}/data/databases/continuum -mode EXPORT -directory backups

Then, follow these steps to import the data to the new version

  • Start the new version of Continuum to create the new data model, but do not configure it.
  • Stop Continuum
  • Execute this command to import the builds data from the xml file you created earlier:
java -Xmx512m -jar data-management-cli-1.3.2-app.jar -buildsJdbcUrl jdbc:derby:${new.continuum.home}/data/databases/continuum -mode IMPORT -directory backups -strict

Note: Remove -strict when importing data from 1.3.1 to 1.3.x to ignore unrecognized tags due to model changes.

Finally, be aware that sometimes the NEXT_VAL values in the SEQUENCE_TABLE need to be adjusted.

  • Before starting Continuum for the first time after the import, connect to the db with a client like Squirrel SQL and check the values in the NEXT_VAL column of the SEQUENCE_TABLE.
  • Values must be greater than the max id value in each table.
  • For example, the next value of “org.apache.maven.continuum.model.Project” must be greater than the greatest id in Project table.
  • Here are some example SQL statements. You may need to add or remove lines depending on the contents of your database.
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(systemconfiguration_id)+1 from SYSTEMCONFIGURATION) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.system.SystemConfiguration';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDQUEUE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildQueue';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from SCHEDULE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.Schedule';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDDEFINITION) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildDefinition';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from LOCALREPOSITORY) WHERE SEQUENCE_NAME='org.apache.continuum.model.repository.LocalRepository';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from PROJECTGROUP) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.ProjectGroup';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(scmresult_id)+1 from SCMRESULT) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.scm.ScmResult';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(projectdependency_id)+1 from PROJECTDEPENDENCY) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.ProjectDependency';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from BUILDDEFINITIONTEMPLATE) WHERE SEQUENCE_NAME='org.apache.maven.continuum.model.project.BuildDefinitionTemplate';
UPDATE SEQUENCE_TABLE set NEXT_VAL = (select max(id)+1 from ABSTRACTPURGECONFIGURATION) WHERE SEQUENCE_NAME='org.apache.continuum.model.repository.AbstractPurgeConfiguration';

Now you can start your new version of Continuum.





JUnit 4 Test Logging Tips using SLF4J


When writing JUnit tests developers often add log statements that can help provide information on test failures. During the initial attempt to find a failure a simple System.out.println() statement is usually the first resort of most developers.

Replacing these System.out.println() statements with log statements is the first improvement on this technique. Using SLF4J (Simple Logging Facade for Java) provides some neat improvements using parameterized messages. Combining SLF4J with JUnit 4 rule implementations can provide more efficient test class logging techniques.

Some examples will help to illustrate how SLF4J and JUnit 4 rule implementations offer improved test logging techniques. As mentioned, the initial solution used by developers is System.out.println() statements. The simple example code below shows this approach.

import org.junit.Test;

public class LoggingTest {

  @Test
  public void testA() {
    System.out.println("testA being run...");
  }

  @Test
  public void testB() {
    System.out.println("testB being run...");
  }
}

The obvious improvement here is to use logging statements rather than the System.out.println() statements. Using SLF4J enables us to do this simply whilst allowing the end user to plug in their desired logging framework at deployment time. Replacing the System.out.println() statements with SLF4J log statements directly results in the following code.

import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {"testA being run...");
  }

  @Test
  public void testB() {"testB being run...");
  }
}

Looking at the code, it feels that the hard-coded method name in the log statements would be better obtained using JUnit 4’s @Rule TestName class. This rule makes the test name available inside method blocks. Replacing the hard-coded string value with the TestName rule implementation results in the following updated code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public TestName name = new TestName();

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() { + " being run...");
  }

  @Test
  public void testB() { + " being run...");
  }
}

SLF4J offers an improvement to the log statements in the example above which provides faster logging. The use of parameterized messages enables SLF4J to evaluate whether or not to log the message at all: the message parameters are only resolved if the message will actually be logged. According to the SLF4J manual, this can provide an improvement of a factor of at least 30 in the case of a disabled logging statement.

Updating the code to use SLF4J parameterized messages results in the following code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public TestName name = new TestName();

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {"{} being run...", name.getMethodName());
  }

  @Test
  public void testB() {"{} being run...", name.getMethodName());
  }
}

Quite clearly the logging statements in this code don’t conform to the DRY principle.

Another JUnit 4 rule implementation enables us to correct this issue. Using TestWatchman, we can create an implementation that overrides starting(FrameworkMethod method) to provide the same functionality whilst maintaining the DRY principle. The TestWatchman rule also enables developers to override methods invoked when the test finishes, fails or succeeds.

Using the TestWatchman Rule results in the following code.

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingTest {

  @Rule public MethodRule watchman = new TestWatchman() {
    public void starting(FrameworkMethod method) {"{} being run...", method.getName());
    }
  };

  final Logger logger =
    LoggerFactory.getLogger(LoggingTest.class);

  @Test
  public void testA() {
  }

  @Test
  public void testB() {
  }
}

And there you have it. A nice test code logging technique using JUnit 4 rules taking advantage of SLF4J parameterized messages.

I would be interested to hear from anyone using this or similar techniques based on JUnit 4 rules and SLF4J.



How to run Remote Desktop Console by using command line?

If you want to run the Remote Desktop Console from a batch file, for example RDC over VPN, you can use the mstsc /v:servername /console command.


Creates connections to terminal servers or other remote computers, edits an existing Remote Desktop Connection (.rdp) configuration file, and migrates legacy connection files that were created with Client Connection Manager to new .rdp connection files.


mstsc.exe {ConnectionFile | /v:ServerName[:Port]} [/console] [/f] [/w:Width /h:Height]
mstsc.exe /edit "ConnectionFile"
mstsc.exe /migrate


ConnectionFile
Specifies the name of an .rdp file for the connection.

/v:ServerName[:Port]
Specifies the remote computer and, optionally, the port number to which you want to connect.

/console
Connects to the console session of the specified Windows Server 2003 family operating system.

/f
Starts Remote Desktop Connection in full-screen mode.

/w:Width /h:Height
Specifies the dimensions of the Remote Desktop screen.

/edit "ConnectionFile"
Opens the specified .rdp file for editing.

/migrate
Migrates legacy connection files that were created with Client Connection Manager to new .rdp connection files.

* You must be an administrator on the server to which you are connecting to create a remote console connection.
* default.rdp is stored for each user as a hidden file in My Documents. User created .rdp files are stored by default in My Documents but can be moved anywhere.

To connect to the console session of a server, type:
mstsc /v:ServerName /console

To open a file called filename.rdp for editing, type:
mstsc /edit filename.rdp


How to Write Triggers in Perforce? – Perforce Triggers Guide


1 Introduction
Perforce introduced the first server-side trigger in release 99.1 with the pre-submit trigger. This trigger satisfied a long-standing desire in the user community, but demand continued for more hooks. In release 2004.2, Perforce squarely hit the need with the addition of five new trigger types. Release 2005.1 adds yet one more trigger type to this list rounding out one of the categories of triggers to completeness. This paper discusses triggers, techniques for implementing them and purposes for using them. It presumes a general knowledge of scripting. The examples follow in several programming languages. They should be easy to follow with knowledge of general programming, and any more arcane constructs will be explained. The paper also presumes a reasonable knowledge of Perforce scripting alternatives, such as that presented in [Bowles2005]. Although this paper will address the scripting of triggers comprehensively, it will refer to other Perforce scripting contexts and to Perforce commands with an assumption of familiarity.

1.1 What is a trigger?
Triggers are programs that run on the server immediately in response to some well-defined event in Perforce. Therefore, the context for a trigger is running on the server, started by the trigger mechanism. Triggers are typically written in a scripting language such as Perl, Python or Ruby due to the flexibility and facilities these provide. However, triggers can be written in any programming language that can interface with Perforce, including UNIX shell (sh, ksh, csh and work-alikes) and compiled languages like C/C++.

1.2 Types of triggers
Triggers fall into two categories. Pre-submit triggers enable actions in response to the submission of changelists. Form triggers allow actions in response to various stages of the life cycle of a form, regardless of the form type. This section provides a brief overview of the trigger types in preparation for the more detailed discussion.
1.2.1 Pre-submit triggers
There are three types of pre-submit triggers corresponding to different points in the life cycle of a submission.
• “Submit” triggers execute after the changelist has been created but before the files have been transferred, allowing inspection of the changelist details but disallowing file inspection.
• “Content” triggers execute after file transfer but before commit, allowing for inspection of the files.
• “Commit” triggers execute after the commit, allowing inspection of the changelist and file contents, but disallowing canceling of the submission.
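As a concrete illustration of the "submit" type, a pre-submit trigger might reject changelists whose description was never filled in. The trigger name, script path and placeholder text below are all hypothetical, and the exact triggers-table syntax varies between Perforce releases, so treat this as a sketch rather than a drop-in script:

```shell
#!/bin/sh
# Hypothetical triggers-table entry (exact syntax varies by Perforce release):
#   checkdesc submit //depot/... "/usr/local/bin/ %changelist%"
#
# The description check is factored into a function so the logic can be
# exercised without a live Perforce server.
check_description() {
  case "$1" in
    *'<enter description here>'*)
      echo "Submit rejected: please enter a real changelist description." >&2
      return 1 ;;   # non-zero exit cancels the submit; the message is shown to the user
    *)
      return 0 ;;
  esac
}

# In a real trigger the description would come from the server, for example:
#   desc=$(p4 describe -s "$1"); check_description "$desc" || exit 1
# Here we just demonstrate the check on a sample string.
check_description "Fixed the nightly build script" && echo "submit allowed"
```

The key point is the exit status: a non-zero exit from a pre-submit trigger cancels the submission and relays the script's output to the user.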

1.2.2 Form triggers
Form triggers come in four types depending on the point in the form’s life cycle in which they are invoked.
• “Out” triggers execute when the form is generated and can modify the form before it is presented to the user.
• “In” triggers execute when the form is sent back to Perforce but before it is parsed, also allowing modification of the form on its way in.
• “Save” triggers execute after the form has been parsed but before it is saved, allowing reaction to the form but not modification.
• “Delete” triggers execute before a form is deleted, allowing failure of the deletion.
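For example, an "out" trigger on the job form could pre-fill a default field before the user sees it. The trigger name, field and default value below are hypothetical; the server hands the script the form in a temporary file (the %formfile% variable) that the script may edit in place:

```shell
#!/bin/sh
# Hypothetical triggers-table entry (exact syntax varies by release):
#   jobdefaults out job "/usr/local/bin/ %formfile%"
#
# Append a default Severity field to the job form if it is missing.
add_default_severity() {
  form=$1
  if ! grep -q '^Severity:' "$form"; then
    printf 'Severity:\tC\n' >> "$form"
  fi
}

# Demonstrate on a throwaway form file instead of a real %formfile%.
form=$(mktemp)
printf 'Job:\tnew\nStatus:\topen\n' > "$form"
add_default_severity "$form"
grep '^Severity:' "$form"   # shows the line that was appended
rm -f "$form"
```

Because "out" triggers run before the form is presented, edits made to the form file here are what the user sees when the form opens.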

1.3 Why use a trigger?
Knowing why to use a trigger is partially a matter of knowing what the competing alternatives are. The alternatives naturally come from other contexts, since triggers define a context of their own. This section details the salient operational characteristics of triggers and contrasts them with Perforce alternatives. The three primary alternatives to triggers are wrapper scripts (such as p4wrapper), review daemons, and journal tailers. A variation on the wrapper script would be a script available from the Tools menu in P4Win.

1.3.1 Synchronous execution
Triggers execute synchronously in response to their associated event. This provides an immediacy of response that is sometimes required or at least highly desirable. One option that provides synchronous execution could be an action invoked from a wrapper script such as p4wrapper. Another option would be to forego synchronous execution and rely on frequently running review daemons. Journal tailers would also perform asynchronously, although with very rapid and event-driven response.
1.3.2 Immediate user feedback on error
Triggers can provide messages back to the user, but only on error. Messages are not delivered on successful execution. A wrapper script can deliver messages to the user regardless of whether an error occurs or not. Review daemons and journal tailers can only provide feedback through indirect mechanisms.
1.3.3 Enforceability
Enforceability refers to the ability of an administrator to ensure that the script will run regardless of the client program used to initiate the operation. Because triggers are
installed on and executed by the server in response to server events, they will execute regardless of the client program. Wrapper scripts will only be invoked when the wrapper is used, whereas direct use of p4 or use of a different client program will circumvent the desired action. Review daemons and journal tailers are also enforceable due to their context on the server.

1.3.4 Modify a form
Form triggers can modify a form as it is delivered to the user or as it is sent back to Perforce. Wrapper scripts share this characteristic. Review daemons and journal tailers can not modify forms except to the extent that any user or administrator can after the operation has finished.
1.3.5 Customize any action
Form triggers have a limited ability to customize actions that involve forms, but they do not have the ability to react to any arbitrary command. Similarly, pre-submit triggers can only react to submits. Review daemons can only react to commands whose side effects can be reliably observed, something that is not readily available from the command line in many cases or from review mechanisms. Journal tailers have the ability to react to any action that affects database entries, which includes almost all commands.
1.3.6 Optimization for bulk processing
The review mechanism gives the ability to process a large number of actions in an orderly and efficient manner, as long as those changes impact a counter. This optimization is not readily, if at all, available to triggers, wrappers or journal tailers, due to their association with individual commands or journal entries.

1.3.7 Deterministic execution
Triggers and wrappers provide an exact and deterministic understanding of when they will execute relative to the command that initiates them. Review daemons are generally driven in a time-based manner and therefore do not execute deterministically relative to the Perforce command. Journal tailers are closer to deterministic than review daemons, but can conceivably execute prior to completion of a command.
1.3.8 Summary
The following table summarizes the characteristics of the scripting contexts that compete with triggers for Perforce scripting.

