PHP Fatal error: Composer detected issues in your platform: Your Composer dependencies require a PHP version “>= 7.2.5”. You are running 7.1.32.

If you are running PHP 7.1.32 in your Laravel project, this type of error can show up. If updating Composer still does not solve the problem, follow this trick:

Add this line to the "config" object of your composer.json file:

"platform-check": false

Then run php artisan config:cache

Finally, run composer dump-autoload in the terminal.
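For reference, this is roughly what the relevant part of composer.json looks like after the change; keep whatever other keys your project already has in the "config" object (the sketch below shows only the added setting):

{
    "config": {
        "platform-check": false
    }
}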


How to Resolve Windows Installer Problem

Installing a program means placing it on your computer so that it can be executed properly. Some software can simply be copied to the computer and run without doing anything further; it doesn't require any installation process. Many programs, however, come with an executable setup package that must be installed. Installation is the process of unpacking files, copying them to the desired locations, tailoring the software to suit your hardware, and giving the required information to the operating system.

Once a program is installed, the user can run it over and over again without reinstalling it before each use; it only needs to be installed again if it is uninstalled or stops executing correctly. However, you can sometimes encounter problems while installing a program. Here are some simple steps that will help you resolve Windows Installer problems:

Steps:

  • The first thing to do when resolving a Windows Installer problem is to identify it. When you are trying to install or uninstall something, you might get a warning message like:

“The windows installer cannot be accessed”

“Windows installer service cannot be started”

“Could not start the windows installer service on Local computer. Error 5:

access is denied.”

  • These error messages often appear when the installation of an MSI package has failed or when the Windows Installer service is disabled.
  • Method 1: first unregister Windows Installer, and then register it again. To do this, go to the 'Start' menu and click the 'Run' option. In the dialog box, type 'msiexec /unregister' and press the Enter key (see the command sketch after this list).
  • Go to the 'Start' menu again and click the 'Run' option. In the dialog box, type 'msiexec /regserver' and press the Enter key.
  • Method 2: upgrade Windows Installer to a newer version. Open Internet Explorer and go to the Microsoft website at http://msdn.microsoft.com/downloads. On the left side you will see a 'Setup and System Administration' option; click on 'Setup'.
  • Select 'Windows Installer', then choose the appropriate link for your operating system. Click 'Download' and install the newer version of Windows Installer.
  • Method 3: you might have to remove the failed product with the Windows Installer CleanUp utility, which is described at http://support.microsoft.com/default.aspx?scid=kb;en-us;290301
  • Method 4: if the Windows Installer service is disabled on your computer, go to the 'Start' menu, select the 'Run' option, type 'services.msc' and press Enter. Then double-click the Windows Installer entry and enable the service (see the sketch after this list).
  • Method 5: check the DCOM and System permissions as described at http://support.microsoft.com/?id=319624
  • Method 6: another thing you can do for resolving windows installer problem is
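For reference, here is a minimal command-line sketch of Methods 1 and 4, run from an elevated Command Prompt. The service name msiserver is the standard name of the Windows Installer service; the startup type shown is just one reasonable choice.

:: Method 1: re-register the Windows Installer service
msiexec /unregister
msiexec /regserver

:: Method 4: allow the Windows Installer service to start on demand, then start it
sc config msiserver start= demand
net start msiserver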

Unregister Issues in JMX Interface

scmuser created the topic: Unregister Issues in JMX Interface
Hi,

I am just learning Cruise Control. I accessed the http://localhost:8000/ link and clicked on un-register, and then this page disappeared.

Can you guide me on how to reset this back to its earlier state?

scmuser replied the topic: Re:Unregister Issues in JMX Interface
Hi

I got this solved by restarting the server…

I still don't know the root cause for this.


Windows installer issues. The windows installer could not be accesed when instal

msiexpert created the topic: Windows installer issues. The windows installer could not be accesed when instal
I feel I have tried everything when it comes to the Windows Installer. I have tried to re-install the application… doesn't work (it says there is not enough storage space). I have gone into Services, and when I double-click on Windows Installer it says "Configuration Manager: The specified device instance handle does not correspond to a present device." I have changed the settings from Manual to Automatic and back again plenty of times… doesn't fix the problem. I have tried to unregister the msiexec files and then re-register them… doesn't work. What now?! I have become extremely frustrated with this ongoing problem!

msiexpert replied the topic: Re: Windows installer issues. The windows installer could not be accesed when in

support.microsoft.com/default.aspx?scid=kb;en-us;290301

support.microsoft.com/kb/555175




Git Troubleshooting | Git Troubleshooting Techniques


export GIT_CURL_VERBOSE=1

git push -u origin --all --verbose

git config --list

or, for a single command:

GIT_CURL_VERBOSE=1 git push

or:

export GIT_CURL_VERBOSE=1

git push

git config --global http.postBuffer

These are useful for debugging long-running Git commands, or Git commands that appear to have hung for some reason.

Git has built-in functionality that lets us peek into what is running behind the scenes of a git command: just add GIT_TRACE=1 before ANY git command to get additional info. Other flags we can use are GIT_CURL_VERBOSE=1 and -v or --verbose. For example:

[server@user sp-server-branches]$ GIT_TRACE=1 git clone
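Putting these together, here is a rough debugging sketch for a push that seems to hang. The remote name and the buffer size are placeholders rather than values from the original post; 524288000 bytes (about 500 MB) is simply a commonly used example when pushing large repositories over HTTPS.

# trace a single push with full HTTP verbosity
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push -u origin --all --verbose

# show the current HTTP post buffer, then raise it for large pushes
git config --global http.postBuffer
git config --global http.postBuffer 524288000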

https://github.com/gitlabhq/gitlab-public-wiki/wiki/Trouble-Shooting-Guide

https://drupal.org/node/1065850

http://mattberther.com/2013/12/29/pushing-large-git-repos-with-ssh

http://ocaoimh.ie/2008/12/10/how-to-fix-ssh-timeout-problems/

http://unix.stackexchange.com/questions/3026/what-do-the-options-serveraliveinterval-and-clientaliveinterval-in-sshd-conf


Buildforge common Issues and Troubleshooting | Buildforge Troubleshooting Guide


Things to know about the Build Forge server before troubleshooting:

1. What is the full version of Build Forge being used (for example, Build Forge 7.1.2.2-1-0111)? This can be obtained in the console interface by hovering the mouse over the Build Forge icon in the upper left-hand corner.

2. What is the full version of the operating system for both the Management Console and the agent host (used for the selector)?

(for example, Windows XP Professional SP3 32-bit)
> AIX = oslevel
> HP-UX = uname -r
> Solaris = uname -r
> Red Hat = cat /etc/redhat-release
> SUSE = cat /etc/SuSE-release
> Windows = winver (in a command (cmd) window)

3. What is the full version of the database being used?

4. In what situation, and how and when, did the behavior first start to occur?

5. Refer to the Tomcat (Catalina) logs on the Build Forge server.
UNIX/Linux: $BF_HOME/server/tomcat/logs/log_from_date_of_error.log. You will only need the Tomcat Catalina log for today's date or the date you first encountered the error, e.g. catalina.2012-01-17.log

6. Refer to the Build Forge engine log.
UNIX/Linux: $BF_HOME/log

7. Review the database log.
UNIX/Linux: $BF_HOME/Platform/db.log
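A minimal sketch of how you might watch these three logs on UNIX/Linux, assuming $BF_HOME is set and the Catalina logs follow the catalina.YYYY-MM-DD.log naming shown above (adjust the paths for your installation):

# Tomcat (Catalina) log for today's date
tail -f $BF_HOME/server/tomcat/logs/catalina.$(date +%Y-%m-%d).log

# most recent Build Forge engine logs
ls -lt $BF_HOME/log | head

# database log
tail -f $BF_HOME/Platform/db.log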

Issue 1: API: Access denied.

Issue 2: The Build Forge console gets slow after login.

Issue 3: API: Access denied while accessing the Environments menu.

Issue 4: Error message in catalina.DATE.log: "A valid Engine Unique Identifier must be specified"

Issue 5: After copying one environment variable and duplicating it multiple times (the error occurred after the same variable was copied a seventh time), every user who logs in and clicks "Environments" in the left navigation gets an "API: Access denied" error on the page.

 


MySQL Basic Troubleshooting Guide | MySQL common Issues


PLEASE NOTE: I am currently reviewing this Article.

How to check the mysql file location:
> which mysql
> locate mysql

Check whether the mysqld process is started:
> service mysqld status
> "mysqld is stopped" – means mysqld is not running
> "mysqld: unrecognized service" – means mysqld is not registered as a service. It can be registered using chkconfig once an init script exists under /etc/init.d.
> ps -eaf | grep mysqld

To check whether port 3306 is bound to mysqld or to another program:
> lsof -i TCP:3306
> netstat -lp | grep 3306
> netstat -tap | grep mysql
> ps -aux | grep mysql
> netstat -a -t – shows only TCP ports

Note: if you cannot find mysqld listening on port 3306, then it is either not running or running on another port. To find out which, refer to my.cnf and the pid-file.
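A small sketch of that check; the config path /etc/my.cnf and the pid-file location below are example values, not taken from the article:

# which port, socket, and pid-file is mysqld configured to use?
grep -E '^(port|socket|pid-file|datadir)' /etc/my.cnf

# is the process named in the pid-file actually running?
ps -fp "$(cat /var/run/mysqld/mysqld.pid)"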

How to Stop mysqld?
> /etc/init.d/mysqld stop
> kill <pid>
> /sbin/service mysqld start/stop/restart

If you have problems starting the server, here are some things to try:

Check the error log to see why the server does not start.
The location of the error log file can be found in my.cnf, or my.ini on Windows (see below for more about the my.cnf file). The log file can also be specified in the mysqld service script under /etc/init.d/.

Make sure that the server knows where to find the data directory.
Make sure the my.cnf file sets "datadir" and that the data directory has the required ownership and permissions. The server must be able to access the data directory: the ownership and permissions of the data directory and its contents must be set such that the server can read and modify them.

Verify that the network interfaces the server wants to use are available. If the server starts but you cannot connect to it, you should make sure that you have an entry in /etc/hosts that
looks like this:

127.0.0.1 localhost

If mysqld is running, you can list all the variables that are set using:
> mysqladmin -h hostname -p variables

Issues 1:
Can’t start server: Bind on TCP/IP port: Address already in use
Can’t start server: Bind on unix socket…
Solution:
Use ps to determine whether you have another mysqld server running. If so, shut down the server before starting mysqld again.

Issues 2:
mysqld will not start
Can’t start server: Bind on TCP/IP port: Address already in use
Do you already have another mysqld server running on port: 3306 ?
Solution:
This may be because port 3306 is already in use, or because of disk space issues. Check the log file for details.

Recovering a crashed MySQL server when the system itself, or just the MySQL daemon, has corrupted table files

You'll see something like this when checking /var/log/syslog, as the MySQL daemon checks tables during its startup:

Apr 17 13:54:44 live1 mysqld[2613]: 090417 13:54:44 [ERROR]
/usr/sbin/mysqld: Table './database1/table1' is marked as
crashed and should be repaired

In this situation, Database and tables need to be repaired.

> mysql -u root -p
mysql> REPAIR TABLE database1.table1;

This works, but there is a better way: using OPTIMIZE in combination with REPAIR is suggested, and there is a command-line tool, mysqlcheck, that can run these repairs without an interactive session. Consider this call:
> mysqlcheck -u username -p -o --auto-repair -v --optimize database_name

Another advantage of "mysqlcheck" is that it can also be run against all databases in one go:
> mysqlcheck -u root -p --auto-repair --check --optimize --all-databases

Recreating databases and tables the right way
mysql> show create database database1;
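SHOW CREATE DATABASE only returns the database definition itself. If you also want the table definitions so everything can be recreated cleanly, one common approach (not from the original article) is to dump the schema without data using mysqldump; the database and table names below are just the examples used earlier:

# dump only the schema (no rows) so the structure can be recreated
mysqldump -u root -p --no-data database1 > database1_schema.sql

# per-table equivalent from inside the mysql client
mysql> SHOW CREATE TABLE database1.table1;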

How to find location of my.cnf (or my.ini on Windows)?

Default options are read from the following files in the given order:
/etc/my.cnf
/etc/mysql/my.cnf
/usr/etc/my.cnf
~/.my.cnf

Or, on Windows:
Default options are read from the following files in the given order:
C:\Windows\my.ini
C:\Windows\my.cnf
C:\my.ini
C:\my.cnf
C:\Program Files\MySQL\MySQL Server 5.5\my.ini
C:\Program Files\MySQL\MySQL Server 5.5\my.cnf

This command can also help you find the my.cnf file location on Linux…
> strace mysql ";" 2>&1 | grep cnf

Another option is to use the following commands…
> whereis my.cnf
> locate my.cnf
> find / -name my.cnf

my.cnf will contain settings such as the following…
datadir – The path to the MySQL data directory.
tmpdir
default-character-set
default-storage-engine
innodb_data_home_dir
log-error – The location of the error log file.
pid-file – The path name of the file in which the server should write its process ID.

MySQL Performance Troubleshooting
There are three main utilities I'll turn to in a situation like this:

top
First I'm going to use top to see if anything is hogging CPU on the machine. If there are non-mysql processes using a substantial percentage of the CPU cores, I'm going to want to have a look at what that is and see about limiting its use or moving it to a dedicated server. If I see mysqld using up a lot of CPU, I know it's working hard and will have to drill into what's happening inside of MySQL (maybe some poorly written queries). If nothing is apparently chewing up the CPU time, I know that the problem is likely elsewhere.

vmstat 5
I generally run this for at least two or three minutes to get a sense of what the CPU and memory use are like. I'm also watching to see how much time the CPU is stalled waiting for I/O requests. Doing this for several minutes will make the occasional spikes really stand out and also allow for more time to catch those cron jobs that fire up every few minutes.

iostat -x 5 | grep sdb
I'm going to run it with a short interval (5 or 10 seconds) and do so for several minutes. I'll likely filter the output so that I only see the output for the most active disk or array (the one where all of MySQL's data lives).

slow queries
To find out about slow queries, I'm going to hope that the slow query log is enabled and the server has a sane long_query_time. But even the default of 10 seconds is helpful in truly bad situations.
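If the slow query log is not enabled, a minimal my.cnf sketch for turning it on (the option names are the standard ones for MySQL 5.1 and later; the file path and the 2-second threshold are example values, not taken from this article):

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2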

MySQL’s error log
I'll also want to glance through MySQL's error log to make sure nothing bad-looking has started to appear. To find the error log file location, refer to the "log-error" setting in my.cnf.

Network issues
telnet your_host_name tcp_ip_port_number.
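For example, to check that the MySQL port is reachable from a client machine (the hostname is a placeholder; 3306 is the default MySQL port discussed earlier):

> telnet db.example.com 3306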

mysqladmin :
mysqladmin is a client for performing administrative operations. You can use it to check the server’s configuration and current status, to create and drop databases, and more.

mysqladmin -h hostname -p <command_as_follows>

--help, -? – Display a help message and exit.
refresh – Flush all tables and close and open log files.
variables – Display the server system variables and their values.
flush-logs – Flush all logs.
flush-privileges – Reload the grant tables (same as reload).
flush-status – Clear status variables.
password new-password – Set a new password. This changes the password to new-password for the account that you use with mysqladmin for connecting to the server.
ping – Check whether the server is available
processlist – Show a list of active server threads.
shutdown – Stop the server.
status – Display a short server status message, which includes fields such as:
  Uptime – the number of seconds the MySQL server has been running
  Slow queries – the number of queries that have taken more than long_query_time seconds
  Open tables – the number of tables that are currently open
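A few example invocations that tie this together; the hostname and the root account are placeholders for your own values:

# is the server reachable at all?
mysqladmin -h hostname -u root -p ping

# short status summary (uptime, slow queries, open tables, ...)
mysqladmin -h hostname -u root -p status

# what is the server doing right now?
mysqladmin -h hostname -u root -p processlist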

Reference
http://dev.mysql.com/doc/refman/5.5/en/mysqladmin.html
http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
http://www.linux-mag.com/id/7473/
http://dev.mysql.com/doc/refman/5.1/en/starting-server.html


Perforce Slow Sync issues | Perforce Slow Sync Troubleshooting Guide


Perforce Slow Sync issues

Network latency
Sync Performance issues are not obvious with locally connected hosts, as the network latency is low. However, as latency increases, performance worsens.

To correct the issue, follow the steps in this article on the client machines exhibiting the issue:
http://kb.perforce.com/article/1191/slow-sync-on-remote-windows-clients

Network problems:
How do I determine if slow Perforce response time is caused by network problems?

A network issue can be suspected if Perforce commands run quickly on the local machine but slowly across the network. You can also compare the lapse time against the usage time, if that information is available in the Perforce logs.

Any of the following can cause slow responses:

  1. Misconfigured domain name server (DNS)
  2. Misconfigured Windows name server (WINS) or Windows domain
  3. Slow network response

1. p4 info

A good initial test is to run the p4 info command. If this command does not respond almost immediately, there is a network-related problem.
The p4 info command uses the P4PORT setting to contact the Perforce Server. If the P4PORT setting uses the hostname of the server machine, a DNS lookup is required to fetch the server IP address. If the DNS server is failing or the network is slow, the lookup process takes time. You can use the IP address directly to avoid the DNS lookup.

2. Hostname vs. IP Address
On a client machine, try using the Perforce Server’s IP address in the P4PORT setting. This avoids the DNS lookup used to convert the hostname to an IP address. Here is a P4PORT example which uses the server hostname:

P4PORT=hostname:1666
Using the IP address directly, the P4PORT setting then looks like this:

P4PORT=1.2.3.4:1666

If you do not have the Perforce Server machine’s IP address handy, you can use the ping command to find it. Here is an example of how you would run the ping command:

3. ping hostname

The output of the ping command lists the IP address for the hostname.

If the p4 info command responds immediately when you use the IP address in the P4PORT setting, you have a misconfigured DNS.
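A quick way to compare the two cases from a client shell; the hostname and IP address below are the same placeholders used in the examples above, and the time prefix is ordinary shell timing rather than a Perforce feature:

# DNS-based connection
time p4 -p hostname:1666 info

# direct IP connection, bypassing the DNS lookup
time p4 -p 1.2.3.4:1666 info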

“p4 info” vs. P4Win

The p4 info command is processed by the Perforce Server. As the Perforce Server compiles the output information for the command, it does a reverse DNS lookup on the client’s IP address. A forward DNS lookup might be fast, while a reverse DNS lookup is slow. One way to determine if a reverse DNS lookup is slow is by using P4Win, the Perforce Visual Client. The “Show Connection Info” operation in P4Win does not perform a reverse DNS lookup.

You can compare the response of P4Win’s “Show Connection Info” with the response from the command line p4 info. If a “Show Connection Info” operation is fast while a p4 info is slow, there is a reverse DNS lookup problem on the Perforce Server machine.

4. The “hosts” file

To work around DNS problems, add hostname IP address entries to the hosts file. This task is typically performed by a Systems Administrator; be sure to follow your company’s standard procedures.

Rather than using an IP address for the P4PORT setting, you can add a hostname IP address entry to your hosts file. The hosts file can be tricky to find. Here are a few places it can be:

Windows:

C:\Winnt\System32\Drivers\etc\hosts (Windows 2000/NT)
C:\Windows\System32\drivers\etc\hosts (Windows XP and later)

Unix:

/etc/hosts

The following is an example entry for the hosts file:

1.2.3.4 hostname

If you have determined there is a reverse DNS problem, you need to add an entry for your client hostname in the machine’s hosts file where the Perforce Server resides.

5. Wildcards on Windows

In some cases, p4 commands using unquoted file patterns with a combination of depot syntax or client syntax and wildcards can cause delays.

p4 files //depot/*

p4 files //client/*

The delay occurs because the wildcard expansion routines on Windows mistake the depot name or client name for a network system name. You can prevent the delay by putting double quotes around the file pattern:

p4 files “//depot/*”

Network Topology

Most networks use 100 Mbit technology. The maximum theoretical transfer rate of this type of network is 10 Mbytes per second. In most cases slower transfer rates are experienced. If a network is saturated, transfer rates can drop off significantly, maybe as low as 4 Mbytes per second. In cases of network saturation use of network routers or switches can help.

6. Client on a Network Filesystem

It is possible that the p4 executable itself is on a networked file system that is performing very poorly. To check for slow access to the executable, try running:

p4 -V

The p4 -V command simply prints out the version information and does not attempt to access the network. If you get a slow response, network access to the p4 executable itself may be the problem. Try copying or downloading the p4 executable onto a local filesystem.

7. Server on a Network Filesystem

Using a network filesystem for the Perforce Server must be thought through very carefully. Just as in the scenario above with the Perforce client stored on a network filesystem, data to and from the server application must go through the network twice. The Perforce Server also uses file locking on vital data files. Not all network filesystems have efficient locking implementations and some are buggy.

If the network is saturated and the transfer rate is down to about 4 Mbytes per second, the Perforce Server cannot satisfy client requests. Each client request uses part of the network, and the Perforce Server must share that resource in order to access the network filesystem. A cheap hard drive these days will provide a 20 Mbyte per second transfer rate. A good SCSI hard drive can transfer as much as 160 Mbytes per second. If possible, try to avoid a network-filesystem-related server bottleneck.

Try experimenting with "p4 ping" by progressively increasing the message count and the receive and transmit lengths (one at a time). If you hit the same wall repeatedly, this could well indicate that throttling is taking place somewhere.

8. p4 -z tag info

9. Reference.

http://kb.perforce.com/article/40/isolating-network-problems
http://kb.perforce.com/article/1191/slow-sync-on-remote-windows-clients
http://kb.perforce.com/article/1462/slow-sync-performance-on-windows


Perforce Network Troubleshooting Guide | How to Resolve Perforce Network Issues?


1. netstat -a
Check to make sure that the server is running. netstat -a gives a list of all processes listening on network ports. Look for lines that contain “LISTEN” and “1666” (or whatever port you have Perforce running on.)  If you do not see such a line, the server is not running.

2. p4 -p 127.0.0.1:1666 info
Verify that the server accepts local connections using the localhost address from the server machine.
If you cannot connect, check and make sure P4PORT is set to "1666" for the server; this ensures the server is listening on all interfaces. Setting P4PORT to 'localhost:1666' restricts it to connections from the local machine only.

Note:
Make sure that it is set properly with ‘p4 set -S Perforce P4PORT’ and if it’s not, set it:
p4 set -S Perforce P4PORT=1666

On Linux/Mac/Unix, instead of using ‘p4 set’, you can either set the environment variable $P4PORT or use the ‘-p’ flag to p4d:
p4d -r $P4ROOT -p 1666 [other flags]

3. ping server
Verify network connectivity by pinging the server from the client. If you cannot ping the server, then either ICMP is being blocked or there is a network issue outside of Perforce.

4. telnet <server> <port>
Verify that the server port is reachable with "telnet <server> <port>". This can give you a descriptive error or confirm a connection. Note that on Windows servers the telnet utility might not be available by default; if it is not, you can install it from the Programs and Features control panel item: in the sidebar select 'Turn Windows Features On or Off', check Telnet Client in the list, and click OK.

5. Check port filters/firewalls settings
If you still cannot connect, then verify the TCP/IP properties settings for any port filters/firewalls. If there is a firewall, make sure that incoming connections to port 1666 are permitted, and that all existing outbound connections are permitted (the latter is usually standard). On Windows machines, go to Control Panel -> Windows Firewall, click on Advanced Settings, and then click on Inbound Rules on the sidebar. Make sure p4d is enabled.

6. DNS is resolving the IP address correctly
Check to make sure that DNS is resolving the IP address correctly. Pinging by DNS name is quick verification. “nslookup <hostname>” or “dig <hostname>” can also work.

7. using the IP address
Run the Perforce command using the IP address instead of the DNS name. If the IP address works, suspect DNS resolution problems or an incorrectly spelled hostname. If the IP address does not work, suspect a problem with host table entries, routing, or other network problems. The commands "route" and "arp -a" can be helpful in this regard.

8. If problem is intermittent, suspect hardware, interface configuration, or congestion problems. Tools such as fping and wireshark are useful for uncovering these sorts of errors.

9. tracert <perforce_server>
Check for inordinate delays with the traceroute command "tracert <perforce_server>". On Linux, it is called "traceroute".
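The checks above can be strung together into a quick sketch from a shell. The server name perforce.example.com and port 1666 are placeholders for your own values, and the first command is meant to be run on the server machine itself:

# is anything listening on the Perforce port? (run on the server)
netstat -a | grep 1666

# can the server talk to itself, and can the client reach it?
p4 -p 127.0.0.1:1666 info
ping perforce.example.com
telnet perforce.example.com 1666

# name resolution and routing
nslookup perforce.example.com
traceroute perforce.example.com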

Reference…
http://kb.perforce.com/article/905


Deployment Foundation Issues


Deployment Foundation Issues

Establish Key Roles/Charter for Deployment

The very first order of business is to firmly establish “who’s on first” for getting deployment done. Senior management is crucial at this point for making sure all their direct reports and managers are on board with this
and that it comes from the top. I mention this because at one place I worked, we immediately got into interdepartment squabbling due to a lack of senior management support and direction. If you hear a manager
say things like “do what you want — but don’t touch my area,” you will have deployment problems. I strongly recommend the formation of a process group as the focal point for all matters related to process and process deployment. This group has to have both the authorization and responsibility for process. If you have a distributed set of “process owners,” consolidate that responsibility and authority to this new group. My requirements for membership in this process group are:

Six to eight people. Larger process groups tend to be less efficient and more cumbersome. A smaller group tends to be ineffective. It is not necessary to have representatives from all corners of your organization. It is important that these domain experts get called in as necessary for process development and inspection. One company had a 15-person process group established by a non–process-oriented vice president. It was a disaster to get a
repeatable quorum present for any meeting. We spent subsequent meetings repeating stuff from earlier meetings to accommodate a different set of participants at every meeting.

Process-group commitments. My most successful process group was when I insisted that members commit 5 percent of their workweek to process-group meetings. Group members and their managers had to sign the commitment. The 5 percent figure is doable — even for busy people. Two one-hour meetings per week reflect that percentage. I also had fixed time meetings both by time and day of week. It became automatic to show up. To make this really work, I was the process-group lead and I dedicated 100 percent to this effort. I had clerical support services available to me. The most effective process-group meetings are concentrated sessions
with a time-stamped agenda and where my support staff and I do all extracurricular activities. You want to restrict extra time (beyond actual process-group meeting time) needed by your key process participants because they tend to be super busy.

Showing up on time. We could not tolerate people wandering in five or ten minutes late. We started promptly on the hour and stopped promptly on the hour. At one company, I removed a person for being late because it held everyone up. Promptness became so important at one commercial company that other process- group members would be “all over” tardy people. The tardiness stopped quickly when peers got involved in any discipline.

People who are process oriented. Do not have people in this group who don’t fit this requirement! At one company, a vice president insisted on naming people to the group (which became double the size I had wanted) who were almost completely ignorant about process. We spent almost all our precious process-group time just getting these people to understand the most fundamental aspects of process. It was painful. The VP wondered why progress was slow. Duh!

People who are opinionated — i.e., not afraid to speak up on issues. You cannot afford to have people just show up and suck air out of the room and not participate. The best processes I’ve developed came from sessions where it was not clear who would walk out alive after spirited process discussions.

People that others look up to. They may be leads or workers. Every organization has these types of people and they may not be in the management ranks. The reason for this requirement is to form an initial set of process champions right out of the box. These initial process champions will develop more champions.

People who are willing to have an enterprise perspective versus an organizational perspective. This could be a huge problem if process- group discussions degenerate into preservation of turf — no matter what. At one place, I actually went to a paint store, bought disposable painting hats, placed a big “E” for enterprise on the hats, and made process-group members wear the hats at our meetings to reinforce that enterprise focus. It got a few laughs and some grumbles but it worked.

People who are not “who” oriented. A process group avoids the “who” question and concentrates on the “whats.” Once the “what you have to do” is addressed, the “who” looks after itself. When process-group meetings degenerated into discussing “who does this” and “who does that,” I routinely stopped the meeting and reminded everyone that when you have a hole in the bottom of the boat, this is not the time to discuss whose hole it is! I got laughs but my point was taken.

This is your key group for process development and deployment. It's obvious, but if you have this marvelous group put together without regard to an overall process architectural goal, you will fail. This is where this software process model will help you enormously. Ideally, the process-group lead has an in-depth knowledge of the targeted process architecture with an initial goal to get the process group up to speed on this aspect first — before any company processes are tackled. If you are under pressure to "just get on with it" (without getting all process members up on the target process architecture), you will fail. You will end up flailing around for a large amount of time. You will also end up with a hodgepodge of process elements and no encompassing architecture. You want to end up with a hierarchy of goals supported by tasks that are measurable for earned value and progress reporting by the process group itself. Essentially, you want to create a balanced scorecard for process progress. This makes your process group accountable for progress just like any other project team.
For deployment success, I will repeat an important division of labor within the process group itself. You absolutely need to develop advocates for the process framework architecture itself and make sure the integrity
of the process model is maintained. This book will be invaluable for that aspect. These people are very different from most process-group members, who should be domain experts. The process framework advocates are
the folks that put the “meat on the bone” for process and they will make sure that the process parts all fit within that framework architecture, whereas the domain folks make sure to develop process elements that are useful and make sense.

I make this point because uneducated management personnel may pressure you to “just get on with it” without considering the importance of making sure that all process elements fit within a framework architecture.
The worst thing you can do is crank out process into an ever larger pile of stuff that increasingly gets more and more useless for the organization. The main litmus test for process is that it is useful. I have run into
managers who seem to think that bigger piles mean success. In reality, you may have just the opposite result. Resist those who are pushing you in that direction for success. The most successful process group I led was when I was not only the lead but also the process architect and had management backing to do what was needed. I mention management backing because at another place, I had the exact same situation but had a boss who was so insecure that all my suggestions and recommendations were either ignored or rejected because they didn’t come from him! Anything from me was dead on arrival. If you’re ever in that position, run, don’t walk! You cannot succeed. There are people like that out there and (sadly) some are in senior management positions. I simply didn’t want to manipulate him to have him believe that all ideas were his ideas. That’s what it would take
to deal with this kind of person.

Ensure an Inspection Procedure Is in Place

When actually doing process deployment for the software process model, there is one how-to procedure that absolutely needs to be addressed early on: the inspection procedure. This particular procedure is fundamental to
all the activities within this software process model as a quality gate. If you have a lousy how-to procedure here, you will have an awful time in getting people to buy into this model. Conversely, a good how-to will
take off like wildfire and become engrained in an organization real fast. The software process model wants quality built in the “what you have to do” world by placing the quality responsibility on the producer’s back.
The inspection procedure is critical to this end goal. I worked at one place that had a “review” procedure in place. It was hardly used, did not work well, and the management protected it with
their lives. I had the gall to suggest a better way of doing things. I had to present this new way at three different hearings to this management group, finally receiving a disposition of “rejected.” They could not handle
the fact that this software process model allows for better mousetraps. Both methods could coexist in this model. I knew that once the better way was an option, the bad way would drop off for usage very naturally.
These managers had a personal and vested interest in preserving the status

quo — regardless of usefulness. They had invested time in the existing process element. They wanted no interlopers on their possessive world. This company was very closed in their thinking. Consequently, we had
no effective inspection procedure at this company and had a huge management barrier to ever getting a better way proposed or deployed. This same company has the same ineffectual review procedure in place
today that is really bad and is barely used. Go figure! In another job, I had the privilege of working for a section of a very large company and had incredible support from the head person. In that
environment, I was able to provide this part of the company with a slick, efficient, Web-based inspection procedure that was up to ten times faster than the existing inspection procedure. My new inspection procedure also
produced higher-quality inspections and had built-in defect prevention to boot. What happened was incredible. The word spread like wildfire within my own group about how great this procedure was. That worker enthusiasm spilled over to other organizational elements that clamored to get onboard with our solution. I was deluged with training requests and guest appearances to various “all-hands” meetings regarding this way of doing
things. I didn’t have to do a thing to sell this. It sold itself. I knew that the software process model approach encourages better ways of doing things and encourages variances in scale or location quite naturally.

Why is the inspection procedure so critical to this software process model?

• Every activity at the "what you need to do" level has built-in inspections across the board (i.e., the inspection procedure is a how-to elaboration on all the "Inspect" verbs in all activities).
• A bad inspection procedure can have a huge detrimental effect on all activities' elapsed completion times. Conversely, an efficient inspection procedure can vastly improve activity execution times across the board.
• A good inspection procedure increases work product quality and reduces rework. Rework is expensive and should be avoided at all costs.
• A good inspection procedure gives you the basis for defect prevention — in addition to defect detection. With the software process model, you now have the ability to ask, "Where should this defect have been found?" This provides the mechanism to improve any earlier inspection checklist associated with any earlier work product. With this inspection procedure you have a built-in process-improvement mechanism in this software process model.
• Finally, an efficient inspection procedure will be used and will become part of the company culture. A bad one will not be used.

Get at Pain Issues

To be successful with process deployment, you really want to keep coming back to pain issues for any organization. The big question is, how do you do that? And how do you do it so that the data is believable? This
is independent of the type of process model you’re using. You will achieve higher levels of buy-in from all levels of the company if the perception is that you’re solving real-world problems. If you separate
process initiatives from “pain” issues, you will get a lot of cold shoulders about this process stuff. An absolute killer is to tie process initiatives to a maturity model (like CMMI) in a vacuum. As I mentioned before, a
particular model or standard can be viewed as the flavor of the month. Some people may view all this with an “if I keep a low profile, this too shall pass” attitude. There’s nothing like solving real problems — especially
if people can reduce their 60-hour weeks to something more reasonable. I learned one big lesson when I got married — don’t discount the power of a spouse! As Dr. Phil has said repeatedly, “If Mom’s not happy, no one
is happy.” For most employees, you really have a shadow employee to deal with as well — the employee’s spouse. If the employee can get home earlier, play with the kids more, do family things more, etc., how
do you think that family unit is going to support you? Do you think you’ll get early support for your next process initiative? The people part of process improvement can be enormous as a huge positive factor or a
huge negative factor. The process group needs to come to grips with this aspect of deploying new processes in an organization. It is not enough to have a marvelous process framework architecture into which all the
process parts fit nicely. Personal interviews have mixed results for actually getting at pain issues. Can you be trusted as an interviewer? Will the person being interviewed be forthright or will he or she give you politically correct data? Will there be retribution if he or she dares to be totally honest? For these reasons, I would not get process problem data this way. Two companies where I worked tried the survey route. In my opinion,
surveys are best suited for getting simple check-off answers to specific questions. They are not suitable for open-ended responses. I still laugh at a British sitcom called “Yes, Prime Minister,” where you can organize
sets of questions and get a totally opposing poll result based on the question set — even by surveying the same people. My point here is that polls and surveys can be manipulated. Busy people tend to kick and
scream about surveys and certainly want to get them off their plates as fast as possible. This means that open-ended surveys don’t end up with a lot of useful data. For these reasons, surveys are not the way to go.
As an adjunct for getting at pain issues, always leave the door open for having process practitioners critique or suggest things directly or via

An Implementation Technique for Getting at Pain Issues

I have used two of the 7 M tools (modified somewhat) very successfully to get at both enterprise process pain issues and project pain issues (as a project postmortem). These two techniques have fancy names:

• Infinity brainstorming
• Interrelational digraphs

I don’t use these terms when I conduct these techniques — I just call them “focus groups,” “action groups,” or “postmortem.” Using fancy terms will turn people off. Don’t do it. A focus group is fast (it usually takes
less than two hours) and is totally anonymous (no retribution). This particular technique levels the playing field for quiet, introverted people versus loud, dominant people. That quiet, shy person may be the very
person with a lot to express anonymously. The most successful focus group in my experience was done with about 35 people in a single session of about an hour and a half. At this point, you're probably thinking it's impossible to have a successful session with 35 people. Conventional wisdom says the success of any meeting is inversely proportional to the number of attendees: the more people, the lower the success; the fewer people, the higher the success. This technique is just the opposite. You need at least 12 people to be successful. A small group simply won't work for this technique.

Here are the supplies needed to conduct these sessions:

• Large Post-it notes — enough for about 20 Post-its minimum per participant.
• Butcher paper or flip-chart paper — these are taped to three walls of the conference room. Four or five charts are taped to one wall. Five to six charts are taped to the opposite wall. One chart is taped on a third wall (for infinity brainstorming rules). One chart will be used to capture the major impact analysis after we collect the data from the infinity brainstorming part of the session. The size of the room will affect how many walls are actually used. No matter what, you need two walls for charts.
• Masking tape for the large paper sheets above.
• Fine-point felt pens — enough for participants and facilitator.
You need a large conference room that will hold all the participants and has wall space onto which you can tape large paper charts on three walls. Reserve this room for about two and a half to three hours to allow time for the facilitator to set up, for the actual session, and for wrapping up. The participants show up about half an hour after the room's reserved start time. At that point, all supplies should be out and the paper should be up on the walls. This is what you need to do ahead of time:

• Write down the session rules on a single chart. The rules are:
– One finding per Post-it
– You can write as many Post-its as you want within the allotted time
– Use only the supplied fine-point felt pen for writing
– No handwriting — print your finding
– No names (i.e., anonymous)
– Don't get personal — it's process related
– Be businesslike (not crude) in your remarks
– Make the finding clear as to your intent: can another person understand your point?
– Be quiet when writing findings

• Take a few minutes to explain what you will be doing to the assembled group. Make sure the group knows about your expectations and desired results. I have even put this in written form and sent it to the group ahead of time to make sure that everyone is onboard with this technique. This sets the foundation. (5 minutes maximum)

• Announce that participants are to write one finding per Post-it note on as many Post-it notes as they want — within a ten-minute time frame. This is a totally quiet part of the technique. After writing, participants take their individual Post-its and stick them onto one wall's paper charts. Random placement is in order. This part actually creates all the pain issues as experienced by the participants in a nonretributional way because no names are used. (10 minutes maximum)

• Explain that the findings should be placed into "like" groupings by placing Post-its from one wall into Post-it groupings on another wall. Like things should be clustered together; some adjustments may need to be made later. Also point out that there is a predetermined category called "orphans." (When conducting a project postmortem, I add a "good" category for the things we did right on a project.) Forget trying to establish any category names. (About 1 minute)

• Have everyone stand up, grab a pile of Post-its from one wall, and place them on another wall as Post-it clusters. Remind them that once a finding is placed, it can't be removed. Some talk among people can happen at this point. If you do this correctly, you will try to limit the category clusters to about 10–12 groups at a maximum. Have orphaned Post-its be placed under "orphans." (About 10–12 minutes)

• Identify a "reader" from the group. This individual will read the Post-its to the entire group and possibly rearrange some Post-its. (About 1-2 minutes)

• Have the reader stand up and read each Post-it finding in each cluster out loud. This accomplishes the following:
– Everyone gets to hear all the findings.
– Everyone gets a chance to persuade the reader to remove a Post-it if it is not in a "like" group.
– Finally, the group establishes a mailbox name for each cluster of Post-its. Keep the name short if possible. (For project postmortems, I found that using the names from one project as predetermined names for subsequent postmortems was helpful for metrics data. However, one group disagreed with this and felt it was stifling to have a set of mostly predetermined names, especially when they disagreed with an earlier group over those names.)

• The reader repeats this for all Post-it clusters until all cluster groups have category names. During this time frame, some Post-it notes may be moved from one group to another. Finally, an attempt is made to place any and all orphaned Post-it notes into a named category. If not, they stay as orphans. This part takes the findings and attempts to categorize them for the interrelational digraph part of this technique. (15–20 minutes)

• The moderator takes a large blank matrix and writes all the category names down the left side of the matrix and then writes the same set across the top of the matrix. The moderator shades out where each category intersects with itself. You should end up with a diagonal line of shaded boxes from the top left down to the bottom right in that matrix. This is the foundation for the interrelationship digraph. We want to end up with some idea of what we need to work on first, second, third, etc., to get the biggest bang for the buck in process. (About 2 minutes)

• The moderator reads each category name down the left side of the matrix, and asks for each, "For this category, what are the other categories that have a major impact on it?" The group participates in identifying other categories that have that major impact. The moderator simply places an "X" across the row for that targeted category. This gets repeated for each category name down the left until done. (10 minutes maximum)

• The moderator tallies up the number of "X" marks per column and writes the totals at the bottom of each column. This provides a good idea of what categories should be attacked first that have the most impact on other things. (About 2 minutes)

• Thank the group for their time and dismiss them.

Is this a perfect technique? No. Is it fast? Yes. Does it get at process pain issues? You bet. By spending about one and a half to two hours on this, you will extract pain issues from everybody. There is no retribution
because there are no names involved. The quiet person can write stuff down anonymously just like the extroverted person can. The inputs come from the very people seeing and suffering from those pain issues.
What I have done after the session is to record all the findings by category into an Excel spreadsheet. This is a great application for counting things and coming up with percentages. This completed spreadsheet gets
sent back to all the participants immediately. I have cautioned this group to keep this data under wraps because it is confidential. The next step is to convene a senior management meeting to go over
the findings and categories. The senior staff needs an understanding of what went on and that this technique gathers data rapidly. As a moderator, take the top three categories in particular and concentrate on those for this senior management group. This is done to:

• Acquaint the senior management on pain issues "from the trenches" and in a written form (not sanitized)
• Identify the top three categories that, if worked, should give the biggest bang for the buck in improving or removing pain issues
• Have this top-level management group develop an initial plan to attack the top three categories (or a subset of them)

Finally, I arrange for a feedback meeting with all the participants, so that a member of senior staff:

• Tells participants that management has heard their pain issues
• Informs participants on the plan to attack pain issues

This feedback meeting can be powerful to all involved. It closes the loop with participants and makes them feel like they have not wasted their time. It involves senior management directly with unsanitized pain issues. They can't say they didn't know about this or that. There's no place to hide. They have to do something about it. It does cause action. When any improvements are made, you will keep going back to these pain issues. You don't tell the rank and file that you've now satisfied the first goal of some part of the CMMI! They will not relate to that at all. Tell them that these processes directly address the pain issues that were established. When regular folks get to see less pain, you will rapidly develop more and more champions to your cause. If upper management sees smoother operations, better quality, smaller time-to-market costs, better repeatability, etc., which all contribute to a healthier bottom line, you will get more champions at that level. You can do this periodically to see how you're doing. You can do this
as part of a preappraisal drill for process maturity. You can do this as a preaudit drill. The periodic approach will give you some powerful metrics related to pain issues. There’s nothing like solid numbers to show your
workforce that you are serious about reducing workforce pain.

Develop a Top-Level Life-Cycle Framework

This may be obvious but you really need to provide that top-level lifecycle framework into which to fit all the process pieces being developed. Without that top-level picture, there is no cohesive way of creating process
elements that “fit” into anything. One vice president I worked for insisted on forming various Process Action Teams (PATs) to get some deployment items done without this in place. I was even ordered to get these groups
going despite my strong objections. The results of this VP’s order were absolute chaos and a huge waste of time. I sure hope none of you will deal with some of the characters I’ve had to endure for process development
and improvement! People like that are out there. Some of them even get promoted! Hopefully, the top-level life cycle has been developed before insertion takes place. You can do a subset top-level life cycle if your initial
deployment efforts only deal with that part of the overall life cycle. For example, if you are attacking proposal-related processes, you can get away with just developing the proposal part of your life cycle. The bottom

line is that you absolutely need a framework into which to fit any process elements, so that you develop once and don’t need rework. With that top-level life cycle laid out with PADs per life-cycle phase,
you now have the ability to tie your pain issues to activities and to associative procedures. You also have the ability to tie event-driven procedures to any and all life-cycle phases.

Reference: Defining and Deploying Software Processes
