Deployment Foundation Issues


Establish Key Roles/Charter for Deployment

The very first order of business is to firmly establish “who’s on first” for getting deployment done. Senior management involvement is crucial at this point: it ensures that all direct reports and managers are on board and that the directive comes from the top. I mention this because at one place I worked, we immediately got into interdepartmental squabbling due to a lack of senior management support and direction. If you hear a manager say things like “do what you want, but don’t touch my area,” you will have deployment problems. I strongly recommend forming a process group as the focal point for all matters related to process and process deployment. This group must have both the authority and the responsibility for process. If you have a distributed set of “process owners,” consolidate that responsibility and authority into this new group. My requirements for membership in this process group are:

Six to eight people. Larger process groups tend to be inefficient and cumbersome; a smaller group tends to be ineffective. It is not necessary to have representatives from all corners of your organization, but it is important that domain experts get called in as needed for process development and inspection. One company had a 15-person process group established by a non–process-oriented vice president. It was a disaster: we could never get a repeatable quorum present for any meeting, and we spent subsequent meetings repeating material from earlier meetings to accommodate a different set of participants every time.

Process-group commitments. My most successful process group was one where I insisted that members commit 5 percent of their workweek to process-group meetings. Group members and their managers had to sign the commitment. The 5 percent figure is doable, even for busy people: two one-hour meetings per week reflect that percentage. I also fixed the meetings at the same time and day each week, so showing up became automatic. To make this really work, I was the process-group lead and I dedicated 100 percent of my time to the effort, with clerical support services available to me. The most effective process-group meetings are concentrated sessions with a timed agenda, where my support staff and I handled all extracurricular activities. You want to minimize the extra time (beyond actual meeting time) required of your key process participants, because they tend to be super busy.

Showing up on time. We could not tolerate people wandering in five or ten minutes late. We started promptly on the hour and stopped promptly on the hour. At one company, I removed a person for being late because it held everyone up. Promptness became so important at one commercial company that other process-group members would be “all over” tardy people. The tardiness stopped quickly once peers got involved in any discipline.

People who are process oriented. Do not have people in this group who don’t fit this requirement! At one company, a vice president insisted on naming people to the group (which became double the size I had wanted) who were almost completely ignorant about process. We spent almost all our precious process-group time just getting these people to understand the most fundamental aspects of process. It was painful. The VP wondered why progress was slow. Duh!

People who are opinionated — i.e., not afraid to speak up on issues. You cannot afford to have people just show up and suck air out of the room and not participate. The best processes I’ve developed came from sessions where it was not clear who would walk out alive after spirited process discussions.

People that others look up to. They may be leads or workers. Every organization has these types of people and they may not be in the management ranks. The reason for this requirement is to form an initial set of process champions right out of the box. These initial process champions will develop more champions.

People who are willing to have an enterprise perspective versus an organizational perspective. This could be a huge problem if process- group discussions degenerate into preservation of turf — no matter what. At one place, I actually went to a paint store, bought disposable painting hats, placed a big “E” for enterprise on the hats, and made process-group members wear the hats at our meetings to reinforce that enterprise focus. It got a few laughs and some grumbles but it worked.

People who are not “who” oriented. A process group avoids the “who” question and concentrates on the “whats.” Once the “what you have to do” is addressed, the “who” looks after itself. When process-group meetings degenerated into discussing “who does this” and “who does that,” I routinely stopped the meeting and reminded everyone that when you have a hole in the bottom of the boat, this is not the time to discuss whose hole it is! I got laughs but my point was taken.

This is your key group for process development and deployment. It may be obvious, but if you put this marvelous group together without regard to an overall process architectural goal, you will fail. This is where this software process model will help you enormously. Ideally, the process-group lead has an in-depth knowledge of the targeted process architecture, with an initial goal of getting the process group up to speed on that architecture first, before any company processes are tackled. If you are under pressure to “just get on with it” (without getting all process members up to speed on the target process architecture), you will fail. You will end up flailing around for a large amount of time, and you will end up with a hodgepodge of process elements and no encompassing architecture. You want to end up with a hierarchy of goals supported by tasks that are measurable for earned value and progress reporting by the process group itself. Essentially, you want to create a balanced scorecard for process progress. This makes your process group accountable for progress just like any other project team.
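To make that accountability concrete, here is a minimal sketch of what earned-value style progress reporting for process-group tasks could look like. The task names and numbers are hypothetical illustrations, not from the source.

```python
# Minimal sketch: earned-value style progress rollup for process-group tasks.
# Task names and figures are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class ProcessTask:
    name: str
    budgeted_hours: float    # planned effort for this task
    percent_complete: float  # 0.0 .. 1.0, reported by the task owner

tasks = [
    ProcessTask("Define inspection procedure", 40, 1.0),
    ProcessTask("Draft proposal-phase activities", 60, 0.5),
    ProcessTask("Pilot requirements checklist", 30, 0.25),
]

planned_value = sum(t.budgeted_hours for t in tasks)
earned_value = sum(t.budgeted_hours * t.percent_complete for t in tasks)

print(f"Planned: {planned_value} h, Earned: {earned_value} h "
      f"({earned_value / planned_value:.0%} of process work complete)")
```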
For deployment success, I will repeat an important division of labor within the process group itself. You absolutely need advocates for the process framework architecture itself, to make sure the integrity of the process model is maintained. This book will be invaluable for that aspect. These people are very different from most process-group members, who should be domain experts. The process framework advocates put the “meat on the bone” for process and make sure that the process parts all fit within that framework architecture, whereas the domain folks make sure the process elements they develop are useful and make sense.

I make this point because uneducated management personnel may pressure you to “just get on with it” without considering the importance of making sure that all process elements fit within a framework architecture.
The worst thing you can do is crank out process into an ever-larger pile of stuff that becomes less and less useful to the organization. The main litmus test for process is that it is useful. I have run into managers who seem to think that bigger piles mean success; in reality, you may get just the opposite result. Resist those who push you in that direction. The most successful process group I led was one where I was not only the lead but also the process architect, and I had management backing to do what was needed. I mention management backing because at another place, I had the exact same situation but a boss who was so insecure that all my suggestions and recommendations were ignored or rejected simply because they didn’t come from him. Anything from me was dead on arrival. If you’re ever in that position, run, don’t walk! You cannot succeed. There are people like that out there and (sadly) some are in senior management positions. I simply wasn’t willing to manipulate him into believing that all my ideas were his ideas, and that’s what it would take to deal with this kind of person.

Ensure an Inspection Procedure Is in Place

When actually doing process deployment for the software process model, there is one how-to procedure that absolutely needs to be addressed early on: the inspection procedure. This particular procedure is fundamental to all the activities within this software process model as a quality gate. If you have a lousy how-to procedure here, you will have an awful time getting people to buy into this model. Conversely, a good how-to will take off like wildfire and become ingrained in an organization very quickly. The software process model wants quality built into the “what you have to do” world by placing the quality responsibility on the producer’s back.
The inspection procedure is critical to this end goal. I worked at one place that had a “review” procedure in place. It was hardly used, did not work well, and management protected it with their lives. I had the gall to suggest a better way of doing things. I had to present this new way at three different hearings to this management group, finally receiving a disposition of “rejected.” They could not handle the fact that this software process model allows for better mousetraps. Both methods could have coexisted in this model, and I knew that once the better way was an option, usage of the bad way would drop off naturally.
These managers had a personal and vested interest in preserving the status quo, regardless of usefulness. They had invested time in the existing process element and wanted no interlopers in their possessive world. This company was very closed in its thinking. Consequently, we had no effective inspection procedure at this company and a huge management barrier to ever getting a better way proposed or deployed. This same company still has that ineffectual, barely used review procedure in place today. Go figure!

In another job, I had the privilege of working for a section of a very large company, with incredible support from the head person. In that environment, I was able to provide this part of the company with a slick, efficient, Web-based inspection procedure that was up to ten times faster than the existing inspection procedure. My new inspection procedure also produced higher-quality inspections and had built-in defect prevention to boot. What happened was incredible. The word spread like wildfire within my own group about how great this procedure was. That worker enthusiasm spilled over to other organizational elements, which clamored to get onboard with our solution. I was deluged with requests for training and guest appearances at various “all-hands” meetings regarding this way of doing things. I didn’t have to do a thing to sell it; it sold itself. The software process model approach encourages better ways of doing things and accommodates variances in scale or location quite naturally.

Why is the inspection procedure so critical to this software process model?

• Every activity at the “what you need to do” level has built-in inspections across the board (i.e., the inspection procedure is a how-to elaboration on all the “Inspect” verbs in all activities).
• A bad inspection procedure can have a huge detrimental effect on the elapsed completion times of all activities. Conversely, an efficient inspection procedure can vastly improve activity execution times across the board.
• A good inspection procedure increases work product quality and reduces rework. Rework is expensive and should be avoided at all costs.
• A good inspection procedure gives you the basis for defect prevention, in addition to defect detection. With the software process model, you now have the ability to ask, “Where should this defect have been found?” This provides the mechanism to improve any earlier inspection checklist associated with any earlier work product. With this inspection procedure you have a built-in process-improvement mechanism in the software process model (a small sketch of this loop follows the list).
• Finally, an efficient inspection procedure will be used and will become part of the company culture. A bad one will not be used.
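Here is a minimal sketch of that defect-prevention loop, assuming a simple record of where each escaped defect was found versus where it should have been found. The phase names and data are hypothetical illustrations.

```python
# For each escaped defect, record where it was found versus where it should
# have been found, then count which earlier inspection checklists need work.
# Phase names and the data structure are assumptions for illustration.

from collections import Counter

# (phase_found, phase_where_it_should_have_been_found)
escaped_defects = [
    ("system test", "design inspection"),
    ("system test", "requirements inspection"),
    ("code inspection", "design inspection"),
]

checklist_gaps = Counter(should_have for _, should_have in escaped_defects)

for checklist, misses in checklist_gaps.most_common():
    print(f"Improve the {checklist} checklist: {misses} defect(s) escaped it")
```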

Get at Pain Issues

To be successful with process deployment, you really want to keep coming back to pain issues for any organization. The big question is, how do you do that? And how do you do it so that the data is believable? This
is independent of the type of process model you’re using. You will achieve higher levels of buy-in from all levels of the company if the perception is that you’re solving real-world problems. If you separate
process initiatives from “pain” issues, you will get a lot of cold shoulders about this process stuff. An absolute killer is to tie process initiatives to a maturity model (like CMMI) in a vacuum. As I mentioned before, a
particular model or standard can be viewed as the flavor of the month. Some people may view all this with an “if I keep a low profile, this too shall pass” attitude. There’s nothing like solving real problems — especially
if people can reduce their 60-hour weeks to something more reasonable. I learned one big lesson when I got married — don’t discount the power of a spouse! As Dr. Phil has said repeatedly, “If Mom’s not happy, no one
is happy.” For most employees, you really have a shadow employee to deal with as well — the employee’s spouse. If the employee can get home earlier, play with the kids more, do family things more, etc., how
do you think that family unit is going to support you? Do you think you’ll get early support for your next process initiative? The people part of process improvement can be enormous as a huge positive factor or a
huge negative factor. The process group needs to come to grips with this aspect of deploying new processes in an organization. It is not enough to have a marvelous process framework architecture into which all the
process parts fit nicely. Personal interviews have mixed results for actually getting at pain issues. Can you be trusted as an interviewer? Will the person being interviewed be forthright or will he or she give you politically correct data? Will there be retribution if he or she dares to be totally honest? For these reasons, I would not get process problem data this way. Two companies where I worked tried the survey route. In my opinion,
surveys are best suited for getting simple check-off answers to specific questions. They are not suitable for open-ended responses. I still laugh at a British sitcom called “Yes, Prime Minister,” where you can organize
sets of questions and get a totally opposing poll result based on the question set — even by surveying the same people. My point here is that polls and surveys can be manipulated. Busy people tend to kick and
scream about surveys and certainly want to get them off their plates as fast as possible. This means that open-ended surveys don’t end up with a lot of useful data. For these reasons, surveys are not the way to go.
As an adjunct for getting at pain issues, always leave the door open for process practitioners to critique processes or suggest improvements, either directly or via an anonymous channel.

An Implementation Technique for Getting at Pain Issues

I have used two of the 7 M tools (modified somewhat) very successfully to get at both enterprise process pain issues and project pain issues (as a project postmortem). These two techniques have fancy names:

• Infinity brainstorming
• Interrelationship digraphs

I don’t use these terms when I conduct these techniques; I just call them “focus groups,” “action groups,” or a “postmortem.” Using fancy terms will turn people off. Don’t do it. A focus group is fast (it usually takes less than two hours) and is totally anonymous (no retribution). This particular technique levels the playing field for quiet, introverted people versus loud, dominant people. That quiet, shy person may be the very person with a lot to express anonymously. The most successful focus group in my experience was done with about 35 people in a single session of about an hour and a half. At this point, you’re probably thinking it’s impossible to have a successful session with 35 people. Conventional wisdom says that the success of any meeting is inversely proportional to the number of attendees: more people, less success. This technique is just the opposite. You need at least 12 people to be successful; a small group simply won’t work for this technique.

Here are the supplies needed to conduct these sessions:

• Large Post-it notes — enough for about 20 Post-its minimum per participant.
• Butcher paper or flip-chart paper — these are taped to three walls of the conference room. Four or five charts are taped to one wall. Five to six charts are taped to the opposite wall. One chart is taped on a third wall (for infinity brainstorming rules). One chart will be used to capture the major impact analysis after we collect the data from the infinity brainstorming part of the session. The size of the room will affect how many walls are actually used. No matter what, you need two walls for charts.
• Masking tape for the large paper sheets above.
• Fine-point felt pens — enough for participants and facilitator.
You need a large conference room that will hold all the participants and has wall space onto which you can tape large paper charts on three walls. Reserve this room for about two and a half to three hours to allow
time for the facilitator to set up, for the actual session, and for wrapping up. The participants show up about half an hour after the room’s reserved start time. At that point, all supplies should be out and the paper should
be up on the walls. This is what you need to do ahead of time:
• Write down the session rules on a single chart. The rules are:
– One finding per Post-it.
– You can write as many Post-its as you want within the allotted time.
– Use only the supplied fine-point felt pen for writing.
– No handwriting: print your finding.
– No names (i.e., anonymous).
– Don’t get personal; keep it process related.
– Be businesslike (not crude) in your remarks.
– Make the finding clear as to your intent: can another person understand your point?
– Be quiet when writing findings.
• Take a few minutes to explain to the assembled group what you will be doing. Make sure the group knows about your expectations and desired results. I have even put this in written form and sent it to the group ahead of time to make sure that everyone is onboard with this technique. This sets the foundation. (5 minutes maximum)
• Announce that participants are to write one finding per Post-it note, on as many Post-it notes as they want, within a ten-minute time frame. This is a totally quiet part of the technique. After writing, participants take their individual Post-its and stick them onto one wall’s paper charts. Random placement is in order. This part captures all the pain issues as experienced by the participants in a nonretributional way, because no names are used. (10 minutes maximum)

• Explain that the findings should be placed into “like” groupings by moving Post-its from one wall into Post-it groupings on another wall. Like things should be clustered together; some adjustments may need to be made later. Also point out that there is a predetermined category called “orphans.” (When conducting a project postmortem, I add a “good” category for the things we did right on a project.) Forget trying to establish any category names at this point. (About 1 minute)
• Have everyone stand up, grab a pile of Post-its from one wall, and place them on another wall as Post-it clusters. Remind them that once a finding is placed, it can’t be removed. Some talk among people can happen at this point. If you do this correctly, you will limit the category clusters to about 10–12 groups at a maximum. Have orphaned Post-its placed under “orphans.” (About 10–12 minutes)
• Identify a “reader” from the group. This individual will read the Post-its to the entire group and possibly rearrange some Post-its. (About 1–2 minutes)
• Have the reader stand up and read each Post-it finding in each cluster out loud. This accomplishes the following:
– Everyone gets to hear all the findings.
– Everyone gets a chance to persuade the reader to remove a Post-it if it is not in a “like” group.
– Finally, the group establishes a mailbox name for each cluster of Post-its. Keep the name short if possible. (For project postmortems, I found that using the names from one project as predetermined names for subsequent postmortems was helpful for metrics data. However, one group disagreed with this and felt it was stifling to have a set of mostly predetermined names, especially when they disagreed with an earlier group over those names.)
• The reader repeats this for all Post-it clusters until all cluster groups have category names. During this time frame, some Post-it notes may be moved from one group to another. Finally, an attempt is made to place any and all orphaned Post-it notes into a named category. If that fails, they stay as orphans. This part takes the findings and attempts to categorize them for the interrelationship digraph part of this technique. (15–20 minutes)
• The moderator takes a large blank matrix and writes all the category names down the left side of the matrix, and then writes the same set across the top of the matrix. The moderator shades out where each category intersects with itself. You should end up with a diagonal line of shaded boxes from the top left down to the bottom right of that matrix. This is the foundation for the interrelationship digraph. We want to end up with some idea of what we need to work on first, second, third, etc., to get the biggest bang for the buck in process. (About 2 minutes)
• The moderator reads each category name down the left side of the matrix and asks for each, “For this category, what are the other categories that have a major impact on it?” The group participates in identifying the other categories that have that major impact. The moderator simply places an “X” across the row for that targeted category. This gets repeated for each category name down the left until done. (10 minutes maximum)
• The moderator tallies up the number of “X” marks per column and writes the totals at the bottom of each column. This provides a good idea of which categories should be attacked first to have the most impact on other things (see the sketch after this list). (About 2 minutes)

• Thank the group for their time and dismiss them.
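As an illustration, here is a minimal sketch of that tally in code, assuming hypothetical category names and impact votes rather than findings from a real session.

```python
# Interrelationship digraph tally: for each category down the left side,
# record the other categories the group says have a major impact on it,
# then total the marks per column. All names and pairs are hypothetical.

categories = ["Requirements churn", "Schedule pressure", "Tool problems", "Training"]

# impacts[target] = categories the group said have a major impact on the target
impacts = {
    "Requirements churn": ["Schedule pressure"],
    "Schedule pressure": ["Requirements churn", "Tool problems"],
    "Tool problems": ["Training"],
    "Training": ["Schedule pressure"],
}

# Column totals: how often each category is named as a major influence.
totals = {c: 0 for c in categories}
for target, influencers in impacts.items():
    for c in influencers:
        totals[c] += 1

# Attack the highest-scoring categories first for the biggest bang for the buck.
for c, n in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{c}: impacts {n} other categories")
```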

Is this a perfect technique? No. Is it fast? Yes. Does it get at process pain issues? You bet. By spending about one and a half to two hours on this, you will extract pain issues from everybody. There is no retribution
because there are no names involved. The quiet person can write stuff down anonymously just like the extroverted person can. The inputs come from the very people seeing and suffering from those pain issues.
What I have done after the session is to record all the findings by category in an Excel spreadsheet, which is a great application for counting things and coming up with percentages. This completed spreadsheet gets sent back to all the participants immediately. I have cautioned the group to keep this data under wraps because it is confidential.

The next step is to convene a senior management meeting to go over the findings and categories. The senior staff needs to understand what went on and how rapidly this technique gathers data. As the moderator, take the top three categories in particular and concentrate on those for this senior management group. This is done to:

• Acquaint senior management with pain issues “from the trenches,” in written form (not sanitized)
• Identify the top three categories that, if worked, should give the biggest bang for the buck in improving or removing pain issues
• Have this top-level management group develop an initial plan to attack the top three categories (or a subset of them)

Finally, I arrange for a feedback meeting with all the participants, so that a member of senior staff:

• Tells participants that management has heard their pain issues
• Informs participants of the plan to attack those pain issues

This feedback meeting can be powerful for all involved. It closes the loop with participants and makes them feel that they have not wasted their time. It involves senior management directly with unsanitized pain issues. They can’t say they didn’t know about this or that. There’s no place to hide. They have to do something about it. It does cause action. When any improvements are made, you will keep going back to these pain issues. You don’t tell the rank and file that you’ve now satisfied the first goal of some part of the CMMI! They will not relate to that at all. Tell them that these processes directly address the pain issues that were established. When regular folks get to see less pain, you will rapidly develop more and more champions for your cause. If upper management sees smoother operations, better quality, shorter time to market, better repeatability, etc., all of which contribute to a healthier bottom line, you will get more champions at that level.

You can do this periodically to see how you’re doing. You can do this as part of a preappraisal drill for process maturity. You can do this as a preaudit drill. The periodic approach will give you some powerful metrics related to pain issues. There’s nothing like solid numbers to show your workforce that you are serious about reducing workforce pain.

Develop a Top-Level Life-Cycle Framework

This may be obvious, but you really need to provide that top-level life-cycle framework into which to fit all the process pieces being developed. Without that top-level picture, there is no cohesive way of creating process elements that “fit” into anything. One vice president I worked for insisted on forming various Process Action Teams (PATs) to get some deployment items done without this in place. I was even ordered to get these groups going despite my strong objections. The results of this VP’s order were absolute chaos and a huge waste of time. I sure hope none of you will have to deal with some of the characters I’ve had to endure for process development and improvement! People like that are out there. Some of them even get promoted!

Hopefully, the top-level life cycle has been developed before insertion takes place. You can do a subset of the top-level life cycle if your initial deployment efforts only deal with that part of the overall life cycle. For example, if you are attacking proposal-related processes, you can get away with developing just the proposal part of your life cycle. The bottom line is that you absolutely need a framework into which to fit any process elements, so that you develop once and don’t need rework. With that top-level life cycle laid out with PADs per life-cycle phase, you now have the ability to tie your pain issues to activities and to associative procedures. You also have the ability to tie event-driven procedures to any and all life-cycle phases.

Reference: Defining and Deploying Software Processes


Why Worry About Versioning?

Having a good version scheme for your software is important for several reasons. The following are the top five things a version scheme allows you to do (in random order):

Track your product binaries to the original source files.

Re-create a past build by having meaningful labels in your source tree.

Avoid “DLL hell” — multiple versions of the same file (library in this case) on a machine.

Help your setup program handle upgrades and service packs.

Provide your product support and Q/A teams with an easy way to identify the bits they are working with.

So, how do you keep track of files in a product and link them back to the owner? How can you tell that you are testing or using the latest version of a released file? What about the rest of the reasons in the list? This chapter describes my recommendations for the most effective and easiest way to set up and apply versioning to your software. Many different schemes are available, and you should feel free to create your own versioning method.

Ultimately, like most of the other topics in this book, the responsibility to apply versioning to the files in a build tends to fall into the hands of the build team. That might be because it is usually the build team that has to deal with the headaches that come from having a poor versioning scheme. Therefore, it is in their best interest to publish, promote, and enforce a good versioning scheme. If the build team does not own this, then someone who does not understand the full implications of versioning will make the rules. Needless to say, this would not be desirable for anybody involved with the product.

File Versioning

Every file in a product should have a version number; a four-part number separated by periods (major.minor.build.revision, as defined next) seems to be the established best practice. There are many variations of what each part represents. I will explain what I think is the best way of defining these parts.

Major version— The component owner usually assigns this number. It should be the internal version of the product. It rarely changes during the development cycle of a product release.

Minor version— The component owner usually assigns this number. It is normally used when an incremental release of the product is planned instead of a full feature upgrade. It rarely changes during the development cycle of a product release.

Build number— The build team usually assigns this number based on the build that the file was generated with. It changes with every build of the code.

Revision— The build team usually assigns this number. It can have several meanings: bug number, build number of an older file being replaced, or service pack number. It rarely changes. This number is used mostly when servicing the file for an external release.
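As a concrete illustration, here is a minimal sketch of this four-part scheme in code; the class and helper are illustrative assumptions, not anything from the book.

```python
# Minimal sketch of the four-part version scheme (major.minor.build.revision).

from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class FileVersion:
    major: int     # internal product version, set by the component owner
    minor: int     # incremental-release number, set by the component owner
    build: int     # build the file was generated with, set by the build team
    revision: int  # servicing number (e.g., service pack), set by the build team

    def __str__(self) -> str:
        return f"{self.major}.{self.minor}.{self.build}.{self.revision}"

v = FileVersion(1, 0, 1025, 0)
print(v)  # 1.0.1025.0
```

Because the fields are declared in order of significance, the generated comparison operators also give you correct "is this file newer?" checks for free.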

Build Number

Each build that is released out of the Central Build Lab should have a unique version stamp (also known as the build number). This number should be incremented just before a build is started. Don’t use a date for a build number or mix dates into a version number simply because there are so many date formats out there that it can be difficult to standardize on one. Also, if you have more than one build on a given date, the naming can get tricky. Stick with an n=n+1 build number. For example, if you release build 100 in the morning and rebuild and release your code in the afternoon, the afternoon build should be build 101. If you were to use a date for the build number, say 010105, what would your afternoon build number be? 010105.1? This can get rather confusing.
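A minimal sketch of that n = n + 1 counter, assuming the last build number is kept in a small state file (the file name is an assumption for illustration):

```python
# Read the last build number, increment it, and write it back just before
# a build starts. Build 100 in the morning becomes build 101 in the
# afternoon; dates never enter into it.

from pathlib import Path

COUNTER = Path("build_number.txt")

def next_build_number() -> int:
    last = int(COUNTER.read_text()) if COUNTER.exists() else 0
    current = last + 1
    COUNTER.write_text(str(current))
    return current

print(f"Starting build {next_build_number()}")
```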

It is also a good idea to touch all the files just prior to releasing the build so that you get a current date/time stamp on the files. By “touching” the files, I simply mean using a tool (touch.exe) to modify the date/time stamp on a file. There are several free tools available for you to download, or you can write one yourself. Touching the files helps in the tracking process of the files and keeps all of the dates and times consistent in a build release. It also eliminates the need to include the current date in a build number. You already have the date by just looking at the file properties.
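Here is a minimal sketch of such a touch step, with Python standing in for touch.exe and an illustrative release directory:

```python
# Stamp every file in a release directory with the same date/time so the
# build contents look consistent. The path is an illustrative assumption.

import os
import time
from pathlib import Path

release_dir = Path("release/build_101")
stamp = time.time()  # one timestamp for the whole release

for f in release_dir.rglob("*"):
    if f.is_file():
        os.utime(f, (stamp, stamp))  # set access and modification times
```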

In addition, try to avoid tools that inject version numbers into binaries (as a post-build step). Although the tools might seem reliable, they introduce an instability factor into your released binaries by hacking hexadecimal code. Most Q/A teams become distressed if this happens to the binaries they are testing—and justifiably so. The build number should be built into the binary or document files.

Source Code Control Trees

All the source code control (SCC) software that I have seen has some kind of labeling function to track either checked-in binaries or source code. (Remember that I am not for checking in binaries, but I mention this for the groups or teams that do.) This type of versioning (or, more appropriately, labeling) is typically used to track a group of sources that correspond to a product release. Most of the time, they combine labeling of the sources with the branching of the source code lines.

Should There Be Other Fields in the File Version Number?

I am of the opinion that no, there shouldn’t be other fields in the file version number. Let’s look at some other fields that might seem like they should be included but really don’t need to be:

  • Virtual Build Lab (VBL) or offsite development group number— You can use this number to track a check-in back to a specific site or lab that the code was worked on. If you have a golden tree or mainline setup, and all your VBLs or offsite trees have to reverse integrate into the golden tree, this extra field would be overkill. That’s because you can trace the owner through the golden tree check-in. Having a field in which you would have to look up the VBL or offsite number would take just as long.

    The reality is that when you check a version number, you won’t care where the file came from. You’ll only care if it is a unique enough number to accurately trace the file to the source that created it. Most likely, you’ll already know the version number you’re looking for and you’ll just need to confirm that you’re using it.

  • Component— If you break your whole project into separate components, should each component have its own identification number included in the version string? No. For reasons similar to those in the previous bullet, you can track this information by the name of the binary or other information when checking the other properties of the file. This information would probably only come into play if you were filing a bug, and you would have other resources available to determine which component the file belonged to.
  • Service Pack Build Number— If you’re doing daily builds of a service pack release, you should use the earlier example; increment the build number and keep the revision number at the current in-place file build number. This seems like a good argument for a fifth field, but it isn’t if the revision field is used properly.

 

DLL or Executable Versions for .NET (Assembly Versions)

This section applies to you only if you are programming in .NET.

When Microsoft introduced .NET, one of its goals was to get rid of DLL hell and all the extra setup steps I talk about later in this chapter. In reviewing where .NET is today, it looks like Microsoft has resolved the side-by-side DLL hell problem, but now the problem is “assembly version hell.” Without going into too much detail about the .NET infrastructure, let’s touch on the difference between file versioning and assembly versioning.

Assembly versions are meant for binding purposes only; they are not meant for keeping track of different daily versions. Use the file version for that instead. It is recommended that you keep the assembly version the same from build to build and change it only after each external release. For more details on how .NET works with these versions, refer to Jeffrey Richter’s .NET book. Don’t link the two versions, but make sure each assembly has both an assembly version and a file version (visible when you right-click the file and view its properties). You can use the same format described earlier for file versioning for the assembly version.

How Versioning Affects Setup

Have you ever met someone who reformats his machine every six months because “it just crashes less if I do” or it “performs better?” It might sound draconian, but the current state of component versioning and setup makes starting from scratch a likely solution to these performance issues, which are further complicated by spyware.

Most of the problems occur when various pieces of software end up installing components (DLLs and COM components) that are not quite compatible with each other or with the full set of installed products. Just one incorrect or incorrectly installed DLL can make a program flaky or prevent it from starting up. In fact, DLL and component installation is so important that it is a major part of the Windows logo requirement.

If you are involved in your product’s setup, or if you are involved in making decisions about how to update your components (produce new versions), you can do some specific things to minimize DLL hell and get the correct version of your file on the machine.

Installing components correctly is a little tricky, but with these tips, you can install your components in a way that minimizes the chance of breaking other products or your own.

Install the Correct Version of a Component for the Operating System and Locale

If you have operating system (OS)-specific components, make sure your setup program(s) check which OS you are using and install only the correct components. Also, you cannot give two components the same name and install them in the same directory. If you do, you overwrite the component on the second install on a dual-boot system. Note that the logo requirements recommend that you avoid installing different OS files if possible. Related to this problem is the problem caused when you install the wrong component or typelib for the locale in use, such as installing a U.S. English component on a German machine. This causes messages, labels, menus, and automation method names to be displayed in the wrong language.

Write Components to the Right Places

Avoid copying components to a system directory. An exception to this is if you are updating a system component. For that, you must use an update program provided by the group within Microsoft that maintains the component.

In general, you should copy components to the same directory that you copy the EXE. If you share components between applications, establish a shared components directory. However, it is not recommended that you share components between applications. The risks outweigh the benefits of reduced disk space consumption.

Do Not Install Older Components Over Newer Ones

Sometimes, the setup writer might not properly check the version of an installed component when deciding whether to overwrite the component or skip it. The result can be that an older version of the component is written over a newer version. Your product runs fine, but anything that depends on new features of the newer component fails. Furthermore, your product gets a reputation for breaking other products. We address the issue of whether it makes sense to overwrite components at all. But if you do overwrite them, you don’t want to overwrite a newer version.
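Here is a minimal sketch of the guard a setup program could apply, assuming the four-part version scheme described earlier; the helper names are illustrative:

```python
# Never replace a newer installed component with an older one.

def parse_version(s: str) -> tuple[int, ...]:
    return tuple(int(part) for part in s.split("."))

def should_overwrite(installed: str, incoming: str) -> bool:
    # Overwrite only when the incoming component is strictly newer.
    return parse_version(incoming) > parse_version(installed)

print(should_overwrite("1.0.1025.0", "1.0.1026.0"))  # True
print(should_overwrite("1.0.1026.0", "1.0.1025.0"))  # False: keep newer file
```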

“Copy on Reboot” If Component Is in Use

Another common mistake is to avoid dealing with the fact that you cannot overwrite a component that is in use. Instead, you have to set up the file to copy on reboot. Note that if one component is in use, you probably should set up all the components to copy on reboot. If you don’t, and if the user doesn’t reboot promptly, your new components could be mixed with the old ones.

Register Components Correctly; Take Security into Account

Sometimes setups don’t register COM components correctly, including the proxy and stub. Note that Windows CE requires that you also register DLLs. Note, too, that when installing DCOM components, you must be vigilant about permissions and security.

Copy Any Component That You Overwrite

It is smart to make a copy of any component that you overwrite before you overwrite it. You won’t want to put it back when you uninstall unless you’re sure that no product installed after yours will need the newer component—a difficult prediction! But by storing the component in a safe place, you make it possible for the user to fix his system if it turns out that the component you installed breaks it. You can let users know about this in the troubleshooting section of your documentation, the README file, or on your Web site. Doing this might not save a call to support, but it does at least make the problem solvable. If the component is not in use, you can move it rather than copying it. Moving is a much faster operation.

Redistribute a Self-Extracting EXE Rather Than Raw Components

If your component is redistributed by others (for instance, your component is distributed with several different products, especially third-party products), it is wise to provide a self-extracting EXE that sets up your component correctly. Make this EXE the only way that you distribute your component. (Such an EXE is also an ideal distribution package for the Web.) If you just distribute raw components, you have to rely on those who redistribute your components to get the setup just right. As we have seen, this is pretty easy to mess up.

Your EXE should support command-line switches for running silently (without a UI) and for forcing overwrites, even of newer components, so that product support can step users through overwriting if a problem arises.
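A minimal sketch of that switch handling, written in Python for illustration (the flag names are assumptions, not a standard):

```python
# Installer entry point honoring a silent mode and a force-overwrite mode.

import argparse

parser = argparse.ArgumentParser(description="Component installer")
parser.add_argument("--silent", action="store_true",
                    help="run with no UI (for embedding in other setups)")
parser.add_argument("--force", action="store_true",
                    help="overwrite the installed component even if it is newer")
args = parser.parse_args()

if not args.silent:
    print("Installing component...")
# ... install, honoring args.force in the version check shown earlier ...
```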

If you need to update core components that are provided by other groups, use only the EXE that is provided by that group.

Test Setup on Real-World Systems

If you’re not careful, you can end up doing all your setup testing on systems that already happen to have the right components installed and the right registry entries made. Be sure to test on raw systems, on all operating systems, and with popular configurations and third-party software already installed. Also, test other products to make sure they still work after you install your components.

Even Installing Correctly Does Not Always Work

Even if you follow all the preceding steps and install everything 100 percent correctly, you can still have problems caused by updating components. Why? Well, even though the new component is installed correctly, its behavior might be different enough from the old component that it breaks existing programs. Here is an example. The specification for a function says that a particular parameter must not be NULL, but the old version of the component ran fine if you passed NULL. If you enforce the spec in a new version (you might need to do this to make the component more robust), any code that passes NULL fails. Because not all programmers read the API documentation each time they write a call, this is a likely scenario.
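To make the example concrete, here is a minimal sketch of that failure mode, with illustrative function names (Python stands in for the component's API):

```python
# Old behavior: the spec says the argument must not be None, but the old
# component tolerated it anyway, so callers came to rely on the slack.

def format_name_v1(name):
    return f"Hello, {name or 'guest'}!"

# New behavior: the spec is now enforced for robustness, and every caller
# that passes None breaks, even though the install was 100 percent correct.

def format_name_v2(name):
    if name is None:
        raise ValueError("name must not be None")
    return f"Hello, {name}!"

print(format_name_v1(None))  # worked for years: "Hello, guest!"
try:
    print(format_name_v2(None))
except ValueError as e:
    print(f"Upgrade broke the caller: {e}")
```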

It is also possible for a new version to introduce a bug that you simply didn’t catch in regression testing. It is even possible for clients to break as a result of purely internal improvements if they were relying on the old behavior. Typically, we assume that nothing will break when we update a component. In fact, according to Craig Wittenberg, one of the developers of COM who now works in the ComApps group at Microsoft, if you don’t have a plan for versioning your components in future releases, it is a major red flag for any component development project. In other words, before you ship Version 1, you need to have a plan for how you will update Version 1.1, Version 2, and beyond—besides how to bug-fix your updates.

Reference: The Build Master: Microsoft’s Software Configuration Management Best Practices

 
