How to get DevSecOps Foundation Certification?

Are you interested in advancing your career in the field of cybersecurity? Have you heard about the DevSecOps Foundation Certification? In this blog article, we will dive deep into the world of DevSecOps and explore how you can obtain the highly coveted DevSecOps Foundation Certification. So, grab a cup of coffee, and let’s get started!

What is DevSecOps Foundation Certification?

DevSecOps Foundation Certification is a certification offered by the DevOps Institute that validates the knowledge and skills of professionals in the field of DevSecOps. DevSecOps is a methodology that combines development, security, and operations to ensure the security of software throughout the entire development lifecycle.

The DevSecOps Foundation Certification exam covers the following topics:

  • The principles of DevSecOps
  • The role of security in the software development lifecycle
  • Security testing and vulnerability assessment
  • Security automation and orchestration
  • Compliance and risk management

Why is DevSecOps Certification important?

A DevSecOps certification can be important for several reasons, as it validates your knowledge and skills in integrating security practices into the DevOps workflow.

Here are some key reasons why a DevSecOps certification can be valuable:

  • It demonstrates your knowledge and skills in DevSecOps. The certification process requires you to study the principles and practices of DevSecOps in detail. This will give you a deep understanding of the field and show potential employers that you are qualified to work in DevSecOps.
  • It can help you get a job in DevSecOps. Many employers now require DevSecOps certification for their open positions. This is because DevSecOps is a rapidly growing field and employers are looking for qualified candidates.
  • It can help you advance your career in DevSecOps. The certification shows that you have the skills and knowledge to be successful in this field. This can give you a competitive edge when applying for jobs or promotions.
  • It can help you learn more about DevSecOps. The certification process will require you to study the principles and practices of DevSecOps in detail. This will give you a deeper understanding of the field and help you stay up-to-date on the latest trends.
  • It can help you network with other DevSecOps professionals. The certification exam is administered by the DevOps Institute, which has a large community of DevSecOps professionals. This can help you connect with other people in the field and learn from their experiences.

What are the tools needed to learn for a strong DevSecOps Foundation?

The tools needed to learn for a strong DevSecOps Foundation depend on the specific needs of your organization and the specific technologies that you use.

However, some of the most important tools to learn include:

  • Static application security testing (SAST) tools: SAST tools scan your code for vulnerabilities at the source code level. This is a great way to find vulnerabilities early in the development process before they can be exploited. Some popular SAST tools include Veracode, Checkmarx, and AppScan.
  • Dynamic application security testing (DAST) tools: DAST tools scan your running application for vulnerabilities. This is a good way to find vulnerabilities that are not exposed in the source code, such as SQL injection vulnerabilities. Some popular DAST tools include Burp Suite, Nikto, and OWASP ZAP.
  • Container security scanning tools: Container security scanning tools scan your containers for vulnerabilities. This is important for DevSecOps, as containers are often used to deploy applications. Some popular container security scanning tools include Aqua Security, Twistlock, and Snyk.
  • Infrastructure as code (IaC) security scanning tools: IaC security scanning tools scan your IaC code for vulnerabilities and misconfigurations. This is important for DevSecOps, as IaC is often used to provision infrastructure. Some popular IaC security scanning tools include Checkov, tfsec, and Terrascan.
  • Continuous integration and continuous delivery (CI/CD) tools: CI/CD tools automate the process of building, testing, and deploying software. This is essential for DevSecOps, as it allows you to quickly and easily deploy security fixes to your applications. Some popular CI/CD tools include Jenkins, CircleCI, and GitLab.
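As a rough illustration of what the SAST idea above boils down to, here is a minimal, purely illustrative Python sketch that flags hardcoded credentials with regular expressions. The patterns and the helper name `scan_source` are invented for this example; real SAST tools such as those named above do far deeper analysis.

```python
import re

# Illustrative detection patterns only; not a substitute for a real SAST tool.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'db_user = "app"\npassword = "s3cret!"\n'
print(scan_source(sample))  # -> [(2, 'hardcoded_password')]
```

Running a check like this in a pre-commit hook or CI stage is the same "shift left" principle the commercial tools apply at scale.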

Why is DevOpsSchool best for DevSecOps Foundation Certification?

Overall, DevOpsSchool is a great resource for anyone who wants to learn DevSecOps and get certified. It has a comprehensive curriculum, experienced instructors, engaging learning materials, a supportive community, and an affordable price. If you are serious about getting certified in DevSecOps, I highly recommend DevOpsSchool.

Here are some additional resources that you may find helpful when preparing for the DevSecOps Foundation Certification:

  • The DevOps Institute website: The DevOps Institute website has a wealth of resources for DevSecOps professionals, including the DevSecOps Foundation exam syllabus, practice exams, and study guides.
  • The DevSecOps subreddit: The DevSecOps subreddit is a great place to ask questions and get help from other DevSecOps professionals.
  • The DevSecOps Slack community: The DevSecOps Slack community is a great place to connect with other DevSecOps professionals and learn about the latest trends in DevSecOps.

How to get DevOps Foundation Certification?

Hey there! Are you interested in getting the DevOps Foundation Certification? Well, you’ve come to the right place! In this article, we’ll dive deep into the world of DevOps and explore how you can obtain this valuable certification. So, grab a cup of coffee, and let’s get started!

What is DevOps Foundation Certification?

DevOps Foundation Certification is a foundational level certification offered by the DevOps Institute. It is designed to validate the knowledge and skills of professionals who want to understand the principles of DevOps and how to apply them in their organization.

The DevOps Foundation Certification exam covers the following topics:

  • The principles of DevOps: This includes understanding the key concepts of DevOps, such as continuous integration and continuous delivery (CI/CD), infrastructure as code (IaC), and automation.
  • The role of security in DevOps: This includes understanding the importance of security in the DevOps pipeline and how to implement security best practices.
  • The tools and technologies of DevOps: This includes understanding the different tools and technologies that are used in DevOps, such as version control systems, CI/CD tools, and IaC tools.

Why is DevOps Certification important?

Here are some of the specific benefits of getting a DevOps certification:

  • Increased job opportunities: DevOps professionals are in high demand, and a certification can help you to stand out from the competition.
  • Higher salary: DevOps professionals typically earn higher salaries than those in other IT roles. A certification can help you to negotiate a higher salary.
  • Better career advancement opportunities: A certification can help you to advance your career in DevOps and take on more senior roles.
  • Improved knowledge and skills: A certification can help you to learn about the latest DevOps practices and tools, which can make you a more valuable asset to your team.
  • Increased confidence: A certification can give you the confidence to take on new challenges and responsibilities in DevOps.

What are the tools needed to learn for a strong DevOps Foundation?

The specific tools you need to learn for a strong DevOps Foundation will depend on the needs of your organization. However, by learning the essential tools below, you will be well on your way to becoming a successful DevOps engineer.

Here are some of the key tools that you may want to consider learning:

  • Containerization and orchestration tools: A containerization tool is used to create and manage containers. The most popular choices are Docker for building and running containers, and Kubernetes, which is strictly a container orchestrator rather than a containerization tool, for deploying and managing containers at scale.
  • DevOps automation tool: A DevOps automation tool is used to automate the entire DevOps lifecycle. The most popular DevOps automation tools are Ansible, Chef, and Puppet.
  • Security tool: A security tool is used to scan code and infrastructure for vulnerabilities. Popular vulnerability scanners include Nessus, Qualys, and OpenVAS.
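The configuration-management tools mentioned above (Ansible, Chef, Puppet) are built around idempotency: applying the same desired state twice changes nothing the second time. Here is a minimal Python sketch of that principle, using an invented `ensure_line` helper rather than any real tool's API:

```python
# Sketch of the idempotency principle behind tools like Ansible, Chef,
# and Puppet. The config lines below are illustrative only.

def ensure_line(lines: list[str], wanted: str) -> tuple[list[str], bool]:
    """Return (new_lines, changed). Adds `wanted` only if it is absent."""
    if wanted in lines:
        return lines, False          # already converged: report no change
    return lines + [wanted], True    # converge toward the desired state

config = ["PermitRootLogin no"]
config, changed1 = ensure_line(config, "PasswordAuthentication no")
config, changed2 = ensure_line(config, "PasswordAuthentication no")
print(changed1, changed2)  # -> True False
```

Reporting "changed" versus "unchanged" is exactly what lets these tools run safely on every deploy.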

Why is DevOpsSchool best for DevOps Foundation Certification?

DevOpsSchool is a great option for DevOps Foundation Certification. The course is comprehensive, well-taught, and comes with a number of features that will help you succeed.

In addition to the features mentioned above, DevOpsSchool also offers a number of other benefits, such as:

  • A supportive community of learners: The DevOpsSchool community is a great place to ask questions, get help, and connect with other DevOps professionals.
  • Regularly updated content: The DevOpsSchool course content is regularly updated to reflect the latest changes in the DevOps landscape.
  • Access to additional resources: DevOpsSchool offers a number of additional resources, such as cheat sheets, flashcards, and blog posts, to help you prepare for the DevOps Foundation exam.

How to get SRE Foundation Certification?

So you’re interested in becoming a Site Reliability Engineer (SRE) and want to get certified? That’s great! In this article, we’ll explore the steps you need to take to obtain the SRE Foundation Certification.

What is SRE Foundation Certification?

The SRE Foundation Certification is a certification offered by the DevOps Institute that validates the knowledge and skills of Site Reliability Engineers (SREs). SREs are responsible for the reliability, performance, scalability, and security of software systems.

The SRE Foundation Certification exam covers the following topics:

  • SRE principles and practices
  • Service level objectives (SLOs)
  • Monitoring and alerting
  • Incident management
  • Change management
  • Continuous delivery (CD)
  • DevOps culture
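One of those topics, service level objectives, comes down to simple arithmetic: the SLO implies an error budget, the amount of unreliability you are allowed to spend. A rough Python sketch (the SLO and downtime figures are illustrative, not from any exam material):

```python
# Illustrative error-budget arithmetic behind SLOs. With a 99.9%
# availability SLO over 30 days, the error budget is the 0.1% of
# time the service is allowed to be unavailable.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative if blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(round(error_budget_minutes(0.999), 1))   # -> 43.2
print(round(budget_remaining(0.999, 21.6), 2)) # -> 0.5
```

When the remaining budget approaches zero, SRE practice is to slow feature releases and prioritize reliability work.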

Why is SRE Certification important?

SRE Certification is a valuable credential for SREs and other IT professionals who want to demonstrate their knowledge and skills in Site Reliability Engineering. The certification can help you advance your career, improve your knowledge and skills, and network with other SREs.

Here are some additional benefits of getting an SRE certification:

  • Increased job satisfaction: SREs who are certified are more likely to be satisfied with their jobs, as they are able to use their skills and knowledge to make a real impact on the reliability and performance of their organization’s software systems.
  • Improved career prospects: The demand for SREs is growing rapidly, and certified SREs are in high demand. This means that certified SREs have a better chance of finding a job and getting promoted.
  • Higher salaries: Certified SREs typically earn higher salaries than non-certified SREs. This is because certified SREs have the skills and knowledge that employers are looking for.

What are the tools needed to learn for a strong SRE Foundation?

Here are some of the tools that you need to learn for a strong SRE Foundation:

  • Monitoring and alerting tools: SREs need to be able to monitor their systems and applications for performance and availability issues. They also need to be able to set up alerts so that they are notified when problems occur. Some popular monitoring and alerting tools include Prometheus, Grafana, and ELK Stack.
  • Incident management tools: SREs need to be able to manage incidents quickly and effectively. They need to be able to identify the root cause of the incident, triage the issue, and restore service to users as quickly as possible. Some popular incident management tools include PagerDuty, VictorOps, and Opsgenie.
  • Change management tools: SREs need to be able to manage changes to their systems and applications in a safe and controlled manner. They need to be able to track changes, test changes, and roll back changes if necessary. Some popular change management tools include Octopus Deploy, Puppet, and Chef.
  • Continuous integration and continuous delivery (CI/CD) tools: SREs need to be able to automate the deployment of their systems and applications. This helps to ensure that changes are deployed in a reliable and repeatable manner. Some popular CI/CD tools include Jenkins, CircleCI, and Travis CI.
  • DevOps culture: SREs need to be able to work effectively with other teams, such as development, operations, and security. They need to be able to collaborate and communicate effectively to ensure that systems are reliable, performant, and secure.
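As a loose illustration of the monitoring-and-alerting idea above, here is a hypothetical Python sketch of a threshold alert rule of the kind tools like Prometheus evaluate. The metric name and numbers are made up for the example:

```python
# Hypothetical alert-rule logic: fire only when the metric has stayed
# above the threshold for a full evaluation window, which avoids
# paging on a single transient spike.

def should_alert(samples: list[float], threshold: float, window: int) -> bool:
    """Alert only if the last `window` samples all exceed the threshold."""
    if len(samples) < window:
        return False
    return all(s > threshold for s in samples[-window:])

cpu_percent = [72.0, 95.5, 96.1, 97.3]
print(should_alert(cpu_percent, threshold=90.0, window=3))  # -> True
print(should_alert(cpu_percent, threshold=90.0, window=4))  # -> False
```

Requiring the condition to hold for a window is the same idea as the `for` duration in a Prometheus alerting rule.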

Why is DevOpsSchool best for SRE Foundation Certification?

DevOpsSchool is a great option for SRE Foundation Certification for a number of reasons:

  • The course is comprehensive and covers all of the essential topics for the exam. The course includes over 100 lectures and hands-on exercises that will help you master the fundamentals of SRE.
  • The instructors are experienced SRE professionals who can share their real-world knowledge and experience with you. The instructors are passionate about SRE and are committed to helping you succeed.
  • The course is self-paced, so you can learn at your own pace. This is great if you have a busy schedule or if you want to take the time to really understand the material.
  • The course includes a practice exam that will help you assess your readiness for the real exam. This is a great way to identify any areas where you need additional practice.
  • The course comes with a 30-day money-back guarantee. This means that you can try the course risk-free. If you’re not satisfied with the course, you can simply ask for a refund.

Introducing Team Foundation Build 2010 | TFS 2010 Training

Introducing Team Foundation Build 2010
  • Introduction
  • Build Automation
  • Flickr’s Continuous Deployment
  • Why Automate the Build?
  • Team Build Overview
  • Demo: Team Build Overview
  • Machines, Controllers, and Agents
  • Build System Topologies
  • Build Agent Software Installation
  • New in 2010
  • Build Status and Notification
  • Demo: Build Alerts
  • Demo: Build Notification Application
  • The End Game
  • Summary
The Build Environment
  • Introduction
  • Installation and Configuration
  • Topology and Restrictions
  • Installing the Build Service
  • Demo: Installing the Build Service
  • Demo: Configuring Controller and Agents
  • Demo: Creating the Build Drop folder
  • Demo: Installing a Test Agent Instance
  • Installation and Configuration Summary
  • Demo: Creating and Running a Simple Build
  • Demo: Managing Build Artifacts
  • Summary
Simple Build Automation
  • Introduction
  • Build Definitions Options
  • Demo: General Options
  • Demo: Trigger Options
  • Demo: Workspace Mapping
  • Demo: Build Defaults Options
  • Demo: Process Options
  • Demo: Private or Buddy Builds
  • Gated Check In
  • Demo: Gated Check In
  • Build Reports
  • Summary
Working with Build Process Templates (Scripts)
  • Introduction
  • Build Process Templates
  • Demo: Hello World
  • Demo: Execution Scope
  • Demo: Build Script Arguments
  • Demo: Build Script Variables
  • Demo: InvokeProcess Activity
  • Summary
Migrating from TFS 2008 
  • Introduction
  • Overview
  • Build Automation in TFS 2008
  • Demo: Using the Upgrade Script
  • Demo: Calling MSBuild from 2010 Build
  • Demo: Custom MSBuild Tasks
  • Summary

Deployment Foundation Issues


Establish Key Roles/Charter for Deployment

The very first order of business is to firmly establish “who’s on first” for getting deployment done. Senior management is crucial at this point for making sure all their direct reports and managers are on board with this and that it comes from the top. I mention this because at one place I worked, we immediately got into interdepartment squabbling due to a lack of senior management support and direction. If you hear a manager say things like “do what you want — but don’t touch my area,” you will have deployment problems. I strongly recommend the formation of a process group as the focal point for all matters related to process and process deployment. This group has to have both the authorization and responsibility for process. If you have a distributed set of “process owners,” consolidate that responsibility and authority to this new group. My requirements for membership in this process group are:

Six to eight people. Larger process groups tend to be less efficient and more cumbersome. A smaller group tends to be ineffective. It is not necessary to have representatives from all corners of your organization. It is important that these domain experts get called in as necessary for process development and inspection. One company had a 15-person process group established by a non–process-oriented vice president. It was a disaster to get a repeatable quorum present for any meeting. We spent subsequent meetings repeating stuff from earlier meetings to accommodate a different set of participants at every meeting.

Process-group commitments. My most successful process group was when I insisted that members commit 5 percent of their workweek to process-group meetings. Group members and their managers had to sign the commitment. The 5 percent figure is doable — even for busy people. Two one-hour meetings per week reflect that percentage. I also had fixed meetings both by time and day of week. It became automatic to show up. To make this really work, I was the process-group lead and I dedicated 100 percent to this effort. I had clerical support services available to me. The most effective process-group meetings are concentrated sessions with a time-stamped agenda and where my support staff and I do all extracurricular activities. You want to restrict extra time (beyond actual process-group meeting time) needed by your key process participants because they tend to be super busy.

Showing up on time. We could not tolerate people wandering in five or ten minutes late. We started promptly on the hour and stopped promptly on the hour. At one company, I removed a person for being late because it held everyone up. Promptness became so important at one commercial company that other process-group members would be “all over” tardy people. The tardiness stopped quickly when peers got involved in any discipline.

People who are process oriented. Do not have people in this group who don’t fit this requirement! At one company, a vice president insisted on naming people to the group (which became double the size I had wanted) who were almost completely ignorant about process. We spent almost all our precious process-group time just getting these people to understand the most fundamental aspects of process. It was painful. The VP wondered why progress was slow. Duh!

People who are opinionated — i.e., not afraid to speak up on issues. You cannot afford to have people just show up and suck air out of the room and not participate. The best processes I’ve developed came from sessions where it was not clear who would walk out alive after spirited process discussions.

People that others look up to. They may be leads or workers. Every organization has these types of people and they may not be in the management ranks. The reason for this requirement is to form an initial set of process champions right out of the box. These initial process champions will develop more champions.

People who are willing to have an enterprise perspective versus an organizational perspective. This could be a huge problem if process-group discussions degenerate into preservation of turf — no matter what. At one place, I actually went to a paint store, bought disposable painting hats, placed a big “E” for enterprise on the hats, and made process-group members wear the hats at our meetings to reinforce that enterprise focus. It got a few laughs and some grumbles but it worked.

People who are not “who” oriented. A process group avoids the “who” question and concentrates on the “whats.” Once the “what you have to do” is addressed, the “who” looks after itself. When process-group meetings degenerated into discussing “who does this” and “who does that,” I routinely stopped the meeting and reminded everyone that when you have a hole in the bottom of the boat, this is not the time to discuss whose hole it is! I got laughs but my point was taken.

This is your key group for process development and deployment. It’s obvious, but if you have this marvelous group put together without regard to an overall process architectural goal, you will fail. This is where this software process model will help you enormously. Ideally, the process-group lead has an in-depth knowledge of the targeted process architecture with an initial goal to get the process group up to speed on this aspect first — before any company processes are tackled. If you are under pressure to “just get on with it” (without getting all process members up on the target process architecture), you will fail. You will end up flailing around for a large amount of time. You will also end up with a hodgepodge of process elements and no encompassing architecture. You want to end up with a hierarchy of goals supported by tasks that are measurable for earned value and progress reporting by the process group itself. Essentially, you want to create a balanced scorecard for process progress. This makes your process group accountable for progress just like any other project team.

For deployment success, I will repeat an important division of labor within the process group itself. You absolutely need to develop advocates for the process framework architecture itself and make sure the integrity of the process model is maintained. This book will be invaluable for that aspect. These people are very different from most process-group members, who should be domain experts. The process framework advocates are the folks that put the “meat on the bone” for process and they will make sure that the process parts all fit within that framework architecture, whereas the domain folks make sure to develop process elements that are useful and make sense.

I make this point because uneducated management personnel may pressure you to “just get on with it” without considering the importance of making sure that all process elements fit within a framework architecture. The worst thing you can do is crank out process into an ever larger pile of stuff that increasingly gets more and more useless for the organization. The main litmus test for process is that it is useful. I have run into managers who seem to think that bigger piles mean success. In reality, you may have just the opposite result. Resist those who are pushing you in that direction for success. The most successful process group I led was when I was not only the lead but also the process architect and had management backing to do what was needed. I mention management backing because at another place, I had the exact same situation but had a boss who was so insecure that all my suggestions and recommendations were either ignored or rejected because they didn’t come from him! Anything from me was dead on arrival. If you’re ever in that position, run, don’t walk! You cannot succeed. There are people like that out there and (sadly) some are in senior management positions. I simply didn’t want to manipulate him to have him believe that all ideas were his ideas. That’s what it would take to deal with this kind of person.

Ensure an Inspection Procedure Is in Place

When actually doing process deployment for the software process model, there is one how-to procedure that absolutely needs to be addressed early on: the inspection procedure. This particular procedure is fundamental to all the activities within this software process model as a quality gate. If you have a lousy how-to procedure here, you will have an awful time in getting people to buy into this model. Conversely, a good how-to will take off like wildfire and become engrained in an organization real fast. The software process model wants quality built in the “what you have to do” world by placing the quality responsibility on the producer’s back. The inspection procedure is critical to this end goal.

I worked at one place that had a “review” procedure in place. It was hardly used, did not work well, and the management protected it with their lives. I had the gall to suggest a better way of doing things. I had to present this new way at three different hearings to this management group, finally receiving a disposition of “rejected.” They could not handle the fact that this software process model allows for better mousetraps. Both methods could coexist in this model. I knew that once the better way was an option, the bad way would drop off for usage very naturally. These managers had a personal and vested interest in preserving the status quo — regardless of usefulness. They had invested time in the existing process element. They wanted no interlopers on their possessive world. This company was very closed in their thinking. Consequently, we had no effective inspection procedure at this company and had a huge management barrier to ever getting a better way proposed or deployed. This same company has the same ineffectual review procedure in place today that is really bad and is barely used. Go figure!

In another job, I had the privilege of working for a section of a very large company and had incredible support from the head person. In that environment, I was able to provide this part of the company with a slick, efficient, Web-based inspection procedure that was up to ten times faster than the existing inspection procedure. My new inspection procedure also produced higher-quality inspections and had built-in defect prevention to boot. What happened was incredible. The word spread like wildfire within my own group about how great this procedure was. That worker enthusiasm spilled over to other organizational elements that clamored to get onboard with our solution. I was deluged with training requests and guest appearances to various “all-hands” meetings regarding this way of doing things. I didn’t have to do a thing to sell this. It sold itself. I knew that the software process model approach encourages better ways of doing things and encourages variances in scale or location quite naturally.

Why is the inspection procedure so critical to this software process model?

  • Every activity at the “what you need to do” level has built-in inspections across the board (i.e., the inspection procedure is a how-to elaboration on all the “Inspect” verbs in all activities).
  • A bad inspection procedure can have a huge detrimental effect on all activities’ elapsed completion times. Conversely, an efficient inspection procedure can vastly improve activity execution times across the board.
  • A good inspection procedure increases work product quality and reduces rework. Rework is expensive and should be avoided at all costs.
  • A good inspection procedure gives you the basis for defect prevention — in addition to defect detection. With the software process model, you now have the ability to ask, “Where should this defect have been found?” This provides the mechanism to improve any earlier inspection checklist associated with any earlier work product. With this inspection procedure you have a built-in process-improvement mechanism in this software process model.
  • Finally, an efficient inspection procedure will be used and will become part of the company culture. A bad one will not be used.

Get at Pain Issues

To be successful with process deployment, you really want to keep coming back to pain issues for any organization. The big question is, how do you do that? And how do you do it so that the data is believable? This is independent of the type of process model you’re using. You will achieve higher levels of buy-in from all levels of the company if the perception is that you’re solving real-world problems. If you separate process initiatives from “pain” issues, you will get a lot of cold shoulders about this process stuff. An absolute killer is to tie process initiatives to a maturity model (like CMMI) in a vacuum. As I mentioned before, a particular model or standard can be viewed as the flavor of the month. Some people may view all this with an “if I keep a low profile, this too shall pass” attitude. There’s nothing like solving real problems — especially if people can reduce their 60-hour weeks to something more reasonable.

I learned one big lesson when I got married — don’t discount the power of a spouse! As Dr. Phil has said repeatedly, “If Mom’s not happy, no one is happy.” For most employees, you really have a shadow employee to deal with as well — the employee’s spouse. If the employee can get home earlier, play with the kids more, do family things more, etc., how do you think that family unit is going to support you? Do you think you’ll get early support for your next process initiative? The people part of process improvement can be enormous as a huge positive factor or a huge negative factor. The process group needs to come to grips with this aspect of deploying new processes in an organization. It is not enough to have a marvelous process framework architecture into which all the process parts fit nicely.

Personal interviews have mixed results for actually getting at pain issues. Can you be trusted as an interviewer? Will the person being interviewed be forthright or will he or she give you politically correct data? Will there be retribution if he or she dares to be totally honest? For these reasons, I would not get process problem data this way. Two companies where I worked tried the survey route. In my opinion, surveys are best suited for getting simple check-off answers to specific questions. They are not suitable for open-ended responses. I still laugh at a British sitcom called “Yes, Prime Minister,” where you can organize sets of questions and get a totally opposing poll result based on the question set — even by surveying the same people. My point here is that polls and surveys can be manipulated. Busy people tend to kick and scream about surveys and certainly want to get them off their plates as fast as possible. This means that open-ended surveys don’t end up with a lot of useful data. For these reasons, surveys are not the way to go.

As an adjunct for getting at pain issues, always leave the door open for having process practitioners critique or suggest things directly or via

An Implementation Technique for Getting at Pain Issues

I have used two of the 7 M tools (modified somewhat) very successfully to get at both enterprise process pain issues and project pain issues (as a project postmortem). These two techniques have fancy names:

  • Infinity brainstorming
  • Interrelational digraphs

I don’t use these terms when I conduct these techniques — I just call them “focus groups,” “action groups,” or “postmortem.” Using fancy terms will turn people off. Don’t do it. A focus group is fast (it usually takes less than two hours) and is totally anonymous (no retribution). This particular technique levels the playing field for quiet, introverted people versus loud, dominant people. That quiet, shy person may be the very person with a lot to express anonymously. The most successful focus group in my experience was done with about 35 people in a single session of about an hour and a half. At this point, you’re probably thinking it’s impossible to have a successful session with 35 people. Conventional wisdom says the success of any meeting is inversely proportional to the number of attendees: more people means lower success, and fewer people means higher success. This technique is just the opposite. You need at least 12 people to be successful. A small group simply won’t work for this technique.

Here are the supplies needed to conduct these sessions:

• Large Post-it notes: enough for about 20 Post-its minimum per participant.
• Butcher paper or flip-chart paper: these are taped to three walls of the conference room. Four or five charts are taped to one wall. Five to six charts are taped to the opposite wall. One chart is taped on a third wall (for the infinity brainstorming rules). One chart will be used to capture the major impact analysis after we collect the data from the infinity brainstorming part of the session. The size of the room will affect how many walls are actually used. No matter what, you need two walls for charts.
• Masking tape for the large paper sheets above.
• Fine-point felt pens: enough for the participants and the facilitator.
You need a large conference room that will hold all the participants and has wall space onto which you can tape large paper charts on three walls. Reserve this room for about two and a half to three hours to allow time for the facilitator to set up, for the actual session, and for wrapping up. The participants show up about half an hour after the room's reserved start time. At that point, all supplies should be out and the paper should be up on the walls. This is what you need to do ahead of time:

• Write down the session rules on a single chart. The rules are:
– One finding per Post-it
– You can write as many Post-its as you want within the allotted time
– Use only the supplied fine-point felt pen for writing
– No handwriting: print your finding
– No names (i.e., anonymous)
– Don't get personal; keep it process related
– Be businesslike (not crude) in your remarks
– Make your finding clear as to your intent: can another person understand your point?
– Be quiet when writing findings
• Take a few minutes to explain to the assembled group what you will be doing. Make sure the group knows about your expectations and desired results. I have even put this in written form and sent it to the group ahead of time to make sure that everyone is on board with this technique. This sets the foundation. (5 minutes maximum)
• Announce that participants are to write one finding per Post-it note, on as many Post-it notes as they want, within a ten-minute time frame. This is a totally quiet part of the technique. After writing, participants take their individual Post-its and stick them onto one wall's paper charts. Random placement is in order. This part surfaces all the pain issues as experienced by the participants in a nonretributional way, because no names are used. (10 minutes maximum)

• Explain that the findings should be placed into "like" groupings by placing Post-its from one wall into Post-it groupings on another wall. Like things should be clustered together; some adjustments may need to be made later. Also point out that there is a predetermined category called "orphans." (When conducting a project postmortem, I add a "good" category for the things we did right on a project.) Forget trying to establish any category names. (About 1 minute)

• Have everyone stand up, grab a pile of Post-its from one wall, and place them on another wall as Post-it clusters. Remind them that once a finding is placed, it can't be removed. Some talk among people can happen at this point. If you do this correctly, you will limit the category clusters to about 10–12 groups at a maximum. Have orphaned Post-its placed under "orphans." (About 10–12 minutes)
• Identify a "reader" from the group. This individual will read the Post-its to the entire group and possibly rearrange some Post-its. (About 1–2 minutes)
• Have the reader stand up and read each Post-it finding in each cluster out loud. This accomplishes the following:
– Everyone gets to hear all the findings.
– Everyone gets a chance to persuade the reader to remove a Post-it if it is not in a "like" group.
– Finally, the group establishes a mailbox name for each cluster of Post-its. Keep the name short if possible. (For project postmortems, I found that using the names from one project as predetermined names for subsequent postmortems was helpful for metrics data. However, one group disagreed with this and felt it was stifling to have a set of mostly predetermined names, especially when they disagreed with an earlier group over those names.)
• The reader repeats this for all Post-it clusters until all cluster groups have category names. During this time frame, some Post-it notes may be moved from one group to another. Finally, an attempt is made to place any and all orphaned Post-it notes into a named category; those that don't fit stay as orphans. This part takes the findings and attempts to categorize them for the interrelationship digraph part of this technique. (15–20 minutes)
• The moderator takes a large blank matrix and writes all the category names down the left side of the matrix, then writes the same set across the top of the matrix. The moderator shades out where each category intersects with itself. You should end up with a diagonal line of shaded boxes from the top left down to the bottom right of the matrix. This is the foundation for the interrelationship digraph. We want to end up with some idea of what we need to work on first, second, third, etc., to get the biggest bang for the buck in process improvement. (About 2 minutes)

• The moderator reads each category name down the left side of the matrix and asks for each, "For this category, what are the other categories that have a major impact on it?" The group participates in identifying the other categories that have that major impact. The moderator simply places an "X" across the row for that targeted category. This gets repeated for each category name down the left side until done. (10 minutes maximum)

• The moderator tallies up the number of "X" marks per column and writes the totals at the bottom of each column. This gives a good idea of which categories have the most impact on the others and should therefore be attacked first. (About 2 minutes)

• Thank the group for their time and dismiss them.
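The column tally that produces the attack order is simple arithmetic, and can be sketched in a few lines of Python. The category names and impact votes below are hypothetical, purely to illustrate how the interrelationship digraph's column totals fall out:

```python
# Hypothetical category names and impact votes -- illustrative only,
# not data from an actual session.
categories = ["Requirements", "Planning", "Tools", "Training", "Communication"]

# impacts[target] = the other categories the group says have a major
# impact on that target (the "X" marks across the target's row).
impacts = {
    "Requirements": {"Communication", "Training"},
    "Planning": {"Requirements", "Communication"},
    "Tools": {"Training"},
    "Training": {"Planning"},
    "Communication": set(),
}

# Tally the "X" marks per column: how often each category is named
# as a major influence on the others.
column_totals = {c: 0 for c in categories}
for target, influencers in impacts.items():
    for c in influencers:
        column_totals[c] += 1

# Attack order: highest column total first (biggest bang for the buck).
attack_order = sorted(categories, key=lambda c: column_totals[c], reverse=True)
print(column_totals)
print(attack_order)
```

With these made-up votes, "Training" and "Communication" each influence two other categories, so they would be attacked first.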

Is this a perfect technique? No. Is it fast? Yes. Does it get at process pain issues? You bet. By spending about one and a half to two hours on this, you will extract pain issues from everybody. There is no retribution
because there are no names involved. The quiet person can write stuff down anonymously just like the extroverted person can. The inputs come from the very people seeing and suffering from those pain issues.
What I have done after the session is to record all the findings by category into an Excel spreadsheet. This is a great application for counting things and coming up with percentages. This completed spreadsheet gets
sent back to all the participants immediately. I have cautioned this group to keep this data under wraps because it is confidential. The next step is to convene a senior management meeting to go over
the findings and categories. The senior staff needs to understand what went on and how rapidly this technique gathers data. As the moderator, take the top three categories in particular and concentrate on those for this senior management group. This is done to:

• Acquaint senior management with pain issues "from the trenches" and in written form (not sanitized)
• Identify the top three categories that, if worked, should give the biggest bang for the buck in improving or removing pain issues
• Have this top-level management group develop an initial plan to attack the top three categories (or a subset of them)

Finally, I arrange for a feedback meeting with all the participants, so that a member of the senior staff:

• Tells participants that management has heard their pain issues

• Informs participants of the plan to attack pain issues

This feedback meeting can be powerful for all involved. It closes the loop with participants and makes them feel that they have not wasted their time. It involves senior management directly with unsanitized pain issues. They can't say they didn't know about this or that. There's no place to hide. They have to do something about it. It does cause action.

When any improvements are made, you will keep going back to these pain issues. You don't tell the rank and file that you've now satisfied the first goal of some part of the CMMI! They will not relate to that at all. Tell them that these processes directly address the pain issues that were established. When regular folks get to see less pain, you will rapidly develop more and more champions for your cause. If upper management sees smoother operations, better quality, lower time-to-market costs, better repeatability, etc., which all contribute to a healthier bottom line, you will get more champions at that level.

You can do this periodically to see how you're doing. You can do this
as part of a preappraisal drill for process maturity. You can do this as a preaudit drill. The periodic approach will give you some powerful metrics related to pain issues. There’s nothing like solid numbers to show your
workforce that you are serious about reducing workforce pain.
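The post-session spreadsheet step, counting findings by category and coming up with percentages, can be sketched in Python as well. The findings below are hypothetical stand-ins for the real Post-it data:

```python
from collections import Counter

# Hypothetical session findings keyed by their cluster category --
# illustrative data, not from an actual session.
findings = [
    ("Tools", "Build server is down weekly"),
    ("Tools", "No license for the test tool"),
    ("Communication", "Decisions never reach the team"),
    ("Training", "New hires get no process orientation"),
    ("Tools", "Defect tracker is too slow"),
]

# Count findings per category, then express each as a percentage of
# the total -- the same numbers the Excel tally would show.
counts = Counter(category for category, _ in findings)
total = sum(counts.values())

for category, count in counts.most_common():
    print(f"{category}: {count} findings ({100 * count / total:.0f}%)")
```

Run periodically, these per-category percentages become exactly the pain-issue metrics described above.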

Develop a Top-Level Life-Cycle Framework

This may be obvious, but you really need to provide that top-level life-cycle framework into which to fit all the process pieces being developed. Without that top-level picture, there is no cohesive way of creating process
elements that “fit” into anything. One vice president I worked for insisted on forming various Process Action Teams (PATs) to get some deployment items done without this in place. I was even ordered to get these groups
going despite my strong objections. The results of this VP’s order were absolute chaos and a huge waste of time. I sure hope none of you will deal with some of the characters I’ve had to endure for process development
and improvement! People like that are out there. Some of them even get promoted! Hopefully, the top-level life cycle has been developed before insertion takes place. You can do a subset top-level life cycle if your initial
deployment efforts only deal with that part of the overall life cycle. For example, if you are attacking proposal-related processes, you can get away with just developing the proposal part of your life cycle. The bottom line is that you absolutely need a framework into which to fit any process elements, so that you develop once and don't need rework. With that top-level life cycle laid out with PADs per life-cycle phase,
you now have the ability to tie your pain issues to activities and to associative procedures. You also have the ability to tie event-driven procedures to any and all life-cycle phases.

Reference: Defining and Deploying Software Processes
