
How to Manage Your Software Product Life Cycle with MAUI

MAUI monitors multiple software engineering metrics and milestones to analyze a software product's development life cycle in real time.

Software vendors depend on writing, maintaining, and selling quality software products and solutions. But software product conception, planning, development, and deployment are complex, time-consuming, and costly activities. While the market for commercial software demands ever higher quality and ever shorter product development cycles, overall technological complexity and diversification keep increasing. One result is that the rate of project failure is increasing, too. Software products consist of data, functions, files, and images, but the companies that build them have only one essential resource: human beings. Development organizations thus require ways to measure code-writing productivity and quality-assurance processes to guarantee the continuous improvement of each new product release.

The following steps outline a hypothetical software product life cycle:

Customer data. The sales force collects customer data and enters it into a Siebel Systems customer relationship management system.

Product requirements. Product architects convert the customer data into product requirements and enter them into a Borland CaliberRM collaborative requirements-management system for high-level product definition.

Development. Product development managers use project management tools (such as MS Project, ER/Studio, and Artemis) during product design and engineering. At the same time, source code development is supported by the Concurrent Versions System (CVS) [2], which allows programmers to track code changes and develop in parallel.

Testing. When the coding phase concludes, the quality-assurance team uses various testing tools (such as Purify, PureCoverage, and TestDirector) to isolate defects and perform integration, scalability, and stress tests, while the build-and-packaging team uses other tools to generate installable CD images and burn CDs. In this phase of product development the Vantive system tracks product issues and defects: testers open Vantive tickets, or descriptions of problems, which product developers then examine and eventually resolve. RoboHelp and Documentum support the documentation effort.

Release and maintenance. The software product is finally released to the market, where maintenance begins a simpler customer-support life cycle supported by such tools as MS Project, CVS, and Vantive.

What happens when something inevitably goes wrong: milestones slip, or productivity, quality, or customer satisfaction falls off? How does the development company address and solve these problems? Critical questions the product developer should be able to answer include:

  • How much do product development, maintenance, and support cost the company? How does quality relate to cost, and stability to milestones?
  • Why does a particular product’s schedule keep slipping while other product schedules stay on track? Is resource allocation adequate and management stable?
  • How many worker-hours have gone into a particular product? How are they apportioned among requirements management, planning, design, implementation, testing, and support? How long does it take to resolve customer complaints?
  • How stable is each software artifact? What is the appropriate level of quality? What is the extent of the quality-assurance team’s test coverage?
  • What can be done to minimize the rate of failure and the cost of fixing defects while improving milestone accuracy and quality and optimizing resource allocation? (see Figure 1)

Those struggling to find answers include product-line and quality-assurance managers, process analysts, directors of research, and chief technology officers.

One approach they might take is to change the product development process by adopting a more formal product life cycle [4], possibly introducing an integrated product development suite (such as those from Rational Software, Starbase, and Telelogic), leveraging embedded guidelines, integration, and collaboration functions. It may be their only option when the degree of software product failure is so great that the software product development life cycle must be completely reengineered. However, this approach is too often subjective and politically driven, producing culture shock yet still not solving problems in such critical areas as project management and customer support.

Another approach is to maintain the current process, acquire a much better understanding of life cycle performance, and take more specific actions targeting one or more phases of the life cycle. The resulting data is used to plan and guide future changes in a particular product’s life cycle. This approach is highly objective, leaving fewer opportunities for subjective decision making, minimizing change and culture shock, and promising extension to other phases of the product life cycle.


Measuring Critical Features

Software product life cycle management starts by measuring the critical features of the product and the activities performed in each phase of its life cycle. Useful life cycle metrics [1, 3] include:

  • Changes of scope;
  • Milestone delays;
  • Rate of functional specification change;
  • Percent of test coverage;
  • Defect resolution time;
  • Product team meeting time;
  • Customer problem response time; and
  • Critical systems availability.

Ideally, the data is aggregated and summarized in a product life cycle data warehouse where it is then available for generating views, automated alerts, and correlation rules from day-to-day operations, product status summaries, and top-level aggregated and executive data. These views, alerts, and rules support a variety of users, from the product line manager to the process analyst, and from the software test engineer to the chief technology officer. Figure 2 outlines an agent-based data acquisition, alert, and correlation architecture for the hypothetical life cycle described earlier.

Much of this work is performed by software agents autonomously mining the data and providing automated alerts based on thresholds and correlation rules. Phase-specific alerts are generated within a single phase, as when fixing defects in engineering takes too long or requires too many new lines of code. Global alerts span phases, as when research and development expenses are out of proportion to a product’s sales or when new requirements crop up toward the end of the development cycle. Such a system might alert development managers to the following situations:

Too much time to resolve product defects. The managers drill into details provided by the system and notice that some components keep changing, prompting them to organize a code review of the components and identify and order improvements to their design and modularity. As a result, the product becomes more stable, and the time to resolve defects decreases.

Too many defects. The system reports that many more defects are generated for product X than for the other, say, eight products for which the quality assurance managers are responsible. After analyzing the current resource allocation with their direct reports, they move resources from the most stable products to product X and notify the development organization of the situation. Focusing on the right products and quickly reacting to alerts increases overall product quality.

Missed milestones. The system reports that correlated metrics (such as rate of defect generation, time needed to resolve a support issue, and overall stability and quality) indicate a product’s next release milestones are likely to slip. Further analysis shows the entire development staff is busy addressing current release problems. An analyst prepares a detailed report to alert company executives, who might then decide to assign an expert product architect to assess the situation and propose a recovery plan; notify customers the next release is delayed (quantified based on the assessment); and review the product team’s technical and management skills to determine whether and which actions (such as training and adjusting responsibilities) are needed to increase product quality and customer satisfaction.
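Correlation rules like those behind these alerts can be expressed as simple predicates over the warehoused metrics. The following Python sketch is purely illustrative: the metric names, thresholds, and three-of-four voting rule are assumptions, not rules drawn from any actual system.

    # Hypothetical sketch of a correlation rule behind a milestone-slip
    # alert. All metric names and thresholds are illustrative
    # assumptions, not values from the article.
    from dataclasses import dataclass

    @dataclass
    class ProjectMetrics:
        defect_generation_rate: float  # new defects opened per week
        defect_resolution_days: float  # average days to resolve a defect
        stability_index: float         # 0 (unstable) .. 10 (stable)
        quality_index: float           # 0 (poor) .. 10 (good)

    def milestone_slip_alert(m: ProjectMetrics) -> bool:
        """Fire a global alert when several metrics jointly suggest the
        next release milestone is likely to slip."""
        signals = [
            m.defect_generation_rate > 20,  # defects arriving unusually fast
            m.defect_resolution_days > 14,  # fixes taking too long
            m.stability_index < 4,          # code base still churning
            m.quality_index < 5,            # quality trending low
        ]
        # Require agreement among most signals so one noisy metric
        # does not trigger the alert on its own.
        return sum(signals) >= 3

    print(milestone_slip_alert(ProjectMetrics(25.0, 18.0, 3.2, 4.1)))  # True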


Enter MAUI

In 2002, BMC Software implemented a prototype life cycle management approach called Measure, Alert, Understand, Improve, or MAUI, to manage several problematic software projects. Focusing on the engineering phase of the software product life cycle, it was designed to monitor development activities carried out through CVS and Telelogic’s Synergy Configuration Management (SCM) system. Several months of daily monitoring revealed trends and patterns in the metrics and parameters of these projects’ life cycles. The tables here summarize this data, along with the advice we generated for a number of critical BMC projects and teams.

The metrics in Table 1 help the development team analyze project activities. The software engineering monitor (SEM) agent collects them daily at the file level, aggregates them into directories and projects according to the hierarchies defined in the underlying SCM tools, and stores them in a metric history database for later use [5]. The SEM agent generates alerts and events and notifies development team members automatically when metric thresholds are crossed.
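The article does not publish the SEM agent’s internals, but the roll-up step it describes, summing file-level metrics into every enclosing directory of the SCM hierarchy, might look like the following minimal sketch; the metric names and paths are invented for illustration.

    # Minimal sketch of the daily roll-up step: per-file metrics are
    # summed into every enclosing directory. Metric names and paths
    # are invented; this is not the SEM agent's actual schema.
    from collections import defaultdict
    from pathlib import PurePosixPath

    def aggregate(file_metrics: dict) -> dict:
        """Sum per-file metrics into each enclosing directory."""
        totals = defaultdict(lambda: defaultdict(float))
        for path, metrics in file_metrics.items():
            for parent in PurePosixPath(path).parents:  # src/db, src, .
                for name, value in metrics.items():
                    totals[str(parent)][name] += value
        return {d: dict(m) for d, m in totals.items()}

    daily = {
        "src/db/cursor.c": {"loc_changed": 120, "defects_fixed": 2},
        "src/db/index.c":  {"loc_changed": 45,  "defects_fixed": 1},
        "src/ui/panel.c":  {"loc_changed": 300, "defects_fixed": 0},
    }
    print(aggregate(daily)["src/db"])  # {'loc_changed': 165.0, 'defects_fixed': 3.0}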

For any given metric collection cycle the observation window is 200 days, so the SEM agent ignores all activities older than 200 days at collection time. The indexes are special metrics defined by the development team together with BMC’s project managers in light of their own criteria for stability and quality. Each index is a weighted sum of basic metrics, normalized by formula to a value between 0 and 10.
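The normalization formula itself is not given. One plausible reading, with made-up weights and scales, is a weighted sum of metrics, each clamped against an agreed scale and mapped onto the range 0 to 10; whether a high value is good or bad is part of the team’s own definition.

    # One plausible reading of an index: a weighted sum of basic
    # metrics normalized to 0..10. Weights, scales, and metric names
    # are assumptions standing in for the team's own definitions.
    def index(metrics: dict, weights: dict, scales: dict) -> float:
        """Clamp each metric to 0..1 against its scale, take the
        weighted sum, and map the result onto 0..10."""
        score = sum(
            weights[name] * min(metrics[name] / scales[name], 1.0)
            for name in weights
        )
        return 10.0 * score / sum(weights.values())

    stability = index(
        metrics={"code_turmoil": 800, "defects_fixed": 12},
        weights={"code_turmoil": 0.7, "defects_fixed": 0.3},
        scales={"code_turmoil": 1000, "defects_fixed": 20},
    )
    print(round(stability, 1))  # 7.4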

Table 2 indicates the SEM agent has reported an overall alarm status for a project called Jupiter, due in part to the low number of lines of code (LOC) per developer, supporting the following analysis:

  • The significant level of code turmoil suggests the product is unstable and therefore not yet ready for testing;
  • The defect-complexity trend suggests immaturity and a related lack of well-defined components;
  • The low number of LOC per developer suggests too many developers may still be changing the code; and
  • Low average time spent by developers on the project could indicate a lack of ownership within the development team or the reshuffling of resources.

This analysis allows project managers to proactively review resource allocation and task assignments and to perform targeted code reviews of the parts of the product that change most often and involve an unacceptably high number of defects. The potential long-term savings of time and money in customer support and maintenance are significant.
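Targeted code reviews of this kind reduce to a query over the metric history database. The following sketch assumes a hypothetical SQLite schema, with invented table and column names, and ranks files by churn among those with repeated defects inside the 200-day observation window.

    # Hedged sketch: rank files for a targeted code review by combining
    # change frequency with defect counts. The metric_history table and
    # its columns are hypothetical, invented for illustration.
    import sqlite3

    REVIEW_QUERY = """
        SELECT file_path,
               SUM(loc_changed)   AS churn,
               SUM(defects_fixed) AS defects
        FROM   metric_history
        WHERE  collected_at >= date('now', '-200 days')  -- observation window
        GROUP  BY file_path
        HAVING SUM(defects_fixed) >= 3                   -- repeat offenders only
        ORDER  BY churn DESC
        LIMIT  10
    """

    def review_candidates(db_path: str) -> list:
        """Return the ten highest-churn, defect-prone files."""
        with sqlite3.connect(db_path) as conn:
            return conn.execute(REVIEW_QUERY).fetchall()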

Table 3 indicates that the SEM agent has reported an overall OK status for another project, this one called Saturn, supporting the following analysis:

  • The product is fairly stable and evolving toward even greater stability;
  • Quality is improving, and the documentation level is good;
  • Decreased defect complexity and lack of complexity fixes suggest the product is becoming mature and will soon be ready for release to customers; and
  • High numbers of LOC per developer and time spent by developers on the project indicate project ownership and resource allocation are well defined.

This analysis shows the project managers are doing a good job. Further analysis might suggest they could comfortably release the product earlier than predicted, even beating the schedule. The benefits of reinvesting the resulting early revenue in product improvements are potentially significant.

These early experiments in MAUI real-time development monitoring demonstrate the value of continuously measuring software engineering metrics (see Figure 3). The MAUI prototype provides a real-time feedback loop that helps teams and managers quickly identify problem areas and steer software development projects in the right direction.

BMC’s adoption of MAUI has been limited by three main concerns:

  • No instant results. Monitoring must be done for at least two to three months before meaningful patterns emerge;
  • Novelty of the approach. Most people still feel other, more important things must be done first; and
  • Misuse of data. Some people see the risk of a Big Brother syndrome, where both managers and programmers fear their work is constantly being monitored, evaluated, and criticized.

Future MAUI improvements include:

  • Pattern identification. New project behavior could be compared to previous project behavior, helping pinpoint required changes in direction;
  • Adjustable metric thresholds depending on current project phase. Thresholds tuned to the current phase (such as component integration, testing, and maintenance) would let MAUI generate better alerts;
  • Adjustable metric thresholds depending on type of product or component. Thresholds tuned for core APIs, user interfaces, utilities, and other component types would generate more meaningful alerts and provide tighter control of critical components (see the sketch after this list); and
  • Standard framework metrics. Monitoring metrics from such frameworks as the Capability Maturity Model developed by the Carnegie Mellon Software Engineering Institute and the ISO 9000 software quality standard developed by the International Organization for Standardization would help software development teams make better decisions about the critical steps needed to ensure delivery of software products to market.
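A minimal sketch of phase- and component-dependent thresholds follows; the phases, component types, and numbers are all assumptions, since the article proposes the idea without specifying values.

    # Illustrative phase- and component-dependent thresholds. The
    # phases, component types, and limits are assumptions.
    THRESHOLDS = {
        # (phase, component type): maximum acceptable code turmoil (LOC/day)
        ("development", "core_api"):       400,
        ("development", "user_interface"): 800,
        ("testing",     "core_api"):        50,  # core APIs should be frozen
        ("testing",     "user_interface"): 150,
        ("maintenance", "core_api"):        20,
        ("maintenance", "user_interface"):  60,
    }

    def turmoil_alert(phase: str, component: str, turmoil: float) -> bool:
        """Alert when daily code turmoil exceeds the threshold for the
        current phase and component type."""
        return turmoil > THRESHOLDS[(phase, component)]

    print(turmoil_alert("testing", "core_api", 120))  # True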


Conclusion

Most of the value of life cycle management follows from automating data acquisition, providing alerts and correlation rules, identifying bottlenecks, increasing quality, optimizing critical processes, saving time, money, and resources, and reducing risk and failure rate. Key benefits include:

  • Improved product-development processes. The knowledge, repeatability, and performance of the product-development process already being used can be improved without dramatically changing the process itself;
  • Automated alerts. Data is acquired from various phases of the product life cycle, monitored against predefined thresholds and correlation rules, and used to automatically alert users to problems;
  • Proportional scaling. The complexity and cost of the management infrastructure needed to solve problems are proportional to the difficulty of solving the problems;
  • Improved evaluation. Having numeric and quantitative data is a good starting point for producing a more objective evaluation of the software being developed while helping justify and optimize changes to the process;
  • Improved predictability of results. Analyzing historical data helps reveal patterns and predict future results;
  • Happier people. Because life cycle management makes it possible to accomplish tasks through best-of-breed solutions and tools, the people responsible for the tasks are happier;
  • Critical tool availability. Uptime of critical tools is monitored to ensure tool availability; and
  • Continuous monitoring. The product development process is continuously monitored, measured, and improved.

A software product life cycle that is stable, predictable, and repeatable ensures timely delivery of software applications within budget. A predictable life cycle is achievable only through continuous improvement and refinement of the processes and their tools. The real-time MAUI approach to product life cycle monitoring and analysis promises the continuous improvement and refinement of the software product life cycle from initial product concept to customer delivery and beyond.


Figures

Figure 1. Software product life cycle improvement scenarios.

Figure 2. Managed software product life cycle.

Figure 3. Project Jupiter and project Saturn trends viewed through the SEM Web interface.


Tables

Table 1. Metrics definitions; LOC = lines of code.

Table 2. Project Jupiter engineering activities monthly analysis, Jan. 2002.

Table 3. Project Saturn engineering activities monthly analysis, June 2002.

References

    1. Florac, W., Park, R., and Carleton, A. Practical Software Measurement: Measuring for Process Management and Improvement. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, 1997.

    2. Fogel, K. Open Source Development with CVS. Coriolis Press, Scottsdale, AZ, 1999.

    3. Grady, R. Practical Software Metrics for Project Management and Process Improvement. Prentice Hall, Inc., Upper Saddle River, NJ, 1992.

    4. Jacobson, I., Booch, G., and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley Publishing Co., Reading, MA, 1999.

    5. Spuler, D. Enterprise Application Management With PATROL. Prentice Hall, Inc., Upper Saddle River, NJ, 1999.
