Software Productivity

By Robert Sidler

MSIS 488 Fall 2002

Introduction

Software productivity is a deceptively simple concept but a matter of some debate. Although it was first measured in lines of code per man-hour worked, a better definition is the ratio of the functional value of the software produced to the labor and expense of producing it. There are several ways to measure software productivity, including Function Point Analysis, Constructive Cost Modeling, Cyclomatic Complexity, and program performance metrics that take into account the costs of running and maintaining the software.

Using these tools, the software development process can be managed and productivity enhanced by reusing code to leverage existing programs, minimizing rework through reliability initiatives, and adopting sound development practices and standards. However, even when sound practices are adhered to, software productivity may not increase because of circumstances outside the control of the development team, such as rapidly changing technologies and the fixed-cost behavior of significant parts of the software development process.

The concept of software productivity is not a theoretical abstraction; it is a critical part of the software engineering process. Understanding software productivity becomes important in systems analysis when you consider that good systems analysis enhances software productivity and that software productivity is a success measure of systems analysis.

What is Software Productivity?

In standard economic terms, productivity is the ratio of the amount of goods or services produced to the labor or expense that goes into producing them (Jones 1). The assumption that follows, then, is that software productivity is the ratio of the amount of software produced to the labor and expense of producing it. The theory is simple and appears logical, but in practice it becomes a matter of some debate.

In order to define software productivity, we must first establish a definition of software. At its most fundamental level, software is a computer program composed of lines of code. However, lines of code, in and of themselves, are not the primary deliverables of a software project, and customers often do not know how many lines of code are in the software they are buying (Jones 1).

A broader definition of software encompasses not only the computer program, but also the related procedures and documentation associated with the program. This often includes documentation of requirements, specifications, software design, and end-user procedures (Mills 2). The complete set of documentation provides a more tangible deliverable of a software project than does the program itself.

However, even though program code and documentation are the primary outputs of software production, they are not of direct interest to the software consumer. Software is bought based on what it can do, not on how it was coded or documented. This means that the economic value of the goods and services consumed is not measured in the same units as the natural units of production. Consequently, a different measure of software is needed to arrive at a meaningful definition of software productivity, one that reflects the utility value of the software, namely the function the software is intended to perform (Jones 1).

Basing our measurements on the utility value of software, we can revise our original assumption and define software productivity as the ratio of the functional value of the software produced to the labor and expense of producing it. This definition allows us to measure productivity based on the value of results to the software consumer, which is more realistic than basing results on lines of code (Mills 2).

How is Software Productivity Measured?

With a working definition of software productivity established, we are next faced with the question of what to measure. Unlike lines of code and pages of documentation, which are easy to count, program functionality has no natural unit of measure and is thus harder to quantify. Software metrics, quantifiable measures of various characteristics of a software system or software development process, need to be established (Bordoloi 3). These metrics need to capture both the effort required to produce the software and the functionality provided to the software consumer.

There are various methods by which software productivity is measured, but whichever method is employed, the goal is the same: to give software managers and professionals a set of useful, tangible data points for sizing, estimating, managing, and controlling software projects with rigor and precision (Jones 1). Some of the more common methods of measuring software productivity are Function Point Analysis, Constructive Cost Modeling, and Cyclomatic Complexity.

Function Points and Function Point Analysis

A function point is a synthetic measure developed in the mid-1970s by A. J. Albrecht of IBM to provide a workable surrogate for the goods produced by software projects. In function point analysis, a numeric value is derived by counting occurrences of five product parameters that Albrecht defined as “end-user benefits” and weighting the resulting counts to determine a software project’s function point value (Jones 1). This value is then used as a measurement in determining the level of effort required to complete the project.

The number of function points for a project is calculated by first counting the number of external inputs, external outputs, internal logical files (i.e., master files), external interfaces, and external inquiries to be used by the software. Each parameter is assigned a complexity rating of simple (low), average, or complex (high) (Bordoloi 3).

An unadjusted count is derived from the sum of the parameters multiplied by their assigned weights and is then adjusted using a set of fourteen general system characteristics. The general system characteristics are intended to value additional functionality of the system, such as user friendliness, transaction rates, performance, and reusability. A final function point count is then computed from the unadjusted count and the sum of the general system characteristic values (Garmus 4).

To determine the size, cost, or work effort required for a particular project, its calculated function point value is compared to historical data for projects with the same or relatively similar values (Bordoloi 3).
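To make the arithmetic concrete, the following is a minimal sketch of a function point calculation. The weight table reflects the commonly published Albrecht/IFPUG values; the parameter counts and general system characteristic ratings are hypothetical.

```python
# Sketch of a function point calculation.
# WEIGHTS holds the commonly published Albrecht/IFPUG values;
# the counts and GSC ratings below are hypothetical.

# parameter -> weights for (simple, average, complex)
WEIGHTS = {
    "external_inputs":     (3, 4, 6),
    "external_outputs":    (4, 5, 7),
    "external_inquiries":  (3, 4, 6),
    "internal_files":      (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}

# parameter -> counts of (simple, average, complex) occurrences
counts = {
    "external_inputs":     (5, 10, 2),
    "external_outputs":    (4, 6, 1),
    "external_inquiries":  (3, 5, 0),
    "internal_files":      (2, 3, 1),
    "external_interfaces": (1, 2, 0),
}

# unadjusted function point count: sum of count times weight
ufp = sum(n * w
          for param, ns in counts.items()
          for n, w in zip(ns, WEIGHTS[param]))

# fourteen general system characteristics, each rated
# 0 (no influence) through 5 (strong influence)
gsc_ratings = [3, 2, 4, 1, 0, 3, 2, 5, 1, 2, 3, 0, 1, 2]
vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor

fp = ufp * vaf  # final (adjusted) function point count
print(f"UFP = {ufp}, VAF = {vaf:.2f}, FP = {fp:.1f}")
```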

Constructive Cost Model (CoCoMo)

Another approach to measuring the effort required to produce software is the Constructive Cost Model, developed by Barry Boehm to predict the work effort and development time required of the management and technical staffs on a software development project. The CoCoMo model is designed to provide predictions at three levels (basic, intermediate, or detailed), depending on the information known about the product being developed (Bordoloi 3).

The basic level is used to obtain a quick-and-dirty estimate of the overall project’s effort and development time early on, either right after analysis ends or as soon as a reasonable estimate of the lines of source code is available. The intermediate level is used later on in the system design phase to refine and to update the estimates calculated using basic CoCoMo. Detailed CoCoMo is used to further refine estimates to the module, subsystem, and systems levels, but the increased complexity of the calculations often does not significantly improve estimate accuracy. Detailed CoCoMo is not commonly used in general practice (Bordoloi 3).

CoCoMo estimates factor in the number of source code lines (in thousands), the project type (organic, embedded, or semi-detached), and a series of fifteen cost drivers to determine the effort and development time of the project. Organic projects involve small, highly experienced teams familiar with the technology; embedded projects involve large teams with very little experience with the technology; semi-detached projects fall somewhere in between. The cost drivers relate to product attributes, computer attributes, personnel attributes, and project attributes (Bordoloi 3).
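As an illustration, here is a minimal sketch of the basic CoCoMo equations, using Boehm's published basic-model coefficients for each project type; the 32 KLOC organic project is hypothetical. (Intermediate CoCoMo refines the result by multiplying the nominal effort by the product of the fifteen cost driver multipliers.)

```python
# Basic CoCoMo: effort = a * KLOC**b (person-months) and
# development time = c * effort**d (months). The coefficients
# below are Boehm's published basic-model values per project type.
COEFFICIENTS = {
    #                 a     b     c     d
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_type):
    """Return (effort in person-months, development time in months)."""
    a, b, c, d = COEFFICIENTS[project_type]
    effort = a * kloc ** b
    time = c * effort ** d
    return effort, time

# hypothetical 32 KLOC organic project
effort, time = basic_cocomo(32.0, "organic")
print(f"effort = {effort:.1f} person-months, schedule = {time:.1f} months")
```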

It is important to note that CoCoMo is size-oriented because its estimates are based on the number of lines of source code delivered. As such, this method does not cover the full software life cycle. It is best used as an estimator of the effort required from system design (after system requirements have been analyzed) through integration and testing (Bordoloi 3).

Cyclomatic Complexity (McCabe Metrics)

Whereas CoCoMo employs a size-oriented approach to analyzing a program, Cyclomatic Complexity, developed by Thomas McCabe, is based on program complexity. Cyclomatic Complexity (often referred to as McCabe metrics) is premised on the idea that complexity is directly related to the paths created by control and decision statements. As the number of paths through a program or module increases, its complexity increases. As complexity increases, the effort required to produce the program increases and its testability and maintainability decrease (Bordoloi 3).

Cyclomatic complexity metrics use graph theory to illustrate the number of linearly independent paths in the program or module. A control graph for the program is created that shows blocks of sequentially executable code as nodes and the flow or paths through the program as arcs. The cyclomatic complexity number is calculated by subtracting the number of nodes from the number of arcs (including a dummy arc from the exit node back to the entry node), and adding the number of components (program modules or programs). This value represents the number of independent paths in the program (Bordoloi 3).
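This description is equivalent to the familiar formula V(G) = E - N + 2P for a graph with E arcs, N nodes, and P components, since including the dummy arc adds one arc per component. Below is a minimal sketch of the computation for a hypothetical module whose control graph contains a single if/else decision inside a loop.

```python
# Cyclomatic complexity V(G) = E - N + 2P for a control graph with
# E arcs, N nodes, and P connected components (modules or programs).

def cyclomatic_complexity(nodes, arcs, components=1):
    return len(arcs) - len(nodes) + 2 * components

# hypothetical module: an if/else decision inside a loop
nodes = {"entry", "test", "then", "else", "join", "exit"}
arcs = {
    ("entry", "test"),
    ("test", "then"), ("test", "else"),  # decision creates two paths
    ("then", "join"), ("else", "join"),
    ("join", "test"),                    # loop back edge
    ("test", "exit"),                    # loop exit
}
print(cyclomatic_complexity(nodes, arcs))  # 7 - 6 + 2 = 3 independent paths
```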

Although initially used to analyze code after it was written, cyclomatic complexity is now routinely used to analyze the control and data flow diagrams created in the design phase. The early detection of complex programs or modules significantly lowers the time and effort expended to code, test, and maintain them in subsequent life cycle phases. This, in turn, reduces the cost of the entire software project and improves productivity (Bordoloi 3).

Other Measures of Productivity and Performance

Of course, Function Point Analysis, CoCoMo, and McCabe metrics are only a few of the methods employed to measure software productivity, and even these focus on the development aspects of software productivity. These measurements are needed for project management and cost estimation, but they focus on the human input to the software productivity equation, not necessarily on the output side of the equation: the value delivered to the customer (Shaw 5).

Software productivity should also take into account program performance, or how quickly a program solves a specific problem. The speed of a program is influenced by a number of factors, some of which are programmer-controlled and others of which depend on the hardware and software environment in which the program executes. Other performance measures include memory usage, code portability, and reliability (Jalics 6). These additional metrics take into account the costs associated with running and maintaining the software, as well as those associated with developing it.
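As a minimal sketch of how two such run-time metrics might be collected, the following times a hypothetical stand-in workload and records its peak memory use with the Python standard library.

```python
# Collect two run-time performance metrics, wall-clock time and peak
# memory, for a hypothetical stand-in workload.
import time
import tracemalloc

def workload():
    # stand-in for the program under measurement
    return sum(i * i for i in range(1_000_000))

tracemalloc.start()
start = time.perf_counter()
workload()
elapsed = time.perf_counter() - start
_, peak = tracemalloc.get_traced_memory()  # returns (current, peak) in bytes
tracemalloc.stop()

print(f"elapsed = {elapsed:.3f} s, peak memory = {peak / 1024:.1f} KiB")
```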

What Factors Improve Software Productivity?

In keeping with our definition of software productivity as the ratio of the functional value of the software produced to the labor and expense of producing it, our next step is to determine ways to improve software productivity. Whereas computer hardware has a reputation for performance and cost improvements unprecedented in the history of technology, software productivity has lagged behind (Shaw 5). This is due in part to shortcomings in software productivity measurements, as noted above, and in part to the fact that faster, more powerful computers can provide performance gains without software productivity gains.

Even so, software development companies are constantly looking for ways to increase both developer productivity and code quality. The first step toward improving either is to establish productivity and quality metrics and benchmarks as described above. After benchmarks have been established, areas for improvement can be determined and action plans put in place to improve performance. This could be as simple as rearranging developer work areas. Studies have shown that the design of the developers’ workspace can have a large effect on their productivity (Hamilton 7).

Leveraging, or code reuse, is perhaps the best technique for increasing software productivity. Leveraging is reusing or porting application software across multiple business sites (Green 8). Sometimes the highest real productivity comes from discovering how to reuse programs already written, possibly for quite different purposes, or from solving problems with existing programs by revising them as subprograms (Mills 2).

Minimizing rework is another excellent way to increase software productivity. This means catching mistakes and problems as early in the software life cycle as possible. A mistake caught in one phase can reduce the work required in subsequent phases by a factor of three. That is, a good requirements analysis can reduce the design job by a factor of three, a good design can reduce the implementation job by a factor of three, and a good implementation can reduce the maintenance job by a factor of three (Mills 2). Software reliability can be enhanced by applying various analysis methods, including software verification and testing (Sharygina 9).

Of course, a substantial part of software productivity involves the skill and personal behavior of the software developers themselves. It has been noted that there is a 10-to-1 difference in productivity among programmers, brought about in large part by their differing levels of problem-solving skill and programming knowledge (Mills 2). Individual developer productivity can be enhanced by providing software developers with adequate training in, and insisting they adhere to, disciplined processes such as structured analysis, top-down design, modular design, design reviews, code inspections, and quality assurance programs (Grady 10).

What Factors Inhibit Software Productivity?

For each factor that enhances software productivity, it is logical to believe that the absence of that factor inhibits productivity. We will accept this postulate with one caveat: implementing formal methods to improve productivity offers great benefits, but often at a heavy price. For everyday software development, in which the pressures of the market do not allow full-scale formal methods to be employed, a more lightweight approach is called for (Jackson 11).

The methods implemented should match the resources and experience of the organization and project team. It should also be noted that establishing an applied measurement program for software requires sensitivity to cultural and social issues. The normal reaction to a measurement program by both project management and staff is apprehension, and only when it is shown that the data will be used for beneficial purposes rather than punitive purposes will the apprehension subside (Jones 1).

Other factors that inhibit productivity do not have a corresponding enhancing factor. For example, changes in machine architecture have the effect of keeping the programming state of the art “off balance,” making it more difficult to manage code development (Mills 2). Another example is the fact that a substantial number of development activities (e.g., requirements gathering and user training) behave like fixed costs that do not decline when productivity increases through code reuse and similar techniques (Jones 1). These factors underscore the importance of understanding software productivity.

Implications for Systems Analysis

Software productivity’s implications for systems analysis are twofold. First, the selection of a sound systems analysis methodology is an important step toward productive software development; it has been shown that the use of standard methodologies increases software productivity. At the same time, the productivity of the software development project is a leading measure of the success or failure of the systems analysis effort. The key lies in realizing that productivity is built into the process; the measurement tools are just that, tools to measure productivity, not create it.

Conclusion

Software productivity is the ratio of the functional value of the software produced to the labor and expense of producing it. The tools we use to measure software productivity take into account the functionality delivered to the software consumer, the complexity of the program being developed, and the time and effort involved. In addition to the measurement tools associated with developing software, metrics associated with program performance take into account the costs of running and maintaining the software.

Software productivity can be enhanced by reusing code to leverage existing programs, minimizing rework through reliability initiatives, and adopting sound development practices and standards. However, even when sound practices are adhered to, software productivity may not increase because of circumstances outside the control of the development team, such as rapidly changing technologies and the fixed-cost behavior of significant parts of the software development process.

Understanding software productivity becomes important in systems analysis when you consider that good systems analysis enhances software productivity and software productivity is a success measure of systems analysis.

Software Measurement Websites

Association for Computing Machinery

http://www.acm.org/

The Association for Computing Machinery (ACM) is an international scientific and educational organization dedicated to advancing IT arts, sciences, and applications. The site features news and publications, conference listings, and a library.

Computer Measurement Group Inc.

http://www.cmg.org/

The Computer Measurement Group (CMG) is a nonprofit, worldwide organization of data processing professionals committed to the measurement and management of computer systems.

Formal Methods Europe

http://www.fmeurope.org/

Formal Methods Europe (FME) is an organization with the mission of promoting and supporting the industrial use of formal methods for computer systems development. It is not allied to any single organization or group of organizations. Its members come from different industrial, academic, and government bodies.

International Council on Systems Engineering

http://www.incose.org/

The International Council on Systems Engineering (INCOSE) is a not-for-profit membership organization founded in 1990 to develop, nurture, and enhance the system engineering approach to multidisciplinary system product development. The Web site lists conferences, workshops, seminars and courses, and features bulletins, technical journals, and electronic bulletin boards on systems engineering.

International Function Point Users' Group

http://www.ifpug.org/

The International Function Point Users' Group (IFPUG) is a non-profit organization committed to increasing the effectiveness of its members' IT environments through the application of function point analysis and other software measurement techniques.

Practical Software and Systems Measurement Support Center

http://www.psmsc.com/

Practical Software and Systems Measurement (PSM) is a U.S. Army site sponsored by the Department of Defense (DoD). The goal of the project is to provide project managers with the objective information needed to successfully meet cost, schedule, and technical objectives on programs.

Society for Software Quality

http://www.ssq.org/

The Society for Software Quality (SSQ) promotes increased knowledge and interest in quality software development and maintenance technology. The SSQ is a federally recognized public benefit corporation organized and operated exclusively for educational purposes. It is dedicated to improving software quality and to providing communication between academia, industry, and software professionals.

Software Engineering Institute

http://www.sei.cmu.edu/

The Software Engineering Institute (SEI) is a federally funded research and development center sponsored by the U.S. Department of Defense through the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. The SEI's core purpose is to help others make measured improvements in their software engineering capabilities.

Software Productivity Consortium

http://www.software.org/

The Software Productivity Consortium is a unique, nonprofit partnership of industry, government, and academia. It develops processes, methods, tools, and supporting services to help members and affiliates build high-quality, component-based systems, and continuously advance their systems and software engineering maturity pursuant to the guidelines of all of the major process and quality frameworks.

Software Technology Support Center

http://www.stsc.hill.af.mil/

The Software Technology Support Center (STSC) is a U.S. Air Force-sponsored site that focuses on the proactive application of software technology in weapon, command and control, intelligence, and mission-critical systems. The STSC helps organizations identify, evaluate, and adopt technologies that improve software product quality, production efficiency, and predictability.

References

1)        Jones, Capers. Applied Software Measurement: Assuring Productivity and Quality. 2nd ed. McGraw-Hill, 1996.

2)        Mills, Harlan. Software Productivity. Little, Brown & Co., 1983.

3)        Bordoloi, Bijoy, and Joe Luchetski. Software Metrics: Quantifying and Analyzing Software for Total Quality Management. Systems Development Handbook (P. Tinnirello, ed.). 4th ed. CRC Press LLC, 2000.

4)        Garmus, David, and David Herron. Measuring the Software Process: A Practical Guide to Functional Measurements. Prentice Hall PTR, 1996.

5)        Shaw, Mary. “The Tyranny of Transistors: What Counts about Software.” Institute for Software Research, International. Carnegie Mellon University, Mar. 2002. Available at: www-2.cs.cmu.edu/~Compose/ftp/shaw-sw-measures-fin.pdf

6)        Jalics, Paul J., and Santosh K. Misra. Measuring Program Performance. Systems Development Handbook (P. Tinnirello, ed.). 4th ed. CRC Press LLC, 2000.

7)        Hamilton, Mark. Software Development: Building Reliable Systems. Prentice Hall PTR, 1999.

8)        Green, Hal H., and Ray Walker. Leveraging Developed Software: Organizational Implications. Systems Development Handbook (P. Tinnirello, ed.). 4th ed. CRC Press LLC, 2000.

9)        Sharygina, Natasha, and Doron Peled. “A Combined Testing and Verification Approach for Software Reliability.” Formal Methods for Increasing Software Productivity. Proc. of International Symposium of Formal Methods Europe. Berlin: FME, 2001.

10)    Grady, Robert B., and Deborah L. Caswell. Software Metrics: Establishing a Company-wide Program. Prentice Hall, 1987.

11)    Jackson, Daniel. “Lightweight Formal Methods.” Formal Methods for Increasing Software Productivity. Proc. of International Symposium of Formal Methods Europe. Berlin: FME, 2001.
