SOFTWARE QUALITY MANAGEMENT
Software quality assurance professionals believe that a higher quality level of the software development process yields higher quality performance, and they seek quantitative evidence based on empirical findings. The few available journal and conference papers that present quantitative findings use a methodology based on a comparison of “before-after” observations in the same organization. A limitation of this before-after methodology is the long observation period, during which intervening factors, such as changes in products and in the organization, may substantially affect the results. The authors’ study employed a methodology based on a comparison of observations made simultaneously in two organizations (Alpha and Beta). Six quality performance metrics were employed: 1) error density, 2) productivity, 3) percentage of rework, 4) time required for an error correction, 5) percentage of recurrent repairs, and 6) error detection effectiveness.
Key words: CMM level effects, CMM level appraisal, software development performance metrics
Benefits of a Higher Quality Level of the Software Process: Two Organizations Compared
Daniel Galin, Ruppin Academic Center
Motti Avrahami, VeriFone
INTRODUCTION
Software quality assurance (SQA) professionals believe that a higher quality level of the software development process yields higher quality performance, and they seek evidence that investments in SQA systems indeed improve the quality performance of the software development process. Journal and conference papers provide such evidence by presenting studies showing that SQA investments result in improved software development processes. Most of these studies are based on a comparison of “before-after” observations in the same organization. Only some of these papers quantify the performance improvement achieved by SQA system investments, presenting percentages of productivity improvement, percentages of reduction in defect density, and so on. Of special interest are papers that quantify performance improvement and also measure the advancement of the software process quality level. In all of these papers, the Capability Maturity Model (CMM®) or CMM Integration (CMMI®) level is the tool used for measuring the software process quality level. According to this approach, improvement of the quality level of the software process is measured by the organization attaining a higher CMM (or CMMI) level. For example, Jung and Goldenson (2003) found that software maintenance projects from higher CMM-level organizations typically report smaller schedule deviations than those from organizations assessed at lower CMM levels. For U.S. maintenance projects the results are:
• Mean deviation of 0.464 months for CMM level 1 organizations
• Mean deviation of 0.086 months for CMM level 2 organizations
• Mean deviation of 0.069 months for CMM level 3 organizations
A variety of metrics are applied to measure the resulting performance improvement of the software development process, relating mainly to quality, productivity, and schedule keeping. Results of this nature are presented by McGarry et al. (1999), Diaz and King (2002), Pitterman (2000), Blair (2001), Keeni (2000), Franke (1999), Goldenson and Gibson (2003), and Isaac, Rajendran, and Anantharaman (2004a; 2004b). Galin and Avrahami (2005; 2006) performed a meta-analysis based on results presented in 19 published quantitative papers. Their results, which are statistically significant, show average performance improvements, according to six metrics, ranging from 38 percent to 63 percent for an advancement of one CMM level. Another finding of their study is an average return on investment of 360 percent for investments in one CMM level advancement. They found similar results for CMMI level advancement, but the publications that present findings for CMMI studies do not provide statistically significant results.
Critics may claim that the picture portrayed by the published papers is biased by the tendency not to publish negative results. Even if one assumes some bias, the multitude of published results shows that SQA improvement investments make a significant contribution to performance, even if the real effect is somewhat smaller.
The papers mentioned in Galin and Avrahami’s study, which quantify performance improvement and rank software process quality level improvement, follow the before-after methodology. An important limitation of this methodology is the long period of observation, during which intervening factors, such as changes in products, the organization, and interfacing requirements, may substantially affect the results. In addition, the gradual changes typical of implementing software process improvements alter performance during the observation period, which may affect the study results and lead to inaccurate conclusions.
An alternative study methodology that minimizes these undesired effects is one based on comparing the performance of several organizations observed during the same period (the “comparison of organizations” methodology). The observation period when applying this methodology is much shorter, and the observed organization is not expected to undergo a change process during the observation period. As a result, the software process is relatively uniform during the observation period, and the effects of uncontrolled changes in the software development environment are diminished. It is important to find out whether results obtained by research applying the comparison of organizations methodology support the findings of research that applied the before-after methodology.
Papers that report findings of studies using the comparison of organizations methodology are rare. One example is Herbsleb et al. (1994), which presents comparative case study results for two projects with similar characteristics performed during the same period at Texas Instruments. One of the projects applied an “old software development methodology,” while the other used a “new (improved) software development methodology.” The authors report a 65 percent reduction in cost per software line of code. Another result was a substantial decrease in defect density, from 6.9 to 2.0 defects per 1,000 lines of code. In addition, the average cost to fix a defect was reduced by 71 percent. The improved software development process was the product of intensive software process improvement (SPI) activities and was characterized by an entirely different distribution of resources invested during the software development process. However, Herbsleb et al. (1994) provide no comparative details about the quality level of the software process, that is, no appraisal of the CMM level for the two projects.
The authors’ study applies the comparison of organizations methodology, based on empirical data collected during the same period from two software developing organizations (“developers”) with similar characteristics. The empirical data that became available to the authors enabled them to produce comparative results for each of the two developers, including: 1) quantitative performance results according to several software process performance metrics; and 2) a CMM appraisal of the quality level of each developer’s software processes. In addition, the available data enabled them to explain the performance differences based on differences in resource investment preferences during the software development phases.
THE CASE STUDY ORGANIZATIONS
The authors’ case study is based on records and observations of two software development organizations. The first organization, Alpha, is a startup firm that implements only basic software quality assurance practices. The second organization, Beta, is the software development department of an established electronics firm that employs a wide range of software quality assurance practices throughout the software development process. Both Alpha and Beta develop C++ real-time embedded software in the same development environment: Alpha’s software product serves the telecommunication security industry sector, while Beta’s serves the aviation security industry sector. Both organizations employ the waterfall methodology; however, during the study Alpha’s implementation was “crippled” because the resources invested in the analysis and design stage were negligible. While the Alpha team adopted no software development standard, Beta’s software development department was certified according to the ISO 9000-3 standard (ISO 1997) and to DO-178B, Software Considerations in Airborne Systems and Equipment Certification (RTCA 1997), the software development standard for the aviation industry. The Federal Aviation Administration (FAA) accepts use of this standard as a means of certifying software in avionics. Neither software development organization was CMM certified.
During the study period Beta developed one software product, while Alpha developed two versions of the same software product. Beta’s software process and SQA system were stable during the entire study period. Alpha’s SQA system, however, underwent some improvements during the study period that became effective for the development of the second version of its software product. The first and second parts of the study period, dedicated to the development of the two versions, lasted six and eight months, respectively. A preliminary stage of the analysis was performed to test the significance of the improvements made in Alpha during the second part of the study period.
The Research Hypotheses
The research hypotheses are:
• H1: Alpha’s software process performance metrics for its second product will be similar to those for its first product.
• H2: Beta, as the developer with the higher quality level of software process, will achieve higher software process performance than Alpha according to all performance metrics.
• H3: The differences in performance achievements found with the comparison of organizations methodology will support the results of studies performed according to the before-after methodology.
METHODOLOGY
The authors’ comparative case study research was planned as a preliminary stage followed by a two-stage comparison:
• Preliminary stage: Comparison of software process performance for Alpha’s first and second products (first part of the study period vs. the second part).
• Stage one: Comparison of the software process performance of Alpha and Beta.
• Stage two: Comparison of the stage one findings (obtained with the comparison of organizations methodology) with the results of earlier research performed according to the before-after methodology.
The Empirical Data
The study was based on original records of software correction processes that the two developers made available to the study team. The records cover a period of about one year for each developer. The following six software process performance metrics (“performance metrics”) were calculated:
1. Error density (errors per 1,000 lines of code)
2. Productivity (lines of new code per working day)
3. Percentage of rework
4. Time required for an error correction (days)
5. Percentage of recurrent repairs
6. Error detection effectiveness
The detailed records enabled the authors to calculate these performance metrics for each developer. The first five performance metrics were calculated on a monthly basis. For the sixth metric, only a global value calculated for the entire period could be processed for each developer.
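For illustration, the sketch below shows how such monthly performance metrics might be derived from correction and effort records. The record fields and formulas are assumptions made for this example; the paper does not publish the developers’ raw record layout.

```python
# Illustrative sketch only: the record fields and formulas are assumed,
# not taken from the developers' actual correction records.
from dataclasses import dataclass

@dataclass
class MonthlyRecord:
    new_loc: int            # new lines of code written during the month
    working_days: int       # team working days during the month
    dev_errors: int         # errors identified during development in the month
    total_hours: float      # total development effort in the month
    rework_hours: float     # effort spent on error corrections in the month
    repairs: int            # error corrections performed in the month
    recurrent_repairs: int  # corrections that had to be repeated

def error_density(r: MonthlyRecord) -> float:
    """Metric 1: errors per 1,000 new lines of code."""
    return 1000.0 * r.dev_errors / r.new_loc

def productivity(r: MonthlyRecord) -> float:
    """Metric 2: new lines of code per working day."""
    return r.new_loc / r.working_days

def rework_percentage(r: MonthlyRecord) -> float:
    """Metric 3: share of effort spent correcting errors."""
    return 100.0 * r.rework_hours / r.total_hours

def recurrent_repair_percentage(r: MonthlyRecord) -> float:
    """Metric 5: share of corrections that had to be repeated."""
    return 100.0 * r.recurrent_repairs / r.repairs

def detection_effectiveness(dev_errors: int, customer_errors: int) -> float:
    """Metric 6 (global): share of all errors caught before delivery.
    With the counts of Table 1: Alpha 1,032 / (1,032 + 111) = 90.3%,
    Beta 331 / (331 + 1) = 99.7%."""
    return 100.0 * dev_errors / (dev_errors + customer_errors)
```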
Table 1 presents a comparison of the organization characteristics and a summary of the development activities of Alpha and Beta.
The CMM Appraisal
Since the studied organizations were not CMM certified, the authors used an official SEI publication, the “Maturity Questionnaire” for CMM-based appraisal for internal process improvement (CBA IPI) (Zubrow, Hayes, and Goldenson 1994), to prepare an appraisal of Alpha’s and Beta’s software process quality levels. The appraisal yielded CMM level 1 for Alpha and CMM level 3 for Beta. A summary of the appraisal results for Alpha and Beta is presented in Table 2.
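The paper does not describe how the maturity questionnaire responses were converted into the 0-10 key process area grades summarized in Table 2. One plausible scheme, consistent with fractional grades such as 4.28 (three of seven questions) and 8.33 (five of six), is sketched below; the scoring rule is an assumption for illustration, not the authors’ documented procedure.

```python
# Assumed scoring scheme, for illustration only; the authors do not publish
# their exact conversion from maturity questionnaire answers to KPA grades.
from typing import Dict, List

def kpa_grade(answers: List[str]) -> float:
    """Grade one key process area on a 0-10 scale as ten times the share of
    applicable questions answered 'Yes'."""
    applicable = [a for a in answers if a in ("Yes", "No")]  # drop "Does not apply" / "Don't know"
    if not applicable:
        return 0.0
    return 10.0 * sum(a == "Yes" for a in applicable) / len(applicable)

def level_average(grades: Dict[str, float], kpas_in_level: List[str]) -> float:
    """Average the KPA grades belonging to one maturity level."""
    return sum(grades[k] for k in kpas_in_level) / len(kpas_in_level)

# Three 'Yes' answers out of seven applicable questions:
print(round(kpa_grade(["Yes", "Yes", "Yes", "No", "No", "No", "No"]), 2))  # 4.29 (cf. 4.28 in Table 2)
```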
The Statistical Analysis
For five of the six performance metrics, the calculated monthly values for the two organizations were compared and statistically tested by applying a t-test procedure. For the sixth performance metric, error detection effectiveness, only one global value (calculated for the entire study period) was available for each developer: 90.3 percent for Alpha and 99.7 percent for Beta.
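A minimal sketch of this kind of test, assuming the monthly values for one metric are available for each organization (the numbers below are invented for the example; only the means and standard deviations are published in Tables 3 and 4):

```python
# Illustrative two-sample t-test on monthly error-density values (Alpha vs. Beta).
# The monthly values are made up for this example.
import numpy as np
from scipy import stats

alpha_error_density = np.array([17.2, 15.9, 18.4, 16.1, 17.0, 14.8, 16.5,
                                18.9, 15.2, 17.7, 16.3, 15.5, 17.1, 16.6])  # 14 months
beta_error_density = np.array([5.4, 4.1, 6.2, 4.8, 5.9, 4.4,
                               5.1, 4.7, 5.6, 4.3, 5.8, 4.9])              # 12 months

t_stat, p_value = stats.ttest_ind(alpha_error_density, beta_error_density)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Significant at the 0.05 level" if p_value < 0.05 else "Not significant at the 0.05 level")
```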
THE FINDINGS
The Preliminary Stage
A comparison of Alpha’s performance metrics for the two parts of the study period is shown in Table 3.
Alpha’s performance results for the second part of the study period show some improvement (compared with the results of the first part) for all five performance metrics that were calculated on a monthly basis. However, the improvements were found to be statistically insignificant for four of the five performance metrics. Only for one performance metric, the percentage of recurrent repairs, did the results show a significant improvement. Accordingly, H1 was supported for four of the five performance metrics and was rejected only for the percentage of recurrent repairs.
Stage 1: The Organization Comparison—Alpha vs. Beta
As Beta’s software process quality level was appraised to be much higher than Alpha’s, according to H2 the quality performance achievements of Beta were expected to be significantly higher than Alpha’s. The comparison of Alpha’s and Beta’s quality performance results is presented in Table 4. The results of the statistical analysis show that for three of the six performance metrics the performance of Beta is significantly better than that of Alpha. It should be noted that for the percentage of recurrent repairs, where Alpha demonstrated a significant performance improvement during the second part of the study period, Beta’s performance was significantly better than Alpha’s for each of the two parts of the study period.
Table 1 Comparison of the organization characteristics and summary of development activities of Alpha and Beta

a) The organization characteristics
Subject of comparison                                        Alpha                             Beta
Type of software product                                     Real-time embedded C++ software   Real-time embedded C++ software
Industry sector                                              Telecommunication security        Aviation electronics
Certification according to software development
quality standards                                            None                              1. ISO 9001  2. DO-178B
CMM certification                                            None                              None
CMM level appraisal                                          CMM level 1                       CMM level 3

b) Summary of development activities
Period of data collection                                    Jan. 2002 – Feb. 2003             Aug. 2001 – July 2002
Team size                                                    14                                12
Man-days invested                                            2,824                             2,315
New lines of code                                            56K                               62K
Number of errors identified during development process       1,032                             331
Number of errors identified after delivery to customers      111                               1
Table 2 Summary of the maturity questionnaire detailed appraisal results for Alpha and Beta

No.  Key process area                              Alpha grades   Beta grades
1.   Requirements management                       1.67           10
2.   Software project planning                     4.28           10
3.   Software project tracking and oversight       5.74           8.57
4.   Software subcontract management               6.25           10
5.   Software quality assurance (SQA)              3.75           10
6.   Software configuration management (SCM)       5              8.75
     Level 2 average                               4.45           9.55
7.   Organization process focus                    1.42           10
8.   Organization process definition               0              8.33
9.   Training program                              4.28           7.14
10.  Integrated software management                0              10
11.  Software product engineering                  1.67           10
12.  Intergroup coordination                       4.28           8.57
13.  Peer reviews                                  0              8.33
     Level 3 average                               1.94           9.01
14.  Quantitative process management               0              0
15.  Software quality management                   4.28           8.57
     Level 4 average                               2.14           4.29
16.  Defect prevention                             0              0
17.  Technology change management                  2.85           5.71
18.  Process change management                     1.42           4.28
     Level 5 average                               1.42           3.33
For productivity, Beta’s results were 35 percent better than Alpha’s, but the difference was not statistically significant. Somewhat surprising results were found for the time required for an error correction, where Alpha’s performance was 14 percent better than Beta’s, though the difference was statistically insignificant. The explanation for this finding probably lies in the much lower quality of Alpha’s software product. Alpha’s lower quality is demonstrated by its much higher percentage of recurrent repairs, found to be significantly higher than Beta’s: roughly fivefold and threefold higher for the first and second parts of the study period, respectively. Alpha’s lower quality is especially evident in the sixth performance metric, error detection effectiveness. Although only global results for the entire study period are available, a clear inferiority of Alpha is revealed: Alpha’s error detection effectiveness is only 90.3 percent, compared with Beta’s 99.7 percent. In other words, 9.7 percent of Alpha’s errors were discovered by its customers, compared with only 0.3 percent of Beta’s errors.
To sum up stage 1, H2 was supported by statistically significant results for three performance metrics: 1) error density, 2) percentage of rework, and 3) percentage of recurrent repairs. For an additional metric, error detection effectiveness, no statistical testing was possible, but the global results clearly indicate Beta’s performance superiority and thus support H2. For two metrics H2 was not supported statistically: for the productivity metric the results show substantially better performance for Beta, and for the time required for an error correction Alpha’s results are slightly better than Beta’s, with no statistical significance.
Table 3 Alpha’s performance comparison for the two parts of the study period

                                                      First part          Second part
                                                      (6 months)          (8 months)
SQA metrics                                           Mean     s.d.       Mean     s.d.      t-value      Significance of differences (α = 0.05)
1. Error density (errors per 1,000 lines of code)     17.9     3.8        15.8     4.2       t = 0.964    Not significant
2. Productivity (lines of code per working day)       16.8     11.8       21.9     18.4      t = -0.585   Not significant
3. Percentage of rework                               35.4     12.7       28.5     19.8      t = 0.746    Not significant
4. Time required for an error correction (days)       35.9     33.3       16.9     8.4       t = 1.570    Not significant
5. Percentage of recurrent repairs                    26.7     11.8       13.8     8.7       t = 2.200    Significant
6. Error detection effectiveness (global performance
   metric for the entire study period)                90.3%    —          99.7%    —         —            Statistical testing is not possible
To sum up, as the results for four of the performance metrics support H2 and no result rejects H2, one may conclude that the results support H2. The authors note that these are typical case study results, in which a clearly supported hypothesis is accompanied by some inconclusive results.
Stage 2: Comparison of Methodologies—The Comparison of Organizations Methodology vs. the Before-After Methodology
In this stage the authors compared the results of the current case study, performed according to the comparison of organizations methodology, with results obtained by the commonly used before-after methodology. For this purpose, the work of Galin and Avrahami (2005; 2006), which is based on a combined analysis of 19 past studies, serves as a suitable “representative” of results obtained by applying the before-after methodology. The comparison is applicable to four software process performance metrics common to the current case study and to the combined analysis of past studies carried out by Galin and Avrahami:
• Error density (errors per 1,000 lines of code)
• Productivity (lines of code per working day)
• Percentage of rework
• Error detection effectiveness
As Alpha’s and Beta’s SQA systems were appraised as similar to CMM levels 1 and 3, respectively, their quality performance gap is compared with Galin and Avrahami’s mean quality performance improvement for a CMM level 1 organization advancing to CMM level 3. The comparison for the four performance metrics is shown in Table 5. The results of the comparison support hypothesis H3 for all four performance metrics. For two of the performance metrics (error density and percentage of rework) this support is based on statistically significant results for the current case. For the productivity metric the support is based on a substantial productivity improvement that is not statistically significant. The comparison results for the four metrics reveal similarity in direction; differences in the size of the achievements are expected when comparing multiproject mean results with case study results. To sum up, the results of the current case study, performed according to the comparison of organizations methodology, conform to the published results obtained with the before-after methodology.
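As a quick arithmetic check (not part of the original paper), the percentages reported in Table 5 for the comparison of organizations methodology can be reproduced from the Alpha and Beta means of Table 4; the helper functions below are illustrative only.

```python
# Illustrative check: derive the "comparison of organizations" percentages of
# Table 5 from the Alpha/Beta means in Table 4 (simple relative change).
def reduction(alpha: float, beta: float) -> float:
    """Percentage reduction from Alpha's mean to Beta's mean."""
    return 100.0 * (alpha - beta) / alpha

def increase(alpha: float, beta: float) -> float:
    """Percentage increase from Alpha's mean to Beta's mean."""
    return 100.0 * (beta - alpha) / alpha

print(f"Error density reduction:            {reduction(16.8, 5.0):.0f}%")   # ~70% reduction
print(f"Productivity increase:              {increase(19.7, 26.7):.0f}%")   # ~36% increase
print(f"Percentage-of-rework reduction:     {reduction(31.4, 17.9):.0f}%")  # ~43% reduction
print(f"Customer-detected errors reduction: {reduction(9.7, 0.3):.0f}%")    # ~97% reduction
```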
DISCUSSION
The main subject of this discussion is the reason for the substantial differences in software process performance achievements between Alpha and Beta. The two developers claimed to use the same methodology. The authors assume that the substantial differences in software process performance result from differences in the developers’ actual implementation. To investigate the causes of the quality performance gap, the authors first examine the available data on the differences between Alpha’s and Beta’s distributions of error identification phases along the development process.
Table 4 Quality performance comparison—Alpha vs. Beta

                                                      Alpha               Beta
                                                      (14 months)         (12 months)
SQA metrics                                           Mean     s.d.       Mean     s.d.      t-value     Significance of differences (α = 0.05)
1. Error density (errors per 1,000 lines of code)     16.8     4.0        5.0      3.0       8.225       Significant
2. Productivity (lines of code per working day)       19.7     15.5       26.7     16.6      -1.111      Not significant
3. Percentage of rework                               31.4     16.9       17.9     8.0       2.532       Significant
4. Time required for an error correction (days)       25.0     23.7       29.0     15.6      -0.497      Not significant
5. Percentage of recurrent repairs – part 1           26.7     11.8       4.8      8.1       4.647       Significant
   Percentage of recurrent repairs – part 2           13.8     8.7        4.8      8.1       2.239       Significant
6. Error detection effectiveness – % discovered by
   the customer (global performance metric for the
   entire study period)                               9.7      —          0.3      —         —           No statistical analysis was possible
Table 6 presents, for Alpha and Beta, the percentages of errors identified in the various development phases. It reveals entirely different distributions of the error identification phases for Alpha and Beta. While Alpha identified only 11.5 percent of its errors in the requirement definition, analysis, and design phases, Beta managed to identify almost half of its total errors during the same phases. Another delay in error identification is noticeable in the unit testing phase: Alpha identified fewer than 4 percent of its total errors in unit testing, while Beta identified more than 20 percent of its total errors in the same phase (almost 40 percent of the errors identified by testing). The delay in error identification by Alpha is again apparent when comparing the percentage of errors identified during the integration and system tests: 75 percent for Alpha compared with 35 percent for Beta. However, the most remarkable difference between Alpha and Beta is in the rate of errors detected by customers: 9.7 percent for Alpha compared with only 0.3 percent for Beta. This enormous difference in error detection efficiency, together with the remarkable difference in error density, is the main contributor to the higher quality level of Beta’s software process.
Further investigation of the causes of Beta’s higher quality performance leads to the data on resource distribution along the development process. Table 7 presents the distribution of the development resources along the development process, indicating noteworthy differences between the developers. Examination of the data presented in Table 7 reveals substantial differences in resource distribution between Alpha and Beta. While Beta’s team invested more than a third of its resources in the requirement definition, analysis, and design phases, Alpha’s investments during the same phases were negligible. Furthermore, while Alpha invested about half of its development resources in software testing and the consequent software corrections, Beta’s investments in these phases were less than a quarter of the total project resources. It may be concluded that the shift of invested resources “downstream” by Alpha resulted in a parallel downstream shift of the distribution of error identification phases (see Table 6). Beta’s very low resource investment in the correction of failures identified by customers, compared with Alpha’s investment in this phase, corresponds well to the differences in error identification distribution between the developers. It may be concluded that the enormous difference in error detection efficiency, as well as the remarkable difference in error density, is the product of the downstream shift of the distribution of the software process resources.
Table 5 Quality performance improvement results—methodology comparison

SQA metrics                                         Before-after methodology*                Comparison of organizations methodology
                                                    (mean improvement, CMM level 1           (Beta’s performance compared with Alpha’s)
                                                    advancing to CMM level 3)
1. Error density (errors per 1,000 lines of code)   76% reduction                            70% reduction (Significant)
2. Productivity (lines of code per working day)     72% increase                             36% increase (Not significant)
3. Percentage of rework                             65% reduction                            43% reduction (Significant)
4. Error detection effectiveness (% of errors
   discovered by the customer)                      84% reduction                            97% reduction (Not tested statistically)

* According to Galin and Avrahami (2005; 2006)
Table 6 Error identification phase—Alpha vs. Beta

                                      Alpha                                   Beta
Development phases                    Identified    Identified errors—        Identified    Identified errors—
                                      errors %      cumulative %              errors %      cumulative %
Requirement definition                5.8           5.8                       33.8          33.8
Design                                5.7           11.5                      9.0           42.8
Unit testing                          3.8           15.3                      22.3          65.1
Integration and system testing        75.0          90.3                      34.6          99.7
Post delivery                         9.7           100.0                     0.3           100.0
In other words, these differences demonstrate the results of a “crippled” implementation of the development methodology, one that actually begins the software process at the programming phase. This crippled development methodology yields a software process of substantially lower productivity, followed by a remarkable increase in error density and a colossal reduction of error detection efficiency.
At this stage it is interesting to compare the authors’ findings regarding the differences in resource distribution between Alpha and Beta with those of Herbsleb et al.’s study. A comparison of findings regarding resource distribution along the development process for the current study and the Texas Instruments projects is presented in Table 8. The findings of Herbsleb et al. related to the Texas Instruments projects indicate that the new (improved) development methodology focuses on upstream development phases,
while the old methodology led the team to invest mainly in coding and testing. In other words, while 40 percent of the development resources in the improved-methodology project were invested in the requirement definition and design phases, only 8 percent of the resources of the old-methodology project were invested in these phases. Herbsleb et al. also found a major difference in resource investment in unit testing: 18 percent of the total testing resources in the old-methodology project compared with 90 percent in the improved-methodology project. Herbsleb et al. believe that the change of development methodology, as evidenced by the change in resource distribution along the software development process, yielded the significant reduction in error density (from 6.9 to 2.0 defects per 1,000 lines of code) and a remarkable reduction in the resources invested in customer support after delivery (from 23 percent of total project resources to 7 percent). These findings closely resemble the current case study findings.
Table 7 Project resources according to development phase—Alpha vs. Beta

                                                    Alpha                                   Beta
Development phase                                   Resources      Resources invested—      Resources      Resources invested—
                                                    invested %     cumulative %             invested %     cumulative %
Requirement definition and design                   Negligible     0                        34.5           34.5
Coding                                              46.5           46.5                     41.5           76.0
Software testing                                    26.0           72.5                     14.0           90.0
Error corrections according to testing results      22.5           95.0                     9.5            99.5
Correction of failures identified by customers      5.0            100.0                    0.5            100.0
Table 8 Texas Instruments project resources distribution according to development phase—“Old development methodology” project vs. “New (improved) development methodology” project. Source: Herbsleb et al. (1994)

                                       Old development methodology project     New (improved) development methodology project
Development phase                      Resources       Resources invested—     Resources       Resources invested—
                                       invested %      cumulative %            invested %      cumulative %
Requirement definition                 4%              4%                      13%             13%
Design                                 4%              8%                      27%             40%
Coding                                 47%             55%                     24%             64%
Unit testing                           4%              59%                     26%             90%
Integration and system testing         18%             77%                     3%              93%
Support after delivery                 23%             100%                    7%              100%
CONCLUSIONS
Quantitative knowledge of the expected software process performance improvement is of great importance to the software industry. The available quantitative results are based solely on studies performed according to the before-after methodology. The current case study supports these results by applying an alternative methodology, the comparison of organizations methodology. As examining results obtained with an alternative study methodology is important, the authors recommend performing a series of case studies applying the comparison of organizations methodology. The results of these proposed case studies may support the earlier results and add substantially to their significance.
The current case study is based on existing correction records and other data that became available to the research team. Future case studies applying the comparison of organizations methodology, if planned at earlier stages of the development project, could take part in planning the project management data collection and enable collection of data for a wider variety of software process performance metrics.
REFERENCES
Blair, R. B. 2001. Software process improvement: What is the cost? What is the return on investment? In Proceedings of the Pittsburgh PMI Conference, April 12.
Diaz, M., and J. King. 2002. How CMM impacts quality, productivity, rework, and the bottom line. Crosstalk 15, no. 1: 9-14.
Franke, R. 1999. Achieving Level 3 in 30 months: The Honeywell BSCE case. Presentation at the 4th European Software Engineering Process Group Conference, London.
Galin, D., and M. Avrahami. 2005. Do SQA programs work – CMM works: A meta-analysis. In Proceedings of the IEEE International Conference on Software – Science, Technology & Engineering, Herzlia, Israel, 22-23 February. Los Alamitos, Calif.: IEEE Computer Society Press: 95-100.
Galin, D., and M. Avrahami. 2006. Are CMM programs beneficial? Analyzing past studies. IEEE Software 23, no. 6: 81-87.
Goldenson, D. R., and D. L. Gibson. 2003. Demonstrating the impact and benefits of CMMI: An update and preliminary results (CMU/SEI-2003-009). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Herbsleb, J., A. Carleton, J. Rozum, J. Siegel, and D. Zubrow. 1994. Benefits of CMM-based software process improvement: Initial results (CMU/SEI-94-TR-013). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.013.html.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004a. Does quality certification improve software industry’s operational performance? Software Quality Professional 5, no. 1: 30-37.
Isaac, G., C. Rajendran, and R. N. Anantharaman. 2004b. Does quality certification improve software industry’s operational performance? Supplemental material. Available at: http://www.asq.org.
ISO. 1997. ISO 9000-3 Guidelines for the application of ISO 9001:1994 to the development, supply, installation and maintenance of computer software. Geneva, Switzerland: International Organization for Standardization.
Jung, H. W., and D. R. Goldenson. 2003. CMM-based process improvement and schedule deviation in software maintenance (CMU/SEI-2003-TN-015). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Keeni, G. 2000. The evolution of quality processes at Tata Consultancy Services. IEEE Software 17, no. 4: 79-88. Available at: http://www.stsc.hill.af.mil/crosstalk/1999/05/oldham.pdf.
McGarry, F., R. Pajerski, G. Page, S. Waligora, V. Basili, and M. Zelkowitz. 1999. Software process improvement in the NASA Software Engineering Laboratory (CMU/SEI-94-TR-22). Pittsburgh: Software Engineering Institute, Carnegie Mellon University. Available at: http://www.sei.cmu.edu/publications/documents/94.reports/94.tr.022.html.
Pitterman, B. 2000. Telcordia Technologies: The journey to high maturity. IEEE Software 17, no. 4: 89-96.
RTCA. 1997. DO-178B Software considerations in airborne systems and equipment certification. Washington, D.C.: Radio Technical Commission for Aeronautics.
Zubrow, D., W. Hayes, J. Siegel, and D. Goldenson. 1994. Maturity questionnaire (CMU/SEI-94-SR-7). Pittsburgh: Software Engineering Institute, Carnegie Mellon University.
Carnegie Mellon, Capability Maturity Model, CMMI, and CMM are registered trademarks of Carnegie Mellon University. CMM Integration and SEI are service marks of Carnegie Mellon University.
BIOGRAPHIES Daniel Galin is the head of information systems studies at the Ruppin Academic Center, Israel, and an adjunct senior teaching fellow with the Faculty of Computer Science, the Technion, Haifa, Israel. He has a bachelor’s degree in industrial and management engineering, and master’s and doctorate degrees in operations research from the Israel Institute of Technology, Haifa, Israel. His professional experience includes numerous consulting projects in the areas of software quality assurance, analysis, and design of information systems and industrial engineering. He has published many papers in professional journals and conference proceedings. He is also the author of several books on software quality assurance and on analysis and design of information systems. He can be reached by e-mail at
[email protected]. Motti Avrahami is VeriFone Global supply chain quality manager. He has more than nine years of experience in software quality process and software testing. He received his master’s degree in quality assurance and reliability from the Technion, Israel Institute of Technology. He can be contacted by e-mail at
[email protected].