

Software Quality Professional

Volume 4 · Issue 4 · September 2002

Contents

Resource Reviews

GENERAL KNOWLEDGE, CONDUCT, AND ETHICS

CMMI Distilled—A Practical Introduction to Integrated Process Improvement
By Dennis M. Ahern, Aaron Clouse, and Richard Turner

Application Service Providers: A Manager’s Guide
By John Harney

SOFTWARE QUALITY MANAGEMENT

Computing Calamities: Lessons Learned from Products, Projects, and Companies That Failed
By Robert L. Glass

Dare To Be Excellent: Case Studies of Software Engineering Practices That Worked
By Alka Jarvis and Linda Hayes

Winning with Software
By Watts S. Humphrey

SOFTWARE ENGINEERING PROCESSES

Evaluating Software Architectures
By Paul Clements, Rick Kazman, and Mark Klein

Developing Applications with Java and UML
By Paul R. Reed Jr.

PROGRAM AND PROJECT MANAGEMENT

New Directions in Project Management
By Paul C. Tinnirello

New Directions in Internet Management
By Sanjiv Purba

Quality Software Project Management
By Robert T. Futrell, Donald F. Shafer, and Linda I. Shafer

Software Project Management in Practice
By Pankaj Jalote

SOFTWARE VERIFICATION AND VALIDATION

A Practical Guide to Testing Object-Oriented Software
By John D. McGregor and David A. Sykes

Peer Reviews in Software: A Practical Guide
By Karl E. Wiegers

Security Fundamentals for E-Commerce
By Vesna Hassler

High Quality Low Cost Software Inspections
By Ronald A. Radice

From the Resource Reviews Editor:
This issue marks the end of the fourth year of Software Quality Professional. Many thanks to Taz Daughtrey, editor, for making it all come together. I'm looking forward to many future issues.

This issue contains 15 reviews, covering four of the seven areas of the body of knowledge, of books provided by five publishing companies. There are two new reviewers—one recommended by his father. If you know of anyone who likes to read books about software quality, please have him or her contact me about doing reviews. Please note that the reviewer biographies are now online only; we have so many books to review that we need to save printed space. Thanks for the great support that makes this possible.

If you have any comments about Resource Reviews, please send them to me. I’d like to know if the reviews are useful to you. You can contact me at Sue_carroll@bellsouth.net.


GENERAL KNOWLEDGE, CONDUCT, AND ETHICS

CMMI Distilled—A Practical Introduction to Integrated Process Improvement

Dennis M. Ahern, Aaron Clouse, and Richard Turner. 2001. Boston: Addison-Wesley. 306 pages. ISBN 0-201-73500-8.

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics)

Reviewed by Pieter Botman

Most software professionals are by now aware of the Software Engineering Institute’s (SEI) Capability Maturity Model for software organizations (SW-CMM). Despite misinterpretation and outright hostility on the part of some in the industry, the SW-CMM has served as an important benchmark and tool for more than a decade.

The SEI wanted to integrate various process models (for example, systems engineering, software acquisition, security engineering, and even people management), some of which originated outside the SEI. This meant developing a framework, or general process improvement model, that spans disciplines. This makes sense, since real-world engineering organizations often face process improvement challenges across disciplines.

For those already acquainted with the software CMM, the emergence and broadening of CMMI to include systems engineering and integrated product and process development (IPPD) is not hard to accept. Many practices of systems engineering overlap with (or at least interact with) those of software engineering. The CMMI defines IPPD as:

“A systematic approach to product development which increases customer satisfaction through a timely collaboration of necessary disciplines throughout the product life cycle.”

IPPD aspects of the model are reflected in two new process areas, dealing with integrated teams and an organizational environment for integration, as well as some additions to the existing integrated project management process area. This is consistent with the goals of the CMMI, and facilitates cross-functional process improvement within organizations.

The authors review the concepts and successes of the previously separate CMMs for software, systems engineering, and integrated process development. The primary systems engineering model used as input for the CMMI was EIA/IS 731 (EIA Interim Standard, Systems Engineering Capability). The IPPD and software CMMs used as inputs for the CMMI were published by the SEI.

The basic architecture of the CMMI model is presented. CMMI, like the SW-CMM, consists of process areas, each with its own goals and practices. In total there are 24 process areas, 54 goals (some specific to a process area, some generic across all process areas), and 186 practices. These numbers might seem to indicate a large model, but surprisingly the CMMI contains fewer goals and practices than some of the antecedent models, and certainly fewer than the three models added together. The CMMI team evidently has integrated these models well.

Like the original SW-CMM, the CMMI has a “staged” representation, where certain process areas are grouped into maturity levels (stages). As users of the SW-CMM know, this approach lets organizations evaluate, focus on, and improve the essential processes before addressing optimizing processes.

Unlike the SW-CMM, however, the CMMI model also has a “continuous” representation. This type of model is familiar to users of ISO 15504 (SPiCE) and EIA/IS 731. In this representation, process areas remain separate, allowing organizations to focus on measurement and improvement within the process areas of their choosing. Because the concept of maturity levels does not apply in the continuous representation, each process area is measured separately, and achievement of its associated goals is reflected in a capability level. There are similarities between maturity levels and capability levels. As an organization improves its performance in a specific process area, it evolves from a “performed” process to a “managed” process and then to an “optimizing” process. The set of capability levels for all process areas for a given organization is referred to as its capability profile. The authors discuss the advantages of working with either the staged or the continuous representation, and briefly discuss the “mapping” of capability profiles to maturity levels.

Much space is devoted to summarizing the process areas, their goals, and their practices (both specific and generic). The practices are not covered in detail. Included in the description of each process area is a context diagram, which summarizes the goals, key practices, major artifacts/deliverables, and connections to other process areas.

In the next three chapters, the authors try to provide readers with additional guidance on selecting disciplines within CMMI scope, selecting an appropriate representation, selecting process areas, and conducting assessments. While these are important topics for CMMI users, readers should not expect detailed guidance.

The fact that the government and the defense industry have spurred much of CMMI development is not surprising, considering the nature of the SEI’s work and the active involvement of the National Defense Industrial Association. Large defense contractors have large, integrated organizations, engaged in developing complex systems. They have been helping the SEI develop the model, and have participated in pilot evaluations. As with the SW-CMM, some critics will charge that this model represents overkill at the practice level, that it is too heavy-handed and prescriptive. But like all models, the CMMI is a tool, to be tailored or adapted by the ultimate users as they see fit. The CMMI will continue to evolve, adding more disciplines to its scope. The antecedent SW-CMM and EIA/IS 731 are scheduled for gradual retirement after 2003, so the CMMI will gain increasing importance. Perhaps it will help reduce the number of process models and simplify the “framework quagmire” (see URL http://www.software.org/quagmire).

The book meets its stated goals. It introduces the CMMI, and the benefits of an integrated approach to process improvement. Some guidance on the use of the CMMI is provided. Readers are referred to the SEI Web site at http://www.sei.cmu.edu/cmmi for detailed CMMI documents, and for information concerning the status of the current CMMI release(s). This work contains a mix of introductory and more involved material. While it covers a lot of ground, it is best suited for software or systems engineers who have already been exposed to the software CMM or systems engineering CMM. Process improvement and assessment specialists will likely turn directly to the detailed CMMI specifications.

CMM, CMMI, and Capability Maturity Model are trademarks/service marks of Carnegie Mellon, registered in the U. S. Patent and Trademark Office.

Pieter Botman (p.botman@ieee.org) is a professional engineer (software) registered in the Province of British Columbia. With more than 27 years of software engineering experience, he is currently an independent consultant, assisting companies in the areas of software process assessment/improvement, project management, quality management, and product management.


Application Service Providers: A Manager’s Guide

John Harney. 2002. Boston: Addison-Wesley. 309 pages. ISBN 0-201-72659-9.

(CSQE Body of Knowledge area: General Knowledge, Conduct, and Ethics)

Reviewed by John D. Richards

Why a book about application service providers (ASPs)? The Gartner Group estimates “that worldwide revenues of ASPs will be $25.3 billion in 2004. Meanwhile, International Data Corporation predicts worldwide revenues for ASP market (ASPs, management services providers, managed security providers, and so forth) will quadruple from $106 billion in 2000 to more than $460 billion in 2005.”

This is a comprehensive reference for managers focusing on the higher-level requirements. The inside cover provides a list of the top 30 questions nontechnical personnel should ask when assessing an ASP’s suitability. The top five are:

  1. Do the ASP’s hardware, software, and network scale to your requirements?
  2. Do you need extensive application customization?
  3. Do you need accelerated deployment?
  4. Can the ASP provision, manage, and maintain the servers in its data center to your requirements and within your budget?
  5. Do you need hardware “capacity-on-demand” from your ASP?

This book has 13 chapters, six appendices, a bibliography, a very useful glossary, and an index. The glossary is a key item, as the world of information technology (IT) is replete with acronyms, some more common than others. Chapters of particular importance to managers include: security issues for ASPs, ASP service-level agreements, ASP pricing models, ASP customer service and technical support, and what’s ahead for ASPs. One of the appendices lists the location of the useful and complete case studies in the body of the volume. These are of value to nontechnical managers. Each chapter is well organized, beginning with an overview and concluding with a summary of the major points. These items make this book a good candidate for a college IT text.

Overall, this is a valuable book. While the glossary was handy, the plethora of acronyms did slow the reading for those not intimately familiar with them. I highly recommend this book for managers trying to gain a high-level working knowledge of ASPs or for IT students who need a good reference.

John D. Richards (john_richards@sra.com) is an account and project manager for SRA International in San Antonio, Texas. He has more than 30 years’ experience as a manager and leader. He is a certified quality engineer and auditor and a Senior member of ASQ. He has a doctorate and an advanced master’s degree in education from the University of Southern California, and master’s and bachelor’s degrees in psychology. He serves as an adjunct professor at the University of the Incarnate Word teaching courses in statistics, quantitative analysis, management, and psychology.


 

SOFTWARE QUALITY MANAGEMENT

Computing Calamities: Lessons Learned from Products, Projects, and Companies That Failed

Robert L. Glass. 1999. Upper Saddle River, N. J.: Prentice Hall. 294 pages. ISBN 0-201-32564-0.

(CSQE Body of Knowledge area: Software Quality Management)

Reviewed by Milt Boyd

Robert Glass has done it again. Like a Riker mount of butterflies, Glass’ latest book is a display of a variety of computing calamities, neatly organized according to types. Why read about computing calamities? When bridges collapse, airplanes fall out of the sky, or factories explode, experts gather around, try to discover why, and attempt to formulate the lessons learned so as to reduce the chances of repetition. Clearly Glass believes that there are lessons to be learned.

But beyond that, Glass believes that success is transient, while failure is forever. More than that, failure (at least the failure of another) is fun to read about.

Finally, “Failure is poignant. It is captivatingly sad. …[Software professionals are] Like the miners [of the Gold and Silver Rushes] caught up in the most powerful movement of their time. The failures these stories tell us are far more exciting than the equivalent stories of those who did not choose to get caught up in the computing rush.”

Glass is not interested in the mundane problems of projects that run behind schedule or over budget. His topic is the calamity that brings down entire companies, especially the company that founded an industry and is the dominant player. Or, the project that eats resources for a decade, without gaining an enthusiastic customer for the product.

This book is organized into six chapters, beginning with an introduction and an overview, followed by chapters on corporate failures, failures of projects and products, and failures of the brightest and best, and ending with a summary. In each chapter on failures, Glass describes the thread that unites the selected calamities and provides his views on what went wrong. He then presents the best accounts he has found.

Glass has selected the various accounts and has written the glue that binds them together, but the accounts themselves are uneven in point of view, treatment, and detail. Each account draws its lessons from one calamity, but there is little structure to tie all the lessons together.

At one level, this is a fun read. But careful reflection on what has gone wrong in the past can help the software quality professional. In some stories, management was immune to the facts, working in its own world. No skill in metrics, audits, testing, quality management, or any other body of knowledge area would allow a software quality professional to make a difference. But not always. It can be instructive to ask, “If I had been there, could I have averted this disaster?”

Buy this book, read it thoroughly, and try to get organizations to embrace failure and learn from it.

Milt Boyd (miltboyd@arczip.com) is a member of ASQ’s Software, Reliability, and Quality Management Divisions. He is an ASQ Certified Quality Manager and is certificated by IRCA as a Lead Auditor of Quality Management Systems.


Dare To Be Excellent: Case Studies of Software Engineering Practices That Worked

Alka Jarvis and Linda Hayes, eds. 1999. Upper Saddle River, N. J.: Prentice Hall. 333 pages. ISBN 0-13-081156-4.

(CSQE Body of Knowledge areas: Software Quality Management, Software Engineering Processes)

Reviewed by Ray Schneider

All too often the notion of software engineering processes is limited to a one-size-fits-all perspective in pursuit of the latest silver bullet or consultant-driven faddishness promising future success and relief from suffering if companies just embrace the system. Dare To Be Excellent is a breath of fresh air. It is a relatively concise book of accounts from the world of work and industry. Tom Gilb opens the Foreword with “Welcome to the real world!”

The title is reminiscent of the mega-hit In Search of Excellence by Tom Peters and Robert Waterman, which a generation ago called attention to the practices of the excellent companies. That goal of becoming excellent, a challenge then and a challenge still unmet, is addressed in Dare To Be Excellent. Progress through the stages of the Capability Maturity Model (CMM), ISO 9000, and W. Edwards Deming and the Shewhart cycle are among the themes invoked throughout the book as the search for excellence within the companies on display is described.

Each chapter is written by individuals from different companies, detailing software engineering practices and experiences put in place to enhance their software development performance. Appropriately, the emphasis is on management. Coding is a nonstarter in this book, which focuses instead on requirements, project planning, management, and support. The processes showcased address change management, inspections, software reliability, guidance on software release planning, and metrics. Guidance for creating a software development process handbook is provided in the form of excerpts from the Phoenix Technologies Software Development Handbook, and PKS Information Services offers a manual describing its technology project management process (TPMP).

The companies featured in this book are Texas Instruments, Intel, PKS Information Services, Royal Bank Financial Group, Primark Investment Management Services Limited, Digital Technology International, Cisco Systems, TANDEM Telecom Network Solutions, Phoenix Technologies Limited, and International Business Systems. It is surprising that a coherent book could arise from such disparate sources. Perhaps one can explain this by the thematic cultural commitments that, while not featured, are nevertheless on display throughout the book. Chief among these is a commitment to excellence: each chapter is devoted to a company’s struggle to achieve excellence in a particular aspect of the software development life cycle.

One chapter features Texas Instruments’ implementation of SAP in the context of its digital imaging (DI) venture project. SAP is an enterprise-level application suite that integrates all phases of operations from manufacturing to sales and distribution and finance. Implementing such a process usually takes years. Texas Instruments accomplished a “bare bones” implementation in just 12 weeks. Such an ambitious undertaking focuses a glaring light on exactly what one thinks requirements are. Strict adherence to principles was essential. Three of the principles that caught my eye were no blame, embracing mistakes, and think from scratch. The tough time constraints meant that a time-box approach and a rolling release strategy were essential to have any promise of success. This chapter emphasized the pressure-cooker atmosphere in which real-world requirements have to be generated, and the feverish pace necessary to meet critical schedules.

Beyond the commitment to excellence was a pervasive pursuit of excellence through codified processes. Intel’s use of project planning is explained. A focusing principle is illustrated by Moore’s Law—the prediction by Gordon Moore, a cofounder of Intel, that microchips would double in power and halve in price every 18 months. In 1985 the 386 had 275,000 transistors; by 1997 the Intel Pentium II had 7.5 million transistors. Explosive growth in capability like this requires not only the ability to make such massive changes but also the awareness that with greater power comes the ability to engage greater challenges, an observation embodied in Grove’s Law: “…we will continually find new things for microchips to do that were scarcely imaginable a year or two earlier” (as Walter Isaacson wrote in Time in 1997). Intel, in the context of its LANDesk Virus Protect (LDVP) product development, presents its product life cycle stages: Stage 1: Marketing Business Plans; Stage 2: Marketing Product Requirements; Stage 3: Engineering Design; Stage 4: Development.

Process codification means that the process must be written down and embraced by the organization, starting with management. “If you do what you’ve always done,” the saying goes, “you’ll get what you always got.” So any pursuit of excellence means you have to seize it where you find it and create the cultural change necessary to make it work.

PKS Information Services struggled to define and implement a TPMP. A key principle was to involve the customer throughout the process by promoting customer involvement, awareness, and concern. Exhibits are a good way to show and tell, and PKS Information Services provides a sample TPMP document (pp. 91-126).

Each subsequent chapter begins with a company profile, followed by some description of the problem being addressed. This is followed by reasons to implement, and often a blow-by-blow account of the implementation, including a treatment of the cultural issues or changes that had to be accommodated to make it all work. Results are then summarized. It gave me joy to see a section titled “Lessons Learned” preceding nearly every conclusions section. A commitment to process is of little account if one is not committed to fixing what is broken. That commitment requires continual vigilance.

Dare To Be Excellent is a valuable addition to my library. It is about, as Tom Gilb suggested in the Foreword, the real world. This is a series of case studies from the trenches. They are often bureaucratic big business trenches, which may make those committed to the small, agile, and extreme cultures wince a little, but no matter. Big things call for codification and control, and there is a proper place for things that work at all levels of scale. Professionals who want to benchmark their own methods would do well to get a copy of Dare To Be Excellent.

Ray Schneider (schneirj@adelphia.net) has worked for more than 35 years in hardware and software research and development, both for government and the defense industry, developing sensors and signal processing software, and for small business, where his teams have developed many portable instruments with embedded software solutions. A member of the IEEE and the ACM, he is a licensed professional engineer in the state of Virginia. He holds a bachelor’s degree in physics, a master’s degree in engineering science, and a doctorate in information technology. He is an assistant professor in the Mathematics and Computer Science Department of Bridgewater College in Bridgewater, Va.


Winning with Software

Watts S. Humphrey. 2002. Boston: Addison-Wesley. 209 pages. ISBN 0-201-77639-1.

(CSQE Body of Knowledge area: Software Quality Management)

Reviewed by Joe Zec

Consider the following: Richard wishes to build his own house. He has only a vague notion of how large it should be, and therefore, a poor grasp of how much effort will be required. He has very little understanding of material and labor costs, and in fact has no idea how many builders he should hire for the job. Because of this lack of information, Richard cannot create a schedule for the project nor devise a plan of action. Yet he promises his beautiful bride-to-be, Wendy, that the house will be ready when they return from their honeymoon, two months from now.

Most people would consider Richard to be unrealistic. Others might consider him to be irrational. However, replace Richard with a software executive or senior manager, replace the house with a software development project, replace Wendy with a customer, and one has a scenario that is all too familiar in the software quality industry. Why isn’t the executive considered unrealistic? Good question. To gain some insight into this behavior, and what can be done about it, I recommend Watts S. Humphrey’s latest book, Winning with Software: An Executive Strategy.

This book hits the nail squarely on the head when it puts responsibility for project success, quality software, and on-schedule and within-budget deliveries right where it belongs—with executive leadership. When leadership commits to a delivery date that is not backed up by a plan based on fact and logic, the project is sure to be plagued by problems as the development team struggles to meet this date.

Seasoned software quality professionals won’t find anything new in this book. It’s a handbook for executives, directors, and senior managers on how to manage rationally; create software products faster, better, and cheaper; change behavior; and build motivated teams. The final chapter proposes a seven-step program to achieve these goals.

This book does have some flaws. Humphrey’s credentials are, of course, stellar. References to his career with IBM and work with the Software Engineering Institute, however, are sprinkled throughout the text, when a single reference in the author’s biography at the end of the book might have been more appropriate. The strong focus on the team software process also left me with the impression that this book was trying to sell me something. This commercial-like quality detracts from the material’s relevance. Also, the overuse of the word “quality” (50 times in the first chapter alone) eventually nullifies its impact.

Yet the importance of the book’s primary message cannot be overemphasized. Executives must personally lead the organizational change to better software by enabling training and providing support. Only managers can change their teams, and only team members can change themselves. Recognizing this fact is the first step to winning with software.

Joe Zec (Jzec@Avidyne.com) obtained his bachelor’s degree in economics from Harvard University in Cambridge, Mass. In his 20 years in the high-tech industry, he has worked mainly in software testing, software test management, and software development process engineering. He is the quality (process) manager at Avidyne Corp.


 

SOFTWARE ENGINEERING PROCESSES

Evaluating Software Architectures

Paul Clements, Rick Kazman, and Mark Klein. 2002. Boston: Addison-Wesley. 348 pages. ISBN 0-201-70482-X.

(CSQE Body of Knowledge area: Software Engineering Processes)

Reviewed by Scott Duncan

The preface to this book says it is time for “architecture evaluation to become an accepted engineering practice because … architecture represents an enormous risk in a development project … [and] … architecture evaluation can be remarkably inexpensive.” Thus, this book is about evaluating architectures once designed, not designing them in the first place. Indeed, the preface goes on to say that the book “will not teach you how to become a good architect, nor does it help you become fluent in the issues of architecture” as it assumes the reader will “already have a good grasp of architectural concepts that comes from practical experience.” What the book does do is present some methods for evaluating existing/proposed architectures:

  • Architecture Tradeoff Analysis Method (ATAM), which is an outgrowth of combining the ideas in the SAAM, “the notion of architectural styles,” and influences from “the quality attribute analysis communities.”
  • Software Architecture Analysis Method (SAAM), which was “the first documented, widely promulgated architecture analysis method,” and which had as its goal being able to test the claims made by architecture designers for their system architectures “by replacing claims of quality attributes … with scenarios that operationalize those claims.”
  • Active Reviews for Intermediate Designs (ARID), which was developed as “an easy, lightweight evaluation approach that concentrates on suitability … and can be carried out in the absence of detailed documentation” so architectures can be evaluated during preliminary stages to expose “subdesign problems” without waiting for a whole architecture to be completed (and perhaps based on weaknesses in a subdesign).

The book’s 11 chapters cover:

  • What is software architecture (“a vehicle for communication among stakeholders”; “the manifestation of the earliest design decisions”; “a reusable, transferable abstraction of a system”)
  • Evaluating a software architecture (why; what; who; an introduction to quality attributes; why such attributes, without elaboration, are “too vague for analysis”)
  • The ATAM (covers the steps in the method’s four “groups”)
  • A case study in applying the ATAM
  • Understanding quality attributes (how to use the “ilities,” for example, reliability, availability, modifiability, portability, and so on, which can be found in other sources such as ISO 9126, especially to get to a more quantifiable, less qualitative, level for them)
  • Another ATAM case study
  • Using the SAAM (covers the steps in the SAAM approach)
  • The ARID (covers the phases and steps in the ARID approach)
  • Comparison of evaluation methods (what’s good about each in a head-to-head comparison)
  • Growing evaluation capability in an organization (how to go about setting up the structure to do formal evaluations of this kind)
  • Conclusions (mostly on why the ATAM “works”)

To expand a bit on the methods presented in the book, here are the steps in each one:

ATAM

  • Presentation
    • Present the ATAM (to stakeholders)
    • Present business drivers (which are the primary architectural drivers)
    • Present the architecture (especially how it addresses the business drivers)
  • Investigation and analysis
    • Identify the architectural approaches
    • Generate “the quality attribute utility tree” (a tool to break down and prioritize attributes with specific scenarios and quantifications needed for analysis)
    • Analyze the architectural approaches (based on the scenarios defined)
  • Testing
    • Brainstorm and prioritize scenarios (getting more scenarios and rankings from stakeholders)
    • Analyze the architectural approaches (using the expanded, ranked scenarios)
  • Reporting
    • Present the results (to the stakeholders)

SAAM

  • Develop scenarios
  • Describe architectures (iterating over this and the prior step)
  • Classify/prioritize scenarios (into direct and indirect types, that is, where the architecture directly supports the functionality being anticipated or must be modified to provide support (indirect))
  • Individually evaluate indirect scenarios (as a direct scenario shows how the architecture would execute the functionality, attention is directed to scenarios where this is not true, focusing on how the architecture would need to change to supply the support, that is, address the “holes” in the architecture from a use-case perspective)
  • Assess scenario interaction (of two or more indirect scenarios requiring a change to the same architectural component)
  • Create overall evaluation (weighting scenario evaluations to get an overall ranking for the architecture)

ARID (remember this is for partial architectures, so it is an internal method not usually used for interacting with external stakeholders, that is, users of the system)

  • Rehearsal Phase
    • Identify the reviewers (“the software engineers who will be expected to use the design”; the design stakeholders)
    • Prepare the design briefing (approximately two hours of material, rehearsed)
    • Prepare the seed scenarios (used to illustrate the concept of a scenario to reviewers)
    • Prepare the materials (make copies, schedule meeting, invite stakeholders, and so on)
  • Review Phase
    • Present ARID (30-minute presentation of ARID)
    • Present the design (approximately two-hour overview of the architectural design during which “no questions concerning implementation or rationale are allowed, nor are suggestions about alternate designs”)
    • Brainstorm and prioritize scenarios (“stakeholders suggest scenarios for using the design to solve problems they expect to face”)
    • Apply the scenarios (in priority order, reviewers “craft code that uses the design services to solve the problem posed by the scenario” without help from the designer)
    • Summarize (recount issues and poll participants for opinions)

The chapter comparing ATAM, SAAM, and ARID ends up, more or less, recommending ATAM as a “hybrid technique” and calling it “not just a method but rather a framework for architecture evaluation.” The conclusion, while noting a place for the SAAM and ARID approaches, calls the ATAM “arguably the most sophisticated of the three.”

If a book describing formal methods for conducting architecture evaluations is what you are looking for, this book does a good job covering the issues in evaluation through the use of three methods as examples. The reference section is a general bibliography of books and articles on software architecture. As such, some of the material referenced may be of more interest to those looking for material on architecture design than evaluation.

Scott Duncan (softqual@mindspring.com) has 30 years of experience in all facets of internal and external product software development with commercial and government organizations. For the last nine years he has been an internal/external consultant helping software organizations achieve international standard registration and various national software quality capability assessment goals. He is a member of the IEEE-CS and the ACM, the current Standards chair for ASQ’s Software Division, and the division’s representative to the U.S. Technical Advisory Group for ISO/IEC JTC1/SC7 and to the Executive Committee of the IEEE Software Engineering Standards Committee.


Developing Applications with Java and UML

Paul R. Reed Jr. 2002. Boston: Addison-Wesley. 463 pages. ISBN 0-201-70252-5.

(CSQE Body of Knowledge areas: General Knowledge, Conduct, and Ethics; Software Engineering Processes; Program and Project Management)

Reviewed by Gordon W. Skelton

If you are developing applications in Java and want to improve the quality of your software and the process by which you are developing it, this book is for you. Paul Reed has done an excellent job of presenting the topic of software engineering using Rational’s Unified Process (RUP) and the Unified Modeling Language (UML).

Starting with a solid discussion of the problems faced when working on software development projects, he gives readers a good overview of the software development life cycle and presents the case for using an iterative and incremental development process. With this background, readers are quickly introduced to Java, object-oriented analysis and design, and how these elements can mesh together with UML. Reed examines Java from an object-oriented view, illustrating how Java supports polymorphism, encapsulation, and abstraction. The author presents his justification for why Java and UML should be the elements of choice when developing applications.

Chapters 4 through 12 focus on the elements that constitute a software development life cycle. Here Reed examines the individual components of the process, as well as the process model in total.

Use-cases are an important component in RUP, as well as in the object-oriented paradigm in general. The author spends time introducing the concept and provides examples illustrating use-cases, which are a major stumbling block for new inductees into the object-oriented world.

Reed believes that UML is the “best artifact repository for documenting the analysis and design of an application today.” There are others, especially proponents of eXtreme Programming (XP), who would beg to differ. However, from my experience of developing both complex and high-risk projects I agree with Reed’s conclusion that a more formal development process must be employed. I find this to be particularly true when working with a large software development team.

The book assumes that the reader has experience in the use of Java and the development of JavaBeans and, perhaps, servlets. In addition to this experience, the reader should have been exposed to the concepts of the object-oriented model. Experience or knowledge of RUP and UML beyond simply recognizing them is not expected.

Having developed applications using UML and having studied RUP, I find Reed’s book to be a thorough overview of the analysis and design phases of the software development life cycle. Understanding how to properly perform analysis, requirements elicitation, and system design certainly aids one in achieving quality software implementation.

Chapters 6, 11, and 12 focus on the construction of system prototypes. Within these chapters is that portion of the book where knowledge of Java is most critical. Still, I believe that readers knowledgeable of another object-oriented language will have only limited difficulty in understanding the concepts and code examples presented.

I recommend this book for a variety of individuals. Programmers with experience in creating Java-based systems will gain from being exposed to the overall software development life cycle. Individuals managing software development teams can learn how RUP, as well as other structured processes, can aid in organizing and managing the development process. Persons wanting to examine their own processes will find the material very beneficial. Finally, those who currently do not have a standardized development life cycle can learn from Reed’s discussion and use RUP to design and implement their own. One important thing to remember about RUP and UML is that they are both extensible and can be molded to fit the needs and nuances of different software development groups and projects.

Overall, I found the book to be both educational and well organized. It provides:

  • A good overview and introduction to the software development life cycle
  • An appropriate overview of the core components of the UML
  • An examination of all aspects of the system architecture with particular emphasis on user interfaces and persistent store
  • A good pedagogical format with checkpoints at the end of each chapter reviewing what has been discussed and where the focus is going next

Even if the reader is without a solid understanding of Java, I can still recommend this book to anyone wanting to learn more about the software development life cycle, RUP, UML, and process improvement. It is only in those chapters that are specifically directed toward building prototypes in Java that the reader may have to refer to a more thorough text on Java and Java-based development.

Gordon Skelton (gwskelton@mvt.com) is vice president for information services for Mississippi Valley Title Insurance Company in Jackson, Miss. In addition, Skelton is on the faculty of the University of Mississippi, Jackson Engineering Graduate Program. He is an ASQ Certified Software Quality Engineer and an IEEE Certified Software Development Professional. He is a member of ASQ, IEEE Computer Society, ACM, and AITP. Skelton's professional areas of interest are software quality assurance, software engineering, process improvement, software testing, and wireless application development.


 

PROGRAM AND PROJECT MANAGEMENT

New Directions in Project Management

Paul C. Tinnirello, ed. 2002. Boca Raton, Fla.: Auerbach Publications. 541 pages. ISBN 0-8493-1190-X.

(CSQE Body of Knowledge areas: Program and Project Management and General Knowledge)

Reviewed by Jayesh G. Dalal

This 500-plus-page book has 45 chapters contributed by almost as many authors. Some authors contributed more than one chapter, and some chapters have more than one author. The presentations focus on information technology/information systems (IT/IS) professionals and projects, and only the last chapter directly addresses the software process. Project management (PM) practices are presented in a variety of formats, including the cookbook approach, application, mock case study, and straightforward presentation. The quality of the presentations varies greatly, and in one case the presentation appears to be a veiled commercial for services provided by the author. The book claims to present “practices that have been determined by measurable results, not by vague ideologies.” However, I could not find a presentation that was supported with results.

This book has six sections. The first covers basic project management concepts and contains nine chapters. Good presentations of strategies for addressing IT project risks and requirements management are given in Chapters 4 and 9, respectively.

Section 2 is titled “Critical Factors for Project Quality” and contains seven chapters. It is perhaps the poorest section in the book. Someone evidently saw a need to include ISO 9000, the SEI CMM, Six Sigma, and quality in the book. Chapter 10, addressing ISO 9000, is dated, as it is based on the ISO 9000:1994 version of the standards. Chapter 13 provides additional guidance for managing risks, and Chapter 15 describes applying the Six Sigma DMAIC model to IT systems analysis.

The seven chapters in the third section address human and business relationships. Management of IT priorities, strategic alliances, self-directed teams, steering committees, end-user needs, and culture change are addressed in this section. Section 4 is perhaps the strongest section of the book, and its seven chapters comprehensively address outsourcing, and use of management service providers and consultants.

Section 5 is another strong section with seven chapters. Project management issues related to knowledge management, client/server development, and IT help desk building projects are discussed. Also included is the management of issues associated with large complex systems, leveraging developed software, and legacy assets. The concept of using pay-for-performance to improve IT project success in Chapter 33 is interesting. This is one of the chapters where inclusion of results would have been valuable.

The final section is titled “Measuring and Improving Project Management Success” and includes eight chapters. Three classes of approaches are discussed: structured approaches such as the balanced scorecard and process assessment (for example, CMM); establishment and use of a project management office; and reduction/control of project complexity and “time wasters.” Chapter 41 identifies cost items associated with a major system change and provides suggestions for estimating these costs. Inclusion of results would have greatly added to the value of this section.

One would expect to learn something from a book of this size with contributions by more than 40 authors, and I believe readers will. I did. I got an impression that, encouraged by the success of the original edition of Project Management, the publishers decided to produce a sequel. In response the editor gathered available material, making sure that the current “hot” topics were mentioned, agglomerated the material into several sections, and submitted that for publication. If, as claimed on the book’s back cover, the editor had integrated the collected material and ensured that the presentations were supported with results, then readers would have been better served.

Dr. Jayesh G. Dalal (jdalal@worldnet.att.net) is the past chair of the ASQ Software Division, an ASQ Fellow, and a National Baldrige Award examiner. He has more than 30 years of experience as an internal consultant and trainer in the manufacturing, software, and service industries. He has an independent practice that offers management systems and process design, assessment, and improvement services to businesses.


New Directions in Internet Management

Sanjiv Purba, ed. 2002. Boca Raton, Fla.: Auerbach Publications. 804 pages. ISBN 0-8493-1160-8.

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Hillel Glazer

Imagine trying to wrap your arms around the entire Internet. That would be pretty difficult. That’s exactly what it must have been like for Sanjiv Purba when he set out to pull this work together. Not only is the realm vast, but the Internet is never going to sit still long enough for anyone to pin it down. Any attempt to describe the technology of the Internet is at best a snapshot in time.

New Directions in Internet Management attempts to consider nearly every fundamental technology and component important to managing Internet-based projects. The book is an Internet encyclopedia, user guide, handbook, and reference manual wrapped into one thick package. It is written to help managers get up to speed in order to effectively manage many of the decisions that feed into what we all know to be a ubiquitous technology. Purba has pulled together the works of scores of authors contributing 61 chapters over 10 sections (not including an 11th appendix section).

The sections cover the following topics:

  • Internet trends
  • Strategic and business issues
  • Internet infrastructure from the ground up
  • System integration
  • Internet variations and applications
  • Wireless and mobile solutions: Expanding the Internet infrastructure
  • Web-site management
  • Managing information on the Internet
  • Internet security
  • Operations and post-implementation considerations

Despite the challenges, most of the chapters are very well done and include timeless, if not priceless, information.

Section 9 on Internet security alone is worth the entire volume. While several chapters deal with speculative technologies and a few technologies that have been overtaken by events, it is a must read for every manager on the Internet. Unlike many other technologies, security technology is timeless, because it is cumulative. Readers need to know about the security issues and technologies of the past that led to the security issues of today. Section 9 discusses security matters that are still very much in the “need to know” category.

Chapters that address transitioning between legacy systems and “information age” machines will prove particularly helpful to managers caught in that world. Several chapters, such as one on integrating information systems onto the Web and others on Web-site performance and Web-site design, provide helpful processes to follow and checklists to use. Several chapters champion checklists, each useful in its own right.

Overall, chapters that deal with technologies that are not yet widely implemented and not yet “settled” are helpful for their value in explaining the technology and its use. Chapters that address a specific technology for a specific purpose have narrower, but nonetheless helpful ideas. The chapters that focus on the business use of technology were of greatest value, as they demonstrate the most practical information in terms that managers tend to think in.

Purba faced several challenges when compiling and editing this book. The first is that the technology will change by the time the book is printed. Several chapters in New Directions in Internet Management suffered this fate. For instance, one chapter focuses on Java as though it is the sole enabler of Web-based applications. Today, Web-based applications are not restricted to Java.

Similarly, section 3 on Internet infrastructure has many useful chapters, though just as the technology of personal computing has become “plug-n-play” (that is, connectivity between components handled by the operating system) so too can one expect the technology of the Internet to become more like “plug-n-play.”

Another obstacle to a work of this kind is one of scope and detail. How much detail should a chapter get into? What level of understanding on the part of the reader is assumed? Is the chapter supposed to make an expert out of a novice or just give a manager a flavor of what he or she needs to turn around and ask from a subordinate?

A chapter on risks focuses narrowly on log-in-related risks in e-commerce situations, leaving out many other risk areas, while other chapters, on local area networks and on choosing network hardware, are perfect for the manager or executive who needs just enough information to understand the components of networking when faced with a purchasing or growth decision.

One frustrating chapter attempts to repaint business process reengineering (BPR) in such a way as to make BPR the fault of the poor implementation of Internet technology at a particular company. Even a casual analysis of the chapter reveals that BPR was not the problem, but how the company implemented BPR, combined with the company’s poor understanding of the technology at hand. That particular chapter is definitely a low point in the book. Not only does the author’s analysis leave much undone, but the content pushes an agenda.

To avoid the risk of printing dated material, perhaps Purba will consider taking an online approach to publishing his next edition of this work. The online edition, in the form of similar articles, could ensure that the topics are current, as well as take advantage of the speed of publication. It will also help avoid the painstaking effort it obviously required to pull everything together in a way that is both useful and connected.

New Directions in Internet Management is a good reference for anyone recently promoted to oversee some aspect of an Internet-based project. It is just as valuable to a manager or worker looking to get a good sense of what there is to manage with respect to Internet projects. The breadth of information makes it a worthwhile resource for brushing up on specific Internet technologies. At the very least, if New Directions in Internet Management helps managers explain to their customers the challenges of an Internet project, this book will have paid for itself.

Hillel Glazer (hillel@entinex.com) is the principal of Entinex, Inc. and a member of ASQ. He has a broad spectrum of experience in process engineering and technology management. The focus of his career is on the issues of business and technology process optimization. He specializes in management-driven engineering principles, merging these disciplines with business and operations strategies. He’s successfully adapted and evolved these disciplines across the Internet, software, and manufacturing industries, and has written and presented on the subject.


Quality Software Project Management

Robert T. Futrell, Donald F. Shafer, and Linda I. Shafer. 2002. Upper Saddle River, N. J.: Prentice Hall. 1639 pages. ISBN 0-13-091297-2.

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Carolyn Rodda Lincoln

Quality Software Project Management is the textbook for certification in software project management issued by the Software Quality Institute at the University of Texas at Austin. Its hefty size means that it is both a thorough text and a complete reference for those who are not participating in a class. Each chapter is long enough to provide complete instruction on a topic.

The book has 33 chapters, which basically follow a waterfall software development life cycle. Among others, there are chapters on defining the goal of a project, identifying tasks for the project plan, estimating duration, scheduling the work, eliciting requirements, determining risks, software metrics, analysis and design methods, validation and verification, project tracking, continuous process improvement, communicating, configuration management, and legal issues. The seven appendices include templates, information on joint application design (JAD), a business plan, systems engineering, and how to manage a project from a distance. There is also a glossary and a bibliography of references and Web sites. The book refers to an Instructor’s Workbook that was not available for review.

Besides the software development life cycle, the book is also organized around 34 competencies that every project manager needs to know. They are divided into three groups. The first group is product development techniques, such as evaluating alternative processes and tracking product quality. The second is project management skills, such as documenting plans and estimating effort. Finally, there are people management skills, such as presenting effectively and recruiting. Each chapter begins by showing where that topic fits into the software development life cycle and which competencies it addresses. Since it is a textbook, there are also learning objectives.

At the end of each chapter, there is a summary, as well as problems, references, standards, and a section on the case study, the Chinese Railway Passenger Reservation System.

The main content of each chapter is based on well-recognized standard practices from the Project Management Institute, American Society for Quality, Institute of Electrical and Electronics Engineers, and the Software Engineering Institute. The information is integrated but not merged, that is, the authors do not advocate any particular technique over another. They explain several possible techniques and include the advantages and disadvantages of each so readers can decide for themselves. For example, in the section on estimating, they discuss both lines of code and function points.

Quality Software Project Management is an excellent overview of a broad range of project management topics. It provides enough detail to learn the techniques as well. It includes both “hard” and “soft” subjects, for example, project schedules and team dynamics. The material is well organized with extensive charts, tables, and bullet points. The authors are careful to fully define terminology so that no previous knowledge is assumed. For instructors, it makes an outstanding textbook for a project management course. For the general software engineering community, it belongs on the bookshelf of every project manager for both the clear explanations contained in it and the list of sources to obtain more information. Even though parts of the software field continue to change rapidly, principles of good project management will never go out of style.

Carolyn Rodda Lincoln (lincoln_c@bls.gov) is an ASQ certified quality manager and member of the DC Software Process Improvement Network. She is currently employed as a process improvement specialist for Titan Systems Corporation and is implementing software process improvement at the Bureau of Labor Statistics in Washington, D.C. She holds bachelor’s and master’s degrees in math and was previously a programmer and project manager.


Software Project Management in Practice

Pankaj Jalote. 2002. Boston: Addison-Wesley. 273 pages. ISBN 0-201-73721-3.

(CSQE Body of Knowledge area: Program and Project Management)

Reviewed by Douglas A. Welton

In Software Project Management in Practice, Pankaj Jalote articulates a practical project management style that covers the strategic and tactical management needs of projects from inception to completion. Drawing on his experience during a 1996 sabbatical, at which time he became the vice president of quality for Infosys Technologies, the author details the project management approach used by Infosys and its more than 10,000 employees in 25 cities to successfully complete more than 500 projects per year.

From the beginning, the author documents the failure of current project management strategies, noting that about one-third of all projects have cost and schedule overruns of more than 125 percent. In trying to answer the question “Why do so many projects fail?” Jalote examines the project management process at Infosys and extracts the elements that have led to its ongoing success.

These elements include:

  • Project management infrastructure: Capture as much data about each project as possible and use this information as the basis for a corporate knowledge base that will facilitate better project planning in the future. An effective infrastructure will capture data about completed projects and the capability of teams within the organization, as well as provide checklists, templates, and other project assets to proactively enhance productivity.
  • Process planning: When designing an optimum project management process, tailor a standard process (like the waterfall model) to fit the constraints of the particular project. Additionally, put in place a change management process to track the immediate and cumulative impact of change requests during the project life cycle.
  • Effort estimation and scheduling: Using information from the completed projects database, adjust your estimates to fit the particulars of the current project, keeping in mind that scheduling is a dynamic process, which may need to be repeated several times during the project.
  • Quality planning: Take a quantitative approach to quality, using defects as a metric, that proactively focuses on defect prevention and early detection as a strategy for achieving higher levels of project quality.
  • Risk management: Understand that risks are inherent in the project process. Identify and prioritize common risks and be mindful of how they may affect your project, preparing risk mitigation plans for these situations.
  • Measurement and tracking planning: Project monitoring is essential to ensure that the project is progressing toward the specified goals. Define the acceptable parameters within your monitored metrics and use these data to take corrective actions if the situation warrants.
  • Configuration management: Managing the evolution of a software product is an ongoing process that must allow the organization to make concurrent updates, undo changes, build multiple versions of the product, and track changes over time.
  • Project execution: As the project becomes an ongoing adventure, use a well-defined and structured review process to provide data regarding the organization’s progress toward certain goals and milestones. Monitor the project on a number of different levels. Use weekly status meetings and reports to disseminate information throughout your organization.
  • Project closure: Once software has been delivered and installed, use the project metrics as a learning tool for your organization.
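
The effort estimation element lends itself to a small illustration. The following is a minimal sketch assuming a hypothetical completed-projects database; the field names, figures, and risk factor are invented for the example and are not Jalote's or Infosys's actual model.

    # A minimal sketch of estimating from historical project data.
    # All project records and constants below are hypothetical.

    past_projects = [
        {"size_fp": 400, "effort_hours": 3200},
        {"size_fp": 250, "effort_hours": 2100},
        {"size_fp": 600, "effort_hours": 5100},
    ]

    def baseline_hours_per_fp(projects):
        """Average productivity observed across completed projects."""
        total_effort = sum(p["effort_hours"] for p in projects)
        total_size = sum(p["size_fp"] for p in projects)
        return total_effort / total_size

    def estimate_effort(size_fp, projects, risk_factor=1.0):
        """Scale the historical baseline by a project-specific risk factor."""
        return size_fp * baseline_hours_per_fp(projects) * risk_factor

    # A 500-function-point project with a 20 percent contingency factor.
    print(round(estimate_effort(500, past_projects, risk_factor=1.2)))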

Jalote writes clearly and concisely about each of these subjects. At the beginning of the book he plainly states his bias toward the Software Engineering Institute’s Capability Maturity Model (CMM), and throughout he uses the CMM as a touchstone to reinforce points that will help readers think in the broader context of enhancing the maturity of their software organizations. Readers who are fans of the CMM will find lots to cheer about in this book.

The influence of academia is also strongly felt in Software Project Management in Practice. The book features extensive footnotes and references to scholarly papers, books, and conference reports. These footnotes provide a clear path for readers who want to further explore a particular topic.

This book is written for project managers and individuals who aspire to become project managers. In this respect, the book is successful. The concepts are presented at a level that is appropriate for those who have a limited exposure to good project management. The project management style will work well for organizations that can support the infrastructure and overhead required for proper execution. Typically, these organizations are larger and have a more mature and predictable business model.

The only issue I find with Software Project Management in Practice is that it focuses so clearly on the process that it provides little guidance for the situations that the process does not handle. Project managers in smaller organizations focused on being innovative may find this book gets them in the ballpark, but does not teach them how to play the game very well.

Douglas A. Welton (dwelton@bellsouth.net) is a computer scientist and playwright. Over the course of his career, he has contributed innovation, vision, and excellence to leadership roles and product success at Digital Equipment Corp., HBO & Co., Bell + Howell, and Merant. One of his current projects is authoring the forthcoming book Insanely Great Object: A Guide to Software Development Using Objective C and Cocoa on Mac OS X.


SOFTWARE VERIFICATION AND VALIDATION

A Practical Guide to Testing Object-Oriented Software

John D. McGregor and David A. Sykes. 2001. Boston: Addison-Wesley. 374 pages. ISBN 0-201-32564-0.

(CSQE Body of Knowledge areas: Software Verification and Validation, Software Engineering Processes)

Reviewed by Eva Freund

If you are a software development manager, a software developer, or a programmer who is involved with object-oriented development, then this is the book you need. The book is thoughtful and well laid out. If you are a tester, you may find this book a difficult read and find yourself wondering why you are reading it. My advice is “do not despair and continue with the book.” You may find, as I did, that you learn a great deal. By using text, diagrams, and code to convey information, the authors show the relationships that exist throughout the software development life cycle.

The authors understand that there are seldom the resources to do all the testing one might desire (or all the testing described in this book). The authors have provided a multitude of approaches and techniques, and they merely ask that the reader select that which is useful and affordable. A tester cannot ask for more from an author.

Believing that object-oriented technologies bring changes not only to the programming languages but also to the software development process, the authors offer an opportunity to improve the test process by:

  • Changing attitudes toward testing by demonstrating that testing contributes to creating the right software, measuring progress, and keeping development on track
  • Changing where testing fits into the development process by demonstrating how testing and development activities can be intertwined and how one can contribute to a successful outcome of the other
  • Using new technology to test the models, develop unit test drivers, and reduce the coding needed to test software components

The first three chapters of A Practical Guide to Testing Object-Oriented Software are concerned with testing concepts and the testing process as they relate to object-oriented software. Chapters 4 through 10 detail the techniques for various kinds of testing that can be accomplished, and chapter 11 is a summary. Each chapter describes the concepts involved, shows how an interactive video game called Brickles implements the concepts, and shows how Brickles can be tested using the techniques described.

If you are looking for a book that is highly technical in its application of testing concepts, then this might be the book for you. Testing, as always, begins with a guided inspection of the requirements model and continues with a guided inspection of the domain analysis model. This is followed by an evaluation of the application analysis model against the requirements and domain analysis models. Then the architectural design and detailed design models are inspected against the use cases. Finally, the classes and the object interactions are tested. Only when these tests have been “passed” is the system/application tested.

In 1996 Shel Siegel wrote Object Oriented Software Testing: A Hierarchical Approach, the first book published on object-oriented testing. It was conceived as a specification for an object-oriented test system; the book, according to the author, was itself an object-oriented system, and its diagrams were based on Rumbaugh’s Object Modeling Technique (OMT). It used object-oriented techniques to define and describe the testing of object-oriented applications. Siegel’s book, informative in a different way, serves as an interesting counterpoint to this one: it identified and described testing through object-oriented design, whereas the McGregor and Sykes book identifies and describes the testing of object-oriented software.

Eva Freund (efreund@erols.com) is an independent verification and validation consultant with 20 years of experience in software testing, standards, and project management. She offers IV&V and software process improvement services to private and public sector organizations. She is an ASQ Certified Software Quality Engineer and a Certified Software Development Professional of the IEEE Computer Society.


Peer Reviews in Software: A Practical Guide

Karl E. Wiegers. 2002. Indianapolis: Addison-Wesley. 232 pages. ISBN 0-201-73485-0.

(CSQE Body of Knowledge areas: Software Quality Management, Software Engineering Processes, Software Verification and Validation)

Reviewed by Joel Glazer

Question: As a software developer, is your goal to produce a product that is full of bugs, as long as it is delivered on time, or is it to produce the finest product for your customers’ needs? Software developers face two choices. First, they can let the customer find and point out faults, errors, or failures in the delivered product. The result could be as little as embarrassment to the organization or as much as lost market share and the threat of devastating lawsuits. The second choice is to clean up the product as much as possible before shipping it, regardless of any disclaimers in the fine print.

In his book, Peer Reviews in Software, Karl Wiegers provides a user-friendly practical guide to a misunderstood, underused, yet highly effective methodology to improve the quality of software products while under development. This approach avoids the embarrassment of costly downstream corrections. He establishes the case for peer reviews by answering:

  • What are peer reviews?
  • Why do we need peer reviews?
  • When should peer reviews be held?
  • Who participates in peer reviews, and who does not?
  • How are peer reviews conducted?
  • Where should peer reviews be held? (A question that might not be critical to most development organizations.)

Each successive chapter builds on the previous one. In the first two chapters the stage and the case for peer reviews are set: the need to improve the quality of the product before it leaves the development shop and the need to overcome resistance to peer reviews are presented (why). Then Wiegers explains the spectrum of formality that exists in peer reviews; not all reviews are peer reviews, and not all reviews need the same level of formality to be effective. Wiegers walks readers through the mechanics of peer reviews (how), the inspection process and the roles of the inspectors (who), the planning for the inspection (when), and the products to be inspected (what). The means of improving the peer review process based on data collection and analysis is discussed, and later chapters revisit in greater detail how to implement peer reviews and overcome challenges to them. The book concludes with two appendices, one connecting peer reviews with the Software Engineering Institute’s Capability Maturity Model, a long list of terms and references, and a pointer to a Web site at http://www.processimpact.com/pr_goodies.shtml containing practical tools, spreadsheets, and guidelines for implementing peer reviews.

There are other books that explain reviews, but Wiegers’ book is by far the most comprehensive and understandable book on the market today, and it belongs in every software developer’s library.

Joel Glazer (joelglazer@ieee.org), current ASQ Software Division Region 5 councilor, has more than 30 years’ experience in the aerospace engineering, software engineering, and software quality fields. He has a master’s degree from The Johns Hopkins University in computer sciences and in management sciences. He is a member of IEEE and a Senior member of ASQ. He is an ASQ Certified SQE, Auditor, and Quality Manager. Glazer is a Fellow Engineer in the Software Quality Engineering Section at Northrop Grumman Electronic Systems in Baltimore, Md.


Security Fundamentals for E-Commerce

Vesna Hassler. 2001. Norwood, Mass.: Artech House. 409 pages. ISBN 1-58053-108-3.

(CSQE Body of Knowledge area: Software Verification and Validation)

Reviewed by Eric Patel

This book highlights problems and solutions for those charged with perhaps one of the more challenging responsibilities of the digital economy: maintaining the security of electronic commerce (e-commerce) Web sites. Business transactions have moved to yet another medium, the computer network. How do you make your Web site secure from unauthorized use? Five areas of e-commerce security are covered in this book:

  • Information security
  • Electronic payment security
  • Communication security
  • Web security
  • Mobile security

In part 1, on information security, the author begins with an introduction to security and moves on to security mechanisms, key management, and certificates. The section on encryption mechanisms covers a common way to ensure data integrity and confidentiality and is rich in mathematics, showing readers how these schemes work. There is also a discussion of digital signatures, which allow a user to “sign” digital documents with a private key so that anyone holding the corresponding public key can verify the signature. Part 1 ends with an overview of management issues associated with public key algorithms.
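
For readers unfamiliar with the mechanics, here is a minimal sketch of the hash-and-sign idea. The Python cryptography package used here, and the message text, are assumptions chosen for illustration; the book itself presents the underlying mathematics rather than any particular library.

    # A minimal sketch of signing with a private key and verifying with the
    # matching public key, using the third-party "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    message = b"hypothetical purchase order #1234"

    # Sign with the private key (the signature covers a hash of the message).
    signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Verify with the public key; this raises InvalidSignature if the message
    # or the signature was altered in transit.
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())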

The second part of the book focuses on the security needs of electronic payment (e-payment) systems, covering credit cards, e-money, smart cards, and more. The author points out that electronic payment systems have the same problems as traditional systems—and more. Basic security requirements for e-payment systems include:

  • Payment authentication: both payers and payees must prove their payment identities
  • Payment integrity: payment transaction data cannot be modifiable by unauthorized principals
  • Payment authorization: no money can be taken from a customer’s account without explicit permission
  • Payment confidentiality: payment transaction data must be kept confidential

This section ends with an example purchase transaction with digital signatures using the Internet Open Trading Protocol (IOTP).

Communication security is covered next, representing nearly a third of the book’s content. The author analyzes the infrastructure for exchanging information, the communication network, from a security point of view. Four layers, related to the open systems interconnection (OSI) seven-layer reference model, are covered: network access, Internet, transport, and application. Malicious programs such as trapdoors, Trojan horses, viruses, and worms are mentioned, as well as the many potential vulnerabilities and flaws that may be found in cryptographically secured systems.

In part 4 the author examines the security issues of the Web itself, as a client-server application, as well as those of the new technologies added on top of it. Here readers learn about the many uses of digital watermarks to protect the intellectual property of multimedia content, including:

  • Ownership assertion to establish content ownership
  • Fingerprinting to discourage unauthorized duplication and distribution
  • Authentication and integrity verification to bind an author to content
  • Usage control to control copying and viewing of content
  • Content protection to disable illegal use

This part ends with an overview of Web-based e-commerce concepts based on XML, HTML, PEP, and Java Commerce.

Finally, the new world of security for mobile technologies, including mobile commerce (m-commerce), smart cards, and mobile agents, is covered in part 5. Benefits of using mobile agents for distributed applications include:

  • Reduction of network traffic
  • Elimination of network latency for real-time applications
  • Encapsulation of communication protocols
  • Enhanced processing capabilities for mobile devices

M-commerce (also known as wireless e-commerce) involves users conducting transactions with a mobile device such as a mobile phone, an integrated PDA, or a smart phone. The final chapter covers smart card security, with discussions of the Java Card, the SIM card, and biometrics.

As indicated by the title, the book does not cover all aspects of e-commerce. Although detailed discussions are not always provided, there are plenty of references at the end of each chapter, as well as numerous footnotes with URLs for readers to follow up on.

Eric Patel (epatel@rapidsqa.com) is chief quality officer for RapidSQA, a Software Quality Service Provider (SQSP) of training and consulting solutions. He holds three certifications: ASQ Certified Quality Manager, ASQ Certified Software Quality Engineer, and QAI Certified Software Test Engineer. Patel is also deputy regional councilor for ASQ Software Division Region 1, a published author, and a reviewer for SQP and The Journal of Software Testing Professionals.


High Quality Low Cost Software Inspections

Ronald A. Radice. 2002. Andover, Mass.: Paradoxicon Publishing. 479 pages. ISBN 0-9645913-1-6.

(CSQE Body of Knowledge area: Software Verification and Validation)

Reviewed by Ralph R. Young

There are good books, and there are great books. This is a great book, because it provides a thorough explanation of an easy-to-apply technique (inspections) that is among the most effective software technologies yet developed.

The author is an experienced software practitioner, having worked at IBM in various technical and management positions (including quality assurance) and having been a leader there in establishing software process and quality directions. He currently is principal partner in Software Technology Transition, a company that provides training, consulting services, assessment services, and software engineering method solutions.

The introductory chapter explains why inspections should be done, as well as why everyone isn’t using them. Radice draws on W. Edwards Deming to note that 94 percent of the causes of defects belong to the system and are the responsibility of management. He explains how to control defects. He answers common questions concerning inspections (including, “Is there a better business answer?”) and provides a discussion of the factors associated with effective implementation of the inspections technique.

In response to the question, “Is there only one way to do inspections?” the author emphasizes that the goal should be maximum effectiveness in the review process and that organizations using the recommended approach (with some acceptable variations) are increasingly achieving removal of 90 percent or more of defects. Organizations and projects can learn how to make inspections both effective and low cost.
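
The 90 percent figure refers to defects removed before release. As my own illustration, not Radice's formula, here is a minimal sketch of how such a number might be computed from inspection, test, and field data; the counts are hypothetical.

    # A minimal sketch of defect removal effectiveness: the share of all known
    # defects that were caught before the product reached users.
    def defect_removal_effectiveness(found_before_release, found_after_release):
        total = found_before_release + found_after_release
        return found_before_release / total if total else 0.0

    # Hypothetical counts: 180 defects found by inspections and testing,
    # 20 reported by users after release -> 0.9, i.e., 90 percent removal.
    print(defect_removal_effectiveness(180, 20))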

One issue is gaining management support for new technologies, methods, and practices, which management often views as an added cost. This is an impediment to quality improvement and process improvement initiatives. Even when one can provide data showing that their use saves time and money and improves product quality, there is management resistance, although the situation has improved with the increased use of the CMM® and the CMMI™. Radice suggests that the element most lacking in the software community today is process data. He views the time spent on inspections as an investment and suggests that the cost question needs to be removed from management’s thinking by helping managers understand the payback. I agree with this notion. In practice, however, I have found that managers in many organizations are still unwilling to commit money.

Radice uses the Entry-Task-Validation/Verification-eXit (ETVX) model as the process framework for the book. He places emphasis on defect prevention to reduce the volume of defects injected into work products. He asserts that the following benefits will be seen as a result of good inspection practice:

  • Early removal of defects
  • Improved schedule predictability
  • Improved quality delivered to the users
  • Cost reduction in test and maintenance
  • Earlier delivery of committed products
  • More satisfied customers
  • More satisfied employees
  • Education and spreading of knowledge
  • Improved processes
  • More business

Radice provides a detailed description of the inspection process and describes the roles of the moderator and inspectors. He provides a set of rules of behavior for inspection meetings from which projects and organizations might well benefit if they were used for all meetings. He advises that inspections can be tailored and used effectively on small projects. He provides detailed guidance concerning inspection data. He explains causal analysis (identifying the probable cause that led to a defect in a work product or process) and shows how to do it (Pareto analysis, identification of root causes, use of a fishbone diagram, defect prevention). He discusses the value of reinspections and when they are appropriate.
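
As a small illustration of the Pareto step in causal analysis, here is a minimal sketch; the defect causes and the 80 percent cutoff are assumptions made for the example, not data or guidance from the book.

    # A minimal sketch of Pareto analysis over defect causes: tally defects by
    # assigned cause and keep the "vital few" causes that account for roughly
    # 80 percent of the total.
    from collections import Counter

    defect_causes = ["ambiguous requirement", "missing check", "ambiguous requirement",
                     "interface mismatch", "missing check", "ambiguous requirement"]

    def pareto(causes, cutoff=0.8):
        counts = Counter(causes)
        total = sum(counts.values())
        running, vital_few = 0, []
        for cause, count in counts.most_common():
            vital_few.append((cause, count))
            running += count
            if running / total >= cutoff:
                break
        return vital_few

    print(pareto(defect_causes))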

Radice addresses the economics of inspections, provides an interesting discussion of the cost of defects, and suggests a model for evaluating the cost of quality as a percentage of sales, based on the work of Philip Crosby.
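
A minimal sketch of the cost-of-quality-as-a-percentage-of-sales idea follows; the cost categories and figures are hypothetical assumptions, and the model in the book may differ in its details.

    # A minimal sketch of cost of quality expressed as a percentage of sales,
    # summing the classic prevention, appraisal, and failure cost categories.
    def cost_of_quality_percent(prevention, appraisal, internal_failure,
                                external_failure, sales):
        return 100.0 * (prevention + appraisal + internal_failure +
                        external_failure) / sales

    # Hypothetical annual figures -> 13.0 percent of sales.
    print(round(cost_of_quality_percent(50_000, 120_000, 200_000, 150_000,
                                        4_000_000), 1))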

The author provides an important chapter concerning managing inspections, emphasizing that managers must take actions to achieve specified goals. He provides a detailed discussion of practical issues relating to inspections, and also describes what to inspect. One may be surprised to learn that he recommends inspections for any type of work product, for example, requirements specifications. The earlier in the process one can find and remove defects, the greater the leverage of the technique and the more one can save in downstream development and testing costs.

The author provides a taxonomy of review approaches based on a 1996 work by Wheeler, Brykczynski, and Meeson (Software Inspections: An Industry Best Practice, IEEE). He provides useful appendices, including checklists, inspection materials, and inspection forms. The bibliography is an exhaustive compilation of related published materials.

Ralph R. Young (ralph.young@northropgrumman.com) is the director of software engineering, systems and process engineering, Defense Enterprise Solutions, Northrop Grumman Information Technology, a leading provider of systems-based solutions. He is the author of Effective Requirements Practices (Addison-Wesley, 2001).
