This document contains Chapters 12 to 21.

Chapter 12 Pattern-Based Design

CHAPTER OVERVIEW AND COMMENTS

This is the first edition of SEPA to have a chapter dedicated to pattern-based design. The key point to emphasize throughout this chapter is that over the 60-odd years that software has been developed, software engineers have encountered a vast array of problems and have developed effective solutions for many of them. Over the past 20 years, those problems and solutions have been codified as "patterns" that provide a modern designer with detailed guidance when he or she encounters a design problem.

12.1 Design Patterns

Be sure your students understand the concept of a pattern as a "three-part rule" and that they appreciate that every problem and solution must be considered in context (a brief code-level illustration appears at the end of this chapter's notes). If this is the first course in which patterns are introduced, I would recommend that you present a series of increasingly complex pattern examples from a variety of different engineering disciplines. The discussion of different kinds of patterns in Section 12.1.1 is of academic interest, but may be skipped if time is short. The design pattern template (sidebar in Section 12.1.3) should be considered in detail with an example or two. Be sure that your students understand what "forces" mean in this context.

12.2 Pattern-Based Software Design

The point of emphasis in this section should be the discussion presented in Subsections 12.2.2 and 12.2.3. Use additional examples if you feel that they will help your students better understand how to think in patterns and apply pattern-based design tasks.

12.3 Architectural Patterns

Spend some time considering example patterns from the domains discussed in this section. Many students have a bit of trouble absorbing the notion of an architectural pattern, so visits to one or more of the repositories noted in the sidebar can be helpful.

12.4 Component-Level Design Patterns

Be certain to discuss the difference between a reusable software component and a component-level design pattern. Use additional examples as time permits.

12.5 User Interface Design Patterns

There are probably more patterns for user interface design problems than for any other category. If you choose and time permits, you can expand upon one of the examples presented in this section and develop the appropriate solution all the way to the code level. It might also be a good idea to have your students present one or more interface design patterns (not discussed in SEPA) to the class so the breadth of solutions can be appreciated.

12.6 WebApp Design Patterns

Have your students visit one or more of the pattern repositories listed in the sidebar and present a representative WebApp pattern to the class.
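If you want a concrete, code-level illustration of the "three-part rule" (context, problem, solution) for Section 12.1, a classic pattern such as Observer works well in class. The sketch below is not taken from SEPA; the class names and the temperature-sensor scenario are assumptions made for the example.

```python
# A minimal, hypothetical illustration of a design pattern as a "three-part rule":
#   Context : an object's state changes and other objects must stay consistent with it.
#   Problem : the subject should not be hard-wired to the concrete objects that depend on it.
#   Solution: the Observer pattern -- dependents register with the subject and are notified.

class Subject:
    def __init__(self):
        self._observers = []          # registered dependents

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, *args):
        for observer in self._observers:
            observer.update(*args)    # each dependent decides how to react


class TemperatureSensor(Subject):
    def set_reading(self, celsius):
        self.reading = celsius
        self.notify(celsius)          # state change triggers notification


class ConsoleDisplay:
    def update(self, celsius):
        print(f"Current temperature: {celsius} C")


sensor = TemperatureSensor()
sensor.attach(ConsoleDisplay())
sensor.set_reading(21.5)              # prints the new reading via the observer
```

Walking through the "forces" on this example (loose coupling versus indirection, one subject versus many dependents) is a quick way to show why the pattern template asks for them explicitly.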
Chapter 13 WebApp Design

CHAPTER OVERVIEW AND COMMENTS

When most people think of "Web design," they envision the aesthetic layout of a WebApp. But this is only one part of design in Web engineering. The intent of this chapter is to introduce students to the other important components of WebApp design. Because students are so familiar with the Web, you should have little trouble getting a discussion of design started. Virtually everyone has an opinion on what constitutes a "good design," but few people understand the process through which a good design is achieved.

13.1 WebApp Design Quality

The quality requirements shown in Figure 13.1 are a good catalyst for a discussion of WebApp design quality. Each of the five attributes presented in the figure should be discussed at some length. Poll your students for their interpretation of each attribute and ask for examples of actual WebApps that exhibit (or don't exhibit) each attribute. Spend a bit of time on the quality checklist presented in the sidebar.

13.2 Design Goals

The design goals presented in this section should be discussed at some length. Again, examples of WebApps that meet (and don't meet) these goals will help to solidify understanding.

13.3 The WebE Design Pyramid

WebE design encompasses six design activities, and each of these is introduced later in this chapter. Spend a bit of time discussing Figure 13.2 as an introduction.

13.4 WebApp Interface Design

If you haven't already done so, have your students review Chapter 11. All of the information presented there is applicable to this discussion.

13.5 Aesthetic Design

Although this is an important topic, you may choose to cover it only peripherally if time is short. For major WebApps, aesthetic design is performed by graphic arts professionals, not Web engineers. However, for smaller WebApps, the Web engineer may have to do it all. I recommend spending time on this subject to sensitize your students to the importance of aesthetics. At a minimum, have them visit one or more of the Web sites noted in the sidebar entitled "Well-Designed Websites."

13.6 Content Design

Although the notion of a content object (Section 13.6.1) should be relatively easy for your students to grasp, be certain that you provide additional examples to solidify the concept.

13.7 Architecture Design

If you haven't already done so, have your students review Chapter 9. Much of the information presented there is applicable to this discussion. The content structures presented in Section 13.7.1 should be covered during lecture. Use actual WebApps to illustrate these structures. The MVC architecture (Section 13.7.2) has been discussed widely in the literature and should be emphasized during your discussion of architectural design. Use Figure 13.8 as a point of departure for your comments (a small code-level sketch of MVC appears at the end of this chapter's notes).

13.8 Navigation Design

Navigation design is pivotal to the success of a WebApp, and yet navigation for many Web sites sort of "just happens." Emphasize to your students that navigation design must be explicit. One way of accomplishing this is the use of NSUs (Section 13.8.1). This is a difficult concept for some students; therefore, it's worth spending some time in lecture being certain that the NSU concept is understood. Students will be quite familiar with navigation syntax (Section 13.8.2) and will undoubtedly have strong opinions on the best design approach in this area. Be certain to tie syntax to semantics, indicating how syntax can aid or hurt a user's understanding of navigation.

13.9 Component-Level Design

If you haven't already done so, have your students review Chapter 10. Much of the information presented there is applicable to this discussion.

13.10 Object-Oriented Hypermedia Design Method (OOHDM)

OOHDM is a reasonably comprehensive design method that is well worth covering if time permits. I present only an overview in SEPA, so if you intend to present this topic in some detail, you'll need supplementary materials and reading. See the SEPA Web site for OOHDM resources.
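For Section 13.7.2, some students grasp MVC faster when they see the separation of concerns in a few lines of code rather than in a diagram. The sketch below is a minimal, generic illustration (it is not SEPA's Figure 13.8); the class names and the task-list scenario are assumptions made for the example.

```python
# Minimal MVC sketch: the model holds state, the view renders it,
# and the controller translates user actions into model updates.

class TaskModel:                       # Model: application state, no UI knowledge
    def __init__(self):
        self.tasks = []

    def add(self, title):
        self.tasks.append(title)


class TaskView:                        # View: rendering only, no business logic
    def render(self, tasks):
        for i, title in enumerate(tasks, start=1):
            print(f"{i}. {title}")


class TaskController:                  # Controller: maps user input to model/view
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, title):
        self.model.add(title)                 # update state
        self.view.render(self.model.tasks)    # refresh the presentation


controller = TaskController(TaskModel(), TaskView())
controller.handle_add("Review Chapter 13 notes")
```

In a WebApp, the controller role is typically played by server-side request handlers and the view by rendered pages or templates, which is a useful point to raise when tying this sketch back to Figure 13.8.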
Part 3 Quality Management

Chapter 14 Quality Concepts

CHAPTER OVERVIEW AND COMMENTS

The intent of this chapter is to discuss quality within the context of software and software engineering, setting the stage for the chapters that present a variety of software verification and validation issues.

14.1 What is Quality?

Spend some time discussing the overall impact of software quality using real-world examples. Also, spend time discussing the indirect costs of quality, that is, the costs associated with customer dissatisfaction, increased support, and reduced internal morale. If time permits, you might have your students read excerpts from Crosby's Quality is Free or Pirsig's Zen and the Art of Motorcycle Maintenance. Each contains many useful insights on quality.

14.2 Software Quality

Be sure that you emphasize the definition presented in the first paragraph. Long-time SEPA adopters will note that this definition has evolved through the many editions of the book. This one reflects the modern (and, I think, most appropriate) view of software quality. The quality dimensions and factors presented in Sections 14.2.1 through 14.2.4 should be covered. Each represents a slightly different view of software quality, but all are valuable in their own right. Be sure that your students note that although these dimensions and factors were developed many years ago, they have stood the test of time and are as valid today as they were when they were developed.

14.3 The Software Quality Dilemma

The discussion of "good enough" software is very important in a modern context. You should spend some class time on it, making sure that your students understand the ramifications. Similarly, the cost of quality should be emphasized. Most undergraduates haven't given the issue much thought, and it's important to introduce them to the key facets of the problem. Spend time on each of the costs discussed in Section 14.3.2 and on the meaning of Figure 14.2. In our litigious world, the topic presented in Section 14.3.4 becomes increasingly important. If time permits, present a real-life example of a "software lawsuit."

14.4 Achieving Software Quality

This section serves as an introduction to other chapters in Part 3 of SEPA.

Chapter 15 Review Techniques

CHAPTER OVERVIEW AND COMMENTS

In this edition of SEPA, I've created a chapter dedicated to review techniques. I continue to believe that this important quality assurance approach is underutilized in the software world and that increased emphasis in the classroom is worthwhile. The notion of reviews as a "filter" is well received by most students and should be mentioned in your classroom discussions. You should note that filtering can be overdone, but that forgoing it altogether is not an option.

15.1 Cost Impact of Software Defects

Be sure to spend some time discussing the distinction between bugs, errors, and defects (sidebar).

15.2 Defect Amplification Model

In discussing the model, be sure to note the important differences that occur when reviews are used and when they are not. If time permits, you can assign a mini-project that has students build a web-based defect amplification model into which you can plug various values and examine the outputs (a small computational sketch appears after the Section 15.4 notes). It can then be used term after term to illustrate important concepts.

15.3 Review Metrics and Their Use

The metrics presented in this section are interesting, but can be skipped if time is tight. You should note in class, however, that the review process can and should be measured so that it can be improved over time.

15.4 Reviews: A Formality Spectrum

This section should be used to emphasize the notion that there is no "standard" approach to software reviews. You should also note that you get what you pay for. In almost every instance, the more time and effort (cost) spent on a review approach, the more errors are detected. This must be traded against project schedule, costs, and the psyche of the software team.
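For the mini-project suggested under Section 15.2, it helps to show students the arithmetic behind the defect amplification model before they build a web front end for it. The sketch below is one minimal interpretation of the model, assuming each step passes through defects from the previous step, amplifies a fraction of them, adds newly generated defects, and then removes a percentage of the total according to its detection efficiency; the sample numbers are invented for illustration.

```python
# Hedged sketch of a defect amplification calculation (illustrative numbers only).
# Each step: defects arriving from the previous step are partly passed through
# untouched and partly amplified, new defects are generated, and a review/test
# step removes a fraction given by its detection efficiency.

def amplify(incoming, passed_through, amplified, factor, newly_generated, efficiency):
    """Return the number of defects handed to the next step."""
    assert passed_through + amplified <= incoming or incoming == 0
    total = passed_through + amplified * factor + newly_generated
    return total * (1.0 - efficiency)

# Example: preliminary design -> detail design -> code, each with 50% review efficiency.
d1 = amplify(incoming=0,  passed_through=0,      amplified=0,      factor=1.0,
             newly_generated=10, efficiency=0.5)   # defects leaving preliminary design
d2 = amplify(incoming=d1, passed_through=d1 / 2,  amplified=d1 / 2, factor=1.5,
             newly_generated=25, efficiency=0.5)   # defects leaving detail design
d3 = amplify(incoming=d2, passed_through=d2 / 2,  amplified=d2 / 2, factor=3.0,
             newly_generated=25, efficiency=0.5)   # defects leaving coding
print(round(d1, 1), round(d2, 1), round(d3, 1))
```

Running the same numbers with every efficiency set to zero shows how quickly defects multiply when no reviews are conducted, which is exactly the comparison Section 15.2 asks students to notice.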
15.5 Informal Reviews

The topic to emphasize here is pair programming. If you covered pair programming in your presentation of XP (Chapter 3), I would review and emphasize that it can be adapted as a review technique regardless of whether XP is the process model that is chosen. Also emphasize that although it appears to be wasteful of time, it can actually save time due to a reduction in rework.

15.6 Formal Technical Reviews

The mechanics of conducting a formal technical review (FTR) are described in this section. Students should pay particular attention to the point that it is the work product that is being reviewed, not the producer. During lecture, you might want to do a bit of role-playing to emphasize the points made in this section. Be sure to discuss the guidelines presented in Section 15.6.3 and note that they tend to apply to all technical meetings, not just reviews. Encouraging the students to conduct formal reviews of their development projects is a good way to make this section more meaningful. Requiring students to generate review summary reports and issues lists also helps to reinforce the importance of the review activities. In addition to the review checklists contained within the SEPA Web site, I have also included a small sampler in the special section that follows.

Review Checklists

Formal technical reviews can be conducted during each step in the software engineering process. In this section, we present a brief checklist that can be used to assess products that are derived as part of software development. The checklists are not intended to be comprehensive, but rather to provide a point of departure for each review.

System Engineering. The system specification allocates function and performance to many system elements. Therefore, the system review involves many constituencies that may each focus on their own area of concern. Software engineering and hardware engineering groups focus on software and hardware allocation, respectively. Quality assurance assesses system-level validation requirements, and field service examines the requirements for diagnostics. Once all reviews are conducted, a larger review meeting, with representatives from each constituency, is conducted to ensure early communication of concerns. The following checklist covers some of the more important areas of concern.

1. Are major functions defined in a bounded and unambiguous fashion?
2. Are interfaces between system elements defined?
3. Have performance bounds been established for the system as a whole and for each element?
4. Are design constraints established for each element?
5. Has the best alternative been selected?
6. Is the solution technologically feasible?
7. Has a mechanism for system validation and verification been established?
8. Is there consistency among all system elements?

Software Project Planning. Software project planning develops estimates for resources, cost, and schedule based on the software allocation established as part of the system engineering activity. Like any estimation process, software project planning is inherently risky. The review of the Software Project Plan establishes the degree of risk. The following checklist is applicable.

1. Is software scope unambiguously defined and bounded?
2. Is terminology clear?
3. Are resources adequate for scope?
4. Are resources readily available?
5. Have risks in all important categories been defined?
6. Is a risk management plan in place?
7. Are tasks properly defined and sequenced? Is parallelism reasonable given available resources?
8. Is the basis for cost estimation reasonable? Has the cost estimate been developed using two independent methods?
9. Have historical productivity and quality data been used?
10. Have differences in estimates been reconciled?
11. Are pre-established budgets and deadlines realistic?
12. Is the schedule consistent?

Software Requirements Analysis. Reviews for software requirements analysis focus on traceability to system requirements and on the consistency and correctness of the analysis model. A number of FTRs are conducted for the requirements of a large system and may be augmented by reviews and evaluations of prototypes as well as customer meetings. The following topics are considered during FTRs for analysis:

1. Is information domain analysis complete, consistent, and accurate?
2. Is problem partitioning complete?
3. Are external and internal interfaces properly defined?
4. Does the data model properly reflect data objects, their attributes, and relationships?
5. Are all requirements traceable to the system level?
6. Has prototyping been conducted for the user/customer?
7. Is performance achievable within the constraints imposed by other system elements?
8. Are requirements consistent with schedule, resources, and budget?
9. Are validation criteria complete?

Software Design. Reviews for software design focus on data design, architectural design, and procedural design. In general, two types of design reviews are conducted. The preliminary design review assesses the translation of requirements into the design of data and architecture. The second review, often called a design walkthrough, concentrates on the procedural correctness of algorithms as they are implemented within program modules. The following checklists are useful for each review:

Preliminary design review

1. Are software requirements reflected in the software architecture?
2. Is effective modularity achieved? Are modules functionally independent?
3. Is the program architecture factored?
4. Are interfaces defined for modules and external system elements?
5. Is the data structure consistent with the information domain?
6. Is the data structure consistent with software requirements?
7. Has maintainability been considered?
8. Have quality factors (Section 17.1.1) been explicitly assessed?

Design walkthrough

1. Does the algorithm accomplish the desired function?
2. Is the algorithm logically correct?
3. Is the interface consistent with the architectural design?
4. Is the logical complexity reasonable?
5. Have error handling and "anti-bugging" been specified?
6. Are local data structures properly defined?
7. Are structured programming constructs used throughout?
8. Is the design detail amenable to the implementation language?
9. Are operating-system- or language-dependent features used?
10. Is compound or inverse logic used?
11. Has maintainability been considered?

Coding. Although coding is a mechanistic outgrowth of procedural design, errors can be introduced as the design is translated into a programming language. This is particularly true if the programming language does not directly support data and control structures represented in the design. A code walkthrough can be an effective means for uncovering these translation errors.
The checklist that follows assumes that a design walkthrough has been conducted and that algorithm correctness has been established as part of the design FTR.

1. Has the design been properly translated into code? [The results of the procedural design should be available during this review.]
2. Are there misspellings and typos?
3. Has proper use of language conventions been made?
4. Is there compliance with coding standards for language style, comments, and module prologues?
5. Are there incorrect or ambiguous comments?
6. Are data types and data declarations proper?
7. Are physical constants correct?
8. Have all items on the design walkthrough checklist been re-applied (as required)?

Software Testing. Software testing is a quality assurance activity in its own right. Therefore, it may seem odd to discuss reviews for testing. However, the completeness and effectiveness of testing can be dramatically improved by critically assessing any test plans and procedures that have been created. In the next two chapters, test case design techniques and testing strategies are discussed in detail.

Test plan

1. Have major test phases been properly identified and sequenced?
2. Has traceability to validation criteria/requirements been established as part of software requirements analysis?
3. Are major functions demonstrated early?
4. Is the test plan consistent with the overall project plan?
5. Has a test schedule been explicitly defined?
6. Are test resources and tools identified and available?
7. Has a test record-keeping mechanism been established?
8. Have test drivers and stubs been identified, and has work to develop them been scheduled?
9. Has stress testing for software been specified?

Test procedure

1. Have both white-box and black-box tests been specified?
2. Have all independent logic paths been tested?
3. Have test cases been identified and listed with expected results?
4. Is error handling to be tested?
5. Are boundary values to be tested?
6. Are timing and performance to be tested?
7. Has acceptable variation from expected results been specified?

In addition to the formal technical reviews and review checklists noted above, reviews (with corresponding checklists) can be conducted to assess the readiness of field service mechanisms for product software; to evaluate the completeness and effectiveness of training; to assess the quality of user and technical documentation; and to investigate the applicability and availability of software tools.

Maintenance. The review checklists for software development are equally valid for the software maintenance phase. In addition to all of the questions posed in the checklists, the following special considerations should be kept in mind:

1. Have side effects associated with change been considered?
2. Has the request for change been documented, evaluated, and approved?
3. Has the change, once made, been documented and reported to interested parties?
4. Have appropriate FTRs been conducted?
5. Has a final acceptance review been conducted to ensure that all software has been properly updated, tested, and replaced?

Chapter 16 Software Quality Assurance

CHAPTER OVERVIEW AND COMMENTS

This chapter provides an introduction to software quality management and software quality assurance (SQA). It is important to have the students understand that software quality work begins before the testing phase and continues after the software is delivered. The role of metrics in software management is reinforced in this chapter.
16.1 Background Issues

In addition to the historical discussion presented in this section, you might also mention that an important software quality assurance concept is the control of variation among products. Software engineers are concerned with controlling the variation in their processes, resource expenditures, and the quality attributes of the end products.

16.2 Elements of SQA

The definitions of many quality elements appear in this section. Students need to be familiar with these definitions, since their use in software quality work does not always match their use in casual conversation. Students also need to be made aware that customer and user satisfaction is every bit as important to modern quality work as quality of design and quality of conformance.

16.3 SQA Tasks, Goals, and Metrics

This section describes the activities performed by the SQA group: quality planning, oversight, record keeping, analysis, and reporting. SQA plans are discussed in more detail later in this chapter. Spend some time discussing the goals and attributes presented in Figure 16.1, along with the metrics that can be used to assess whether goals have been achieved.

16.4 Formal Approaches to SQA

This section introduces the concept of formal methods in software engineering. More comprehensive discussions of formal specification techniques and formal verification of software appear in Chapter 21.

16.5 Statistical Quality Assurance

A comprehensive discussion of statistical quality assurance is beyond the scope of a software engineering course. However, this section does contain a high-level description of the process and gives examples of metrics that might be used in this type of work. The key points to emphasize to students are that each defect needs to be traced to its cause and that the defect causes having the greatest impact on the success of the project must be addressed first (a short illustrative sketch appears at the end of this chapter's notes). Because Six Sigma is widely used in industry, you might spend some lecture time on it.

16.6 Software Reliability

It is important to have the students distinguish between software consistency (repeatability of results) and reliability (probability of failure-free operation for a specified time period). Students should be made aware of the arguments for and against applying hardware reliability theory to software (e.g., a key point is that, unlike hardware, software does not wear out, so failures are likely to be caused by design defects). It is also important for students to be able to make a distinction between software safety (identifying and assessing the impact of potential hazards) and software reliability.

16.7 The ISO 9000 Quality Standards

The ISO 9000 quality standard is discussed in this section as an example of a quality model that is based on assessing the quality of the individual processes used in the enterprise as a whole. ISO 9001:2000 is described as the quality assurance standard that contains 20 requirements that must be present in any software quality assurance system.

16.8 The SQA Plan

The major sections of an SQA plan are described in this section. It would be advisable to have students write an SQA plan for one of their own projects sometime during the course. This will be a difficult task for them. It may be advisable to wait until the students review the material in Chapters 17 through 22 (testing, SCM, and product metrics) before beginning this assignment.
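To make the "address the vital few causes first" point in Section 16.5 concrete, you can walk the class through a simple tally of defects by cause. The sketch below is a minimal illustration; the cause names echo the kinds of causes discussed in statistical SQA, but the defect log and counts are invented for the example.

```python
# Hedged sketch of the statistical SQA idea from Section 16.5:
# trace each logged defect to a cause, tally the causes, and attack the
# "vital few" causes that account for most of the defects first.
from collections import Counter

# Hypothetical defect log: (defect id, cause category)
defect_log = [
    (1, "incomplete or erroneous specification"),
    (2, "misinterpretation of customer communication"),
    (3, "incomplete or erroneous specification"),
    (4, "error in data representation"),
    (5, "incomplete or erroneous specification"),
    (6, "violation of programming standards"),
    (7, "error in data representation"),
]

tally = Counter(cause for _, cause in defect_log)
total = sum(tally.values())
for cause, count in tally.most_common():
    print(f"{cause:45s} {count:2d}  ({100 * count / total:.0f}%)")
# The top one or two rows are the causes to correct first.
```

Having students build the same tally from their own project's issue list is a quick way to turn the Pareto-style argument into something they can act on.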
Chapter 17 Software Testing Strategies

CHAPTER OVERVIEW AND COMMENTS

This chapter discusses a strategic approach to software testing that is applicable to most software development projects. The recommended process begins with unit testing, proceeds to integration testing, then validation testing, and finally system testing. You should emphasize the spiral—I believe it is a useful metaphor for the software engineering process and the relationship of testing steps to earlier definition and development activities. The majority of testing that most students have done has been ad hoc. Therefore, the key concept for students to grasp is that testing must be planned and assessed for quality like any other software engineering process. Students should use the Test Specification template from the SEPA Web site as part of their term project.

17.1 A Strategic Approach to Software Testing

This section describes testing as a generic process that is essential to developing high-quality software economically. It is important for students to understand the distinction between verification (building a product correctly) and validation (building the right product). It is also important for students to be aware that testing cannot replace attention to quality during early software engineering activities. The role of software testing groups in egoless software development is another important point to stress with students. Sections 17.1.3 and 17.1.4 introduce some of the similarities and differences between testing for traditional software and testing in an object-oriented environment. You might want to revisit the differences between traditional and OO components, because these differences have a strong bearing on testing strategy (and tactics). Section 17.1.5 discusses how to determine when testing has been completed, which is an important issue to consider if students buy into the argument that it is impossible to remove all bugs from a given program. This issue provides an opportunity to reconsider the role of metrics in project planning and software development.

17.2 Strategic Issues

Several testing issues are introduced in this section. Planning is described as being an important part of testing. Students may need assistance in learning to write testing objectives that cover all portions of their software projects. Formal technical reviews of test plans and testing results are discussed as means of providing oversight control to the testing process. Students should be encouraged to review each other's testing work products some time during the semester.

17.3 Test Strategies for Traditional Software

Testing for traditional software begins "in the small" and moves toward testing "in the large." Your students should understand the "big picture" reasons for this strategy. You might also discuss the "daily build and smoke test" strategy that is used by many software product builders and is often encountered in agile process models. Section 17.3.1 discusses unit testing from a strategic viewpoint. Both black-box and white-box testing techniques have roles in testing individual software modules. It is important to emphasize that the white-box techniques to be introduced in Chapter 18 are most advantageous during this testing step. Students need to be aware that testing module interfaces is also a part of unit testing. Students need to consider the overhead incurred in writing the drivers and stubs required by unit testing (a brief sketch follows these notes). This effort must be taken into account during the creation of the project schedule. This section also contains lists of common software errors. Students should be encouraged to keep these errors in mind when they design their test cases.
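The "overhead" of drivers and stubs in Section 17.3.1 is easier for students to appreciate once they have written one. The sketch below is a minimal, hypothetical example, assuming a module under test that computes an order total and depends on a tax service that is not yet available.

```python
# Hedged sketch of unit-test "overhead" software (Section 17.3.1):
# a STUB stands in for a module the unit depends on; a DRIVER exercises the unit.

class TaxServiceStub:
    """Stub: replaces the real (unavailable) tax lookup with a canned answer."""
    def rate_for(self, region):
        return 0.10                      # fixed 10% rate, good enough for this test


def order_total(items, tax_service, region):
    """Module under test: sums item prices and applies the regional tax rate."""
    subtotal = sum(price for _, price in items)
    return round(subtotal * (1 + tax_service.rate_for(region)), 2)


def driver():
    """Driver: feeds test data to the module and checks the expected result."""
    items = [("widget", 4.00), ("gadget", 6.00)]
    result = order_total(items, TaxServiceStub(), region="anywhere")
    assert result == 11.00, f"expected 11.00, got {result}"
    print("unit test passed")


driver()
```

Writing even these few lines takes time, which is why the chapter recommends accounting for driver and stub development in the project schedule and choosing an integration order that limits how many of them are needed.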
Section 17.3.2 focuses on integration testing issues. Integration testing often forms the heart of the test specification document. Don't be dogmatic about a "pure" top-down or bottom-up strategy. Rather, emphasize the need for an approach that is tied to a series of tests that (hopefully) uncover module interfacing problems. Be sure to discuss the importance of software drivers and stubs (as well as simulators and other test software), indicating that development of this "overhead" software takes time and can be partially avoided with a well-thought-out integration strategy. Regression testing is an essential part of the integration testing process. It is very easy to introduce new module interaction errors when adding new modules to a software product. It may be wise for students to review the role of coupling and cohesion in the development of high-quality software. Don't gloss over the need for thorough test planning during this step, even if your students won't have time to complete any test documentation as part of their term projects.

17.4 Test Strategies for Object-Oriented Software

This section clarifies the differences between OOT and conventional testing with regard to unit testing and integration testing. The key point about unit testing in an OO context is that the lowest testable unit should be the encapsulated class or object (not isolated operations), and all test cases should be written with this goal in mind. Given the absence of a hierarchical control structure in OO systems, integrating operations into classes one at a time is not an appropriate integration testing approach. Students should try writing an integration test plan for an OOD based on one of the three strategies described in this section (thread-based testing, use-based testing, and cluster testing). Similarly, students should try to write a plan for validating an OOD based on the use-case scenarios defined in the OOA model.

17.5 Test Strategies for WebApps

This section serves as an introduction to the more detailed discussion of WebApp testing presented in Chapter 20. If you choose, you can delay any discussion of WebApp testing until you cover Chapter 20, but if you skip that chapter, be sure to address this topic now.

17.6 Validation Testing

In this section validation testing is described as the last chance to catch program errors before delivery to the customer. Since the focus is on testing requirements that are apparent to the end users, students should regard successful validation testing as very important to system delivery. If the users are not happy with what they see, the developers often do not get paid. It is sometimes worthwhile to have students test each other's software for conformance to the explicitly stated software requirements. The key point to emphasize is traceability to requirements. In addition, the importance of alpha and beta testing (in product environments) should be stressed.

17.7 System Testing

System testing is described as involving people outside the software engineering group (since hardware engineers, system engineers, or network engineers are often involved). Several system tests are mentioned in this section (recovery, security, stress, and performance). Students should be familiar with each of them.
A thorough discussion of the problems associated with "finger pointing," possibly with excerpts from Tracy Kidder's outstanding book, The Soul of a New Machine, will provide your students with important insight.

17.8 The Art of Debugging

This section reviews the process of debugging a piece of software. Students may have seen this material in their programming courses. The debugging approaches might be illustrated by using each to track down bugs in real software as part of a class demonstration or laboratory exercise. Students need to get in the habit of examining the questions at the end of this section each time they remove a bug from their own programs. To emphasize how luck, intuition, and some innate aptitude contribute to successful debugging, conduct the following class experiment:

1. Hand out a 30–50 line module with one or more semantic errors purposely embedded in it.
2. Explain the function of the module and the symptom that the error produces.
3. Conduct a "race" to determine (a) error discovery time and (b) proposed correction time.
4. Collect timing results for the class; have each student submit his or her proposed correction and the clock time that was required to achieve it.
5. Develop a histogram of the response distribution.

It is extremely likely that you will find wide variation in the students' ability to debug the problem.

Chapter 18 Testing Conventional Applications

CHAPTER OVERVIEW AND COMMENTS

The intent of this chapter is to introduce a variety of black-box and white-box testing methods that can be used for conventional software. The vast majority of students will be unaware of even the simplest test case design method, viewing testing as an afterthought—something that has to be done after coding. Students should be encouraged to design and build test cases for their course projects, using several of the testing techniques presented here.

18.1 Software Testing Fundamentals

Students need to be encouraged to look at testing as an essential part of quality assurance work and a normal part of modern software engineering. Formal reviews by themselves cannot locate all software errors. Testing occurs late in the software development process and is the last chance to catch bugs prior to customer release. This section contains a software testability checklist that students should keep in mind while writing software and designing test cases. The toughest part of testing for students is understanding the necessity of being thorough, and yet recognizing that testing can never prove that a program is bug free.

18.2 Internal and External Views of Testing

This section discusses the differences between black-box and white-box testing. Another purpose of this section is to convince students that exhaustive testing is not possible (sidebar) for most real applications: there are too many logic paths and too many input data combinations (a quick back-of-the-envelope calculation follows the Section 18.3 notes). This means that the number of test cases processed is less important than the quality of the test cases used in software testing. The sections that follow discuss strategies that will help students design test cases that make both white-box and black-box testing feasible for large software systems.

18.3 White-Box Testing

This section makes the case that white-box testing is important, since there are many program defects (e.g., logic errors) that black-box testing cannot uncover. Students should be reminded that the goal of white-box testing is to exercise all program logic paths, check all loop execution constraints, and exercise internal data structure boundaries.
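If students are skeptical of the exhaustive-testing argument in Section 18.2, a minute of arithmetic settles it. The sketch below is a back-of-the-envelope illustration (the module size and loop bound are invented): with b independent two-way branches per pass and up to k passes through an enclosing loop, the number of distinct paths grows roughly as (2^b)^k.

```python
# Back-of-the-envelope path counting for the exhaustive-testing argument (Section 18.2).
# Assumptions (illustrative only): a loop body with `branches` independent two-way
# decisions, and the loop may execute anywhere from 0 to `max_iterations` times.

branches = 10                    # two-way decisions inside the loop body
max_iterations = 20              # upper bound on loop passes

paths_per_pass = 2 ** branches                              # ways through one pass
total_paths = sum(paths_per_pass ** k for k in range(max_iterations + 1))

print(f"paths through a single pass : {paths_per_pass:,}")
print(f"paths through the whole loop: about 10**{len(str(total_paths)) - 1}")
# Even at a million test executions per second, one test per path would take far
# longer than the age of the universe -- hence the selective techniques of
# Sections 18.4 through 18.6 rather than exhaustive path testing.
```

The exact numbers are not the point; the growth rate is, and students who run the snippet with their own guesses for the parameters arrive at the same conclusion.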
18.4 Basis Path Testing

This section describes basis path testing as an example of a white-box testing technique. Basis path testing is easiest for students to use if they construct a program flow graph first. However, students should understand that cyclomatic complexity can also be computed from the PDL representation of the program (or from the source code itself). Be sure your students understand what an "independent path" is and why there are a limited number of them (as opposed to an extremely large number of program paths). Students should be encouraged to use the basis path example as a model and construct a set of test cases for one of their own programs (a worked example appears after the Section 18.7 notes below). The term "graph matrix" is introduced in this section; students might have studied these as adjacency matrices in a discrete mathematics or data structures unit on graph theory. If your students are unfamiliar with graph theory, you may need to show them more examples of how to construct adjacency matrices with various types of graph edge weights.

18.5 Control Structure Testing

Basis path testing is one form of control structure testing. This section introduces three others (condition testing, data flow testing, and loop testing). The argument given for using these techniques is that they broaden the test coverage beyond what is possible using basis path testing alone. Showing students how to build truth tables may be beneficial to ensure thorough coverage by the test cases used in condition testing. Students may need to see an example of building test cases for data flow testing using a complete algorithm implemented in a familiar programming language. Similarly, students may benefit from seeing examples of building test cases for each of the loop types listed in Section 18.5.3. Students should be required to build a set of test cases to do control structure testing of one of their own programs sometime during the semester.

18.6 Black-Box Testing

The purpose of black-box testing is to devise a set of data inputs that fully exercise all functional requirements for a program. Students should be reminded that black-box testing is complementary to white-box testing. Both are necessary to test a program thoroughly. Several black-box testing techniques are introduced in this section (graph-based testing, equivalence partitioning, boundary value analysis, comparison testing, and orthogonal array testing). It is important to emphasize that in black-box testing the test designer has no knowledge of the algorithm's implementation. The test cases are designed directly from the requirement statements, supplemented by the test designer's knowledge of defects that are likely to be present in modules of the type being tested. It may be desirable to show students the process of building test cases from an actual program's requirements using several of these techniques. A worthwhile activity for students is devising test cases for another student's program from the software specification document without seeing the program source code.

18.7 Model-Based Testing

It's important to emphasize that tests that exercise the behavior of the software have a strong likelihood of uncovering errors that would not be found using other techniques. MBT exercises behavior.
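For Section 18.4, a small function makes the mechanics of basis path testing concrete. The code below is a hypothetical example written for these notes (it is not SEPA's flow-graph example): it shows how cyclomatic complexity bounds the number of independent paths and how each basis path maps to a test case.

```python
# Worked basis path example on a hypothetical module (Section 18.4).

def classify(score, bonus_eligible):
    """Return a grade category for a numeric score."""
    if score < 0 or score > 100:        # decision 1
        return "invalid"
    if bonus_eligible:                  # decision 2
        score = min(score + 5, 100)
    if score >= 60:                     # decision 3
        return "pass"
    return "fail"

# Treating each if statement as one predicate node, V(G) = 3 + 1 = 4
# (splitting the compound 'or' into its own node would give 5); either way,
# only a handful of independent paths are needed, one test case per path:
#   P1: invalid input        -> classify(150, False) == "invalid"
#   P2: bonus applied, pass  -> classify(58,  True)  == "pass"
#   P3: no bonus, pass       -> classify(75,  False) == "pass"
#   P4: no bonus, fail       -> classify(40,  False) == "fail"
assert classify(150, False) == "invalid"
assert classify(58, True) == "pass"
assert classify(75, False) == "pass"
assert classify(40, False) == "fail"
print("basis path test cases executed")
```

Asking students to draw the flow graph for this function first, then derive the same four paths from it, connects the graph-based presentation in the text to the code they actually write.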
18.8 Testing for Specialized Environments, Architectures, and Applications

This section briefly discusses several specialized testing situations (GUIs, client/server architectures, documentation and help facilities, and real-time systems). More extensive discussion of these testing situations appears elsewhere in the text or in the SEPA Web site resource links. Testing of Web applications is considered in Chapter 20.

18.9 Patterns for Software Testing

Like their counterparts in analysis and design, testing patterns describe situations that software testers may recognize as they approach the testing of some new or revised system. If time permits, ask your students to research testing patterns on the Web and present a pattern to the class.

Chapter 19 Testing Object-Oriented Applications

CHAPTER OVERVIEW AND COMMENTS

The intent of this chapter is to introduce a variety of black-box and white-box testing methods that can be used for object-oriented software. The vast majority of students will be unaware of even the simplest OO test case design method, viewing testing as an afterthought—something that has to be done after coding. Be sure that they understand that specific techniques can be applied to OO software but that the overall objective of testing remains unchanged.

19.1 Broadening the View of Testing

Because OO models and OO software are quite similar to one another, "testing" in the form of reviews can begin once models have been created. Discuss the benefits of "testing" at higher levels of abstraction and reintroduce the defect amplification model (Chapter 15) and its impact.

19.2 Testing OOA and OOD Models

If time permits, apply the steps discussed in Section 19.2.2 using actual CRC cards.

19.3 OO Testing Strategies

It's important to address the similarities and differences in the testing strategy for OO and conventional software. Be sure that your students understand some of the subtleties.

19.4 Object-Oriented Testing Methods

Test case design for OO software is directed more toward identifying collaboration and communication errors between objects than toward finding the processing errors involving inputs or data that are the focus of conventional software testing. Fault-based testing and scenario-based testing are complementary testing techniques that seem particularly well suited to OO test case design. White-box test case construction techniques are not well suited to OOT. Students should be encouraged to develop a set of test cases for an OO system of their own design. Students need to be reminded that inheritance does not excuse them from having to test operations obtained from superclasses (their context has changed). Similarly, operations redefined in subclasses will need to be tested in scenarios involving run-time resolution of the operation calls (polymorphism). Students should spend some time discussing the differences between testing the surface structure (end-user view) and the deep structure (implementation view) of an OO system.

19.5 Testing Methods Applicable at the Class Level

This section discusses the process of testing at the individual class level. Students should be reminded of the haphazard nature of random testing and be urged to consider using the three operation-partitioning techniques (state-based, attribute-based, and category-based) to improve the efficiency of their testing efforts (a small sketch follows these notes).
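For Section 19.5, partition testing at the class level is easier to grasp with a concrete class in front of the students. The Account class and the state-based partition below are a hypothetical illustration written for these notes, assuming operations are grouped by whether they change the object's state.

```python
# Hedged sketch of state-based partition testing at the class level (Section 19.5).
# Operations that change the object's state form one partition; operations that
# only observe state form another, and test sequences exercise each partition.

class Account:
    def __init__(self):
        self.balance = 0.0
        self.history = []

    # --- partition 1: operations that CHANGE state ---
    def deposit(self, amount):
        self.balance += amount
        self.history.append(("deposit", amount))

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.history.append(("withdraw", amount))

    # --- partition 2: operations that only OBSERVE state ---
    def current_balance(self):
        return self.balance

    def summarize(self):
        return len(self.history)


# A test sequence per partition: state-changing operations first, then observers,
# which should report the resulting state without altering it.
acct = Account()
acct.deposit(100.0)
acct.withdraw(40.0)
assert acct.current_balance() == 60.0
assert acct.summarize() == 2
assert acct.current_balance() == 60.0   # observers must not change state
print("class-level partition tests passed")
```

The same class can be reused when discussing attribute-based or category-based partitioning, which keeps the three techniques comparable for students.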
19.6 Interclass Test Case Design

This section discusses the task of interclass test case design. Two techniques for conducting interclass testing are described (multiple class testing and tests derived from behavioral models). Students might be encouraged to develop a set of test cases for a real system using each technique and see which they prefer.

Chapter 20 Testing Web Applications

CHAPTER OVERVIEW AND COMMENTS

Of all Web engineering tasks, testing is arguably the most important. At the same time, many practitioners have a relatively weak understanding of WebApp testing. As a result, WebApp testing is conducted poorly. That is a cause for concern. The intent of this chapter is to introduce the important elements of testing for WebApps. Both testing strategy and tactics are considered in this chapter.

20.1 Testing Concepts for WebApps

There has been much discussion of quality attributes and concepts throughout SEPA. I think it's a very important topic. However, students may begin to roll their eyes when the subject is introduced yet again in this section. If time permits, it would be worthwhile to tie the "dimensions of quality" presented in Section 20.1.1 to the other quality discussions presented earlier in the book. Where are the differences? Where are the similarities? Section 20.1.2 discusses the characteristics of WebApp errors. Emphasize to your students that they must understand the nature of WebApp errors before they can hope to design tests to uncover those errors. Be certain to emphasize the testing strategy discussed in Section 20.1.3 and the need to develop a test plan for any major WebApp.

20.2 The Testing Process—An Overview

Use Figure 20.1 as a point of departure for your discussion of the WebApp testing process. This section serves as a table of contents for the sections that follow.

20.3 Content Testing

Spend some time discussing the questions posed in Section 20.3.1. Ask your students what types of "tests" they would design to answer each of these questions. Database testing (Section 20.3.2) is an advanced topic and may be too specialized for inclusion in an introductory course. However, if time permits, an overview of the key issues is recommended. Use Figure 20.2 as a point of departure for your discussion.

20.4 User Interface Testing

The interface testing strategy presented in Section 20.4.1 should be emphasized during lecture. It is important to note that it is sometimes difficult to make a clear distinction between interface testing, usability testing, and even navigation testing. Interface testing attempts to find errors in the syntax or semantics of user interaction. Interface mechanics (syntax) are tested by examining the interface mechanisms discussed in Section 20.4.2. Interface semantics testing (Section 20.4.3) examines how well the interface achieves the required user functionality and features. Usability tests address the issues presented in Section 20.4.4. Use Figure 20.3 as a trigger for this discussion.

20.5 Component-Level Testing

Component-level testing uses techniques presented in Chapters 13 and 14. You may want to revisit black-box and white-box techniques as part of this discussion.

20.6 Navigation Testing

Like interface testing, navigation testing attempts to find errors in the syntax or semantics of navigation. Discuss each of the navigation mechanisms noted in Section 20.6.1 and ask your students how they might test each in a generic sense. The NSU (Chapter 13) is the driver for testing navigation semantics, a topic discussed in Section 20.6.2. Review the questions posed in this section (a small link-checking sketch follows these notes).
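One low-tech exercise that ties Sections 20.3 and 20.6 together is to have students automate a simple check for dead links, since a broken link is both a content error and a navigation-mechanism error. The sketch below is a minimal illustration using only the Python standard library; the starting URL is a placeholder.

```python
# Minimal dead-link checker (illustrative sketch for Sections 20.3 / 20.6).
# It fetches one page, extracts anchor targets, and reports links that fail.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    parser = LinkCollector()
    parser.feed(urlopen(page_url).read().decode("utf-8", errors="replace"))
    for link in parser.links:
        target = urljoin(page_url, link)                  # resolve relative links
        if not target.startswith(("http://", "https://")):
            continue                                      # skip mailto:, javascript:, etc.
        try:
            urlopen(target, timeout=5)
        except (HTTPError, URLError) as exc:
            print(f"BROKEN: {target} ({exc})")

# check_links("https://www.example.com/")    # placeholder URL
```

Real WebApp test tools do far more (forms, cookies, dynamic content, session state), but even this small exercise makes the idea of automating navigation-syntax checks concrete for students.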
20.7 Configuration Testing

It might be best to begin this discussion by considering the vagaries of an Internet-based client/server environment. The discussion presented in this section is fairly rudimentary, focusing on configuration compatibility issues on both the client and server sides. If your students have a solid background in client/server architectures, you might expand this discussion (as time permits) to cover more advanced topics.

20.8 Security Testing

I only present an overview of this topic in SEPA. It's likely that your students will find it fascinating, and if time permits, you might want to extend coverage a bit. If you intend to spend some time here, you'll need to supplement SEPA content with outside sources. See the SEPA Web site for resource recommendations.

20.9 Performance Testing

Performance testing addresses the questions posed in the introduction to Section 20.9.1. Specific testing methods—load testing and stress testing—are conducted to answer these questions. Be certain that your students understand the subtle difference between load and stress testing and the intent of each. If time permits, have your students investigate one or more of the testing tools suggested in the sidebar.

Chapter 21 Formal Modeling and Verification

CHAPTER OVERVIEW AND COMMENTS

The intent of this chapter is to provide an overview of two important (but not widely used) methods for formal program verification—cleanroom software engineering and formal methods. The late Harlan Mills (one of the true giants of the first half century of computing) suggested that software could be constructed in a way that eliminated all (or at least most) errors before delivery to a customer. He argued that proper specification, correctness proofs, and formal review mechanisms could replace haphazard testing and that, as a consequence, very high quality computer software could be built. His approach, called cleanroom software engineering, is the focus of this chapter. The cleanroom software engineering strategy introduces a radically different paradigm for software work. It emphasizes a special specification approach, formal design, correctness verification, "statistical" testing, and certification as the set of salient activities for software engineering. The intent of this chapter is to introduce the student to each of these activities.

This chapter also presents an introduction to the use of formal methods in software engineering. The focus of the discussion is on why formal methods allow software engineers to write better specifications than can be done using natural language. Students without previous exposure to set theory, logic, and proofs of correctness (found in a discrete mathematics course) will need more instruction on these topics than is contained in this chapter. The chapter contains several examples of specifications that are written using various levels of rigor. However, there is not sufficient detail for a student to learn the language (supplementary materials will be required).

21.1 The Cleanroom Strategy

This section introduces the key concepts of cleanroom software engineering and discusses its strengths and weaknesses. An outline of the basic cleanroom strategy is presented. Students will need some additional information on the use of box specifications and probability distributions before they can apply this strategy to their own projects.

21.2 Functional Specification

Functional specification using boxes is the focus of this section.
It is important for students to understand the differences between black boxes (specifications), state boxes (architectural designs), and clear boxes (component designs). Even if students have a weak understanding of program verification techniques, they should be able to write box specifications for their own projects using the notations shown in this section.

21.3 Cleanroom Design

If you plan to have your students verify their box specifications formally, you may need to show them some examples of the techniques used later in this chapter. The key to making verification accessible to students at this level is to have them write procedural designs using only structured programming constructs. This will considerably reduce the complexity of the logic required to complete the proof. It is important for students to have a chance to consider the advantages offered by formal verification over exhaustive unit testing that tries to identify defects after the fact.

21.4 Cleanroom Testing

This section provides an overview of statistical use testing and increment certification. It is important for students to understand that some type of empirical data needs to be collected to determine the probability distribution for the software usage pattern. The set of test cases created should reflect this probability distribution, and random samples of these test cases may then be used as part of the testing process. Some additional review of probability and sampling may be required. Students would benefit from seeing the process of developing usage test cases for a real software product. Developing usage test cases for their own projects will be difficult unless they have some means of acquiring projected usage pattern data. Certification is an important concept. Students should understand the differences among the certification models presented in this section as well.

21.5 Basic Concepts

This section discusses the benefits of using formal specification techniques and the weaknesses of informal specification techniques. Many of the concepts of formal specification are introduced (without mathematics) through the presentation of three examples showing how formal specifications would be written using natural language. It may be worthwhile to revisit these examples after students have completed the chapter and have them write these specifications using mathematical notation or a specification language (like OCL or Z). Note: if your students have not completed a good course in discrete mathematics, you may have to present a review of the mathematics needed for the remainder of the chapter. Constructive set specification writing is a very important concept for your students to understand, as is work with the predicate calculus and quantified logic. Formal proofs of set theory axioms and logic expressions are not necessary, unless you plan to have your students do correctness proofs for their specifications. Work with sequences may be less familiar to your students if they have not worked with files and lists at an abstract level.

21.6 Applying Mathematical Notation for Formal Specification

This section uses mathematical notation to refine the block handler specification from Section 21.5 (a brief notational sketch follows these notes). It may be desirable to refine the other specification examples from Section 21.5 using similar notation. If your students are comfortable with mathematical proofs, you may wish to present an informal correctness proof for these three specifications. It may also be desirable to have students write specifications for some of their own functions using notation similar to that used in this section.
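If students have never seen constructive set specification (Section 21.5), it helps to put a line or two of notation on the board before tackling the block handler. The fragment below is a board-level sketch in the spirit of Sections 21.5 and 21.6, not a reproduction of SEPA's specification; the names used, free, and BLOCKS, and the simplified two-set model, are assumptions made for the illustration.

```latex
% Constructive set specification: a signature and a constraining predicate.
%   "the even natural numbers smaller than 10"
\{\, n : \mathbb{N} \mid n < 10 \;\land\; n \bmod 2 = 0 \,\}

% A simplified block-handler-style state invariant over sets of storage blocks:
used \subseteq BLOCKS \qquad free \subseteq BLOCKS
used \cap free = \varnothing \qquad used \cup free = BLOCKS
```

Asking the class to say in English what each line rules out (a block that is both allocated and free, or a block the handler has lost track of) is a quick way to motivate the formal notation before moving on to OCL and Z.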
21.7 Formal Specification Languages

This section discusses the properties of formal specification languages from a theoretical perspective. The next two subsections use OCL and the Z specification language to rewrite the block handler specification more formally. You might have students try writing the specifications for their own functions using a pseudocode-style notation embellished with comments describing semantic information. Section 21.7.1 presents a brief overview of OCL syntax and semantics and then applies OCL to the block handler example (a small runtime-check analogy follows these notes). The intent is to give the student a feel for OCL without attempting to teach the language. If time and inclination permit, the material presented here can be supplemented with additional OCL information from the UML specification or other sources. Section 21.7.2 presents a brief overview of Z syntax and semantics and then applies Z to the block handler example. The intent is to give the student a feel for Z without attempting to teach the language. If time and inclination permit, the material presented here can be supplemented with additional Z information.
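Students who know a programming language but not OCL (Section 21.7.1) sometimes connect faster if they first see the same idea enforced as a runtime check: an OCL class invariant states declaratively what the assertions below check imperatively after each operation. This is a minimal sketch, assuming the simplified used/free model from the earlier fragment rather than SEPA's full block handler.

```python
# What an OCL-style class invariant expresses declaratively, written here as a
# runtime check (illustrative sketch; the simplified used/free model is assumed).

class BlockHandler:
    def __init__(self, all_blocks):
        self.all_blocks = set(all_blocks)
        self.free = set(all_blocks)
        self.used = set()

    def _check_invariant(self):
        assert self.used & self.free == set()            # no block both used and free
        assert self.used | self.free == self.all_blocks  # no block unaccounted for

    def allocate(self):
        block = self.free.pop()
        self.used.add(block)
        self._check_invariant()      # an OCL invariant would attach this to the class
        return block

    def release(self, block):
        self.used.remove(block)
        self.free.add(block)
        self._check_invariant()


handler = BlockHandler(range(5))
b = handler.allocate()
handler.release(b)
print("invariant held across allocate/release")
```

The contrast is the teaching point: the specification language states the property once, over all operations and all states, while the code above must remember to re-check it after every state change.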
A Detailed Example of the Z Language

To illustrate the practical use of a specification language, Spivey considers a real-time operating system kernel and represents some of its basic characteristics using the Z specification language [SPI88]. The remainder of this section has been adapted from his paper (with permission of the IEEE).

Embedded systems are commonly built around a small operating-system kernel that provides process-scheduling and interrupt-handling facilities. This article reports on a case study made using Z notation, a mathematical specification language, to specify the kernel for a diagnostic X-ray machine. Beginning with the documentation and source code of an existing implementation, a mathematical model, expressed in Z, was constructed of the states that the kernel could occupy and the events that could take it from one state to another. The goal was a precise specification that could be used as a basis for a new implementation on different hardware.

This case study in specification had a surprising by-product. In studying one of the kernel's operations, the potential for deadlock was discovered: the kernel would disable interrupts and enter a tight loop, vainly searching for a process ready to run. This flaw in the kernel's design was reflected directly in a mathematical property of its specification, demonstrating how formal techniques can help avoid design errors. This help should be especially welcome in embedded systems, which are notoriously difficult to test effectively. A conversation with the kernel designer later revealed that, for two reasons, the design error did not in fact endanger patients using the X-ray machine. Nevertheless, the error seriously affected the X-ray machine's robustness and reliability, because later enhancements to the controlling software might have exposed the deadlock problem that had been hidden before.

The specification presented in this article has been simplified by making less use of the schema calculus, a way of structuring Z specifications. This has made the specification a little longer and more repetitive, but perhaps a little easier to follow without knowledge of Z.

About the Kernel

The kernel supports both background processes and interrupt handlers. There may be several background processes, and one may be marked as current. This process runs whenever no interrupts are active, and it remains current until it explicitly releases the processor; the kernel may then select another process to be current. Each background process has a ready flag, and the kernel chooses the new current process from among those with a ready flag set to true. When interrupts are active, the kernel chooses the most urgent according to a numerical priority, and the interrupt handler for that priority runs. An interrupt may become active if it has a higher priority than those already active, and it becomes inactive again when its handler signals that it has finished. A background process may become an interrupt handler by registering itself as the handler for a certain priority.

Documentation

Figures 9.15 and 9.16 are diagrams from the existing kernel documentation, typical of the ones used to describe kernels like this. Figure 9.15 shows the kernel data structures. Figure 9.16 shows the states that a single process may occupy and the possible transitions between them, caused either by a kernel call from the process itself or by some other event. In a way, Figure 9.16 is a partial specification of the kernel as a set of finite-state machines, one for each process. However, it gives no explicit information about the interactions between processes—the very thing the kernel is required to manage. Also, it fails to show several possible states of a process. For example, the current background process may not be ready if it has set its own ready flag to false, but the state "current but not ready" is not shown in the diagram. Correcting this defect would require adding two more states and seven more transitions. This highlights another deficiency of state diagrams like this: their size tends to grow exponentially as system complexity increases.
