Online Chapter B – The Traditional Approach to Requirements

Table of Contents

- Chapter Overview
- Learning Objectives
- Notes on Opening Case and EOC Cases
- Instructor's Notes (for each section)
  - Key Terms
  - Lecture Notes
  - Quick Quizzes
- Classroom Activities
- Troubleshooting Tips
- Discussion Questions

Chapter Overview

Data flow diagrams (DFDs) are used in combination with use cases and the entity-relationship diagram (ERD) to model system requirements. DFDs model a system as a set of processes, data flows, external agents, and data stores. DFDs are relatively easy to read because they graphically represent key features of the system using a small set of symbols. Because there are many features to be represented, many types of DFDs are developed, including context diagrams, DFD fragments, subsystem DFDs, event-partitioned DFDs, and process decomposition DFDs.

Learning Objectives

After reading this chapter, the student should be able to:
- Explain how the traditional approach and the object-oriented approach differ when modeling the details of a use case
- List the components of a traditional system and the symbols representing them on a data flow diagram
- Describe how data flow diagrams can show the system at various levels of abstraction
- Develop data flow diagrams, data element definitions, data store definitions, and process descriptions
- Develop tables to show the distribution of processing and data access across system locations

Notes on Opening Case and EOC Cases

Opening Case

San Diego Periodicals: Following the Data Flows: The case describes a realistic interaction between a systems analyst and a user. It depicts the heart of the analysis task: eliciting user requirements and accurately representing them in analysis models. Key points to emphasize about the case include:
- Systems analysis requires communication with people. Success as an analyst depends heavily on interpersonal communication skills (unlike many other computer-related jobs).
- Developing an accurate model requires iterations of eliciting requirements, documenting requirements in models, and reviewing the models for accuracy. There is no one-step "cookbook" approach to developing complete and accurate analysis models.
- Part of the analyst's job is to train the users to read and understand the models. If users can't understand a model, they can't help the analyst identify its errors.
- Developing accurate models requires close attention to detail (note the filled notepad, the large number of corrections, and the "drained brains" of the participants).

It is worth mentioning to students that these points apply equally to documenting requirements with object-oriented models.

EOC Cases

Sandia Medical Devices: This is the same case that runs throughout the chapters of the printed text. Because this chapter focuses on structured methods and techniques for developing requirements, students are asked to develop appropriate DFDs for the case. The three problems require a context diagram, an event-partitioned diagram, and a DFD fragment. A use case diagram and a class diagram are provided as input.

Instructor's Notes

The Traditional and Object-Oriented Views of Activities and Use Cases

Key Terms: none

Lecture Notes

The traditional and object-oriented approaches to software development differ in how a system's response to an event is modeled and implemented. The traditional approach views data and processes as two separate and distinct things. Data is passive: it is stored in data stores (one per entity on the ERD) and passed back and forth between processes and external agents. Processes represent activities. The OO approach views data entities and their related activities as a single thing called a class or object. Objects are containers for both data and activities (processes), and they interact with one another by sending and responding to messages. Figure B-1 summarizes the differences between the traditional and OO approaches to systems. Figure B-2 summarizes the requirements models for the traditional and OO approaches.
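If students have some programming background, the contrast can be made concrete in code. The sketch below is a minimal illustration invented for these notes (the order data and names are hypothetical, not from the text): the traditional view keeps data passive and puts the activity in a separate process, while the OO view packages data and behavior together in one class.

```python
# Traditional view: data is passive; a separate process transforms it.
def compute_order_total(order: dict) -> float:
    """Process: transforms order data (input) into a total (output)."""
    return sum(item["qty"] * item["price"] for item in order["items"])

order = {"items": [{"qty": 2, "price": 10.0}]}   # passive data, as if read from a data store
print(compute_order_total(order))                # 20.0

# OO view: the same data and its related activity live together in one class.
class Order:
    def __init__(self, items: list[dict]):
        self.items = items                       # data held inside the object

    def total(self) -> float:                    # behavior attached to the data
        return sum(item["qty"] * item["price"] for item in self.items)

print(Order([{"qty": 2, "price": 10.0}]).total())  # objects respond to messages (method calls)
```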
Quick Quiz

Q: How is a system viewed with the traditional approach?
A: A system is viewed as a collection of processes performed by people and by computers.

Q: How is a system viewed with the object-oriented approach?
A: A system is viewed as a collection of interacting objects.

Q: What is involved (and modeled) with the traditional approach?
A: Processes, stored data, inputs, and outputs.

Q: What is involved (and modeled) with the object-oriented approach?
A: Models that show objects, their behavior, and their interactions with other objects.

Data Flow Diagrams

Key Terms

- data flow diagram (DFD) – a diagram that represents system requirements as processes, external agents, data flows, and data stores
- external agent – a person or organization, outside the system boundary, that supplies data inputs or accepts data outputs
- process – a symbol on a DFD that represents an algorithm or procedure by which data inputs are transformed into data outputs
- data flow – an arrow on a DFD that represents data movement among processes, data stores, and external agents
- data store – a place where data is held pending future access by one or more processes
- level of abstraction – any modeling technique that breaks the system into a hierarchical set of increasingly more detailed models
- context diagram – a DFD that summarizes all processing activity within the system in a single process symbol
- DFD fragment – a DFD that represents the system response to one event within a single process symbol
- event-partitioned system model (diagram 0) – a DFD that models system requirements by using a single process for each event in a system or subsystem
- physical DFD – a DFD that includes one or more assumptions about implementation technology
- logical DFD – a DFD developed under the assumption of perfect internal technology
- perfect internal technology – an assumption that includes such technology capabilities as instant processing and data retrieval, infinite storage and network capacity, and a complete absence of errors
- information overload – difficulty in understanding that occurs when a reader receives too much information at one time
- rule of 7 ± 2 (Miller's number) – the rule of model design that limits the number of model components or connections among components to no more than nine
- minimization of interfaces – a principle of model design that seeks simplicity by limiting the number of connections among model components
- balancing – equivalence of data content between the data flows entering and leaving a process and the data flows entering and leaving that process's decomposition DFD
- black hole – a process or data store with a data input that is never used to produce a data output
- miracle – a process or data store with a data element that is created out of nothing

Lecture Notes

A data flow diagram (DFD) represents the flow of data among internal processes and external agents.
Five symbols are used in a DFD (see Figure B-3):
- Process – a symbol that represents an algorithm or procedure by which data inputs are transformed into data outputs
- Data flow – an arrow that represents the movement of data among processes, data stores, and external agents
- External agent – a person or organization, outside the boundary of a system, that supplies data inputs or accepts data outputs
- Data store – a place where data is held pending future access by one or more processes
- Real-time link – a symbol on a data flow that represents data exchange between a process and an external agent while the process is executing

A DFD is a graphical representation of information from the event table (if you teach about event tables) and the ERD. The correspondence is summarized as follows (see Figure B-4):
- Source – an external agent
- Trigger – a data flow from an external agent to a process
- Activity – a process (the use case)
- Response – a data flow from a process to an external agent
- Destination – an external agent
- Data entity from the ERD – a data store

Data Flow Diagrams and Levels of Abstraction

A DFD can model a system, or part of a system, at various levels of detail. Different levels of detail are sometimes called levels of abstraction. Data flow diagrams can show either higher-level or lower-level views of the system. The high-level processes on one DFD can be decomposed into separate lower-level, detailed DFDs, and processes on the detailed DFDs can themselves be decomposed into additional diagrams, providing multiple levels of abstraction. A single DFD can represent either extreme or any level of abstraction in between. DFDs at different levels of abstraction have different names and are used for different purposes.

Context Diagrams: A context diagram describes an entire system at the highest level of abstraction (see Figure B-6). A context diagram is a DFD that summarizes all processing activity within the system in a single process symbol. Its primary role is to show the relationships (data flows) between the system and external agents. A context diagram is a useful tool for describing the scope of a system to an external party (for example, an IS steering committee member). Context diagrams are often created for a single subsystem, especially when the entire system is very large. Figure B-9 shows the RMO customer support system, and Figure B-11 shows the context diagram for the order-entry subsystem. All RMO examples in the rest of the chapter are drawn from the order-entry subsystem.

DFD Fragments: A DFD fragment (see Figure B-12) is a DFD that represents the system response to one event within a single process symbol; one DFD fragment is created for each event/use case in the event table. Figure B-12 shows five separate DFD fragments. DFD fragments are the most direct link between the entity-relationship diagram and the other parts of a traditional analysis model. They are "bite-sized" pieces of the analysis model that can be constructed, analyzed, validated, and dissected independently. The first pass at modeling a large system is often performed bottom up, with the DFD fragments being the bottom-most layer. A single DFD fragment can hide a great deal of complexity (for example, a process named Prepare Federal Income Tax Return!).
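The event-table correspondence above can also be shown as a small data structure. The sketch below is hypothetical (the process, agents, and entity names are invented for illustration) and simply records how one event-table row maps onto the parts of a DFD fragment:

```python
from dataclasses import dataclass, field

@dataclass
class DfdFragment:
    """One DFD fragment: a single process plus its surrounding flows."""
    process: str                                            # activity/use case from the event table
    sources: list[str] = field(default_factory=list)        # external agents supplying trigger flows
    destinations: list[str] = field(default_factory=list)   # external agents receiving response flows
    data_stores: list[str] = field(default_factory=list)    # data stores (entities from the ERD)

# Hypothetical event-table row mapped onto a fragment, following the Figure B-4 correspondence:
create_order = DfdFragment(
    process="Create new order",                       # activity -> process symbol
    sources=["Customer"],                             # source -> external agent (trigger flows in)
    destinations=["Customer"],                        # destination -> external agent (response flows out)
    data_stores=["Customer", "Order", "Inventory"],   # ERD entities touched by the process
)
print(create_order.process, create_order.data_stores)
```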
Just as the event-partitioned system model is decomposed into a set of DFD fragments, a single DFD fragment can, if necessary, be decomposed into one or more DFDs at lower levels of abstraction (that is, higher levels of detail). The simplest form of decomposition is a DFD that represents the details of a single DFD fragment. A hierarchical numbering scheme relates detailed DFDs to DFD fragments and to processes on an event-partitioned system model. For example, the detailed DFD of Process 5 on an event-partitioned system model is called Diagram 5, and the processes within Diagram 5 are numbered 5.1, 5.2, and so forth. If any process (for example, Process 5.3) requires further decomposition, the corresponding detailed DFD is called Diagram 5.3 and its processes are numbered 5.3.1, 5.3.2, and so forth. Decomposition can continue for any number of levels.

The Event-Partitioned System Model: An event-partitioned model (diagram 0) combines all of a system's or subsystem's DFD fragments on a single DFD (see Figure B-8). Each context diagram can have a corresponding event-partitioned model (see Figure B-13 and Figure B-11). Diagram 0 is primarily a presentation tool. Analysts often avoid creating diagram 0 because it can be very complex in large systems and because its information content is redundant with the DFD fragments.

RMO Data Flow Diagrams

Figure B-9 is a context diagram for the existing RMO customer support system. When a system responds to many events, it is commonly divided into subsystems, and a context diagram is created for each subsystem. Figure B-10 divides the RMO customer support system into subsystems based on use case similarities, including interactions with external agents, interactions with data stores, and similarities in required processing. Figure B-11 shows the context diagram for the order-entry subsystem. Figure B-12 shows the DFD fragments for the RMO order-entry subsystem; note that there are five DFD fragments, one for each order-entry subsystem use case listed in Figure B-10. Similarly, Figure B-13 shows the RMO order-entry subsystem diagram 0.

Decomposition to See One Activity's Detail: Some DFD fragments involve a lot of processing that the analyst needs to explore in more detail. As with any modeling step, further decomposition helps the analyst learn more about the requirements while also producing needed documentation. Figure B-14 shows an example of a more detailed diagram for CSS DFD fragment 2: Create new order.
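The hierarchical numbering scheme is mechanical, which a tiny helper can make explicit. This is an illustrative sketch for lecture use, not anything from the text:

```python
def child_process_ids(parent: str, count: int) -> list[str]:
    """Number the processes inside the decomposition diagram of `parent`.

    child_process_ids("5", 3)   -> ['5.1', '5.2', '5.3']   # processes on Diagram 5
    child_process_ids("5.3", 2) -> ['5.3.1', '5.3.2']      # processes on Diagram 5.3
    """
    return [f"{parent}.{i}" for i in range(1, count + 1)]
```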
Physical and Logical DFDs

A physical DFD models one particular implementation of a system (see Figure B-15). A logical DFD models the system without bias toward any particular implementation. When looking at a logical DFD, the reader shouldn't be able to tell whether the system is automated or manual, centralized or distributed, or how its pieces are distributed among locations, organizations, computer hardware, or programs. With respect to DFDs, the terms physical and logical describe two ends of a continuum. It is difficult to create a DFD that is completely logical: many assumptions about technology or other implementation details are subtle and difficult to identify or remove from a DFD. Specific things to look for when trying to identify physical DFD features include:
- Technology-specific processes
- Actor-specific process names
- Technology- or actor-specific process orders
- Redundant processes, data flows, and data stores

Physical DFDs are usually avoided during the analysis phase. By documenting system requirements in a logical way, the analyst leaves all design and implementation options open; creating physical DFDs during analysis limits thinking about technology and other implementation choices.

Perfect technology is a useful concept for separating physical and logical requirements. Perfect technology is a hypothetical technology (or set of technologies) that is error-free and has infinite processing speed and data transmission/storage capacity. If a requirement "disappears" when perfect implementation technology is assumed, then it is a physical (not a logical) requirement. Note that perfect technology is assumed only for things inside the system.

Evaluating DFD Quality

Once a first draft of a DFD is prepared, it must be evaluated for quality. (Quality control can also be performed while the DFD is being prepared.) DFD quality checks fall into two primary categories:
- Readability/intelligibility
- Logical consistency (or lack thereof)

Minimizing Complexity: Readability and intelligibility are a matter of information or cognitive overload. The analyst always walks a fine line between presenting too much information in a single chunk and having too many chunks to keep track of. The rule of seven plus or minus two (also known as Miller's number) and the principle of minimization of interfaces are useful but imperfect attempts to define the proper balance. Miller's number is a limit on the number of information chunks that a typical human can accurately "process" at one time. Applications of the rule of 7 ± 2 to DFDs include the following:
- A single DFD should have no more than 7 ± 2 processes.
- No more than 7 ± 2 data flows should enter or leave a process, data store, or data element on a single DFD.

Minimization of interfaces is related to Miller's number. The principle controls complexity by limiting the number of interfaces between DFD components; one application is to limit the number of data flows into and out of each process to 7 ± 2.

Ensuring Data Flow Consistency: Black holes and miracles are logical inconsistencies. A black hole (see Figure B-16) is a process or data store that data enters but never leaves. A miracle (see Figure B-17) is a data flow that leaves a process or data store without ever having entered it. Both conditions can be discovered by a straightforward (but tedious) application of the logical rules of DFD construction (for example, every outflow must have a corresponding inflow). Figure B-18 and Figure B-19 show other examples of unnecessary data and impossible data conditions.
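Because black holes, miracles, and the 7 ± 2 limit are rule-based, they can be checked mechanically. The sketch below is a hypothetical illustration (the process names and flows are invented) that assumes each process is recorded with its sets of inflows and outflows:

```python
# Hypothetical DFD recorded as: process name -> sets of inflow and outflow names.
dfd = {
    "Create new order":  {"in": {"new order"},        "out": {"order confirmation"}},
    "Archive old order": {"in": {"archive request"},  "out": set()},            # black hole
    "Produce report":    {"in": set(),                "out": {"sales report"}}, # miracle
}

for name, flows in dfd.items():
    if flows["in"] and not flows["out"]:
        print(f"Black hole: data enters '{name}' but never leaves")
    if flows["out"] and not flows["in"]:
        print(f"Miracle: '{name}' produces output from no input")
    if len(flows["in"]) + len(flows["out"]) > 9:   # rule of 7 +/- 2 on connecting flows
        print(f"Complexity: '{name}' has more than 9 data flows")
```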
Quick Quiz

Q: List at least three different types of data flow diagrams (DFDs).
A: Types of DFDs include context diagrams, event-partitioned system models, subsystem DFDs, diagram 0, DFD fragments, process decomposition DFDs, physical DFDs, and logical DFDs.

Q: Describe a physical data flow diagram (DFD).
A: A physical DFD is any DFD that shows the implementation specifics of one particular way of implementing a system.

Q: Describe a logical data flow diagram (DFD).
A: A logical DFD is any DFD that shows system requirements under the assumption of perfect technology.

Documentation of DFD Components

Key Terms

- structured English – a method of writing process specifications that combines structured programming techniques with narrative English
- decision table – a tabular representation of processing logic containing decision variables, decision variable values, and actions or formulas
- decision tree – a graphical description of process logic that uses lines organized like branches of a tree
- data flow definition – a textual description of a data flow's content and internal structure
- data dictionary – a repository for definitions of data flows, data elements, and data stores

Lecture Notes

This section describes techniques for further documenting the details of DFD components. Subsections cover methods for describing processes, data flows, and data stores.

Process Descriptions

The three primary techniques for documenting process logic and detail are:
- Process descriptions (structured English)
- Decision tables
- Decision trees

Regardless of which method is used, a process description must be specific enough to allow programs to be written, and it should fit on one page (otherwise, a process decomposition DFD is drawn). With each method, the analyst chooses the most appropriate presentation format by determining which is most compact, readable, and unambiguous. The best format varies from process to process, and the analyst may have to prepare process descriptions in multiple formats to determine which is most appropriate.

A structured English process description (see Figures B-20, B-21, and B-22) is similar to a program. It uses a precise subset of English to describe an algorithm or procedure. As with program source code, indentation is used to enhance readability and emphasize control structure. Structured English is well suited to processes containing many simple sequential steps and to those with relatively simple control structures (for example, a single if statement or loop).

Decision tables (see Figure B-23) and decision trees (see Figure B-24) represent process logic in tabular and graphical form, respectively. They are well suited to processes with complex decision logic and large numbers of decision variables or decision variable values, which they can represent in a relatively compact space.
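A short sketch can make the decision-table idea concrete. The example below is hypothetical (the shipping-charge conditions, values, and amounts are invented for illustration); each combination of decision variable values maps to exactly one action:

```python
# A decision table rendered in code for a hypothetical shipping-charge process.
# Equivalent structured English:
#   If order total >= 100 and order is rush:     charge 10.00
#   If order total >= 100 and order is not rush: charge 0.00
#   If order total < 100 and order is rush:      charge 20.00
#   If order total < 100 and order is not rush:  charge 10.00

SHIPPING_CHARGE = {
    # (total >= 100, rush order): action (here, a flat charge)
    (False, False): 10.00,
    (False, True):  20.00,
    (True,  False):  0.00,   # large orders ship free at standard speed
    (True,  True):  10.00,
}

def shipping_charge(total: float, rush: bool) -> float:
    """Look up the action for one combination of decision variable values."""
    return SHIPPING_CHARGE[(total >= 100, rush)]

print(shipping_charge(150.0, rush=True))   # 10.0
```

Reading the table this way mirrors how an analyst verifies that every combination of decision variable values has exactly one action.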
Data Flow Definitions

Data flow definitions describe the content and structure of data flows. The two most common formats are:
- Data element lists (see Figure B-26)
- Algebraic definitions (see Figures B-27 and B-28)

Data flows without a complex internal structure can be represented by simply listing their component data element names. Data flows with a complex internal structure (for example, embedded repeating groups) require a specific notation to represent that structure. The notation uses the following symbols:
- + to represent concatenation of data elements
- {} to represent a repeating group of data elements

For example, a hypothetical new-order data flow might be defined algebraically as: new_order = customer_id + order_date + {item_id + quantity}.

Data Store Definitions

Data store definitions are generally omitted because they are redundant with the ERD.

Data Element Definitions

Data elements can be defined by short textual descriptions (see Figure B-30). The description usually specifies the content type (for example, numeric or alphanumeric), the maximum length (if necessary), and the allowable values (if the data element contains coded data). The analyst should avoid excessively detailed data element definitions because defining appropriate format and content is usually a design and implementation decision.

DFD Summary

Together, DFDs, process descriptions, data definitions, and the ERD form an interlocking set of models (see Figure B-31). An ideal set of analysis models is mutually exclusive and collectively exhaustive: each analysis "fact" is represented by only one of the modeling techniques, and all relevant facts are (or can be) represented.

Quick Quiz

Q: Why might an analyst describe a structured process with a decision table or tree instead of structured English?
A: Decision tables and trees are used when they improve readability more effectively than structured English. This is often the case for processes with a large number of decision variables and relatively simple processing for each combination of decision variable values.

Locations and Communication through Networks

Key Terms

- location diagram – a diagram or map that identifies all the processing locations of a system
- activity-location matrix – a table that describes the relationship between processes and the locations in which they are performed
- activity-data matrix – a table that describes stored data entities, the locations from which they are accessed, and the nature of the accesses
- CRUD – acronym for create, read, update, and delete

Lecture Notes

Structured analysis techniques represent information about process and data storage locations using three different models:
- Location diagrams
- Activity-location matrices
- Activity-data matrices

The location diagram (see Figure B-32) is simply a map (or set of maps) that shows each location where data is stored or processed. The activity-location matrix (see Figure B-33) is a table that shows each location where an activity on the event table is performed. The activity-data matrix (see Figure B-34) shows how the data in each data store (entity) is used at each location. Possible uses are create (abbreviated C), read (R), update (U), and delete (D); the table is therefore sometimes called a CRUD matrix.
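Building an activity-data (CRUD) matrix is also easy to demonstrate in code. The sketch below is hypothetical (the locations, entities, and accesses are invented); it accumulates the access types observed for each location/entity pair:

```python
# Observations of (location, entity, access type), as an analyst might gather them.
accesses = [
    ("Warehouse",   "Order",    "R"),
    ("Warehouse",   "Order",    "U"),
    ("Call center", "Order",    "C"),
    ("Call center", "Customer", "R"),
]

# Each matrix cell holds the set of CRUD operations performed at that location on that entity.
matrix: dict[tuple[str, str], set[str]] = {}
for location, entity, access in accesses:
    matrix.setdefault((location, entity), set()).add(access)

for (location, entity), ops in sorted(matrix.items()):
    print(f"{location:12} {entity:10} {''.join(sorted(ops))}")
```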
Quick Quiz

Q: What is an activity-location matrix?
A: An activity-location matrix is a table that shows the location(s) where each processing activity is performed.

Q: What is an activity-data matrix?
A: An activity-data matrix is a table that shows each processing location and the items of stored data accessed from that location. The matrix is only loosely related to data flow diagrams.

Classroom Activities

Instructors can use the end-of-chapter cases and the Instructor's Manual discussion questions to reinforce the concepts and skills covered in this chapter. For this chapter, the best in-class activities are to work through the many examples provided in the chapter.

Troubleshooting Tips

For this chapter, the troubleshooting tips are interspersed throughout the chapter. See the Teaching Tips boxes.

Discussion Questions

1. Data Flow Diagrams

Data flow diagrams (DFDs) are graphical system models that show the main requirements for an information system in one diagram. An advantage of DFDs is that end users, management, and information systems workers can typically read and interpret a DFD with minimal training. Describe the training process for end users. Also describe the training process for management. When should training take place? Can end users and management be trained together?

Training end users and management on data flow diagrams requires a tailored approach so that each group can understand and use the diagrams in its own role. Here is a breakdown of the training process for each group:

Training process for end users:
1. Introduction to DFD concepts: Start with an overview of what DFDs are and why they matter for understanding system requirements and processes.
2. Explanation of symbols: Teach end users the meaning of the symbols used in DFDs (processes, data stores, data flows, and external agents), with examples illustrating each symbol's purpose.
3. Interpretation of DFDs: Demonstrate how to read and follow the flow of data within a system, highlighting the relationships between components and how they contribute to system functionality.
4. Use cases: Present real-life scenarios represented by DFDs and walk end users through analyzing them to understand system requirements and processes.
5. Hands-on practice: Provide opportunities to create and analyze DFDs using sample data or mock systems, applying the skills to situations relevant to the users' roles.

Training process for management:
1. Overview of DFDs: Provide a high-level overview of DFDs and their role in system analysis and design, emphasizing how they capture and communicate system requirements and processes.
2. Strategic importance: Explain how DFDs support decision making, resource allocation, and system planning, and how they can reveal opportunities for process improvement and optimization.
3. Review of system architecture: Discuss the system structure represented by the DFDs and how it aligns with organizational goals and objectives, addressing any questions about system functionality and performance.
4. Decision support: Show how DFDs inform decisions about system investments, upgrades, and enhancements, with examples of their use in strategic planning and resource allocation.
5. Integration with business processes: Illustrate how DFDs relate to existing business processes and workflows, so that management can see the impact of system changes on day-to-day operations and organizational efficiency.

Timing and integration of training:
- Training for both groups should ideally take place during the initial stages of system implementation or when significant changes are introduced to existing systems.
- Although end users and management have different training needs and perspectives, joint sessions or collaborative discussions can foster alignment and shared understanding between the two groups.
- Sessions should be scheduled at convenient times and tailored to participants' roles, responsibilities, and levels of expertise.

By providing targeted training for both groups, an organization empowers end users and management to interpret and use DFDs effectively in support of informed decision making, system planning, and process improvement.

2. DFD Summary

The components of a traditional analysis model (data flow diagrams, process definitions, entity-relationship diagrams, and data definitions) were developed in the 1970s and 1980s as part of the traditional structured analysis methodology. How well do the components fit together?
Some analysts augment the structured models with additional models "borrowed" from other methodologies. Why do analysts do this? Why not just revise the structured methodology to be consistent?

The components of a traditional analysis model provide a structured framework for understanding and documenting system requirements, but they do not always fit together seamlessly. Analysts often augment them with models borrowed from other methodologies rather than revising the structured methodology itself, for several reasons:
1. Evolving requirements: Systems and their requirements have become more complex and interconnected over time. The structured analysis methodology, effective in its day, may not fully address the needs of modern systems, so analysts supplement it to gain a more complete understanding of requirements.
2. Specialized needs: Different projects and domains pose unique challenges that the traditional structured models alone cannot adequately address. Analysts may borrow models such as object-oriented analysis for complex software systems or business process modeling for workflow optimization.
3. Integration of perspectives: Borrowing models lets analysts combine multiple viewpoints; for example, pairing structured analysis with user-centered design techniques can produce a more usable and intuitive system design.
4. Methodological flexibility: Rather than revising the structured methodology to accommodate every new requirement and practice, analysts maintain flexibility by borrowing models as needed, tailoring the approach to each project and adapting to changing circumstances.
5. Continuous improvement: No single methodology is perfect. By borrowing models from other methodologies, analysts can leverage the best practices and insights of a diverse range of approaches.

In summary, while the components of the traditional structured analysis model provide a solid foundation for understanding and documenting system requirements, analysts augment them with models from other methodologies to address evolving requirements and specialized needs, integrate multiple perspectives, maintain methodological flexibility, and continuously improve their analysis process. Instead of revising the structured methodology to be consistent, analysts prefer to leverage the strengths of several methodologies to create a more comprehensive and adaptable approach to systems analysis and design.