
Chapter 14 – Deploying the New System

Table of Contents
Chapter Overview
Learning Objectives
Notes on Opening Case and EOC Cases
Instructor's Notes (for each section)
Key Terms
Lecture Notes
Quick Quizzes
Classroom Activities
Troubleshooting Tips
Discussion Questions

Chapter Overview
In this chapter we address the activities of the fifth and sixth core processes. The fifth core process is "Build, test, and integrate system components." The sixth core process is "Complete system tests and deploy the solution." Although the activities of these two core processes frequently overlap, the basic difference is that the fifth core process is considered implementation, or building the system, and the sixth core process is considered deployment, or putting the system into production.

The first major section of this chapter addresses the many different types of testing that must be done on a new system. Unit testing and integration testing are considered implementation activities. Then, before deployment, final testing occurs, such as system tests, performance tests, and user acceptance tests. The division of whether this final testing is part of implementation or deployment is somewhat arbitrary.

The next section in the chapter covers the many activities that are required to actually deploy the system. A major focus of deployment is data conversion, or bringing the data from the old system to the new system. This is always a major effort. Other activities include user training, creating appropriate documentation, and configuring the production environment.

The third section discusses a few management topics that were not covered in Chapter 9. One primary consideration is exactly how to deploy the new system: direct cut over, run in parallel, or phase it in over a period of time. Other topics include how to decide on the order of development of the components and subsystems of the new system, how to configure the test environment with multiple versions of the system, and how to integrate new requests for system capabilities. Finally, the chapter includes a recap of RMO deployment decisions and directions.

Learning Objectives
After reading this chapter, the student should be able to:
Describe implementation and deployment activities
Describe the four types of software tests and explain how and why each is used
Describe several approaches to data conversion
List various approaches to system deployment and describe the advantages and disadvantages of each
Explain the importance of configuration management, change management, and source code control to the implementation, testing, and deployment of a system

Notes on Opening Case and EOC Cases
Opening Case
Tri-State Heating Oil: Juggling Priorities to Begin Operation: The setting for this case is a status meeting held late in a project that is behind schedule. The project schedule indicates that user training is supposed to start soon, but the system is not ready. Outside temporary staff have been scheduled to assist with daily work so that the permanent staff will have time to be trained. This situation illustrates the many activities and personnel that are involved in the development of a new system, and the impacts of delays and missed target dates. In addition to the testing and the training that are supposed to begin, other scheduled activities, such as new computer installation, data conversion, and network upgrades, are also impacted by schedule slips.
This is an interesting case in project management in the face of a schedule that has slipped.

EOC Cases
HudsonBanc Billing System Upgrade: In this case, two banks merge. Consolidation of operations necessitated the purchase and deployment of a common credit card billing system. A new system was installed and verified against a 10% random sample of customer accounts for two months. After the test, the new system was turned on and the old system turned off. Immediately, severe billing and processing problems occurred, requiring several months to correct. The student is asked to describe and classify the type of conversion this was, and to discuss how it could have been done better to avoid the problems that occurred.

Community Board of Realtors (running case): Community Board of Realtors is a professional organization that supports real estate offices and agents. In this chapter the new system, which utilizes Microsoft SQL Server, is replacing an older system, which uses the Microsoft Access DBMS. The student is asked to research what tools are available to assist in the migration of the data, and to develop a data conversion plan and test. The student is asked to describe criteria or activities to verify that the test was successful so that the data conversion can move ahead.

Spring Breaks 'R' Us Travel Services (SBRU) (running case): SBRU is an online travel service that books spring break trips to resorts for college students. The SBRU system is made up of four subsystems. In this chapter the student is asked to evaluate whether the subsystems are independent enough to be developed and deployed independently, and if so, in which order the subsystems should be deployed. The student is asked to develop a deployment plan, either for independent deployment or combined deployment.

On the Spot Courier Services (running case): On the Spot is a small, but growing, courier service that needs to track customers, package pickups, package deliveries, and delivery routes. The system comprises four subsystems that are being built one after the other (as indicated in the Chapter 8 solution). The student is asked to develop a test plan, using the types of tests described in the chapter, for each subsystem, and an overall integration and system test. In addition to the test plan, the student is also asked to develop a data conversion and deployment plan. Finally, as part of the deployment plan, the student is asked to include the purchase and installation of new equipment.

Sandia Medical Devices (running case): Sandia Medical Devices is a company that specializes in medical monitoring through remote, mobile telecommunication devices. In this chapter the student is asked to describe what types of integration and system testing will be required and how the testing should fit into the overall schedule. The student is also asked how to integrate user training into the project schedule for the new system. Finally, after the system has been deployed, it is projected that a new version of the system will need to be deployed for new mobile phone systems. The student is asked to develop an iteration plan for the development and deployment of this new version.

Instructor's Notes
Overview
Lecture Notes
Figure 14-1 illustrates the detailed activities of the fifth and sixth core processes. The first four core processes are the primary focus of this text, but additional processes and activities are needed to complete a system and put it into regular use.
The fact that we are covering two core processes in a single chapter doesn't mean that they are simple or unimportant. Rather, they are complex processes that you will learn about in detail by completing other courses and reading other books, as well as through on-the-job training and experience.

Testing
Key Terms
test case – a formal description of a starting state, one or more events to which the software must respond, and the expected response or ending state
test data – a set of starting states and events used to test a module, group of modules, or entire system
unit test – a test of an individual method, class, or component before it is integrated with other software
driver – a method or class developed for unit testing that simulates the behavior of a method that sends a message to the method being tested
stub – a method or class developed for unit testing that simulates the behavior of a method that hasn't yet been written
integration test – a test of the behavior of a group of methods, classes, or components
system test – an integration test of an entire system or independent subsystem
build and smoke test – a system test that is performed daily or several times a week
performance test or stress test – an integration and usability test that determines whether a system or subsystem can meet time-based performance criteria
response time – the desired or maximum allowable time limit for software response to a query or update
throughput – the desired or minimum number of queries and transactions that must be processed per minute or hour
user acceptance test – a system test performed to determine whether the system fulfills user requirements

Lecture Notes
Foundation
Testing is the process of examining a component, subsystem, or system to determine its operational characteristics and whether it contains any defects. To conduct a test, developers must have well-defined standards for functional and nonfunctional requirements. Developers can test software by reviewing its construction and composition or by designing and building the software, exercising its functions, and examining the results. If the results indicate a shortcoming or defect, developers cycle back through earlier implementation or deployment activities until the shortcoming is remedied or the defect is eliminated.

A test case is a formal description of the following:
A starting state
One or more events to which the software must respond
The expected response or ending state

The starting and ending states and the events are represented by a set of test data. Preparing test cases and test data is a tedious and time-consuming process. At the component and method levels, every instruction must be executed at least once, and ensuring that all instructions are executed during testing is a complex problem. Test types, their related core processes, and the defects they detect and operational characteristics they measure are summarized in Figure 14-2. Each type of testing is described in detail later in this section.

Unit Testing
Unit testing is the process of testing individual methods, classes, or components before they are integrated with other software. The goal of unit testing is to identify and fix as many errors as possible before modules are combined into larger software units, such as programs, classes, and subsystems. Errors become much more difficult and expensive to locate and fix when many units are combined. There are three primary characteristics of a unit test:
1. It is done in isolation.
2. The test data and the test are created and run by the programmer who wrote the code.
3. It is done quickly, without a large requirement for other resources.

The first characteristic is that the unit test be done in isolation. The objective of a unit test is to test a specific piece of software and to be able to easily identify the misbehaving code when an error occurs. If the unit is too large, or if it is not isolated, the error could come from some code or an interface other than the specific code being tested. Unit testing often requires the implementation of a driver and/or a stub. A stub is simply a dummy class or method that can be called but usually does nothing more than return an appropriately typed constant.

The second characteristic is that pieces of code are tested by the programmer who writes the code. This is the fastest and easiest approach to unit testing. The programmer writing the code can quickly generate test data and run several tests on the class or method. In addition, this places the responsibility for writing solid, clean code directly on the programmer.

Finally, unit testing should not require elaborate test cases or complex testing configurations. Depending on the environment and the language being used, there are also unit test generators that can be used.

Integration Testing
An integration test evaluates the behavior of a group of methods, classes, or components. The purpose of an integration test is to identify errors that weren't or couldn't be detected by unit testing. Such errors may result from a number of problems, including:
Interface incompatibility—For example, one method passes a parameter of the wrong data type to another method.
Parameter values—A method is passed or returns a value that was unexpected, such as a negative number for a price.
Unexpected state interactions—The states of two or more objects interact to cause complex failures.

The complexity of integration testing increases rapidly as the system grows. When integration testing reaches a level where multiple programmers are involved, several procedures must be put in place:
Building the component for integration test. Sometimes, particularly when the software is growing organically, this may be a natural result of adding new components.
Creating test data. Integration test data is more complex and usually requires coordination between programmers.
Conducting the integration test. Decisions must be made and assignments given about who will conduct the integration tests, how they are done, what resources are used, and how frequently they are executed.
Evaluating the results. Often this requires involvement by all the programmers.
Logging test results. An error log is usually kept at this point to ensure that errors are tracked and corrected. Figure 14-5 shows a sample error log. There are many commercially available error-tracking systems.

Remember that in an object-oriented system, functions are executed by a set of classes interacting together. This dynamic execution thread can make object-oriented testing rather complex. Some of the issues of OO testing include:
Methods can be (and usually are) called by many other methods, and the calling methods may be distributed across many classes.
Classes may inherit methods and state variables from other classes.
The specific method to be called is dynamically determined at run time based on the number and type of message parameters.
Objects can retain internal variable values (i.e., the object state) between calls. The response to two identical calls may be different due to state changes that result from the first call or occur between calls.
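To make the driver and stub ideas concrete for students before moving on to system-level tests, the following minimal sketch may help. The textbook does not prescribe a language, so Python's built-in unittest module is used here, and the OrderTotaler and TaxCalculatorStub classes and their values are hypothetical, not part of the chapter.

```python
import unittest

# Hypothetical class under test: computes an order total using a
# collaborating tax component that may not be written yet.
class OrderTotaler:
    def __init__(self, tax_calculator):
        self.tax_calculator = tax_calculator

    def total(self, subtotal):
        if subtotal < 0:
            raise ValueError("subtotal cannot be negative")
        return subtotal + self.tax_calculator.tax_for(subtotal)

# Stub: stands in for the unwritten tax component and simply returns
# an appropriately typed constant, keeping the unit test isolated.
class TaxCalculatorStub:
    def tax_for(self, subtotal):
        return 1.00

# The test class plays the role of the driver: it constructs the object,
# sends it messages, and checks the expected ending state.
class OrderTotalerUnitTest(unittest.TestCase):
    def test_total_adds_stubbed_tax(self):
        totaler = OrderTotaler(TaxCalculatorStub())
        self.assertAlmostEqual(totaler.total(10.00), 11.00)

    def test_negative_subtotal_is_rejected(self):
        totaler = OrderTotaler(TaxCalculatorStub())
        with self.assertRaises(ValueError):
            totaler.total(-5.00)

if __name__ == "__main__":
    unittest.main()
```

A useful in-class follow-up question is what changes when the real tax component replaces the stub; the errors that can appear at that point are exactly what integration tests are meant to catch.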
System, Performance, and Stress Testing
A system test is an integration test of the behavior of an entire system or independent subsystem. Integration testing is normally associated with the implementation core process, and system testing is normally associated with the deployment core process. In addition, there are various kinds of system tests that test various functional and nonfunctional aspects of the new system. Figure 14-6 illustrates the types of tests that can be included in system testing, including response time, stability, resource usage, throughput, speed, and, of course, all the business functions.

A build and smoke test is a system test that is typically performed daily or several times per week. The system is completely compiled and linked (built), and a battery of tests is executed to see whether anything malfunctions in an obvious way ("smokes"). Build and smoke tests are valuable because they provide rapid feedback on significant integration problems.

A performance test, also called a stress test, determines whether a system or subsystem can meet such time-based performance criteria as response time or throughput. Performance tests are complex because they can involve multiple programs, subsystems, computer systems, and network infrastructure. They require a large suite of test data to simulate system operation under normal or maximum load. Diagnosing and correcting performance test failures are also complex.

User Acceptance Testing
A user acceptance test (UAT) is a system test performed to determine whether the system fulfills user requirements. Acceptance testing may be performed near the end of the project, or it may be broken down into a series of tests conducted at the end of each iteration. The UAT is normally the final stage in testing the system. Although the primary focus is usually on the functional requirements, the nonfunctional requirements are often also verified. Sometimes the UAT is considered a formal milestone. At other times, it is simply an extension of integration testing. All too often, because the project is behind and the delivery date is fast approaching, the UAT is minimized or partially skipped. However, minimizing the UAT is always a mistake and a source of problems and disagreements.

Planning the UAT: The UAT should be included in the total project plan, which should indicate whether it will be included in specific iterations or have its own iterations toward the end of the project. Detailed plans for the UAT itself need to be developed early; there are important decisions and preparations that must be made throughout the project. Waiting until late in the project to plan the UAT causes serious difficulties and delays. The important point here is that as user stories are identified, as use cases are defined, and as nonfunctional requirements are documented, UAT test cases can be identified that will enable the verification of the specifications. Figure 14-7 illustrates a planning form for UAT.

Preparation and Pre-UAT Activities: The process of entering data into Figure 14-7 only identifies the potential test cases. The other part of the effort required is to develop the test data. Creating test data can be complex and require substantial resources. There are two primary types of test data: data entered by users and internal data residing in the database. The planning and creation of test data necessitate more detailed planning of the UAT. Another area of pre-UAT activity is to set up the test environment.
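One way to show students what "test cases plus test data" looks like in practice is to record each acceptance test case as data (starting state, event, expected response, echoing the chapter's definition) and run the cases through a single driver. This is only an illustrative sketch: the order-lookup function, case IDs, and values below are hypothetical, not taken from the textbook.

```python
# Hypothetical system function under acceptance test; in a real UAT this
# would be the deployed application exercised through its user interface.
def look_up_order_status(database, order_id):
    order = database.get(order_id)
    return order["status"] if order else "NOT FOUND"

# Each test case records the starting state, the event, and the
# expected response, mirroring the chapter's definition of a test case.
uat_cases = [
    {"id": "UAT-01",
     "starting_state": {1001: {"status": "SHIPPED"}},
     "event": ("look_up_order_status", 1001),
     "expected": "SHIPPED"},
    {"id": "UAT-02",
     "starting_state": {},                      # empty database
     "event": ("look_up_order_status", 9999),   # unknown order
     "expected": "NOT FOUND"},
]

# A simple driver runs every case and reports pass/fail, which can feed
# an error log like the one shown in Figure 14-5.
for case in uat_cases:
    _, order_id = case["event"]
    actual = look_up_order_status(case["starting_state"], order_id)
    result = "PASS" if actual == case["expected"] else "FAIL"
    print(f'{case["id"]}: expected {case["expected"]!r}, got {actual!r} -> {result}')
```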
Management and Execution of the UAT: One of the important decisions is who will participate in and who has responsibility for the UAT. Because the objective of the UAT is user acceptance, the system users have primary responsibility. Ideally, system users will take full responsibility for identifying test cases, creating test data, and carrying out the UAT.

Quick Quiz
Q: What is the difference between integration testing and system testing?
A: Integration testing is done on a single subsystem and focuses on the communication between components. System testing is done on the complete system, or multiple subsystems, and focuses not just on integration but on other issues such as performance.
Q: What is the difference between system testing and user acceptance testing?
A: System testing does include the entire system, but focuses more on correctness of results. User acceptance testing does include correctness of results, but also includes usability testing and verifying that the system actually solves the business need.

Deployment Activities
Key Terms
system documentation – descriptions of system requirements, architecture, and construction details, as used by maintenance personnel and future developers
user documentation – descriptions of how to interact with and use the system, as used by end users and system operators

Lecture Notes
Figure 14-9 lists the six activities of the final core process, "Complete system tests and deploy the solution."

Converting and Initializing Data
A newly operational system often requires a fully populated database to support ongoing processing. Data needed at system startup can be obtained from these sources:
Files or databases of a system being replaced
Manual records
Files or databases from other systems in the organization
User feedback during normal system operation

Reusing Existing Databases: In the simplest form of data conversion, the old system's database is used directly by the new system with little or no change to the database structure. Although old databases are often reused in new or upgraded systems, some changes to database content are usually required. Typical changes include adding new tables, adding new attributes, and modifying existing tables or attributes.

Reloading Databases: More complex changes to database structure may require creating an entirely new database and copying and converting data from the old database to the new database. Whenever possible, utility programs supplied with the DBMS are used to copy and convert the data. This may include export and import programs and temporary files. In more complex conversions, implementation staff must develop programs to perform the conversion and transfer some or all of the data. In some instances the data exists only in paper format and must either be entered manually or scanned using some type of document reader. In some cases, it may be possible to begin system operation with a partially or completely empty database. Adding data as it is encountered reduces the complexity of data conversion, but at the expense of slower processing of initial transactions.
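Where the DBMS utilities are not enough and implementation staff must write their own conversion program, the logic usually follows the extract, transform, and load shape sketched below. This is a hypothetical illustration using Python's standard sqlite3 module with invented table structures; a real conversion would also need exception handling, logging, and field-level verification.

```python
import sqlite3

# Two in-memory databases stand in for the old and new systems.
old_db = sqlite3.connect(":memory:")
new_db = sqlite3.connect(":memory:")

# Old structure: a single name column. New structure: separate first and
# last name attributes plus a new loyalty_points attribute with a default.
old_db.execute("CREATE TABLE customer (id INTEGER, full_name TEXT, phone TEXT)")
old_db.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                   [(1, "Pat Smith", "555-0100"), (2, "Lee Jones", "555-0101")])

new_db.execute("""CREATE TABLE customer (
                      id INTEGER PRIMARY KEY,
                      first_name TEXT, last_name TEXT,
                      phone TEXT, loyalty_points INTEGER)""")

# Conversion routine: read each old row, transform it to fit the new
# structure, and load it into the new database.
for cust_id, full_name, phone in old_db.execute("SELECT id, full_name, phone FROM customer"):
    first, _, last = full_name.partition(" ")
    new_db.execute("INSERT INTO customer VALUES (?, ?, ?, ?, ?)",
                   (cust_id, first, last, phone, 0))
new_db.commit()

# A row-count check is a minimal verification that the conversion ran;
# real conversions also need field-level validation and an audit report.
old_count = old_db.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
new_count = new_db.execute("SELECT COUNT(*) FROM customer").fetchone()[0]
print(f"Converted {new_count} of {old_count} customer rows")
```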
Training Users
Training two classes of users—end users and system operators—is an essential part of any system deployment project. End users are the people who use the system from day to day to achieve the system's business purpose. System operators are the people who perform administrative functions and routine maintenance to keep the system operating. The nature of training varies with the target audience. Training for end users must emphasize hands-on use for specific business processes or functions. System operator training can be much less formal when the operators aren't end users.

Determining the best time to begin formal training can be difficult. On one hand, users can be trained as parts of the system are developed and tested, which ensures that they hit the ground running. On the other hand, starting early can be frustrating to users and trainers because the system may not be stable or complete.

Documentation and other training materials are usually developed before formal user training begins. Documentation can be loosely classified into two types—system documentation and user documentation.

System Documentation: System documentation serves one primary purpose: providing information to developers and other technical personnel who will build, maintain, and upgrade the system. Once the system has been developed, separate descriptions of it, such as written text and graphical models, are redundant with the system itself. A modern integrated development environment provides automated tools to support all SDLC core processes and documentation requirements. When the software is altered at a later date, the development and documentation tools can produce appropriate changes to the system documentation. Due to these capabilities, system documentation stays complete and in sync with the deployed system, thus simplifying future maintenance and upgrades.

User Documentation: User documentation for modern systems is almost always electronic and is usually an integral part of the application. Topics typically covered include:
Software startup and shutdown
Keystroke, mouse, or command sequences required to perform specific functions
Program functions required to implement specific business procedures (e.g., the steps followed to enter a new customer order)
Common errors and ways to correct them
Developing good user documentation requires special skills and considerable time and resources.

Configuring the Production Environment
Modern applications are built from software components based on interaction standards. Figure 14-14 shows a typical support infrastructure for an application deployed using Microsoft .NET. Developers work closely with the personnel who administer the existing infrastructure to plan support for the new system. In either case, this deployment activity typically starts early in the project so that software components can be developed, tested, and deployed as they are completed in later project iterations.

Quick Quiz
Q: What are the major sources of data that must be initialized when the new system is deployed?
A: Databases from previous systems, manual records, and files from other systems in the organization.
Q: What is the best way to develop and maintain system documentation?
A: Since the final program is the "official" repository of information about the system, system documentation is best maintained by tools that programmatically generate documentation.
Q: What is the best way to deploy user documentation?
A: User documentation is most useful if it is deployed and available electronically as part of the production system. Many systems provide "help" functions and online tutorials as part of the production system.
Managing Implementation, Testing, and Deployment
Key Terms
input, process, output (IPO) development order – a development order that implements input modules first, process modules next, and output modules last
top-down development – a development order that implements top-level modules first
bottom-up development – a development order that implements low-level detailed modules first
use-case-driven development – development based on a selection of use cases to implement during project iterations
source code control system (SCCS) – an automated tool for tracking source code files and controlling changes to those files
direct deployment or immediate cut over – a deployment method that installs a new system, quickly makes it operational, and immediately turns off any overlapping systems
parallel deployment – a deployment method that operates the old and the new systems for an extended time period
phased deployment – a deployment method that installs a new system and makes it operational in a series of steps or phases
support activities – the activities in the support phase whose objective is to maintain and enhance the system after it is installed and in use
alpha version – a test version that is incomplete but ready for some level of rigorous integration or usability testing
beta version – a test version that is stable enough to be tested by end users over an extended period of time
production version, release version, or production release – a system version that is formally distributed to users or made operational for long-term use
maintenance release – a system update that provides bug fixes and small changes to existing features
production system – the version of the system used from day to day
test system – a copy of the production system that is modified to test changes

Lecture Notes
The previous sections have discussed the implementation, testing, and deployment activities in isolation. In an iterative development project, activities from all core processes are integrated into each iteration, and the system is analyzed, designed, implemented, and deployed incrementally. But how does the project manager decide which portions of the system will be worked on in early iterations and which in later iterations? And how does he or she manage the complexity of so many models, components, and tests?

Development Order
One of the most basic decisions to be made about developing a system is the order in which software components will be built or acquired, tested, and deployed. Some of the factors, in addition to the software requirements discussed in earlier chapters, include the need to validate requirements and design decisions and the need to minimize project risk by resolving technical and other risks as early as possible. There are several ways to decide how to develop software, including input, process, output (IPO); top-down; and bottom-up development order.

Input, Process, Output Development Order: The input, process, output (IPO) development order is based on data flow through a system or program. The key issue to analyze is dependency—that is, which classes and methods capture or generate data that are needed by other classes or methods? Dependency information is documented in package diagrams and may also be documented in a class diagram. The chief advantage of the IPO development order is that it simplifies testing. Because input programs and modules are developed first, they can be used to enter test data for process and output programs and modules.
The IPO development order is also advantageous because important user interfaces (e.g., data entry routines) are developed early. A disadvantage of the IPO development order is the late implementation of outputs.

Top-Down and Bottom-Up Development Order: The terms top-down and bottom-up have their roots in traditional structured design and structured programming. Top-down and bottom-up development can also be applied to object-oriented designs and programs, although the visual analogy isn't as obvious with object-oriented diagrams. The key issue is method dependency—that is, which methods call which other methods. Within an object-oriented subsystem or class, method dependency can be examined in terms of navigation visibility, as discussed in Chapters 10 and 11. Method dependency is also documented in a sequence diagram. The primary advantage of top-down development is that there is always a working version of a program.

Use-Case-Driven Development: IPO, top-down, and bottom-up development are only starting points for creating implementation and iteration plans. Other factors that must be considered include use-case-driven development, user feedback, training, documentation, and testing. Use cases deserve special attention in determining development order because they are one of the primary bases for dividing a development project into iterations. In most projects, developers choose a set of related use cases for a single iteration and complete the analysis, design, implementation, and deployment activities for them. Use-case-driven development occurs when developers choose which use cases to focus on first based on such factors as minimizing project risk, efficiently using nontechnical staff, or deploying some parts of the system earlier than others.

Source Code Control
A source code control system (SCCS) is an automated tool for tracking source code files and controlling changes to those files. An SCCS stores project source code files in a repository and acts the way a librarian would—that is, it implements check-in and check-out procedures, tracks which programmer has which files, and ensures that only authorized users have access to the repository. Source code control is an absolute necessity when programs are developed by multiple programmers. The repository also serves as a common facility for backup and recovery operations.
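Real tools such as Git or Subversion provide this librarian role (and far more). Purely as a conceptual sketch, and not a description of any real tool, the hypothetical Python class below shows the check-out/check-in bookkeeping the chapter describes.

```python
# Conceptual sketch of the "librarian" role of a source code control
# system: tracking which programmer has checked out which file.
class SimpleRepository:
    def __init__(self, authorized_users):
        self.authorized_users = set(authorized_users)
        self.checked_out = {}           # file name -> programmer

    def check_out(self, filename, programmer):
        if programmer not in self.authorized_users:
            raise PermissionError(f"{programmer} is not authorized")
        if filename in self.checked_out:
            raise RuntimeError(f"{filename} already checked out by "
                               f"{self.checked_out[filename]}")
        self.checked_out[filename] = programmer

    def check_in(self, filename, programmer):
        if self.checked_out.get(filename) != programmer:
            raise RuntimeError(f"{programmer} does not have {filename} checked out")
        del self.checked_out[filename]
        # A real SCCS would also store the new contents as a new revision.

repo = SimpleRepository(["maria", "sanjay"])
repo.check_out("OrderTotaler.py", "maria")
repo.check_in("OrderTotaler.py", "maria")
print("check-out/check-in cycle completed")
```

A useful discussion point is that most modern teams rely on branching and merging rather than strict check-out locks, trading lock contention for merge conflicts.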
Packaging, Installing, and Deploying Components
Important issues to consider when planning deployment include the following:
Incurring the costs of operating both systems in parallel
Detecting and correcting errors in the new system
Potentially disrupting the company and IS operations
Training personnel and familiarizing customers with new procedures

Different approaches to deployment represent different trade-offs among cost, complexity, and risk. The most commonly used deployment approaches are:
Direct deployment
Parallel deployment
Phased deployment

Direct Deployment: In a direct deployment, the new system is installed and quickly made operational, and any overlapping systems are then turned off. Direct deployment is also sometimes called immediate cut over. Both systems are concurrently operated for only a brief time (typically a few days or weeks) while the new system is being installed and tested. The primary advantage of direct deployment is its simplicity. Figure 14-7 illustrates a timeline for direct deployment.

Parallel Deployment: In a parallel deployment, the old and new systems are operated together for an extended period of time (typically weeks or months). Ideally, the old system continues to operate until the new system has been thoroughly tested and determined to be error-free and ready to operate independently. The primary advantage of parallel deployment is relatively low operational risk. The primary disadvantage of parallel deployment is cost, such as hiring temporary personnel or incurring substantial overtime, double computing requirements, and increased management expenses. Parallel operation is generally best when the consequences of a system failure are severe. Full parallel operation may be impractical for any number of reasons. When full parallel operation isn't possible or feasible, a partial parallel operation may be employed instead. Figure 14-18 shows a parallel deployment timeline.

Phased Deployment: In a phased deployment, the system is deployed in a series of steps or phases. Each phase adds components or functions to the operational system. During each phase, the system is tested to ensure that it is ready for the next phase. Phased deployment can be combined with parallel deployment. The primary advantage of phased deployment is reduced risk, because failure of a single phase is less problematic than failure of an entire system. The primary disadvantage of phased deployment is increased complexity. Figure 14-19 illustrates a phased deployment with direct and parallel deployment of individual phases.

Support Activities after Deployment
The objective of the support activities is to keep the system running productively during the years following its initial deployment. During the support activities, upgrades or enhancements may be carried out to expand the system's capabilities, and these will require their own development projects. Three major activities occur during support:
Maintaining the system
Enhancing the system
Supporting the users
Most newly hired programmer analysts begin their careers working on system maintenance projects. Tasks typically include changing the information provided in a report, adding an attribute to a table in a database, or changing the design of Windows or browser forms.

Change and Version Control
Medium- and large-scale systems are complex and constantly changing. Change and version control tools and processes handle the complexity associated with testing and supporting a system through multiple versions. The tools and processes are typically incorporated into implementation activities from the beginning and continue throughout the life of a system.

Versioning: Complex systems are developed, installed, and maintained in a series of versions to simplify testing, deployment, and support. It isn't unusual to have multiple versions of a system deployed to end users and yet more versions in different stages of development. An alpha version is a test version that is incomplete but ready for some level of rigorous integration or usability testing. A beta version is a test version that is stable enough to be tested by end users over an extended period of time. A system version created for long-term release to users is called a production version, release version, or production release. A production version is considered a final product, although software systems are rarely "finished" in the usual sense of that term. Minor production releases (sometimes called maintenance releases) provide bug fixes and small changes to existing features.

Keeping track of versions is complex. Each version needs to be uniquely identified for developers, testers, and users. Controlling multiple versions of the same system requires sophisticated version control software, which is often built into development tools or can be obtained through a separate source code and version control system. Modifications are saved under a new version number to protect the accuracy of the historical snapshot.
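The chapter does not prescribe a numbering scheme, but a common convention is major.minor.patch with an optional alpha or beta tag. The hypothetical sketch below shows how such identifiers can be parsed and ordered so alpha, beta, production, and maintenance releases remain unambiguous.

```python
# One common identification scheme (not prescribed by the chapter) is
# major.minor.patch with an optional pre-release tag such as "alpha" or
# "beta". The helper below parses and orders such identifiers so test
# and maintenance releases can be tracked unambiguously.
def parse_version(version):
    core, _, prerelease = version.partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    # A tagged pre-release (alpha/beta) sorts before the final release.
    rank = {"alpha": 0, "beta": 1}.get(prerelease, 2)
    return (major, minor, patch, rank)

releases = ["2.0.0-alpha", "1.4.2", "2.0.0", "2.0.1", "2.0.0-beta", "1.4.10"]
for version in sorted(releases, key=parse_version):
    print(version)
# Prints 1.4.2, 1.4.10, 2.0.0-alpha, 2.0.0-beta, 2.0.0, 2.0.1;
# maintenance release 2.0.1 sorts after production release 2.0.0.
```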
Submitting Error Reports and Change Requests: To manage the risks associated with change, most organizations adopt formal control procedures for all systems under development and in operation. Similar tools can be used to report and manage errors and requests for new features in operational systems. Typical change control procedures include these:
Standard reporting methods
Review of requests by a project manager or change control committee
For operational systems, extensive planning for design and implementation

Implementing a Change: Change implementation follows a miniature version of the SDLC. In essence, a change for a maintenance release is an incremental development project in which the user and technical requirements are fully known in advance. Whenever possible, changes are implemented and tested on a copy of the operational system. The production system is the version of the system used day to day. The test system is a copy of the production system that is modified to test changes.

Quick Quiz
Q: What does IPO stand for, and what does IPO development order mean?
A: IPO = input, process, output. It means the order of developing the system is to develop the portions of the system that accept input first, then the parts that do the processing, then the parts that produce outputs.
Q: Iterative development is often considered risk based. What does that mean with regard to development order?
A: Develop the higher-risk portions of the system first, in the early iterations.
Q: What does SCCS mean, and why is it important?
A: SCCS stands for source code control system, and it is used to keep track of all of the classes or modules that are being programmed. It is used when multiple programmers are working on the same system, to protect the programs so that changes are not being made to the same programs at the same time.
Q: What is the difference between parallel deployment and phased deployment?
A: Parallel deployment is when the old system and the new system are both being used at the same time. Phased deployment is when parts of the new system are put into production; it is deployed a subsystem at a time.
Q: Which deployment method is the least expensive (out-of-pocket costs)?
A: Direct deployment.
Q: Why are parallel and phased deployment used if they are more expensive?
A: To reduce the risk of failure or problems.
Q: What is versioning and why is it used?
A: Versioning is where a system is enhanced and changed by making a new version. It is used because, for most widely used systems, there will be people who remain on an old version and other users who migrate to the new version. Versioning is a way to allow this flexibility in the use of a system.

Putting It All Together – RMO Revisited
Key Terms
none
Lecture Notes
Upgrade or Replace?
It was decided that an upgrade would be too complex and insufficient for the following reasons:
The current infrastructure is near capacity.
RMO expects to save money by having an external vendor host the CSMS.
Existing CSS programs and Web interfaces are a hodgepodge developed over 15 years.
Current system software is several versions out of date.
Infrastructure that supports the current CSS can be repurposed to expand SCM capacity.

Phased Deployment to Minimize Risk
To minimize deployment risks, the CSMS will be deployed in two versions. Version 1.0 will re-implement most of the existing CSS use cases with minimal changes. Version 2.0 will incorporate bug fixes and incremental improvements to version 1.0 and will add functionality not present in the CSS, including social networking, feedback/recommendations, business partners, and Mountain Bucks. The old system will also be maintained for a period of time after the new version 1.0 is deployed.

Database Development and Data Conversion
A new CSMS database will need to be built, and data will need to be migrated from the CSS database prior to deploying version 1.0. Database development and migration prior to version 1.0 deployment will occur over multiple iterations. All data in the production CSS database will be migrated to the CSMS database near the end of the fourth iteration. At the end of the fifth iteration, all CSS database changes since the last full migration will be copied to the CSMS database. To minimize risk, additional data conversion routines will copy new data from the CSMS database back to the CSS database twice per day during the fifth iteration. If CSMS version 1.0 passes all user acceptance tests during the fifth iteration, the CSS will be turned off and data migration will cease.

Development Order
The IPO development order is the primary basis for the development plan. By starting with a copy of the CSS database, a set of test data will exist from the first iteration, thus enabling the highest-risk use cases to be tackled first.

Documentation and Training
Training activities were spread throughout later project iterations for both production versions. Initial training exercises covered the highest-risk portions of the system prior to deployment.

Quick Quiz
Q: In summary, what is the main reason for building a new system for RMO?
A: It would be too complex to try to upgrade the current system because of the high risk of trying to get all the new technology into the old system.
Q: Why did RMO decide to use a phased deployment approach?
A: To protect itself against the risk of failure. The system is the lifeblood of the company; failure would be catastrophic.
Q: What approach did RMO use to build the new database?
A: It began building the new database as portions of the system were completed and deployed. Starting with iteration 4, the new production database began to hold production data.

Classroom Activities
This chapter is loaded with information, but the information is primarily about activities that occur after systems analysis and design. And since it comes toward the end of the course, there will be very little opportunity for students to practice the concepts explained herein. Probably the best in-class activities would be to have some real case histories to discuss. Either on-campus or off-campus development teams would be able to discuss approaches and problems with testing, data conversion, and installation. Another kind of interesting classroom activity would be to have students, usually in pairs or in their teams, develop a fairly detailed test plan for one of the end-of-chapter cases. Many students will have experience doing unit testing from their programming courses.
However, most will have very little experience with integration testing, system testing, and user acceptance testing. It is often interesting to compare students' solutions and to raise questions that were not considered. Such questions might include:
What test data is needed to do integration tests? System tests? Stress tests? User acceptance tests? Who creates this data? How much time will be required to create the tests?
What is required in a piece of test data? (Valid inputs, invalid inputs, and expected results of both.)
How are errors tracked? How are they assigned to be fixed? How do you schedule fixes? How much time is needed? What kind of re-testing is required? How do you handle ripple effects in a test plan and process?
Who conducts integration tests? System tests? Stress tests? User acceptance tests? Who needs to be involved?

Troubleshooting Tips
The topics in this chapter are fairly straightforward, and students should not have problems grasping the concepts. The difficulty with this material is that it really can only be internalized with personal experience, which is a problem because this course does not extend through programming and deployment. As mentioned above in Classroom Activities, one of the best ways to solidify this material is to see some real implementation plans from real projects in actual organizations.

Discussion Questions
1. Source Control and Versioning
It is not uncommon for a development team to build the next generation of a software product while a different support team focuses on maintaining the current version of the software. How do the teams monitor the modifications that are made to the software during a six-month development cycle? The support team could make a change to a module that is also being modified by the development team to add new functionality. How do managers and staff of the respective teams handle this situation?

Managing source control and versioning in a scenario where a development team is simultaneously working on the next generation of a software product while a support team maintains the current version requires careful coordination, communication, and effective use of version control systems. Here's how teams can monitor modifications and handle conflicts during a six-month development cycle:
1. Version Control System (VCS): Both the development and support teams should use a robust version control system such as Git, SVN, or Mercurial to manage their codebase. This allows them to track changes, collaborate effectively, and maintain a history of modifications.
2. Branching Strategy: Adopting a branching strategy is crucial for managing concurrent development and maintenance efforts. The teams can use different branches for next-generation development and current-version maintenance. For instance, the development team may work on feature branches, while the support team addresses bug fixes on a separate maintenance branch.
3. Communication and Collaboration: Regular communication between the development and support teams is essential to ensure awareness of ongoing changes and potential conflicts. Teams can hold periodic sync meetings, use project management tools, or maintain documentation to track modifications and dependencies.
4. Change Management Process: Establish a robust change management process to govern modifications made by both teams. This process should include guidelines for submitting, reviewing, and approving changes, as well as mechanisms for resolving conflicts and prioritizing tasks.
5. Code Reviews: Implement code review practices to ensure that changes made by both teams are thoroughly evaluated for quality, correctness, and adherence to coding standards. This helps identify potential conflicts or compatibility issues early in the development cycle.
6. Conflict Resolution: In cases where both teams make changes to the same module or functionality, managers and staff must collaborate to resolve conflicts effectively. This may involve prioritizing changes based on impact, coordinating merge efforts, or implementing feature toggles to enable or disable new functionality as needed.
7. Automated Testing: Implement automated testing practices to validate changes made by both teams and detect regressions or compatibility issues. Continuous integration (CI) pipelines can automatically run tests on each code commit, providing timely feedback to developers.
8. Documentation: Maintain comprehensive documentation of the software architecture, dependencies, and modification history. This helps teams understand the context of changes and facilitates knowledge transfer between team members.
9. Post-Release Monitoring: Continuously monitor the performance and stability of the current version of the software after modifications are deployed. This allows teams to identify and address any issues or regressions promptly.
By following these practices, managers and staff of both the development and support teams can effectively monitor modifications, collaborate, and manage conflicts during a six-month development cycle, ensuring the smooth evolution of the software product while maintaining stability and reliability for end users.

2. Installation Procedures
What startup and training procedures should occur during the first three to six months of a new system's implementation? How should training be handled for employees who are hired during the first three months? Installation of a new system can create stress in an organization. What steps should be taken to mitigate this impact?

During the first three to six months of a new system's implementation, several startup and training procedures should occur to ensure a smooth transition and effective utilization of the system by employees. Here's a comprehensive approach:
1. System Setup and Configuration: Ensure that the new system is properly installed, configured, and integrated with existing systems and workflows. This may involve working closely with the vendor or IT team to set up servers, databases, user accounts, and permissions.
2. Comprehensive Training Program: Develop a comprehensive training program tailored to the needs of different user groups (e.g., administrators, managers, frontline staff). Training sessions should cover various aspects of the system, including functionality, navigation, data entry, reporting, and troubleshooting. Provide hands-on training sessions, workshops, and online resources to accommodate different learning styles and preferences. Offer refresher training sessions and ongoing support to reinforce learning and address any questions or challenges that arise.
3. Onboarding for New Employees: Develop an onboarding process specifically for new employees hired during the first three months of the system implementation. This may include orientation sessions, access to training materials, mentorship programs, and buddy systems to facilitate knowledge transfer and integration into the team.
4. User Feedback Mechanism: Establish a user feedback mechanism to gather input from employees regarding their experience with the new system. Encourage open communication and constructive feedback to identify areas for improvement and address any usability issues or concerns.
5. Change Management and Communication: Implement a change management strategy to proactively address resistance to change and promote user adoption of the new system. Communicate the benefits of the system and how it aligns with organizational goals and objectives. Provide regular updates and progress reports to keep employees informed about the status of the implementation and any upcoming changes or enhancements.
6. Technical Support and Troubleshooting: Ensure that adequate technical support resources are available to assist employees with system-related issues and troubleshooting. Establish a help desk or support ticketing system to track and prioritize user requests effectively. Provide documentation, FAQs, and self-service resources to empower users to troubleshoot common issues independently.
7. Stress Mitigation Measures: Acknowledge the potential stress and disruption that the installation of a new system may cause within the organization. Mitigation measures include offering additional support and resources during the transition period, providing opportunities for employees to voice their concerns and receive reassurance, celebrating milestones and achievements throughout the implementation process to boost morale and motivation, and encouraging work-life balance and wellness initiatives to help employees cope with increased workload or uncertainty.
8. Continuous Improvement and Evaluation: Continuously evaluate the effectiveness of the training and implementation process through surveys, feedback sessions, and performance metrics. Use this feedback to identify areas for improvement and refine training programs and support resources accordingly.
By implementing these startup and training procedures and taking proactive steps to mitigate stress and resistance to change, organizations can facilitate a successful transition to a new system and ensure that employees are equipped with the knowledge and support they need to adapt and thrive.

3. Annual Payroll Revisions
Each tax year, the Internal Revenue Service (IRS) and the Department of Revenue make revisions to their respective tax codes that could impact payroll systems. Using a payroll system flowchart as a guideline, identify areas in your system flowchart that might need to be modified if the IRS were to change the tax rate percentages and the amount for each itemized deduction. If you implemented these changes on February 15 instead of January 1 (the start of the tax year), which additional installation steps would you need to consider?

If the IRS were to change the tax rate percentages and the amount for each itemized deduction, several areas in the payroll system flowchart might need to be modified:
1. Calculation of Withholdings: The calculation of federal income tax withholdings would need to be updated to reflect the new tax rate percentages. Any calculations based on itemized deductions, such as deductions for mortgage interest or charitable contributions, would need to be adjusted according to the revised amounts.
2. Employee Records: Employee records may need to be updated to ensure accurate tracking of tax-related information, including updated tax rates and deduction amounts.
3. Paycheck Generation: Paycheck generation processes would need to incorporate the revised withholding calculations to ensure accurate deductions from employee paychecks.
4. Reporting: Reporting mechanisms, both internal and external (e.g., tax reporting to the IRS), would need to reflect the changes in tax rates and deduction amounts.

If these changes were implemented on February 15 instead of January 1, the following additional installation steps would need to be considered:
1. Retroactive Adjustments: For employees who have already received paychecks in the current tax year (between January 1 and February 15), retroactive adjustments may need to be made to ensure that they are taxed correctly based on the revised rates and deductions. Retroactive adjustments would involve recalculating withholdings for previous pay periods and potentially issuing additional payments or adjusting tax withholdings accordingly.
2. Communications: Communicate the changes to employees to ensure they are aware of any adjustments to their paychecks and understand the reasons for the changes.
3. Compliance and Reporting: Ensure that all retroactive adjustments are compliant with IRS regulations and accurately reported for tax purposes. Update any tax reporting documentation to reflect the retroactive adjustments made to employee pay.
4. System Testing: Conduct thorough testing of the system to ensure that retroactive adjustments are calculated accurately and that no errors occur during the adjustment process.
By considering these additional installation steps, the payroll system can effectively incorporate changes to tax rates and deductions, whether implemented at the beginning of the tax year or partway through.
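A short sketch can reinforce the answer above: if rates and deduction amounts are kept as data rather than hard-coded, the February 15 change becomes a table update plus a retroactive recalculation. The brackets, rates, and amounts below are invented for illustration and are not actual IRS figures.

```python
# Illustrative only: the brackets, rates, and deduction amount below are
# invented, not actual IRS figures. The point is that when rates and
# deductions live in a data table, a mid-year revision is a data update
# plus a retroactive recalculation, not a code rewrite.
TAX_TABLE_2024 = {
    "standard_deduction": 1000.00,
    "brackets": [          # (upper limit of taxable pay per period, rate)
        (2000.00, 0.10),
        (5000.00, 0.20),
        (float("inf"), 0.30),
    ],
}

def withholding(gross_pay, table):
    taxable = max(gross_pay - table["standard_deduction"], 0.0)
    owed, lower = 0.0, 0.0
    for upper, rate in table["brackets"]:
        owed += rate * max(min(taxable, upper) - lower, 0.0)
        lower = upper
    return round(owed, 2)

# Retroactive adjustment for pay periods already processed this year:
# the difference between withholding under the new and old tables.
revised_table = {"standard_deduction": 1200.00,
                 "brackets": [(2000.00, 0.12), (5000.00, 0.22), (float("inf"), 0.32)]}
already_paid_periods = [4000.00, 4000.00, 4200.00]   # gross pay, Jan 1 to Feb 15
adjustment = sum(withholding(p, revised_table) - withholding(p, TAX_TABLE_2024)
                 for p in already_paid_periods)
print(f"Additional withholding to collect retroactively: {adjustment:.2f}")
```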
4. Methods of Deployment
One of the most difficult questions faced by organizations is how to actually deploy the system. Should a direct cut-over deployment be attempted? It is the cheapest and most direct approach. Should the two systems be run in parallel? And for how long? Is there a way to do a phased implementation?

Choosing the right deployment method for a new system is indeed a critical decision for organizations, as it impacts not only cost and efficiency but also user experience and business continuity. Here are some common methods of deployment, along with considerations for each:
1. Direct Cut-Over Deployment: In a direct cut-over deployment, the new system replaces the old one entirely in a single, swift transition.
Pros: It is often the quickest and simplest deployment method, minimizing downtime and disruption. It requires fewer resources and may be more cost-effective in the short term.
Cons: It carries a higher risk of failure or unexpected issues, as there is little room for testing or fallback options. Users may experience a steep learning curve and potential productivity loss during the immediate switch.
2. Parallel Deployment: In a parallel deployment, the new system runs alongside the old system for a period of time until the new system is fully tested and proven.
Pros: It provides a safety net by allowing users to revert to the old system if necessary, reducing the risk of disruption to business operations. It facilitates gradual user adoption and learning, as users can transition to the new system while still relying on the familiar old system.
Cons: It requires additional resources and infrastructure to maintain both systems simultaneously, which can increase costs. There may be challenges in synchronizing data and ensuring consistency between the two systems.
3. Phased Implementation: Phased implementation involves rolling out the new system in stages, with different modules or functionalities being deployed incrementally over time.
Pros: It allows for a more controlled and gradual transition, reducing the risk of overwhelming users and minimizing disruption to business operations. It provides opportunities for testing and refinement between each phase, improving the overall quality and reliability of the system.
Cons: It requires careful planning and coordination to ensure that dependencies between different phases are managed effectively. It may prolong the overall deployment timeline, potentially delaying the realization of benefits associated with the new system.

Considerations for Deployment:
Business Needs: Consider the organization's specific requirements, objectives, and tolerance for disruption when selecting a deployment method.
Risk Management: Evaluate the risks associated with each deployment method and implement appropriate mitigation strategies to minimize potential negative impacts.
Resource Availability: Assess the availability of resources, including budget, personnel, and infrastructure, needed to support the chosen deployment approach.
User Experience: Prioritize user experience and ensure that the chosen deployment method minimizes disruption and supports effective user adoption and transition.
In many cases, a hybrid approach combining elements of different deployment methods may be most suitable, depending on the complexity of the system, the organization's capabilities, and the preferences of key stakeholders. Regardless of the chosen method, thorough planning, testing, and communication are essential to ensure a successful deployment and smooth transition for all stakeholders involved.

Instructor Manual for Systems Analysis and Design in a Changing World
John W. Satzinger, Robert B. Jackson, Stephen D. Burd
9781305117204, 9781111951641
