Appendix C The University Lab: Conceptual Design, Verification, Logical Design, and Implementation

Discussion Focus

How is a database design verified, and why is such verification necessary?

Use our detailed answer to question 1 to focus class discussion on database design verification. Stress that the verification process uses the initial ER model as a communication tool. The designer may begin the verification process by describing the organization's operations to its end users, basing the detailed description on the initial ER model. Next, explain how the operations will be supported by the database design. Stress that the design must support the end-user application views, outputs, and inputs. Points to be addressed include such questions as:
• Is the description accurate? If not, what aspects of the description must be corrected?
• Does the model support the end-user requirements? If not, what aspects of the end-user requirements have not been addressed or have been addressed inadequately?
Keep in mind that even a model that perfectly addresses all initially determined end-user requirements is likely to need adjustments as those end users begin to understand the ramifications of the database design's capabilities. In many cases, the end users may learn what the organization's processes and procedures actually are, thus leading to new requirements and the perception of new opportunities. The database designer must keep such likely developments in mind, especially if (s)he works as a database design consultant. (Anticipation of such developments must be factored into the contract negotiations for consulting fees.)

Discuss the role of the system modules. The use of system modules can hardly be overemphasized in a database design environment. Stress these module characteristics and features:
• Modules represent subsets of the database model: smaller "pieces" are more easily understood.
• Modules are self-contained and accomplish a specific system function; if such a system function must be modified, other functions remain unaffected.
• Modules fit into a modular database design, which is more easily modified and adapted to new circumstances. Because modification efforts are focused on a database subset, the productivity of both designers and application developers is likely to be enhanced.
Module interfaces must be clear if the modules are expected to work well within the overall system.

Answers to Review Questions

1. Why must a conceptual model be verified? What steps are involved in the verification process?

The verification of a conceptual model is crucial to a successful database design. The verification process allows the designer to check the accuracy of the database design by:
• Re-examining data and data transformations.
• Enabling the designer to evaluate the design efficiency relative to the end user's and system's design goals.
Keep in mind that, to a large extent, the best design is the one that serves the end-user requirements best. For example, a design that works well for a manufacturing firm may not fit the needs of a marketing research firm, and vice versa. The verification process helps the designer to avoid implementation problems later by:
• Validating the model's entities. (Remember the minimal data rule.)
• Confirming entity relationships and eliminating duplicate, unnecessary, or improperly defined relationships.
• Eliminating data redundancies.
• Improving the model's semantic precision to better represent real-world operations.
• Confirming that all user requirements (processing, performance, or security) are met.
Verification is a continuous activity in any database design. The database design process is evolutionary in nature: it requires the continuous evaluation of the developing model by examining the effect of adding new entities and by confirming that any design changes enhance the model's accuracy. The verification process requires the following steps:
1. Identify the database's central entity. The central entity is the most important entity in the database, and most of the other entities depend on it.
2. Identify and define each module and its components. The designer divides the database model into smaller sets that reflect the data needs of particular system modules such as inventory, orders, payroll, and so on.
3. Identify and define each of the module's processes. Specifically, this step requires the identification and definition of the database transactions that represent the module's real-world operations.
4. Verify each of the transactions against the database.

2. What steps must be completed before the database design is fully implemented? (Make sure that you list the steps in the correct sequence and discuss each step briefly.)

The DBLC, discussed in detail in Chapter 9, "Database Design," constitutes a database's history, tracing it from its conceptual design to its implementation and operation. We highly recommend that the database designer follow the DBLC's steps carefully in order to ensure that the database will properly meet all user and system requirements. Before a database can be successfully implemented, the following steps must be completed:
1. Define the conceptual model's components: entities, attributes, domains, and relationships.
2. Normalize the database to ensure that all transitive dependencies are eliminated and that each entity's attributes are solely dependent on its key attribute(s).
3. Verify the conceptual model to ensure that the proposed database will meet the system's transaction requirements and that the end-user and system requirements will be met. The verification process will probably delete and/or create entities, attributes, and relationships. It may also refine existing entities, attributes, and relationships.
4. Create the logical design, which requires the definition of the table structures, using a specific DBMS (relational, network, or hierarchical). Logical design also includes, if necessary, appropriate indexes and views.
5. Create the physical design to define access paths, including space allocation, storage group creation, table spaces, and any other physical storage characteristic that is dependent on the hardware and software to be used in the system's implementation.
6. Implement the design. Somehow, this last step seems to suffer from planning neglect, to the detriment of the system's operation. Implementation, operation, and maintenance plans must (at least) include careful definition and description of the activities required to implement the database design:
• loading and conversion
• definition of database standards
• system and procedures documentation: security, backup, and recovery
• operational procedures to be followed by users
• a detailed training plan
• identification of responsibilities for operation and maintenance.
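To make step 4 more concrete, the following minimal sketch shows what a small piece of the logical design might look like in SQL. The table and attribute names are borrowed from the UCL discussion later in this appendix, but the constraints, index, and view shown here are illustrative assumptions rather than the appendix's actual specifications, and the exact syntax varies by DBMS.

CREATE TABLE INV_TYPE (
    TY_GROUP        CHAR(8)  NOT NULL PRIMARY KEY,
    TY_CATEGORY     CHAR(2)  NOT NULL,
    TY_CLASS        CHAR(2)  NOT NULL,
    TY_TYPE         CHAR(2)  NOT NULL,
    TY_DESCRIPTION  CHAR(35));

CREATE TABLE ITEM (
    ITEM_ID          NUMERIC(8,0) NOT NULL PRIMARY KEY,
    TY_GROUP         CHAR(8)      NOT NULL,
    ITEM_DESCRIPTION CHAR(10),
    ITEM_QTY         INTEGER,
    FOREIGN KEY (TY_GROUP) REFERENCES INV_TYPE (TY_GROUP));

-- A supporting index and an end-user view also belong to the logical design.
CREATE INDEX ITEM_TYPE_NDX ON ITEM (TY_GROUP);

CREATE VIEW HARDWARE_ITEMS AS
    SELECT ITEM_ID, ITEM_DESCRIPTION, ITEM_QTY
    FROM   ITEM JOIN INV_TYPE ON ITEM.TY_GROUP = INV_TYPE.TY_GROUP
    WHERE  TY_CATEGORY = 'HW';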
3. What major factors should be addressed when database system performance is evaluated? Discuss each factor briefly.

Database system performance refers to the system's ability to retrieve information within a reasonable amount of time and at a reasonable cost. Keeping in mind that "reasonable" means different things to different people, we must address at least these important performance factors:
• Concurrent users. For any given system, the more users connected to the system, the longer the data retrieval time.
• Resource limits. The fewer resources that are available to the user, the longer the access queues will be.
• Communication speeds. Lower communication speeds mean longer response times.
• Query response time. Queries must be tuned to provide optimum query response time. (See Appendix C, "Database Performance Tuning.") Lack of query response tuning means slow response times. Depending on how good the design and the program code are, the query response time can vary from minutes to hours for the same query.
Although the preceding discussion is focused on the speed aspect of performance, there are other equally important issues that must be considered. A successful database implementation requires a balanced approach to all database issues, including concurrency control, query response time, database integrity, security, backup and recovery, data replication, and data distribution.

4. How would you verify the ER diagram shown in Figure QC.4? Make specific recommendations.

Figure QC.4 The ERD for Question 4

The verification process must include the following steps:
1. Identify and define the main entities, attributes, and domains. In this case, the main entities are PARTS, SUPPLIER, PRODUCT, and CUSTOMER. Identify proper primary keys and composite and multi-valued attributes.
2. Identify and define the relationships among the entities. By examining the diagram, we may conclude that several M:N relationships exist:
PARTS and SUPPLIER
PARTS and PRODUCT
PRODUCT and CUSTOMER
3. Identify the composite entities and their primary and foreign keys. Each composite (bridge) entity creates the connection needed to maintain a 1:M relationship with each of the original entities.
4. Normalize the model.
5. Verify the model, starting with the identification of the central entity. Given the ER diagram's layout, we conclude that the central entity is PRODUCT.
6. Identify each module and its components. Three modules can be identified:
• Inventory, containing PARTS and SUPPLIER
• Production, containing PARTS and PRODUCT
• Sales, containing PRODUCT and CUSTOMER
7. Identify each module's processes or transaction requirements. Start by listing known transaction descriptions by module. For brevity's sake, we will use the inventory module as an example. The inventory module supports the following transactions:
• Add a new product to inventory
• Modify an existing product in inventory
• Delete a product from inventory
• Generate a list of products by product type
• Generate a price list of products by product type
• Query the product database by product description
Check the database model against these transaction requirements, verify the model's efficiency and effectiveness, and make the necessary changes.
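As an illustration of step 3, the following sketch shows how one of the M:N relationships (PARTS and SUPPLIER) could be resolved into a composite (bridge) entity, and how one of the inventory module's transactions could then be checked against the design. Because Figure QC.4's attribute names are not listed in this appendix, every name and data type below is an assumption made for illustration only.

CREATE TABLE PART_SUPPLIER (
    PART_CODE  CHAR(10) NOT NULL,
    SUPP_CODE  CHAR(10) NOT NULL,
    PS_PRICE   NUMERIC(9,2),
    PRIMARY KEY (PART_CODE, SUPP_CODE),
    FOREIGN KEY (PART_CODE) REFERENCES PARTS (PART_CODE),
    FOREIGN KEY (SUPP_CODE) REFERENCES SUPPLIER (SUPP_CODE));

-- Transaction check for "Generate a list of products by product type":
SELECT PROD_TYPE, PROD_CODE, PROD_DESCRIPTION
FROM   PRODUCT
ORDER BY PROD_TYPE, PROD_DESCRIPTION;

If a required transaction cannot be written against the model, it is the model, not the transaction, that must be revised.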
5. Describe and discuss the ER model's treatment of the UCL's inventory/order hierarchy:
a. Category
b. Class
c. Type
d. Subtype

The objective here is to focus student attention on the details of the UCL's approach to inventory management. Note that the UCL's ER model uses two closely related entities to manage items in inventory: ITEM and INVENTORY_TYPE. These two entities maintain a 1:M relationship: one item belongs to only one inventory type, but an inventory type can contain many items. Inventory types are classified through the use of a hierarchy composed of CATEGORY, CLASS, and TYPE. (We may even identify a SUBTYPE for each TYPE!) Basically, the hierarchy may be described this way: a category has many classes, and a class has many types. For example, the category hardware includes the classes computer and printer. The class computer has many types that are defined by their CPU: 486 and Pentium computers. Similarly, the category supplies can have several classes: diskette, paper, etc. Each class can have many types: 3.5 DD diskette, 3.5 HD diskette, 8.5x11 paper, 8.5x14 paper, and so on. We may even identify subtypes: each type can have many subtypes. For example, the class "paper" includes the types "single-sheet" and "continuous-feed"; the single-sheet type may be classified by subtype 8 x 11 inches or 11 x 14 inches. The following table summarizes some of the inventory types identified in the system. Note that the hierarchy may be illustrated as shown in Table QC.5A.

Table QC.5A The Classification Hierarchy
Category    Class      Type              Subtype
Hardware    Computer   Desktop           P4
                       Desktop           P3
                       Laptop            P4
            Printer    Laser             8 ppm
                       Laser             12 ppm
                       Inkjet            Color
                       Inkjet            Black
                       Plotter           2 x 3
Supply      Paper      Continuous-feed
                       Single sheet      8 x 10
                       Single sheet      11 x 14

It is important to note that each item can belong to only one specific inventory type. Also, keep in mind that the ORDER_ITEM entity interfaces with the INVENTORY_TYPE, rather than with the ITEM entity. The reason for this interface is clearly based on the chapter's description of the UCL operations: "The CLD requests items without specifying a specific brand and/or vendor." Given this requirement, it is clear that the ITEM can't be identified in the request. (The ITEM's primary key is its serial number, which can't be identified until the ITEM is received!) However, to make the request, we must know the requested item's inventory type. Therefore, ORDER is related to the INVENTORY_TYPE, and not to the ITEM. The hierarchy shown here has led us to develop the classification scheme shown in the text's Inventory Classification Hierarchy, illustrated in Table QC.5B:

Table QC.5B An Inventory Classification Hierarchy
GROUP      CATEGORY        CLASS                   TYPE               SUBTYPE
HWPCDTP5   Hardware (HW)   Personal Computer (PC)  Desktop (DT)       Pentium (P5)
HWPCLP48   Hardware (HW)   Personal Computer (PC)  Laptop (LT)        Pentium IV
HWPRLS     Hardware (HW)   Printer (PR)            Laser (LS)         Standard
HWPRDM80   Hardware (HW)   Printer (PR)            Inkjet (IJ)        80-column
SUPPSS11   Supply (SU)     Paper (PP)              Single Sheet (SS)  8.5" x 11"
HWEXHDID   Hardware (HW)   Expansion Board (EX)    Video (VI)         XX
SWDBXXXX   Software (SW)   Database (DB)           XX                 XX

The classification hierarchy may also be illustrated with the help of the tree diagram shown in Figure QC.5:

Figure QC.5 The INV_TYPE Classification Hierarchy as a Tree Diagram
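Because the INV_TYPE entity carries the category, class, type, and subtype codes shown in Table QC.5B, hierarchy-based reports reduce to simple queries. The sketch below borrows the INV_TYPE and ITEM attribute names used in Problem 8 of this appendix (all other details are assumed) and lists every item that belongs to the Hardware/Printer branch of the hierarchy:

SELECT INV_TYPE.TY_GROUP, TY_TYPE, TY_SUBTYPE, ITEM_ID, ITEM_DESCRIPTION
FROM   INV_TYPE JOIN ITEM ON INV_TYPE.TY_GROUP = ITEM.TY_GROUP
WHERE  TY_CATEGORY = 'HW'
AND    TY_CLASS    = 'PR'
ORDER BY TY_TYPE, TY_SUBTYPE, ITEM_ID;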
6. Modern businesses tend to provide continuous training to keep their employees productive in a fast-changing and competitive world. In addition, government regulations often require certain types of training and periodic retraining. (For example, pilots must take semiannual courses involving weather, air regulations, and so on.) To make sure that an organization can track all training received by each of its employees, trace the development of the ERD segment in Figure QC.6 from the initial business rule that states: An employee can take many courses, and each course can be taken by many employees. Once you have traced the development of the ERD segment, verify it and then provide sample data for each of the three tables to illustrate how the design would be implemented.

Figure QC.6 The ERD Segment for Question 6

Follow the verification steps described in the answer to question 4. Note that the composite TRAINING entity shown in Figure QC.6 reflects part of the verification process that began with the M:N relationship between EMPLOYEE and COURSE. (An employee can take many courses, and many employees can take each course.) Part of the verification process involves the elimination of multi-valued attributes. For example, an EMPLOYEE table attribute EMP_TRAINING containing strings such as "fire safety, weather, air regulations" has already been eliminated by the composite TRAINING entity. The structure shown in Figure QC.6 allows us to add attributes to ensure that training details – such as dates, grades, training locations, etc. – can be traced, too.

One additional – and very important – point is worth mentioning: at this point, Figure QC.6's ERD cannot handle recurrent training requirements. That is, if some courses must be retaken periodically, as is common in many transportation businesses, the TRAINING entity's PK – at this point composed of EMP_NUM + COURSE_CODE – will not yield a unique value if the course is retaken from time to time. The solution to this problem can be found in one of two ways:

1. Add the training date to the TRAINING entity's composite PK so that it becomes EMP_NUM + COURSE_CODE + TRAIN_DATE. This approach is illustrated in the examples shown in Tables QC.6A through QC.6C. Note that employee 105 took the FAR-135-P course on 26-Sep-2013 and on 11-Feb-2014. Employee 101 took the WEA-01 course on 26-Sep-2013 and on 26-Mar-2014. Note that the addition of the TRAIN_DATE to the composite PK prevents the duplication of training records. For example, if you tried to enter the first TRAINING record twice, the combination of EMP_NUM + COURSE_CODE + TRAIN_DATE would not be unique and the DBMS would diagnose an entity integrity violation.

Table QC.6A The EMPLOYEE Table Contents
EMP_NUM   EMP_LNAME
105       Ortega
101       Williams

Table QC.6B The TRAINING Table Contents
EMP_NUM   COURSE_CODE   TRAIN_DATE    TRAIN_GRADE
105       FAR-135-P     26-Sep-2013   90
105       HM-01         18-Dec-2013   92
101       FAR-135-P     23-Nov-2013   93
105       WEA-01        10-Mar-2014   87
101       HM-01         15-Sep-2013   91
101       WEA-01        26-Sep-2013   85
105       FAR-135-P     11-Feb-2014   97
101       WEA-01        26-Mar-2014   89

Table QC.6C The COURSE Table Contents
COURSE_CODE   COURSE_DESCRIPTION
FAR-135-P     Aircraft charter regulations for pilots
FAR-135-M     Aircraft maintenance for charter operations
HM-01         Hazardous materials handling
WEA-01        Aviation weather – basic operations
WEA-02        Aviation weather – instrument operations
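A minimal DDL sketch of this first approach follows. The data types and sizes are assumptions chosen only to match the sample data above; the important point is the three-attribute primary key.

CREATE TABLE TRAINING (
    EMP_NUM      INTEGER   NOT NULL,
    COURSE_CODE  CHAR(10)  NOT NULL,
    TRAIN_DATE   DATE      NOT NULL,
    TRAIN_GRADE  NUMERIC(3,0),
    PRIMARY KEY (EMP_NUM, COURSE_CODE, TRAIN_DATE),
    FOREIGN KEY (EMP_NUM)     REFERENCES EMPLOYEE (EMP_NUM),
    FOREIGN KEY (COURSE_CODE) REFERENCES COURSE (COURSE_CODE));

-- Inserting the first TRAINING row of Table QC.6B a second time now fails the
-- primary key (entity integrity) check, exactly as described above.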
2. Create a new PK attribute named TRAIN_NUM to uniquely identify each entity occurrence in the TRAINING entity, and then create a unique composite index composed of EMP_NUM + COURSE_CODE + TRAIN_DATE. This action will remove the weak/composite designation from TRAINING, because the TRAINING entity's PK is no longer composed of the PK attributes of the EMPLOYEE and COURSE entities. (And the "receives" and "is used in" relationships will no longer be classified as "identifying" – thus changing the relationship descriptions from "identifying" or "strong" to "non-identifying" or "weak.") The composite index will prevent the duplication of records. Note the change in the structure and contents of the TRAINING table shown in Table QC.6D.

Table QC.6D The Modified TRAINING Table Structure and Contents
TRAIN_NUM   EMP_NUM   COURSE_CODE   TRAIN_DATE    TRAIN_GRADE
1203        105       FAR-135-P     26-Sep-2013   90
1204        105       HM-01         18-Dec-2013   92
1205        101       FAR-135-P     23-Nov-2013   93
1206        105       WEA-01        10-Mar-2014   87
1207        101       HM-01         15-Sep-2014   91
1208        101       WEA-01        26-Sep-2013   85
1209        105       FAR-135-P     11-Feb-2014   97
1210        101       WEA-01        26-Mar-2014   89

We would recommend the second approach. Generally speaking, single-attribute PKs are preferred over composite PKs. Single-attribute PKs are more easily handled if the table is to be linked to a related table later. (The linking is done through a FK – which is the PK in the "parent" table. But if the parent table uses a composite PK, how can you then create the appropriate FK?) In any case, the declaration of a composite PK automatically generates a matching composite index, so you would not decrease the index library if you used approach 1.

7. You read in this appendix that:
An examination of the UCL's Inventory Management module reporting requirements uncovered the following problems:
• The Inventory module generates three reports, one of which is an Inventory Movement Report. But the inventory movements are spread across two different entities (CHECK_OUT and WITHDRAW). That spread makes it difficult to generate the output and reduces the system's performance.
• An item's quantity on hand is updated with an inventory movement that can represent a purchase, a withdrawal, a check-out, a check-in, or an inventory adjustment. Yet only the withdrawals and check-outs are represented in the system.
What solution was proposed for that set of problems? How would such a solution be useful in other types of inventory environments?

The proposed solution was to create a common entry point for all inventory movements. This common entry point is represented by a new entity named INV_TRANS. The INV_TRANS entity is used to record an entry for each inventory transaction. In other words, the system keeps track of all inputs to and withdrawals from inventory by using this INV_TRANS entity. It is important to realize that the INV_TRANS entity is a crucial entity in the system, because it reflects all item transactions. Such a solution is not unique to the UCL's inventory system: most inventory systems must be able to keep track of such transactions. Having a central point of reference facilitates the processing, updating, querying, and reporting capabilities of the inventory system. The UCL's data model keeps track of several types of inventory transaction purposes or motives: check-outs, withdrawals, adjustments, and purchases. Note the system's flexibility: the user is able to classify all inventory transactions by type and/or motive. In addition to being flexible, the UCL system is easily expandable: if necessary, the system can support additional types of inventory transaction motives. For example, the system may be expanded to include inter-warehouse inventory transfers, items retired from inventory because they are date-limited, and so on. (Date-limited inventory is typical for such things as pharmaceuticals, food, etc.)
Given its flexibility and expandability, we may conclude that the UCL system's inventory data model represents a very viable solution to modeling real-world inventory transactions. Therefore, it may be made to fit into just about any inventory environment.

Note: Optimum vs. Implemented Solutions

The final UCL ERD makes use of the INV_TRANS entity to replace the WITHDRAW entity. Perhaps some of your students wonder about the similarity of the CHECK_OUT and CO_ITEM entities when compared to the INV_TRANS and INTR_ITEM entities. For instance, it is quite appropriate to argue that CHECK_OUT is a type of inventory transaction and that, therefore, CHECK_OUT is a subtype of an INV_TRANS supertype. Why did the designer create such apparent system redundancy? Why wasn't the type/subtype hierarchy used more efficiently? (Classification hierarchies and supertypes/subtypes are covered in Chapter 5, "Advanced Data Modeling.") To answer this question, return to the discussion about fine-tuning the database for performance, integrity, and security. Based on the estimates of the number of transactions, the number of items, and the number of possible concurrent accesses to the INV_TRANS entity, it was clear that this entity would be one of the most active in the system. The large number of check-out reports and the even larger expected number of inventory transactions forced both the designer and the end user to choose between controlled redundancy and a performance bottleneck.

Perhaps some students will argue that the use of the CHECK_OUT and CO_ITEM entities represents a major burden to the system and that, therefore, the system should be implemented without these entities. This argument clearly has some merit. However, the immediate advantage of having the CHECK_OUT and CO_ITEM entities is that the inventory check-outs report uses these entities, rather than the INV_TRANS and INTR_ITEM entities. Therefore, the use of CHECK_OUT and CO_ITEM reduces the concurrent access conflicts for the INV_TRANS and INTR_ITEM entities. Finally, we note that both the designer and the end user are aware of the consequences of the selected solution. Remember, this is a real solution to a real problem, and it helps to illustrate the point that we made earlier: the best solution is not always the one that is implemented. Each system is subject to constraints, and the designer must inform the end user of the consequences of the data modeling design selections.

An important note on primary key selection for multi-user systems

The LOG is an entity that keeps a record of all the students who use the UCL. Note that the primary key is formed by LOG_DATE, LOG_TIME, and USER_ID. Ask the students why USER_ID has been made a part of the primary key. Since each user can be in only one place at one time, it seems safe to assume that USER_ID does not need to be part of the primary key. So why not just use a primary key composed of LOG_DATE and LOG_TIME? For example, suppose that the student Christobal Colombus enters the UCL on 02-Mar-2014 at 02:10:11 pm. To use the UCL's facilities and services, Mr. Colombus must give his student identification card to the lab assistant. Clearly, Mr. Colombus can only be at that one location at that time. When the lab assistant enters Mr. Colombus's USER_ID, that entry is made at a specific and unique date and time. When the lab assistant registers the next student, that student's USER_ID is entered at a different time on the computer's clock.
Since every USER_ID entry is made at a different time, there seems to be no need for the USER_ID to be part of the primary key. However, this scenario is correct only in a single-user, stand-alone system. Remember that the UCL system runs in a LAN and that the ACCESS module is accessed by two lab assistants through two different terminals. Therefore, it is possible that, at a given time, both data entries are made at the same computer clock time. When the data are to be saved to the database, one of the two entries will be executed first; and, to preserve entity integrity, the second entry will be aborted because the date and time already exist in the database. In short, it is possible to have two users register in the LOG on the same day and at the same time; in that case, only their USER_IDs will be different. Therefore, to ensure uniqueness of the primary keys, the inclusion of USER_ID as part of the primary key is quite appropriate. Of course, you might use the LOG_READER, instead of the USER_ID, to define the primary key. After all, the same LOG_READER cannot be swiped twice at the precise same time. In either case, the uniqueness of the entry is preserved, thus preserving entity integrity. Which attribute (USER_ID or LOG_READER) is used as a part of the primary key is the designer's decision. The only requirement is that entity integrity is maintained.

Problem Solutions

1. Verify the conceptual model you created in Appendix B, Problem 3. Create a data dictionary for the verified model.

The verification of the car dealership's database design conforms to the verification process described in Appendix C. (We have also illustrated the verification process in this appendix's review question 4.) Since the verification process has already been explored in depth in several places, we will focus on the ERDs that were modified during the course of the verification process. Use the Data Dictionary format shown in Chapter 7, "Introduction to Structured Query Language (SQL)," Table 7.3 as your data dictionary template. The basic verified database design is shown in Figure PC.1. As you discuss Figure PC.1, note that the verification process substantially modified the service component of the initial ERD. (See the discussion that accompanies Figure PD.3a in this manual's Appendix D, Problem 3.) These changes reflect the increasingly important accountability requirements. As you examine the ERD in Figure PC.1, focus on the following features:
• SALESPERSON and MECHANIC are subtypes of the supertype EMPLOYEE. This feature is based on the likelihood that the subtypes contain data that are unique to those subtypes. For example, a salesperson is likely to have at least part of his/her pay determined by sales commissions. Similarly, mechanics are likely to have special certification and training requirements that "general" employees are not likely to have. The use of these subtypes eliminates nulls in the EMPLOYEE table, thus making them desirable in this case.
• Although some employee job-related data are stored in their subtypes – see, for example, our discussion of the SALESPERSON and MECHANIC subtypes – we still need to know what the employee job assignments are. Although we have not included pay and benefit options in this design, both options are likely to be job related. Some jobs are paid on an hourly basis, some on a weekly basis, and some jobs are salaried. Base pay schedules are usually determined by job qualifications.
Therefore, the JOB entity stores a JOB_PERIOD attribute (hour, week, or year) and a JOB_PAY attribute. If JOB_PERIOD = "hour", a JOB_PAY of $18.90 is clearly an hourly rate. If JOB_PERIOD = "year", a JOB_PAY of $45,275 is clearly a yearly salary. In larger companies, job assignments are useful in tracking the distribution of job "densities" to see if some job classification distributions are appropriate to meet the business objectives. (Do we have too many employees who are classified as "support" personnel? Too many accountants?) Also, note that the relationship between JOB and EMPLOYEE reflects the business rule that each employee has only one job assignment at a time. Naturally, any given job can be held by many employees. For example, many employees may be mechanics, support personnel, accountants, and so on.
…Additional discussion points follow Figure PC.1.

Figure PC.1 The Verified Car Dealership Crow's Foot ERD

…continued discussion of Figure PC.1's ERD.
• To track all maintenance procedures and parts precisely, only qualified mechanics may open and close service logs, check out parts, and sign off service work. Note that the PART_LOG tracks all parts that have been logged out. The relationship between SERVICE_LOG and PART_LOG lets us trace all checked-out parts to a specific service log entry. The use of the PART_CODE in both the SVC_LOG_LINE and the PART_LOG entities makes it possible to write a query to let us check whether or not a logged-out part was actually used. (A sketch of such a query follows the sample data below.) All the maintenance actions can be tracked at this point. We know who opened and closed the service log through the SVC_LOG_ACTION. We know which mechanic performed each maintenance procedure (in the SVC_LOG_LINE), and we know which mechanic checked out which part(s) – in PART_LOG.
• If a car was sold by the dealer, that fact is recorded in an invoice. However, the CAR entity may be expanded to include a "bought here" Y/N attribute, in addition to mileage and other pertinent data. Also, cars owned by the dealership may simply show the dealer as the "customer." (Naturally, you can add a DLR_CAR entity if the dealer car attributes and data tracking requirements are different from the customer CAR data.)
• Before any car is sold to a customer, that car must be inspected and, if necessary, repaired. Therefore, even a new car will show up in the SERVICE_LOG, and the SVC_LOG_NUM will never be null in the INVOICE … even if the invoice records the sale of a car, rather than a specific service charge.
• The PAYMENT entity has a rather limited set of options at this point. However, it does enable the manager to track multiple payments on a given invoice and to keep track of specific invoice balances. Further verification procedures would (most likely!) add functionality to the PAYMENT entity. For example, you might change the PAYMENT entity to an account transaction entity, perhaps named ACCT_TRANSACTION. This change would reflect the need to identify the transaction type – debit or credit – and, if a payment is made, the payment mode – cash, credit card, check.
• The employee qualifications can now be tracked without limit. If an employee gains an additional qualification, all that is needed is an entry in the EDUCATION table.
• The customer car data are stored in CAR, so we can keep the service records on all the customer cars, thus producing the required car histories.
(To save space, we have not included all the appropriate attributes – but you can add such attributes as CAR_BOUGHT (Y/N) to indicate whether or not the car was bought at this dealership, CAR_LAST_MILES to indicate the mileage recorded during the most recent service, and so on.)
• At this point, we assume that the SERVICE_LOG and SVC_LOG_LINE records yield the information required to bill the customer.
To set the stage for further discussion of Figure PC.1's ERD, a few sample data entries in the added SERVICE_LOG, LOG_ACTION, MECHANIC, LOG_LINE, PART, and PART_LOG tables are useful. (The PART_CODE entry "000000" is a dummy PART entry that signifies "no part used.") Also note that the order of the attributes is immaterial. (In other words, whether CAR_VIN is shown in the last column of the SERVICE_LOG table or as the first, second, or third column has no bearing on the discussion or on the results obtained from the use of the table.)

Sample SERVICE_LOG Data
SVC_LOG_NUM   LOG_COMPLAINT                           SVC_LOG_CHARGE   CAR_VIN
10012         Hard to start. Accelerates poorly.      $89.75           2AA-W-123456
10013         Oil change. Rotate and balance tires.   $19.95           5DR-T-8765432
10014         Temp gauge shows high temps.            $135.70          4UY-D-6543210

Sample SVC_LOG_ACTION Data
SVC_LOG_NUM   SVC_LOGACT_TYPE   SVC_LOGACT_DATE   EMP_NUM
10012         Open              03-Mar-2014       104
10013         Open              03-Mar-2014       112
10012         Close             04-Mar-2014       112
10014         Open              04-Mar-2014       104
10013         Close             04-Mar-2014       104

Sample SVC_LOG_LINE Data (Several attributes left out to save space)
SVC_LINE_NUM   SVC_LOG_NUM   SVC_LINE_WORK                                                        EMP_NUM   PART_CODE
1              10012         Cleaned injection nozzles                                            106       000000
1              10013         Drained oil                                                          112       000000
2              10013         Installed filter                                                     112       FLTR-0156
3              10013         Replaced oil                                                         112       Oil-PZ30/40
4              10013         Rotated tires                                                        114       000000
5              10013         Balanced tires, using four weights (LF 0.5 oz, RF 1.1 oz, RR 1.2 oz, LR 0.8 oz)   106   WT-LD10012
1              10014         Drained coolant                                                      104       000000
2              10014         Replaced thermostat                                                  112       THERM-007B
3              10014         Replaced coolant                                                     104       COOL-289XZ

Sample PART_LOG Data
PARTLOG_NUM   EMP_NUM   PART_CODE     SVC_LOG_NUM   PARTLOG_DATE   PARTLOG_UNITS
10185         112       FLTR-0156     10013         03-Mar-2014    1
10186         112       Oil-PZ30/40   10013         03-Mar-2014    8
10187         114       WT-LD10012    10013         03-Mar-2014    4
10188         112       THERM-007B    10014         04-Mar-2014    1
10189         114       COOL-289XZ    10014         04-Mar-2014    1
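Using the sample data above, the query suggested earlier (checking whether every logged-out part was actually used on the service log it was charged to) can be sketched as follows. The attribute names match the sample tables; quantity comparisons are left out to keep the sketch minimal.

SELECT PART_LOG.SVC_LOG_NUM, PART_LOG.PART_CODE, PART_LOG.PARTLOG_UNITS
FROM   PART_LOG LEFT JOIN SVC_LOG_LINE
       ON  PART_LOG.SVC_LOG_NUM = SVC_LOG_LINE.SVC_LOG_NUM
       AND PART_LOG.PART_CODE   = SVC_LOG_LINE.PART_CODE
WHERE  SVC_LOG_LINE.PART_CODE IS NULL;

Any row returned by this query identifies a part that was logged out against a service log but never recorded as used in any of that log's service lines.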
The main processes that can be identified in this system include:
• The generation of an invoice (INSERT).
• The car sales generation and reports (SELECT).
• The registration of a service for a customer's car (INSERT, UPDATE).
• The registration of the work log of the employees (mechanics) who worked on a car (INSERT, UPDATE).
• The registration of parts inventory (INSERT, UPDATE).
• The registration of parts used in a service (INSERT, UPDATE).
• The registration of the car history (INSERT, UPDATE).
• Queries and reports such as: Parts List, Car Price List, Sales Reports, Service Report, Car History Report, Parts Used Report, Work Log Report.
The designer must check that the database model supports all these processes and that the model is flexible enough to support future modifications. If problems are encountered during the model's verification against the required database transactions that are designed to support the identified processes, the designer must make the necessary changes to the data model. These changes are reflected in Figure PC.1.

NOTE
The verification process for Problems 2-5 conforms to the process discussed at length in Problem 1. Therefore, we will only show the verified ERDs. The data dictionary format example shown in Problem 1 can also be used as the template in Problems 2-5. Therefore, we do not show additional data dictionaries for Problems 2-5. The ERDs supply the necessary entities, the attribute names, and the relationships. However, it will be very useful to compare the ERDs in the following problems to the original ERDs – in the previous chapter – from which they were derived.

2. Verify the conceptual model you created in Appendix B, Problem 4. Create a data dictionary for the verified model.

Compare the ERD shown in Figure PC.2A to the ERD shown in Figure PD.4A (see Appendix D) to see the impact of the verification process. Use the Data Dictionary format shown in Chapter 6, Table 6.3 as your data dictionary template.

Figure PC.2A The Crow's Foot Verified Conceptual Model for the Video Rental Store

As you discuss the ERD components in Figure PC.2A, note particularly the following points:
• Remind your students that relationships are read from the parent to the related entity. Therefore, ORDER contains ORD_LINE. (The natural tendency to read from top to bottom or from left to right is not the governing factor in an ERD!)
• We can now track individual copies of each movie. If there are 12 copies of a given movie, each copy can be rented out separately.
• ORD_LINE and RENT_LINE are composite entities. So why is COPY not a composite entity? Here is an excellent example of why single-attribute PKs are a requirement when the entity is referenced by another entity. In this case, the COPY entity's PK is referenced by the RENT_LINE. Therefore, COPY must have a single-attribute PK. (Note that the PK of the COPY entity is the single attribute COPY_CODE, rather than the combination of MOVIE_CODE and COPY_CODE.)
• It is reasonable to assume that each order goes to a particular vendor. Therefore, the VEND_CODE is the FK in ORDER, rather than in ORD_LINE. However, if the order goes to a clearing house and you still want to keep track of the individual vendors that supplied the movies to the clearing house, VEND_CODE will be the FK in ORD_LINE.

NOTE
The design shown in Figure PC.2A is implemented in a small sample database named RC_Video.mdb. This database, stored in MS Access format, is located on the Instructor's CD. If you want your students to write the applications for this segment of the database, you will find that the appropriate tables are available. Because our discussion focus is on the database's rental transaction segment, the database does not contain all of the tables that are shown in Figure PC.2A's ERD. We have also added a number of attributes – especially in the RENTAL table – to make it easier to see how the actual applications might be developed.

The partial implementation of the ERD shown in Figure PC.2A is reflected in the RC_Video database's relational diagram segment depicted in Figure PC.2B.

Figure PC.2B The Relational Diagram for the RC_Video Database

Once the database design has been implemented, you can easily use MS Access to illustrate a variety of implementation issues. For example, in a real-world application the RENTLINE table's RENTLINE_DATE_OUT can simply be generated by specifying the default date to be the current date, Date(). The RENTLINE_DATE_DUE would then be Date()+2, assuming that the checked-out videos are due two days later. (Or substitute whatever criteria you want to use in the queries.)
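As a further illustration, the following sketch lists the overdue copies, the kind of query the rental application would run daily. RENTLINE_DATE_OUT and RENTLINE_DATE_DUE are the attributes mentioned above; COPY_CODE and the RENTLINE_DATE_IN attribute (assumed to remain null until the copy is returned) are assumptions about the RC_Video table layout, and Date() is the MS Access system-date function.

SELECT COPY_CODE, RENTLINE_DATE_OUT, RENTLINE_DATE_DUE
FROM   RENTLINE
WHERE  RENTLINE_DATE_DUE < Date()
AND    RENTLINE_DATE_IN IS NULL;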
3. Verify the conceptual model you created in Appendix B, Problem 5. Create a data dictionary for the verified model.

Compare the ERD shown in Figure PC.3 to the ERD shown in Figure PB.5a. Note that the original ERD survived the verification process intact. In this case, the verification process merely confirmed that the model met all the database requirements. Use the Data Dictionary format shown in Chapter 7, Table 7.3 as your data dictionary template.

Figure PC.3 The Revised (Final) Crow's Foot ERD for the Manufacturer

4. Verify the conceptual model you created in Appendix B, Problem 6. Create a data dictionary for the verified model.

Compare the ERD shown in Figure PC.4A to the ERD shown in Figure PD.6A (in Appendix D) to see the impact of the verification process. Use the Data Dictionary format shown in Chapter 7, Table 7.3 as your data dictionary template.

Figure PC.4A The Crow's Foot ERD for Problem 4 (The Hardware Store)

NOTE
The following screen images are based on the database named RC_Hardware. This database is stored in MS Access format on your instructor's CD. If you want your students to write the applications for this segment of the database, you will find that the appropriate tables are available. (We have added attributes in various tables to enhance their information content.) However, because our discussion focus is on the database's sales transaction segment, the database does not contain the DEPENDENT table, nor does it contain the VENDOR, ORDER, and ORD_LINE tables that are shown as entities in Figure PC.4A.

As you discuss the ERD shown in Figure PC.4A, note that the transactions are tracked in the ACCT_TRANSACTION table. The sample table contents are captured in the screen shown in Figure PC.4B. You can easily demonstrate with a set of queries applied to this database that the inclusion of this table structure in the database design yields very desirable results.

Figure PC.4B The RC_Hardware ACCT_TRANSACTION Table Contents

As you discuss the sample table contents in Figure PC.4B with your students, note that the invoice balances are stored in this table, rather than in the INVOICE table. The reason for this arrangement is simple: the end user must be able to track the remaining balances after each transaction. If the balances for each of the invoices were kept in the INVOICE table, you would be limited to seeing only the most recent balance. Better yet, you can now track all payment transactions by customer or by invoice. For example, a simple query can be written to show the CUSTOMER, INVOICE, and ACCT_TRANSACTION results grouped by customer. Therefore, you can track the entire payment history for each customer. (See Figure PC.4C – note that the query name is shown in the header.)

Figure PC.4C The RC_Hardware Transaction Query Results

Naturally, the CUST_BALANCE value in the CUSTOMER table and the remaining TRANS_INV_BALANCE value in the ACCT_TRANSACTION table must be updated according to the TRANS_AMOUNT value entered by the end user in the ACCT_TRANSACTION table. The applications software must be written to make such updates automatically. For example, if you use Microsoft Access, you can use macros or VBA code to accomplish this task.

As you examine the query output in Figure PC.4C, note that you can easily trace the transactions for each of the customers. For example:
• Customer 10012 (Smith) made a purchase (invoice #1, transaction #1) on February 3, 2014. The transaction amount was $239.21, but customer 10012 made only a partial payment of $100.00, thus leaving a balance of $139.21 on invoice #1.
• Customer 10012 made another purchase (transaction #6, $27.98) on February 15, 2014. This time, customer 10012 paid the entire invoice amount. (Note that the remaining balance for invoice #20 is $0.00 for that transaction.)
• Customer 10012 made a $50.00 payment on account – see transaction #7 on February 15, 2014 – leaving the balance at $139.21 – 50 = $89.21 for the original invoice #1. (Note that the remaining balance for invoice #1 after transaction #1 was $139.21 on 3-Feb-2014.) The original invoice amount of $239.21 was retrieved from the INVOICE table used in this query, and this value is not – and must not be – updated. (However, the applications software must update the CUSTOMER table's customer balance to show the total of all outstanding balances for that customer.)
• Customer 10012 made a $20.00 payment on account (see transaction #10) on February 17, 2014, leaving the balance for invoice #1 at $89.21 – 20 = $69.21. Again, the original invoice amount of $239.21 was retrieved from the INVOICE table used in this query, and this value is not – and must not be – updated.
• Customer 10020 (Rieber) received a $10 refund (see transaction #9 on February 17, 2014). This transaction was applied to the outstanding balance of $92.19 (see transaction #8) for invoice #12, thus reducing the remaining balance for invoice #12 from $92.19 to $82.19.
If the customer comes in to make a payment on account, the system's end user must be able to query the INVOICE table to find the invoices with outstanding invoice balances. The customer then makes a payment to a specific invoice in the ACCT_TRANSACTION table, and the applications software will update both the remaining balance in the ACCT_TRANSACTION table and the customer balance in the CUSTOMER table. If you want to know the entire payment history for each invoice, you can write the query to group the results by invoice number. Figure PC.4D shows the results, and a SQL sketch of such a query follows the list below. It is – again – worth noting that such capability is provided at the database design level.

Figure PC.4D The RC_Hardware Invoice Payment History

As you discuss Figure PC.4D, note that all the payment transactions for each invoice are easily traced. For example:
• The total charge placed on invoice #1 is $239.21. The initial payment on February 3, 2014 was $100.00, leaving a balance of $139.21.
• The next payment on invoice #1 was made on February 15. This payment of $50.00 leaves a balance of $89.21. Note that the invoice amount, $239.21, is stored in the INVOICE table and this amount must not be changed.
• The next payment on invoice #1 was made on February 17. This payment of $20.00 leaves a balance of $69.21. Again note that the invoice amount, $239.21, is stored in the INVOICE table and this amount must not be changed.
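A sketch of the invoice payment history query behind Figure PC.4D follows. TRANS_AMOUNT and TRANS_INV_BALANCE are the attribute names used above; the remaining names (INV_NUM, INV_AMOUNT, TRANS_NUM, and TRANS_DATE) are assumptions about the RC_Hardware column names, so adjust them to the actual table structures.

SELECT INVOICE.INV_NUM, INV_AMOUNT, TRANS_NUM, TRANS_DATE,
       TRANS_AMOUNT, TRANS_INV_BALANCE
FROM   INVOICE JOIN ACCT_TRANSACTION
       ON INVOICE.INV_NUM = ACCT_TRANSACTION.INV_NUM
ORDER BY INVOICE.INV_NUM, TRANS_DATE;

Ordering by invoice number and then by transaction date yields the per-invoice history shown in Figure PC.4D.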
5. Verify the conceptual model you created in Appendix B, Problem 7. Create a data dictionary for the verified model.

Compare the ERD shown in Figure PC.5A to the ERD shown in Figure PD.7A (see Appendix D) to see the impact of the verification process. Note the ternary relationship among LOG_LINE, PART, and EMPLOYEE, implemented through the SIGN_OUT entity. This relationship enables the end user to track all parts used in each of the log lines for each of the logs and to verify that the parts that were signed out for the log line were, in fact, used in that log line's maintenance procedure. Use the Data Dictionary format shown in Chapter 6, Table 6.3 as your data dictionary template.

Figure PC.5A The Verified Crow's Foot ERD for ROBCOR Aircraft Service

The basic ERD shown in Figure PC.5A can easily be modified to incorporate additional tracking capability for a host of other requirements. For example, note that Figure PC.5B includes all the educational, training, and testing options for ROBCOR Aircraft Service's employees. Given the growing regulatory environment and increasingly restrictive insurance requirements, such detailed tracking requirements are becoming more common in a wide range of different types of business operations. Therefore, discussions about the tracking requirements in the production database design are very productive.

Figure PC.5B The Modified Crow's Foot ERD for ROBCOR Aircraft Service

To help you demonstrate the use of the composite PKs in the EMP_TEST, EDUCATION, and TRAINING entities, we have implemented a segment of the FlyFar design in the FlyFar database. The relational diagram for the FlyFar database is shown in Figure PC.5C.

NOTE
The following screen images are based on the database named FlyFar. This database, stored in MS Access format, may be found on your instructor's CD. If you want your students to write the applications for this segment of the database, you will find that the appropriate tables are available. (We have added attributes in various tables to enhance their information content.)

Figure PC.5C The Relational Diagram for the FlyFar Database Segment

Note the effect of the composite PKs in the composite entities shown in Figure PC.5C. If you examine the last record in the EMP_TEST table shown in Figure PC.5D, you will see that this attempted record entry duplicates a previously entered record. However, note that the DBMS – in this case, Microsoft Access – has caught the attempted duplication. (The use of the composite PK – EMP_NUM + TEST_CODE + EMPTEST_DATE – requires the PK entries to be unique in order to avoid an entity integrity violation. Therefore, the system catches the duplicate record before you have a chance to save it. In fact, to avoid the entity integrity violation, the DBMS will not permit you to save the duplicate record.)

Figure PC.5D A Duplicate Record Warning

If you change the test date to indicate that the test result to be entered is different from an earlier test result, the DBMS will accept the data entry. (Note that the FAR135-w test was taken twice by employee 105: once on 22-Jan-2013 and once on 01-Mar-2014.)

Remind your students that you can also create a single-attribute, system-generated PK named EMPTEST_NUM for the tblEMPTEST table in Figure PC.5C. This action will convert the composite (weak) EMP_TEST entity to a strong entity. (The EMP_NUM and TEST_CODE remain as foreign keys.) However, if you still want to avoid the duplication of records – a very desirable feature – you must maintain a candidate key composed of EMP_NUM, TEST_CODE, and EMPTEST_DATE, and you must set the index properties to "required" and "unique" for each of the attributes in that candidate key. (The same features may be used in the tblEDUCATION and tblTRAINING tables.) Whether you use a single-attribute PK or a composite PK may depend on specified system transaction and/or tracking requirements. The single-attribute PK/composite PK decision is often a function of professional judgment – clearly, the composite PKs work well in the original design shown in Figure PC.5B. However, if a PK is to be referenced by the FK(s) in one or more related tables, the creation of a single-attribute PK is appropriate. In fact, trying to create a relationship between a FK in one table and a composite PK in a related table will quickly illustrate the need for a single-attribute PK. In any case, query design becomes a more complex task when relationships based on composite PKs are traced through several levels.
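In SQL terms, the single-attribute PK alternative for the EMP_TEST entity amounts to the sketch below. The data types, the EMPTEST_SCORE attribute, and the referenced TEST table are assumptions; the UNIQUE constraint plays the role of MS Access's "required" and "unique" index properties by blocking duplicate (EMP_NUM, TEST_CODE, EMPTEST_DATE) combinations even though the surrogate EMPTEST_NUM is now the primary key.

CREATE TABLE EMP_TEST (
    EMPTEST_NUM   INTEGER  NOT NULL PRIMARY KEY,
    EMP_NUM       INTEGER  NOT NULL,
    TEST_CODE     CHAR(10) NOT NULL,
    EMPTEST_DATE  DATE     NOT NULL,
    EMPTEST_SCORE NUMERIC(3,0),
    UNIQUE (EMP_NUM, TEST_CODE, EMPTEST_DATE),
    FOREIGN KEY (EMP_NUM)   REFERENCES EMPLOYEE (EMP_NUM),
    FOREIGN KEY (TEST_CODE) REFERENCES TEST (TEST_CODE));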
6. Design a system that will enable an advisor to bring up the student's complete performance record at the university. A sample output screen should look like the one shown in Table PC.6.

Table PC.6 The Student Transcript for Problem 6
Name: Xxxxxxxxxxxxxxxxx X. Xxxxxxxxxxxxxxxxxxxxxxx                    Page # of ##
Department: xxxxxxxxxxxxxxxxxxxxxxx        Major: xxxxxxxxxxxxxx
Social Security Number: ###-##-####        Report Date: ##/Xxx/####

Spring, 20XX
Course                             Hours   Grade   Grade points
ENG 111 (Freshman English)         3       B       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Total this semester                ##              ##     GPA: #.##
Total to date                      ###             ###    Cumulative GPA: #.##

Summer, 20XX
Course                             Hours   Grade   Grade points
CIS 300 (Computers in Society)     3       A       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Total this semester                ##              ##     GPA: #.##
Total to date                      ###             ###    Cumulative GPA: #.##

Fall, 20XX
Course                             Hours   Grade   Grade points
CIS 400 (Systems Analysis)         3       B       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Xxxxxxxxxxxxxxxxxxxxxxxxxxx        #       X       ##
Total this semester                ##              ##     GPA: #.##
Total to date                      ###             ###    Cumulative GPA: #.##

Note that this problem is, basically, an extension of the database design developed in Chapter 4's discussion of Tiny College. We merely need to expand the presentation to enable us to develop the required outputs.

The Development of the ERD

To satisfy the requirements, the ERD must be based on (at least) the following business rules:
1. A department has many students, and each student "belongs" to only one department.
2. A student takes many classes, and each class is taken by many students.
3. A student may enroll in a class one or more times. Naturally, if a class is taken more than once, that "repeat" class is taken in a different semester.
4. A class is a section of a course; i.e., a course can yield many classes, but each class references only one course. For example, two sections of the course described by CIS483 (Database Systems, 3 credit hours, Prerequisites: 9 hours of CIS courses, including CIS370, Systems Analysis) may be taught in the Fall and Spring semesters, while the course may not be offered in the Summer session. (Since a course is not necessarily offered each semester, CLASS is optional to COURSE.)
5. Each course belongs to a department. For example, the English department would not offer a Database course.
The database should include at least the following components:
DEPARTMENT (DEPT_CODE, DEPT_NAME)
STUDENT (STU_NUM, STU_LNAME, STU_FNAME, STU_INITIAL, DEPT_CODE)
   DEPT_CODE references DEPARTMENT
COURSE (CRS_CODE, CRS_DESC, CRS_CREDIT_HOURS)
CLASS (CLASS_ID, CRS_CODE, CLASS_PLACE, CLASS_TIME)
   CRS_CODE references COURSE
ENROLL (STU_NUM, CLASS_CODE, ENROLL_SEMESTER, ENROLL_CRS_CREDITS, ENROLL_CRS_NAME, ENROLL_GRADE)
   STU_NUM references STUDENT, CLASS_CODE references CLASS
Note 1: The participation of SEMESTER allows a student to register for a class one or more times, but only one time per semester.
Note 2: The ENROLL entity includes the course description and course credits, because the course name and its credits may change over time. Therefore, you cannot count on the current course name and credit value to reconstruct previous course names and credit hours used on a transcript, which is a historical record. Naturally, to avoid data anomalies, the applications software should be written to make sure that the system transfers current course data to the current transcript record.

NOTE
To keep the model simple, we have not included such "obvious" entities as MAJOR, connected to STUDENT and DEPARTMENT, the PROFESSOR who teaches CLASSES and who may chair the DEPARTMENT, the COLLEGE to which the DEPARTMENT belongs, etc. These details may be discussed in connection with the Tiny College database discussed in Chapter 4, "Entity Relationship Modeling." Given this simplification, the DEPARTMENT used in this example does not have any foreign keys.

Verification of the ER model

Required Output: Selected Student Record

The required report can be easily generated through the use of the tables depicted in our database model. The SQL code that will generate the required information will look like this:

SELECT STUDENT.STU_NUM, STU_LNAME, DEPARTMENT.DEPT_CODE, DEPT_NAME, ENROLL_SEMESTER,
       ENROLL_CRS_CREDITS, ENROLL_CRS_NAME, ENROLL_GRADE
FROM   STUDENT, DEPARTMENT, ENROLL, CLASS, COURSE
WHERE  STUDENT.STU_NUM = ENROLL.STU_NUM
AND    CLASS.CLASS_ID = ENROLL.CLASS_CODE
AND    DEPARTMENT.DEPT_CODE = STUDENT.DEPT_CODE
AND    CLASS.CRS_CODE = COURSE.CRS_CODE
ORDER BY ENROLL.STU_NUM, ENROLL_SEMESTER, ENROLL_CRS_NAME;

The previous SQL query generates the data needed for the report. A specific output format may be created by using the DBMS's report generator or by using a 3GL programming language such as COBOL or C. Also, note that the "Grade points" column in the Student Record is a computed column that is produced by multiplying the CRS_CREDIT in the COURSE table by the numeric value equivalent to the letter ENROLL_GRADE in the ENROLL table. To compute the value for such a column, the programmer uses a conversion table such as the one shown in Table P6.1. (A SQL sketch of this computation appears after the notes on Figure PC.6 below.)

Table P6.1 A Gradepoint Conversion Table
Letter Grade   Numeric Value
A              4
B              3
C              2
D              1
F              0

When the verification process is completed, the ERD looks like the one shown in Figure PC.6.

Figure PC.6 The Crow's Foot ERD for the (Transcript-based) Student Advising System

NOTE
As you discuss the ERD shown in Figure PC.6, note that optionalities are often used for operational reasons. For example, keeping CLASS optional to COURSE means that you don't have to generate a class when a new course is put into a catalog. (In this case, the optionality also reflects the business rule "not all courses generate classes each semester.") Keeping ENROLL optional to both CLASS and STUDENT means that you won't have to generate a dummy record in the ENROLL table when you sign up a new student or when you generate a new class entry in the registration schedule.
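The SQL sketch of the grade-point computation mentioned in the discussion of Table P6.1 follows. It applies Table P6.1's conversion with a CASE expression and uses the historical credit hours stored in ENROLL; the conversion could equally well be stored in its own small table and joined in.

SELECT STU_NUM, ENROLL_SEMESTER, ENROLL_CRS_NAME, ENROLL_CRS_CREDITS,
       ENROLL_GRADE,
       ENROLL_CRS_CREDITS *
       CASE ENROLL_GRADE
            WHEN 'A' THEN 4
            WHEN 'B' THEN 3
            WHEN 'C' THEN 2
            WHEN 'D' THEN 1
            ELSE 0
       END AS GRADE_POINTS
FROM   ENROLL
ORDER BY STU_NUM, ENROLL_SEMESTER;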
7. Design and verify a database application for one of your local not-for-profit organizations (for example, the Red Cross, the Salvation Army, your church, mosque, or synagogue). Create a data dictionary for the verified design.

Since this problem's solution depends on the selected organization, no solution can be presented here. However, the steps required in the solution are shown in discussion question 4. An abbreviated version is presented in Problem 1.

8. Using the information given in the physical design section (C.5), estimate the space requirements for the following entities:
RESERVATION
INV_TRANS
TR_ITEM
LOG
ITEM
INV_TYPE
(Hint: You may want to check Appendix B's Table B.3, A Sample Volume of Information Log.)

You must generate the data storage requirement for each of the tables. Therefore, begin by identifying the attribute characteristics and storage requirements. The supported data types depend on the database software. For example, some software supports the Julian date format, while other software requires dates to be identified as strings. Even date strings vary in length, depending on the default format (18-Mar-2014 or 3/18/14, for example). Therefore, the correct answer depends on the DBMS you use. In short, the following data storage requirements are meant to be used for discussion purposes only. (Only a few sample tables are shown, but they are sufficient to illustrate the process and to serve as the basis for a discussion about required table spaces.)

Table: RESERVATION (4 per week, 14 weeks per semester, 56 reservations per semester)
Attribute   Data Type   Storage (bytes)
RES_ID      INT         4
RES_DATE    DATE        8
USER_ID     CHAR(11)    11
LA_ID       CHAR(11)    11
Row length: 34 bytes; number of rows: 56; total: 1,904 bytes

Table: INV_TRANS (80 per week, 14 weeks per semester, 1,120 transactions per semester)
Attribute       Data Type   Storage (bytes)
TRANS_ID        INT         4
TRANS_TYPE      CHAR(1)     1
TRANS_PURPOSE   CHAR(2)     2
TRANS_DATE      DATE        8
LA_ID           CHAR(11)    11
USER_ID         CHAR(11)    11
ORDER_ID        INT         4
TRANS_COMMENT   CHAR(50)    50
Row length: 91 bytes; number of rows: 1,120; total: 101,920 bytes

Table: TR_ITEM (240 per week, 14 weeks per semester, 3,360 per semester)
Attribute   Data Type     Storage (bytes)
TRANS_ID    INT           4
ITEM_ID     NUMBER(8,0)   8
LOC_ID      CHAR(10)      10
TRANS_QTY   INT           4
Row length: 26 bytes; number of rows: 3,360; total: 87,360 bytes

Table: LOG (5,000 per week, 14 weeks per semester, 70,000 log entries per semester)
Attribute    Data Type   Storage (bytes)
LOG_DATE     DATE        4
LOG_TIME     CHAR(12)    12
LOG_READER   CHAR(1)     1
USER_ID      CHAR(11)    11
Row length: 32 bytes; number of rows: 70,000; total: 2,240,000 bytes

Table: ITEM (890 identified)
Attribute          Data Type   Storage (bytes)
ITEM_ID            NUM(8,0)    8
TY_GROUP           CHAR(8)     8
ITEM_INV_ID        CHAR(7)     7
ITEM_DESCRIPTION   CHAR(10)    10
ITEM_QTY           INT         4
VEND_ID            CHAR(5)     5
ITEM_STATUS        CHAR(1)     1
ITEM_BUY_DATE      DATE        8
Row length: 86 bytes; number of rows: 890; total: 76,540 bytes

Table: INV_TYPE (15 categories)
Attribute        Data Type   Storage (bytes)
TY_GROUP         CHAR(8)     8
TY_CATEGORY      CHAR(2)     2
TY_CLASS         CHAR(2)     2
TY_TYPE          CHAR(2)     2
TY_SUBTYPE       CHAR(2)     2
TY_DESCRIPTION   CHAR(35)    35
TY_UNIT          CHAR(4)     4
Row length: 55 bytes; number of rows: 15; total: 825 bytes
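Adding the illustrative totals above gives a rough overall data-space estimate for these six tables: 1,904 + 101,920 + 87,360 + 2,240,000 + 76,540 + 825 = 2,508,549 bytes, or roughly 2.4 MB of raw row data. Keep in mind that this figure excludes indexes, system overhead, and growth, all of which must be taken into account before table spaces are allocated.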