
Chapter 6 Data Communication: Delivering Information Anywhere and Anytime

Learning Objectives
- Describe major applications of a data communication system.
- Explain the major components of a data communication system.
- Describe the major types of processing configurations.
- Explain the three types of networks.
- Describe the main network topologies.
- Explain important networking concepts, such as bandwidth, routing, routers, and the client/server model.
- Describe wireless and mobile technologies and networks.
- Discuss the importance of wireless security and the techniques used.
- Summarize the convergence phenomenon and its applications for business and personal use.

Detailed Chapter Outline

I. Defining Data Communication
Data communication is the electronic transfer of data from one location to another. An information system's effectiveness is measured in part by how efficiently it delivers information, and a data communication system is what enables an information system to carry out this function. By using the capabilities of a data communication system, organizations are not limited by physical boundaries. They can collaborate with other organizations, outsource certain functions to reduce costs, and provide customer services via data communication systems. E-collaboration is another main application of data communication.

A. Why Managers Need to Know About Data Communication
Data communication applications can enhance decision makers' efficiency and effectiveness in many ways. For example, data communication applications support just-in-time delivery of goods, which reduces inventory costs and improves the competitive edge. Data communication systems also make virtual organizations possible, and these can cross geographic boundaries to develop products more quickly and effectively. Data communication also enables organizations to use e-mail and electronic file transfer to improve efficiency and productivity.

Following are some of the ways data communication technologies affect the workplace:
- Online training for employees can be provided via virtual classrooms.
- Internet searches for information on products, services, and innovation keep employees up to date.
- The Internet and data communication systems facilitate lifelong learning, which will be an asset for knowledge workers of the future.
- Boundaries between work and personal life are less clear-cut as data communication becomes more available in both homes and businesses.
- Web and video conferencing are easier, which can reduce the costs of business travel.

Managers need a clear understanding of the following areas of data communication:
- The basics of data communication and networking
- The Internet, intranets, and extranets
- Wired and wireless networks
- Network security issues and measures
- Organizational and social effects of data communication
- Globalization issues
- Applications of data communication systems

E-collaborations and virtual meetings are other important applications of data communication systems for managers. These applications are cost effective and improve customer service.

II. Basic Components of a Data Communication System
A typical data communication system includes the following components:
- Sender and receiver devices
- Modems or routers
- Communication medium (channel)

Basic concepts in data communication include the following (a quick transfer-time sketch follows this list):
- Bandwidth is the amount of data that can be transferred from one point to another in a certain time period, usually one second.
- Attenuation is the loss of power in a signal as it travels from the sending device to the receiving device.
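The relationship between bandwidth and delivery time can be made concrete with a quick calculation. The sketch below is illustrative only: the 56,000 bps narrowband figure comes from this outline, while the 25 MB file size is an assumed value and the DSL and cable rates (7.1 Mbps and 16 Mbps) are the ones cited later in this section, treated here as ideal rates with no protocol overhead or attenuation.

```python
# Illustrative sketch: ideal transfer time = data volume (bits) / bandwidth (bits per second).
# File size and channel rates are assumptions for demonstration; real links add overhead.

def transfer_seconds(file_size_bytes: int, bandwidth_bps: int) -> float:
    """Ideal time to move a file across a channel, ignoring overhead and attenuation."""
    return (file_size_bytes * 8) / bandwidth_bps  # convert bytes to bits

file_size = 25 * 1_000_000  # an assumed 25 MB file

channels = {
    "Narrowband (56 Kbps)": 56_000,
    "DSL (7.1 Mbps)": 7_100_000,
    "Cable (16 Mbps)": 16_000_000,
}

for name, bps in channels.items():
    print(f"{name}: about {transfer_seconds(file_size, bps):,.0f} seconds")
```

At narrowband speed the transfer takes roughly an hour; over either broadband channel it takes seconds, which is why channel capacity matters so much for data-intensive applications.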
Data transmission channels are generally divided into two types: broadband and narrowband. In broadband data transmission, multiple pieces of data are sent simultaneously to increase the transmission rate. Narrowband is a voice-grade transmission channel capable of transmitting a maximum of 56,000 bps, so only a limited amount of information can be transferred in a specific period of time.

Before a communication link can be established between two devices, they must be synchronized, meaning that both devices must start and stop communicating at the same point. Synchronization is handled with protocols, rules that govern data communication, including error detection, message length, and transmission speed.

A. Sender and Receiver Devices
A sender and receiver device can take various forms:
- Input/output device, or "thin client"—used only for sending or receiving information; it has no processing power.
- Smart terminal—an input/output device that can perform certain processing tasks but is not a full-featured computer.
- Intelligent terminal, workstation, or personal computer—these serve as input/output devices or as stand-alone systems.
- Netbook computer—a low-cost, diskless computer used to connect to the Internet or a LAN.
- Minicomputers, mainframes, and supercomputers—these process data and send it to other devices, or they receive data that has been processed elsewhere, process it further, and then transmit it to other devices.
- Smartphones, mobile phones, MP3 players, PDAs, and game consoles—smartphones are mobile phones with advanced capabilities, such as e-mail and Web browsing, and most have a built-in keyboard or an external USB keyboard. A video game console is an electronic device for playing video games.

B. Modems
A modem (short for "modulator-demodulator") is a device that connects a user to the Internet. Dial-up, digital subscriber line (DSL), and cable access require modems to connect to the Internet. In today's broadband world, DSL or cable modems are common. Digital subscriber line (DSL), a common carrier service, is a high-speed service that uses ordinary phone lines. With DSL connections, users can receive data at up to 7.1 Mbps and send data at around 1 Mbps, although the actual speed is determined by proximity to the provider's location. Cable modems, on the other hand, use the same cable that connects to TVs for Internet connections; they can usually reach transmission speeds of about 16 Mbps.

C. Communication Media
Communication media, or channels, connect sender and receiver devices. They can be conducted (wired or guided) or radiated (wireless). Conducted media provide a physical path along which signals are transmitted, including twisted pair copper cable, coaxial cable, and fiber optics. Twisted pair copper cable consists of two copper lines twisted around each other and either shielded or unshielded. Coaxial cables are thick cables that can be used for both data and voice transmissions. Fiber-optic cables are glass tubes (half the diameter of a human hair) surrounded by concentric layers of glass, called "cladding," to form a light path. Fiber-optic cables have a higher capacity, smaller size, lighter weight, lower attenuation, and higher security than other cable types; they also have the highest bandwidth of any communication medium. Radiated media use an antenna for transmitting data through air or water.
Some of these media are based on "line of sight" (an open path between sending and receiving devices or antennas), including broadcast radio, terrestrial microwave, and satellite. Satellites link ground-based microwave transmitters/receivers, known as Earth stations, and are commonly used in long-distance telephone transmissions and TV signals. Terrestrial microwave systems use Earth-based transmitters and receivers and are often used for point-to-point links between buildings.

A communication medium can be a point-to-point or a multipoint system. In a point-to-point system, only one device at a time uses the medium. In a multipoint system, several devices share the same medium, and a transmission from one device can be sent to all other devices sharing the link.

III. Processing Configurations
Data communication systems can be used in several different configurations, depending on users' needs, types of applications, and responsiveness of the system. During the past 60 years, three types of processing configurations have emerged: centralized, decentralized, and distributed.

A. Centralized Processing
In a centralized processing system, all processing is done at one central computer. The main advantage of this configuration is being able to exercise tight control over system operations and applications. The main disadvantage is lack of responsiveness to users' needs, because the system and its users could be located far apart from each other. This configuration is not used much now.

B. Decentralized Processing
In decentralized processing, each user, department, or division (sometimes called an "organizational unit") has its own computer for performing processing tasks. A decentralized processing system is certainly more responsive to users than a centralized processing system. Decentralized systems have some drawbacks, including lack of coordination among organizational units, the high cost of having many systems, and duplication of efforts.

C. Distributed Processing
Distributed processing solves two main problems—the lack of responsiveness in centralized processing and the lack of coordination in decentralized processing—by maintaining centralized control and decentralizing operations. Some of the advantages of distributed processing include:
- Accessing unused processing power is possible.
- Distance and location are not limiting.
- Fault tolerance is improved because of the availability of redundant resources.
- Reliability is improved because system failures can be limited to only one site.
- The system is more responsive to user needs.
The disadvantages of distributed processing include:
- There may be more security and privacy challenges.
- There may be incompatibility between the various pieces of equipment.
- Managing the network can be challenging.

D. Open Systems Interconnection Model
The Open Systems Interconnection (OSI) model is a seven-layer architecture for defining how data is transmitted from computer to computer in a network. Each layer in the architecture performs a specific task:
- Application layer—serves as the window through which applications access network services. It performs different tasks, depending on the application, and provides services that support users' tasks, such as file transfers, database access, and e-mail.
- Presentation layer—formats message packets.
- Session layer—establishes a communication session between computers.
- Transport layer—generates the receiver's address and ensures the integrity of messages by making sure packets are delivered without error, in sequence, and with no loss or duplication.
- Network layer—routes messages.
- Data Link layer—oversees the establishment and control of the communication link.
- Physical layer—specifies the electrical connections between computers and the transmission medium; defines the physical medium used for communication.
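To make the layering idea concrete, the following sketch simulates, in a deliberately simplified way, how a message is wrapped in one header per layer on the sending device and unwrapped in reverse order on the receiving device. The layer names come from the OSI list above; the bracketed header contents and the wrap/unwrap helpers are invented teaching devices, not a real protocol stack.

```python
# A conceptual sketch of OSI-style encapsulation: each layer adds its own
# header on the way down and removes it on the way up. Header contents are
# invented for illustration; real protocols define their own formats.

LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link", "Physical"]

def wrap(message: str) -> str:
    """Encapsulate a message, top layer first, as it moves toward the wire."""
    for layer in LAYERS:
        message = f"[{layer}]{message}"
    return message

def unwrap(frame: str) -> str:
    """Reverse the process at the receiving device, outermost layer first."""
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer}]")
    return frame

frame = wrap("GET /report")   # roughly what would travel across the medium
print(frame)
print(unwrap(frame))          # the receiver recovers the original message
```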
IV. Types of Networks
There are three major types of networks: local area networks, wide area networks, and metropolitan area networks. In all these networks, computers are usually connected to the network via a network interface card (NIC), a hardware component that enables computers to communicate over a network. A NIC, also called an "adapter card," is the physical link between a network and a workstation, so it operates at the OSI model's Physical and Data Link layers.

A. Local Area Networks
A local area network (LAN) connects workstations and peripheral devices that are in close proximity. Usually, a LAN covers a limited geographical area, such as a building or campus, and one company owns it. Its data transfer speed varies from 100 Mbps to 10 Gbps. LANs are used most often to share resources, such as peripherals, files, and software. They are also used to integrate services, such as e-mail and file sharing. In a LAN environment, there are two basic terms to remember: Ethernet and Ethernet cable. Ethernet is a standard communication protocol embedded in software and hardware devices used for building a LAN. An Ethernet cable is used to connect computers, hubs, switches, and routers to a network.

B. Wide Area Networks
A wide area network (WAN) can span several cities, states, or even countries, and it is usually owned by several different parties. The data transfer speed depends on the speed of its interconnections (called "links") and can vary from 28.8 Kbps to 155 Mbps. A WAN can use many different communication media (coaxial cables, satellite, and fiber optics) and terminals of different sizes and sophistication (PCs, workstations, and mainframes); it can also be connected to other networks.

C. Metropolitan Area Networks
The Institute of Electrical and Electronics Engineers (IEEE) developed specifications for a public, independent, high-speed network that connects a variety of data communication systems, including LANs and WANs, in metropolitan areas. This network, called a metropolitan area network (MAN), is designed to handle data communication for multiple organizations in a city and sometimes nearby cities as well. The data transfer speed varies from 34 Mbps to 155 Mbps.

V. Network Topologies
A network topology represents a network's physical layout, including the arrangement of computers and cables. Five common topologies are star, ring, bus, hierarchical, and mesh.

A. Star Topology
The star topology usually consists of a central computer (host computer, often a server) and a series of nodes (typically, workstations or peripheral devices). The host computer supplies the main processing power. If a node fails, it does not affect the network's operation, but if the host computer fails, the entire network goes down. Advantages of the star topology include:
- Cable layouts are easy to modify.
- Centralized control makes detecting problems easier.
- Nodes can be added to the network easily.
- It is more effective at handling heavy but short bursts of traffic.
Disadvantages of the star topology include:
- If the central host fails, the entire network becomes inoperable.
- Many cables are required, which increases cost.

B. Ring Topology
In a ring topology, no host computer is required because each computer manages its own connectivity. Computers and devices are arranged in a circle so each node is connected to two other nodes: its upstream neighbor and its downstream neighbor. Transmission is in one direction, and nodes repeat a signal before passing it to the downstream neighbor. If any link between nodes is severed, the entire network is affected, and failure of a single node disrupts the entire network. A ring topology needs less cable than a star topology, but it is similar to a star topology in that it is better for handling heavy but short bursts of traffic.

C. Bus Topology
The bus topology (also called "linear bus") connects nodes along a network segment, but the ends of the cable are not connected, as they are in a ring topology. A hardware device called a terminator is used at each end of the cable to absorb the signal. Without a terminator, the signal would bounce back and forth along the length of the cable and prevent network communication. Advantages of the bus topology include:
- It is easy to extend.
- It is very reliable.
- The wiring layout is simple and uses the least amount of cable of any topology, which keeps costs down.
- It handles steady (even) traffic well.
Disadvantages of the bus topology include:
- Fault diagnosis is difficult.
- The bus cable can be a bottleneck when network traffic is heavy.

D. Hierarchical Topology
A hierarchical topology (also called a "tree") combines computers with different processing strengths in different organizational levels. Traditional mainframe networks use a hierarchical topology: the mainframe computer is at the top, front-end processors (FEPs) are at the next level, controllers and multiplexers are at the next level, and terminals and workstations are at the bottom level. A controller is a hardware and software device that controls data transfer from a computer to a peripheral device (examples are a monitor, a printer, or a keyboard) and vice versa. A multiplexer is a hardware device that allows several nodes to share one communication channel. The hierarchical topology offers a great deal of network control and lower cost, compared to a star topology. Its disadvantages include that network expansion may pose a problem, and there could be traffic congestion at the root and higher-level nodes.

E. Mesh Topology
In a mesh topology (also called "plex" or "interconnected"), every node (which can differ in size and configuration from the others) is connected to every other node. This topology is highly reliable. Failure of one or a few nodes does not usually cause a major problem in network operation, because many other nodes are available.
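The cabling and reliability trade-offs among these topologies can be made concrete by counting links. The sketch below applies standard link-count arithmetic for a star, ring, bus, and full mesh; the ten-node network size is an arbitrary assumption, not a figure from the text.

```python
# Illustrative link counts for an n-node network under each topology.
# Formulas are the standard combinatorial counts; n = 10 is an arbitrary example.

def star_links(n: int) -> int:
    return n                  # one cable from each node to the central host

def ring_links(n: int) -> int:
    return n                  # each node connects to its downstream neighbor

def mesh_links(n: int) -> int:
    return n * (n - 1) // 2   # every node pairs with every other node

n = 10
print(f"Star: {star_links(n)} cables (plus the host as a single point of failure)")
print(f"Ring: {ring_links(n)} cables (one severed link affects the whole ring)")
print(f"Bus:  1 shared backbone cable with {n} drops")
print(f"Mesh: {mesh_links(n)} cables (redundant paths, highest cabling cost)")
```

For ten nodes, the full mesh needs 45 cables versus 10 for the star or ring, which is the price of its redundancy and reliability.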
VI. Major Networking Concepts

A. Protocols
Protocols are agreed-on methods and rules that electronic devices use to exchange information. Some protocols deal with hardware connections, and others control data transmission and file transfers. Protocols also specify the format of message packets sent between computers.

B. Transmission Control Protocol/Internet Protocol
Transmission Control Protocol/Internet Protocol (TCP/IP) is an industry-standard suite of communication protocols. TCP/IP's main advantage is that it enables interoperability—in other words, it allows the linking of devices running on many different platforms. Two of the major protocols in the TCP/IP suite are Transmission Control Protocol (TCP), which operates at the OSI model's Transport layer, and Internet Protocol (IP), which operates at the OSI model's Network layer. TCP's primary functions are establishing a link between hosts, ensuring message integrity, sequencing and acknowledging packet delivery, and regulating data flow between source and destination nodes. IP is responsible for packet forwarding. An IP address consists of 4 bytes in IPv4 or 16 bytes in IPv6 (32 bits or 128 bits) and is divided into two parts: a network address and a node address. Computers on the same network must use the same network address, but each computer must have a unique node address.
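The split between the network address and the node address can be demonstrated with Python's standard ipaddress module once a network prefix is known. The address 192.168.10.25 and the /24 prefix below are hypothetical values chosen for illustration, not addresses from the text.

```python
# Splitting an IPv4 address into its network and node (host) portions.
# The address and prefix length are illustrative assumptions.
import ipaddress

interface = ipaddress.ip_interface("192.168.10.25/24")

print("Full address:   ", interface.ip)        # 192.168.10.25
print("Network address:", interface.network)   # 192.168.10.0/24, shared by all hosts on this network
print("Netmask:        ", interface.netmask)   # 255.255.255.0
print("Node portion:   ", int(interface.ip) - int(interface.network.network_address))  # 25
```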
C. Routing
Packet switching is a network communication method that divides data into small packets and transmits them to an address, where they are reassembled. A packet is a collection of binary digits—including message data and control characters for formatting and transmitting—sent from computer to computer over a network. The path or route that data takes on a network is determined by the type of network and the software used to transmit data. The process of deciding which path the data takes is called routing. Routing is similar to the path one takes from home to work. A packet's route can change each time a connection is made, based on the amount of traffic and the availability of the circuit. The decision about which route to follow is made in one of two ways: at a central location (centralized routing) or at each node along the route (distributed routing). In most cases, a routing table, generated automatically by software, is used to determine the best possible route for the packet. The routing table lists nodes on a network and the path to each node, along with alternate routes and the speed of existing routes.

In centralized routing, one node is in charge of selecting the path for all packets. This node, considered the network routing manager, stores the routing table, and any changes to a route must be made at this node. All network nodes periodically forward status information on the number of inbound, outbound, and processed messages to the network routing manager. The network routing manager, therefore, has an overview of the network and can determine whether any part of it is underused or overused.

Distributed routing relies on each node to calculate the best possible route. Each node contains its own routing table with current information on the status of adjacent nodes so packets can follow the best possible route. Each node also sends status messages periodically so adjacent nodes can update their tables. Distributed routing eliminates the problems caused by having the routing table at a centralized site. If one node is not operational, routing tables at other nodes are updated, and the packet is sent along a different path.

D. Routers
A router is a network connection device containing software that connects network systems and controls traffic flow between them. Routers operate at the Network layer of the OSI model and handle routing packets on a network. Cisco Systems and Juniper Networks are two major router vendors. A router performs the same functions as a bridge but is a more sophisticated device. A bridge connects two LANs using the same protocol, and the communication medium does not have to be the same on both LANs. Routers can also choose the best possible path for packets based on distance or cost. A router can also be used to isolate a portion of the LAN from the rest of the network; this process is called "segmenting." There are two types of routers: static and dynamic. A static router requires the network routing manager to give it information about which addresses are on which network. A dynamic router can build tables that identify addresses on each network. Dynamic routers are used more often now, particularly on the Internet.
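To illustrate what choosing the "best possible path based on distance or cost" involves, the sketch below runs a standard shortest-path (Dijkstra) calculation over a small, invented four-node network. The node names and link costs are assumptions for illustration; real routers derive equivalent information from their routing tables and routing protocols rather than from a hard-coded dictionary.

```python
# A minimal sketch of least-cost path selection over a small, made-up network.
# Node names and link costs are invented; real routing tables are built and
# updated from status exchanges with neighboring nodes.
import heapq

# cost (e.g., distance or delay) of each direct link between nodes
links = {
    "A": {"B": 4, "C": 1},
    "B": {"A": 4, "C": 2, "D": 5},
    "C": {"A": 1, "B": 2, "D": 8},
    "D": {"B": 5, "C": 8},
}

def best_route(source: str, destination: str):
    """Return (total_cost, path) using Dijkstra's shortest-path algorithm."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in links[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

print(best_route("A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```

In this toy network, the route A-C-B-D (total cost 8) beats both two-hop alternatives, A-B-D and A-C-D, which each cost 9.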
E. Client/Server Model
In the client/server model, software runs on the local computer (the client) and communicates with the remote server to request information or services. A server is a remote computer on the network that provides information or services in response to client requests. In the most basic client/server configuration, the following events usually take place:
1. The user runs client software to create a query.
2. The client accepts the request and formats it so the server can understand it.
3. The client sends the request to the server over the network.
4. The server receives and processes the query.
5. The results are sent to the client.
6. The results are formatted and displayed to the user in an understandable format.
(A minimal code sketch of this request/response exchange appears at the end of this section.)

The main advantage of the client/server architecture is its scalability, meaning its ability to grow. Client/server architectures can be scaled horizontally or vertically. Horizontal scaling means adding more workstations (clients), and vertical scaling means migrating the network to larger, faster servers. To understand client/server architecture better, one can think of it in terms of these three levels of logic:
- Presentation logic: This is the top level, which is concerned with how data is returned to the client.
- Application logic: This is concerned with the software processing requests for users.
- Data management logic: This is concerned with data management and storage operations.
The real challenge in a client/server architecture is how to divide these three logics between the client and server.

Two-Tier Architecture
In the two-tier architecture, a client (tier one) communicates directly with the server (tier two). The presentation logic is always on the client, and the data management logic is on the server. The application logic can be on the client, on the server, or split between them, although it is usually on the client side. This architecture is effective in small workgroups. Because application logic is usually on the client side, a two-tier architecture has the advantages of application development speed, simplicity, and power. On the downside, any changes in application logic, such as stored procedures and validation rules for databases, require major modifications of clients, resulting in upgrade and modification costs.

N-Tier Architectures
In a two-tier architecture, if the application logic is modified, it can affect the processing workload. An n-tier architecture attempts to balance the workload between client and server by removing application processing from both the client and server and placing it on a middle-tier server. The most common n-tier architecture is the three-tier architecture. Improving network performance is a major advantage of n-tier architecture.
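As a concrete illustration of the six client/server steps listed above, the sketch below pairs a tiny TCP server with a client on the same machine, using Python's standard socket module. The port number, the query text, and the upper-casing "processing" step are hypothetical stand-ins; a real system would add query formatting, data management logic, and presentation logic as described in the outline.

```python
# Minimal client/server sketch: the client sends a query, the server processes
# it and returns a result. Port, query, and "processing" are illustrative only.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # assumed local address and port

# Server side: bind and listen first so the client cannot connect too early.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def handle_one_request():
    conn, _ = srv.accept()                 # step 4: server receives the query
    with conn:
        query = conn.recv(1024).decode()
        result = query.upper()             # stand-in for real processing (e.g., a database lookup)
        conn.sendall(result.encode())      # step 5: results are sent to the client
    srv.close()

threading.Thread(target=handle_one_request, daemon=True).start()

# Client side
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))              # step 3: the request travels over the network
    cli.sendall(b"list overdue invoices")  # steps 1-2: the user's query, formatted for the server
    reply = cli.recv(1024).decode()

print("Server replied:", reply)            # step 6: results are displayed to the user
```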
VII. Wireless and Mobile Networks
A wireless network is a network that uses wireless instead of wired technology. A mobile network (also called a "cellular network") is a network operating on a radio frequency (RF), consisting of radio cells, each served by a fixed transmitter, known as a "cell site" or "base station." Wireless and mobile networks have the advantages of mobility, flexibility, ease of installation, and low cost. These systems are particularly effective when no infrastructure (such as communication lines or established wired networks) is in place.

Drawbacks of mobile and wireless networks include the following:
- Limited throughput—throughput is similar to bandwidth; it is the amount of data transferred or processed in a specified time, usually one second.
- Limited range—the distance a signal can travel without losing strength is more limited in mobile and wireless networks.
- In-building penetration problems—wireless signals might not be able to pass through certain building materials or might have difficulty passing through walls.
- Vulnerability to frequency noise—interference from other signals, usually called "noise," can cause transmission problems.
- Security—wireless network traffic can be captured with sniffers.

There are various definitions of mobile and wireless computing. Mobile computing might simply mean using a laptop away from the office or using a modem to access the corporate network from a client's office. Neither activity requires wireless technology. Wireless LANs usually refer to proprietary LANs, meaning they use a certain vendor's specifications. Wireless networks have many advantages. For example, healthcare workers who use handheld or notebook computers or tablets (such as the iPad) with wireless capabilities are able to get patient information quickly. Because the information can be sent to and saved on a centralized database, it is available to other workers instantly.

A. Wireless Technologies
In a wireless environment, portable computers use small antennas to communicate with radio towers in the surrounding area. Satellites in near-Earth orbit pick up low-powered signals from mobile and portable network devices. Wireless technologies generally fall into two groups:
- Wireless LANs (WLANs)—like their wired counterparts, WLANs are characterized by having one owner and covering a limited area.
- Wireless WANs (WWANs)—these networks cover a broader area than WLANs and include cellular networks, cellular digital packet data (CDPD), paging networks, personal communication systems (PCS), packet radio networks, broadband personal communications systems (BPCS), microwave networks, and satellite networks.

B. Mobile Networks
Mobile networks have a three-part architecture:
- Base stations send and receive transmissions to and from subscribers.
- Mobile telephone switching offices (MTSOs) transfer calls between national or global phone networks and base stations.
- Subscribers (users) connect to base stations by using mobile communication devices.
Mobile devices register by subscribing to a carrier service (provider) licensed for certain geographic areas. When a mobile unit is outside its provider's coverage area, roaming occurs. To improve the efficiency and quality of digital communications, two technologies have been developed: Time Division Multiple Access and Code Division Multiple Access. Time Division Multiple Access (TDMA) divides each channel into six time slots, and each user is allocated two slots: one for transmission and one for reception. Code Division Multiple Access (CDMA) transmits multiple encoded messages over a wide frequency and then decodes them at the receiving end.

Advanced Mobile Phone System (AMPS) is the analog mobile phone standard developed by Bell Labs and introduced in 1983. Digital technologies, however, are more widely used because of higher data capacities, improved voice quality, encryption capabilities, and integration with other digital networks. Many businesses use wireless and mobile networks to improve customer service and reduce operational costs.

VIII. Wireless Security
Security is important in any type of network, but it is especially important in a wireless network, because anyone walking or driving within the range of an access point (AP), even if outside the home or office, can use the network. An AP is the part of a WLAN that connects it to other networks. There are several techniques for improving the security of a wireless network:
- SSID (Service Set Identifier)—all client computers that try to access the AP are required to include an SSID in all their packets. A packet without an SSID is not processed by the AP.
- WEP (Wired Equivalent Privacy)—a key must be manually entered into the AP and the client computer. The key encrypts the message before transmission.
- EAP (Extensible Authentication Protocol)—EAP keys are dynamically generated based on the user's ID and password. When the user logs out of the system, the key is discarded.
- WPA (Wi-Fi Protected Access)—this technique combines the strongest features of WEP and EAP. Keys are fixed, as in WEP, or dynamically changed, as in EAP. However, the WPA key is longer than the WEP key; therefore, it is more difficult to break.
- WPA2 or 802.11i—this technique uses EAP to obtain a master key. With this master key, a user's computer and the AP negotiate for a key that will be used for a session. After the session is terminated, the key is discarded.

IX. Convergence of Voice, Video, and Data
In data communication, convergence refers to integrating voice, video, and data so that multimedia information can be used for decision making. In the past, convergence required major network upgrades, because video requires much more bandwidth. This has changed, however, with the availability of high-speed technologies, such as Asynchronous Transfer Mode (ATM), Gigabit Ethernet, and 3G and 4G networks, and with more demand for applications using these technologies. Gigabit Ethernet is a LAN transmission standard capable of 1 Gbps and 10 Gbps data transfer speeds. ATM is a packet-switching service that operates at 25 Mbps and 622 Mbps, with a maximum speed of up to 10 Gbps. The 3G network is the third generation of mobile networking and telecommunications. It has increased the rate and quality of information transfer, improved video and broadband wireless data transfers, and improved the quality of Internet telephony, or Voice over Internet Protocol (VoIP). More content providers, network operators, telecommunication companies, and broadcasting networks, among others, have moved toward convergence. Convergence is possible now because of a combination of technological innovation, changes in market structure, and regulatory reform.
Common applications of convergence include the following:
- E-commerce
- More entertainment options, as the number of TV channels substantially increases and movies and videos on demand become more available
- Increased availability and affordability of video and computer conferencing
- Consumer products and services, such as virtual classrooms, telecommuting, and virtual reality
As a tool for delivering services, the Internet is an important contributor to the convergence phenomenon.

Key Terms
- Data communication is the electronic transfer of data from one location to another. (P. 115)
- Bandwidth is the amount of data that can be transferred from one point to another in a certain time period, usually one second. (P. 117)
- Attenuation is the loss of power in a signal as it travels from the sending device to the receiving device. (P. 117)
- In broadband data transmission, multiple pieces of data are sent simultaneously to increase the transmission rate. (P. 117)
- Narrowband is a voice-grade transmission channel capable of transmitting a maximum of 56,000 bps, so only a limited amount of information can be transferred in a specific period of time. (P. 117)
- Protocols are rules that govern data communication, including error detection, message length, and transmission speed. (P. 117)
- A modem (short for "modulator-demodulator") is a device that connects a user to the Internet. (P. 117)
- Digital subscriber line (DSL), a common carrier service, is a high-speed service that uses ordinary phone lines. (P. 118)
- Communication media, or channels, connect sender and receiver devices. They can be conducted or radiated. (P. 118)
- Conducted media provide a physical path along which signals are transmitted, including twisted pair copper cable, coaxial cable, and fiber optics. (P. 118)
- Radiated media use an antenna for transmitting data through air or water. (P. 119)
- In a centralized processing system, all processing is done at one central computer. (P. 119)
- In decentralized processing, each user, department, or division (sometimes called an "organizational unit") has its own computer for performing processing tasks. (P. 120)
- Distributed processing maintains centralized control and decentralized operations. Processing power is distributed among several locations. (P. 120)
- The Open Systems Interconnection (OSI) model is a seven-layer architecture for defining how data is transmitted from computer to computer in a network, from the physical connection to the network to the applications that users run. It also standardizes interactions between network computers exchanging information. (P. 120)
- A network interface card (NIC) is a hardware component that enables computers to communicate over a network. (P. 121)
- A local area network (LAN) connects workstations and peripheral devices that are in close proximity. (P. 122)
- A wide area network (WAN) can span several cities, states, or even countries, and it is usually owned by several different parties. (P. 122)
- A metropolitan area network (MAN) is designed to handle data communication for multiple organizations in a city and sometimes nearby cities as well. (P. 123)
- A network topology represents a network's physical layout, including the arrangement of computers and cables. (P. 123)
- The star topology usually consists of a central computer (host computer, often a server) and a series of nodes (typically, workstations or peripheral devices). (P. 123)
- In a ring topology, no host computer is required because each computer manages its own connectivity. (P. 124)
- The bus topology (also called "linear bus") connects nodes along a network segment, but the ends of the cable are not connected, as they are in a ring topology. (P. 124)
- A hierarchical topology (also called a "tree") combines computers with different processing strengths in different organizational levels. (P. 125)
- A controller is a hardware and software device that controls data transfer from a computer to a peripheral device (examples are a monitor, a printer, or a keyboard) and vice versa. (P. 125)
- A multiplexer is a hardware device that allows several nodes to share one communication channel. (P. 125)
- In a mesh topology (also called "plex" or "interconnected"), every node (which can differ in size and configuration from the others) is connected to every other node. (P. 125)
- Transmission Control Protocol/Internet Protocol (TCP/IP) is an industry-standard suite of communication protocols that enables interoperability. (P. 125)
- A packet is a collection of binary digits—including message data and control characters for formatting and transmitting—sent from computer to computer over a network. (P. 126)
- Routing is the process of deciding which path to take on a network. This is determined by the type of network and the software used to transmit data. (P. 126)
- A routing table, generated automatically by software, is used to determine the best possible route for a packet. (P. 126)
- In centralized routing, one node is in charge of selecting the path for all packets. This node, considered the network routing manager, stores the routing table, and any changes to a route must be made at this node. (P. 126)
- Distributed routing relies on each node to calculate its own best possible route. Each node contains its own routing table with current information on the status of adjacent nodes so packets can follow the best possible route. (P. 127)
- A router is a network connection device containing software that connects network systems and controls traffic flow between them. (P. 127)
- A static router requires the network routing manager to give it information about which addresses are on which network. (P. 127)
- A dynamic router can build tables that identify addresses on each network. (P. 127)
- In the client/server model, software runs on the local computer (the client) and communicates with the remote server to request information or services. A server is a remote computer on the network that provides information or services in response to client requests. (P. 127)
- In the two-tier architecture (the most common type), a client (tier one) communicates directly with the server (tier two). (P. 128)
- An n-tier architecture attempts to balance the workload between client and server by removing application processing from both the client and server and placing it on a middle-tier server. (P. 128)
- A wireless network is a network that uses wireless instead of wired technology. (P. 129)
- A mobile network (also called a cellular network) is a network operating on a radio frequency (RF), consisting of radio cells, each served by a fixed transmitter, known as a cell site or base station. (P. 129)
- Throughput is similar to bandwidth. It is the amount of data transferred or processed in a specified time, usually one second. (P. 129)
- To improve the efficiency and quality of digital communications, Time Division Multiple Access (TDMA) divides each channel into six time slots. Each user is allocated two slots: one for transmission and one for reception. This method increases efficiency by 300 percent, as it allows carrying three calls on one channel. (P. 132)
- To improve the efficiency and quality of digital communications, Code Division Multiple Access (CDMA) transmits multiple encoded messages over a wide frequency and then decodes them at the receiving end. (P. 132)
- In data communication, convergence refers to integrating voice, video, and data so that multimedia information can be used for decision making. (P. 135)
