
CHAPTER 5
IT Infrastructure and Emerging Technologies

LEARNING OBJECTIVES

After reading this chapter, you will be able to answer the following questions:

1. What is IT infrastructure, and what are its components?
2. What are the stages and technology drivers of IT infrastructure evolution?
3. What are the current trends in computer hardware platforms?
4. What are the current trends in software platforms?
5. What are the challenges of managing IT infrastructure, and what are the management solutions?

OPENING CASE: BELL CANADA USES VIRTUAL DESKTOP IT INFRASTRUCTURE TO CONNECT 1700 DESKTOPS

The opening case draws attention to how infrastructure, in this case virtual servers, can be used to support business objectives. Virtualization allowed Bell to deploy additional desktops, maintain security, and let employees telecommute, all while reducing costs.

5.1 IT INFRASTRUCTURE

As information becomes a valuable resource of the digital firm, the infrastructure used to care for that resource takes on added importance. We’ll examine all of the components that comprise today’s and tomorrow’s IT infrastructure and how best to manage it.

When you mention the phrase “information technology infrastructure,” most people immediately think of just hardware and software. However, there is more to it than those two. In fact, the most important and most often ignored component is services. Integrating all three components forces a business to think in terms of the value of the whole and not just the parts. Including all three components in any discussion of IT infrastructure truly fits the cliché that the whole is greater than the sum of its parts.

DEFINING IT INFRASTRUCTURE

If you define a firm’s IT infrastructure purely in terms of technology, you limit the discussion to the hardware and software components. By broadening the definition to a service-based one, you bring into the discussion the services generated by those first two components.
As technology advances the types of hardware and software available, it becomes more critical for the firm to focus on the services it can provide to its customers, suppliers, employees, and business partners. A few of the services that may not be readily apparent are:

• Telecommunications services: connecting employees, customers, and suppliers
• Data management services: not just storing, but managing massive amounts of corporate data and making it available to internal and external users
• IT education services: training employees to use the systems properly
• IT research and development services: researching future IT projects and investments

EVOLUTION OF IT INFRASTRUCTURE

Reviewing the evolution of corporate IT infrastructure can offer some insight into where we may be headed. The textbook identifies five stages in this evolution, each representing a different configuration of computing power and infrastructure elements. The five eras are: automated special-purpose machines, general-purpose mainframe and minicomputer computing, personal computers, client/server networks, and enterprise and Internet computing.

General-Purpose Mainframe and Minicomputer Era: 1959 to Present

The mainframe era began with highly centralized computing, with networks of terminals concentrated in the computing department. While early models ran proprietary software and data formats, today’s mainframes can process a wide variety of software and data. It’s interesting to note that IBM began this era and remains the single largest supplier of mainframe computing. Although experts and pundits predicted the death of the mainframe in the mid-1980s, the mainframe has evolved and remains a strong, viable component of many IT infrastructures because of its ability to process huge volumes of data and transactions. The mainframe era changed with the introduction of minicomputers produced by Digital Equipment Corporation (DEC) in 1965.
Minicomputers are middle-range computers used in systems for universities, factories, or research laboratories. These powerful machines cost much less than mainframes and made decentralization possible: they could be customized to the specific needs of individual departments or business units rather than time-sharing on a single huge mainframe.

Personal Computer Era: 1981 to Present

It’s interesting to note that the advances developed for personal computers in the home have given rise to much of the progress in corporate computing over the last 25 years. As home users became more comfortable with computers, and more applications were developed for personal computers, more employees demanded increased use of computers in the workplace. Although the Wintel PC standard has dominated this era, open-source software is starting to put a big dent in that stronghold.

Client/Server Era: 1983 to Present

As desktop and laptop personal computers became more powerful and cheaper, businesses began networking them together to replace minicomputers and some mainframe computers. Think of an octopus, with the body representing the server and the tentacles representing the clients. This simplified picture is what client/server computing is all about.

At the heart of every network is a server. It can be a mainframe, midrange computer, minicomputer, workstation, or a souped-up personal computer. The server stores some of the data, application software, and other instructions that network users need in order to communicate with and process transactions on the network. The client computer is the node on the network that users employ to access and process transactions and data through the network. A Web server serves a Web page to a client in response to a request for service; Web server software is responsible for locating and managing stored Web pages.
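The request/response cycle described above can be sketched with Python's standard library. This is a minimal illustration, not anything from the chapter: the page content, host, and port are made up, and a daemon thread stands in for the server machine.

```python
# A minimal client/server sketch using only the Python standard library.
# The server role: locate and return stored "pages". The client role:
# send a request for service over the network. All names are illustrative.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PageHandler(BaseHTTPRequestHandler):
    PAGES = {"/": b"<html><body>Welcome</body></html>"}  # the stored Web pages

    def do_GET(self):
        page = self.PAGES.get(self.path)
        if page is None:
            self.send_error(404)  # the server could not locate the page
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(page)

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_server():
    # Port 0 asks the OS for any free port; a daemon thread plays the server.
    server = HTTPServer(("127.0.0.1", 0), PageHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start_server()
    # The client requests a page; the server locates and returns it.
    html = urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/").read()
    print(html.decode())
    srv.shutdown()
```

In a real deployment the server and client run on separate machines; a multitiered architecture simply adds more specialized servers between them.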
Rather than one server trying to do it all, specific functions can be assigned to dedicated machines; an application server, for example, handles the application logic between users and an organization's back-end systems. Dividing tasks among multiple servers allows faster, more efficient responses that cost a business less than a mainframe or a single computer trying to do everything. Large companies use a multitiered client/server architecture that has several different levels of servers.

Microsoft is the market leader in client/server networking. The Windows operating systems (Windows Server, Windows Vista, Windows XP, Windows 2000) account for about 78 percent of the local area network market. Microsoft's dominance in this market, however, is being threatened by the adoption of Linux.

Enterprise and Internet Computing Era: 1992 to Present

Perhaps no other era has seen such explosive growth in functionality and popularity as this one. The problems created by proprietary, closed systems are being solved by the standards and open-source software created in this era. The promise of truly integrated hardware, software, and services is coming true with the technological advances of the last fifteen years. On the other hand, the promise of delivering critical business information painlessly and seamlessly across all organizational levels is made all the more difficult to fulfill by the ever-changing landscape of technology products and services.

As you realize that each era built upon previous advances in hardware, software, and services, let your imagination drift for a moment to the possibilities that the future holds. It truly is an exciting time to be involved in technology.

TECHNOLOGY DRIVERS OF INFRASTRUCTURE EVOLUTION

Let's look at some of the reasons why we've evolved so much in the last 20 years and what lies ahead.

Moore's Law and Microprocessing Power

Perhaps no other law carries as much weight in the evolution of computers as Moore's Law.
Take a moment to visit the Web site that describes it in more detail: http://www.intel.cc/technology/mooreslaw/index.htm?iid=tech_as+moore. As important as the transistor-based microprocessor chip has been to increases in computing power, nanotechnology is the promise of the future. This new technology is being developed because of the limitations of the older one. However, one must ask just how much computing speed and capacity the average user really needs on a PC, or whether the industry should begin developing other features instead.

The Law of Mass Digital Storage

As the amount of digital information expands, so too does our need and desire for more storage. In the early evolution of computing, storage needs were based on written text. Now we need the extra storage for photos, music, and video. How much storage does the average user really need? Is it the chicken-or-the-egg syndrome: give me more storage and I'll find something to do with it, or I now have all these new applications and therefore need more storage? One thing is certain: users will demand more storage, and the technologists will develop it.

Metcalfe's Law and Network Economics

If you build a network for ten users, you'll spend the necessary money for the basic equipment. Once the equipment is in place, you can add one more user at nominal cost. However, that eleventh user will bring value to the network far beyond what it costs to add him or her.

Declining Communications Costs and the Internet

One of the biggest drivers of the exploding use of computers is the Internet. It's getting cheaper every day to connect to the Internet because of declining communication costs. As more and more users connect, businesses must find ways to meet their expectations and demands.
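The growth rules discussed above are easy to check numerically. A minimal sketch follows; the two-year doubling period and the 2,300-transistor starting point are commonly cited rules of thumb, not figures from this chapter.

```python
# Moore's Law rule of thumb: transistor counts double roughly every two years.
def transistors(initial, years, doubling_period=2):
    return initial * 2 ** (years / doubling_period)

# Metcalfe's Law rule of thumb: a network's potential value grows with the
# number of possible links among n users, n * (n - 1) / 2.
def possible_links(n):
    return n * (n - 1) // 2

if __name__ == "__main__":
    # Ten doublings in twenty years: roughly a thousandfold increase.
    print(transistors(2300, 20))
    # The eleventh user adds a new link to every one of the ten existing users.
    print(possible_links(11) - possible_links(10))
```

Note the asymmetry Metcalfe's Law captures: the cost of adding the eleventh user is roughly constant, while the number of possible links keeps growing with every user already on the network.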
Standards and Network Effects

Nothing has helped grow the Internet more than having standards in place by which technology suppliers can create and build products, and on which users can rely for those products to work together. Technology standards are specifications that establish the compatibility of products and the ability to communicate in a network.

Bottom Line: We've come far, and quickly, in the evolution of technology. From massive, expensive mainframe computers to inexpensive handheld devices, the evolution and revolution continue.

5.2 INFRASTRUCTURE COMPONENTS

What if you bought a car that did not include tires, a steering wheel, a radio, or a heater? After purchasing this vehicle, you have to shop around for the missing parts. When you enter a store, you are confronted with eight different steering wheels, six different radios, and nine different heaters. You quickly realize how incompatible the parts are across the various brands of vehicles and wish the manufacturers had simply put all the parts together for you. Once the car is assembled, you drive to the gas station only to realize that it can't use that brand of gasoline. How frustrating!

In part, that is what happened with computers and peripherals over the years. In the early days of personal computers, the printer you had your eye on might not have worked with your brand of computer. You had to buy a scanner built specifically for your computer. You couldn't connect to the Internet unless you had the correct modem for your Internet service provider. If you wanted to share photos with your friends, each of you might have needed a different software program to process the others' photos. Now expand these examples to a corporate enterprise system. The evolution we are now experiencing aims to fix these problems and make computing ubiquitous: anytime, anywhere.

COMPUTER HARDWARE PLATFORMS

The microprocessor is the heart of any computing device, no matter how small or large.
Two companies produce most microprocessing chips: Intel and Advanced Micro Devices (AMD). The most popular and widely known is Intel. (However, when you're shopping for a new computer you should also consider an AMD processor: it is comparable to the Intel chip, tends to be a little cheaper, and some benchmark tests have shown AMD chips outperforming the Intel Celeron.) Since the network is becoming so commonplace and so central to computing, network service providers must have the server backbone in place to meet increased demand. Blade servers are meeting the needs of service providers more cheaply and easily than traditional big-box servers. IBM offers mainframe computers that can also handle network processing, although they are more expensive and require Unix-style software.

OPERATING SYSTEM PLATFORMS

Operating systems tell computers what to do, when to do it, and how. Operations such as logging on, file management, and network connectivity are controlled by the operating system. By far the most prolific operating system is Microsoft Windows, in its various versions. Windows is also the operating system used by some non-traditional computing devices such as handheld PDAs and cell phones. Unix and Linux are often associated with large networks that require less application overhead and faster processing. Linux open-source software is becoming the operating system of choice for organizations looking to save money: businesses and governments across the globe are adopting the Linux platform as a way to reduce IT spending and license costs.

ENTERPRISE SOFTWARE APPLICATIONS

Integrating applications into seamless processes across the organization is the goal of enterprise software applications. Customer relationship management and supply chain management systems are the two most popular applications in this category; we explore them more extensively in later chapters. Why are these applications becoming popular?
“In the back office, business processes that have historically been optimized for internal efficiency can now add the dimension of superior customer service, personalized to each customer, leveraging the skills of trained agents in the call centre. With better information from the customer, back office processes are improved. And in the long run, agents can gradually decrease the flow of paper into the back office, in favour of more efficient communication channels such as e-mail and the Web.” (TechWorld.com, retrieved March 21, 2005)

DATA MANAGEMENT AND STORAGE

Businesses and organizations are gathering more and more data on customers, employees, and even the business itself. Managing and storing the data so that they are easily accessible and provide meaningful information to the organization is becoming a science in and of itself. Storage area networks (SANs) provide a cohesive, economical way to consolidate data from across any and all systems within the business. Online users want instant access to data, and SANs help companies provide it.

NETWORKING/TELECOMMUNICATIONS PLATFORMS

As we continue the march toward convergence of all things digital, networking and telecommunications platforms will merge into one. Rather than having one platform for networking computer devices and a separate platform for telecommunications, we'll see a single company providing a combination of telephone services, cell phone connectivity, computers and peripheral devices, handheld PDAs, and wireless services all rolled into one. Many telecommunications companies are now merging with Internet service providers to offer a complete package of digital services.

CONSULTING AND SYSTEM INTEGRATION SERVICES

The systems used in many medium- and large-sized companies and organizations are so complex that most businesses simply can't manage them by themselves. Integration services provided by the likes of IBM and Hewlett-Packard are necessary simply to keep up with changes.
In many ways it makes more business sense for a company such as Frito-Lay to concentrate on its core process of making snack food and let IBM take care of the technology issues. These services become even more critical as companies merge their old legacy systems with newer technologies such as wireless computing. The legacy systems, some as old as 20 or 30 years, simply can't be thrown away, but they must work seamlessly with today's technologies. Companies choose not to totally replace legacy systems because doing so is too expensive, involves too much training, and carries too much organizational change. It's easier to use middleware and other technologies to merge old and new systems.

INTERNET PLATFORMS

The Internet and its technology standards continue to expand the services businesses are able to provide to their employees, customers, suppliers, and business partners. Intranets and extranets built on Internet technologies give businesses an easy and inexpensive method of providing services that were cost-prohibitive a few years ago. Rather than purchase all of the hardware necessary to support Web sites, intranets, and extranets, many small and medium-sized companies use Web hosting services instead. It's cheaper and easier to have these service providers take care of hardware, software, and security issues while the business concentrates on its core processes.

5.3 CONTEMPORARY HARDWARE PLATFORM TRENDS

Although the cost of computing has fallen exponentially, the cost of IT infrastructure has actually expanded as a percentage of corporate budgets. Let's look at how a company can successfully meld hardware technologies. The Laudons give a number of reasons for this, including:

• Costs of computing services and software are high, and the intensity of computing and communications has increased as other costs have declined.
• Need to integrate information stored in different applications, on different platforms.
• Need to build resilient infrastructure that can withstand huge increases in peak loads and routine assaults from hackers and viruses while conserving electrical power.
• Need to increase service levels to meet customer demands.

THE EMERGING MOBILE DIGITAL PLATFORM

Smartphones and netbooks are newer technology platforms that allow services to be delivered anywhere, anytime.

GRID COMPUTING

Take a moment and think about how much time you don't use your personal computer. It's actually quite a lot; in fact, most computers are idle more often than not. What if you could combine all the idle time of hundreds or thousands of computers into a continuous, connected computing capacity to capture, process, manage, store, and retrieve data? You wouldn't have to purchase mammoth supercomputers to realize this capability and capacity; you would just turn to grid computing. It allows companies to save money on hardware and software and to increase computing and processing speeds, making the company more agile.

CLOUD COMPUTING AND THE COMPUTING UTILITY

Most companies don't build their own electrical generating plants or their own water treatment facilities. They purchase only the utilities they need, even at peak demand. Why not do the same with computing capacity? If Amazon.com needs fifty percent more capacity during the 30-day Christmas buying period, why should it have to purchase that much infrastructure only to have it sit idle the other eleven months of the year? On-demand computing mirrors other utilities that provide necessary infrastructure from centralized sources. It's cheaper and helps companies reduce the total cost of ownership of IT. They can also take advantage of newer technologies than they would be able to buy and maintain on their own. Utility computing also gives companies a chance to offer services that they could not provide if they had to buy all the hardware and software themselves.
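The core idea of grid computing, splitting one large job into chunks that otherwise idle machines work on in parallel, can be sketched as follows. This is only an illustration: a thread pool stands in for the grid (a real grid distributes chunks to separate computers), and the prime-counting job is made up for the example.

```python
# Grid-style work splitting, sketched with the Python standard library.
from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """One chunk of work: count the primes in [lo, hi)."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def grid_count(limit, workers=4):
    # Split the job into one chunk per worker and farm the chunks out.
    step = limit // workers
    chunks = [(lo, min(lo + step, limit)) for lo in range(0, limit, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    # The pooled result matches what a single machine would compute.
    print(grid_count(10_000))
```

Real grid middleware adds scheduling, fault tolerance, and result collection, but the split/compute/combine pattern is the same.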
AUTONOMIC COMPUTING

As companies rely more and more on IT to meet the demands of employees, customers, suppliers, and business partners, they can't afford any system downtime at all. Downtime costs money. Autonomic computing is a step toward creating an IT infrastructure that is able to diagnose and fix problems with very little human intervention. Although this type of computing is still rather new, it promises to relieve the burden many companies experience in trying to maintain massive, complex IT infrastructures. Table 5-3 compares current computing techniques with future autonomic computing techniques.

Bottom Line: There are many different ways a business can meet its computing demands: integrating computing and telecommunications platforms, grid computing, on-demand computing, and autonomic computing. Although these services may be relatively new, rest assured they will grow in demand as companies need more and more IT infrastructure.

WINDOW ON TECHNOLOGY: COMPUTING GOES GREEN

TO THINK ABOUT QUESTIONS

1. What business and social problems does data center power consumption cause?

Answer: Data centers consume vast amounts of electricity that must be generated by hydroelectric or coal-fired power plants. While hydroelectric plants are less stressful on the environment than coal-fired ones, they nevertheless pull resources from other useful purposes. Coal-fired power plants release huge amounts of carbon dioxide into the atmosphere, which many scientists and policy makers identify as a major cause of global warming. The social implications of increased power consumption therefore point to global warming.

2. What solutions are available for these problems? Which are the most environment-friendly?

Answer: Some of the solutions to cut power consumption discussed in the case study are a good beginning.
These include building data centers that take advantage of hydroelectric power rather than coal-fired power plants; renewable and alternative energy projects; employee telecommuting; thin client computers; software that automatically turns computers off; and more efficient chips. Perhaps the most environment-friendly solutions are those that control the hardware and software, thereby controlling the problem at its source. Virtualization holds great promise as a way to reduce power requirements by reducing the number of servers required to run applications.

3. What are the business benefits and costs of these solutions?

Answer: Even though it may cost a business up-front money to install hardware and software that reduce power requirements, doing so will save a great deal of money in the long run by reducing what the business pays to run the equipment and to cool it. Businesses that reduce their power needs help the environment and can promote themselves as environment-friendly.

4. Should all firms move toward green computing? Why or why not?

Answer: Yes, all firms should make some effort to reduce their power requirements and move toward green computing. From a business standpoint it makes sense to reduce costs, both short term and long term, and adopting green practices also reduces a firm's environmental footprint and aligns with societal expectations for corporate responsibility in sustainability efforts.

MIS IN ACTION QUESTIONS

Perform an Internet search on the phrase "green computing" and then answer the following questions.

1. How would you define green computing?

Answer: Green computing is a way to reduce the impact on the environment, and to reduce resource consumption that may be detrimental to the environment, by using more efficient hardware and better software.
“Still, IT execs would be wise to keep an eye on more than the economics of energy efficient computing. Energy consumption has gotten so huge – U.S. data centers consume as much power in a year as is generated by five power plants – that policy makers are taking notice and considering more regulation. A group of government and industry leaders is trying to set a clear standard for what constitutes a "green" computer, a mark that IT execs might find themselves held to. Global warming concerns could spark a public opinion swing – either a backlash against big data centers or a PR win for companies that can paint themselves green. IT vendors are piling on, making energy efficiency central to their sales pitches and touting ecofriendly policies such as "carbon-neutral computing."

One under-the-radar example of what's changing is a long acronym you'll start hearing more: EPEAT, or the Electronic Product Environmental Assessment Tool. EPEAT was created through an Institute of Electrical and Electronics Engineers council because companies and government agencies wanted to put green criteria in IT requests for proposals. EPEAT got a huge boost on Jan. 24 when President Bush signed an executive order requiring that 95% of electronic products procured by federal agencies meet EPEAT standards, as long there's a standard for that product.” (Information Week, "What Every Tech Pro Should Know About 'Green Computing'," Marianne Kolbasuk McGee, March 10, 2007)

2. Who are some of the leaders of the green computing movement? Which corporations are leading the way? Which environmental organizations are playing an important role?

Answer: Some of the major corporations leading the green computing initiative are the same major players in other computing venues: IBM, HP, and Dell.
Other major corporations that are going green as a way to save money on power consumption include most Wall Street firms (since they use a tremendous amount of power in their data centers), banks like Wells Fargo, and Amazon.com.

“IT management isn't the first place you would start looking for environmental activists. But in 2006, the people in charge of buying and deploying computer technology found the concept of green computing extra compelling. Analysts say the main reason is cost, energy and space savings; if it's also good for the environment, that's icing on the cake.

"Even if a customer is not looking at IT purchasing from an environmental-impact perspective, things like power management and energy efficiency are now a TCO [total cost of ownership] and infrastructure issue," John Frey, manager of corporate environmental strategies at HP, told internetnews.com.

The way things are going, Gartner predicts that by 2008, 50 percent of current datacentres will have insufficient power and cooling capacity to meet the demands of high-density equipment. "With the advent of high-density computer equipment such as blade servers, many datacentres have maxed out their power and cooling capacity," said Michael A. Bell, research vice president for Gartner. "It's now possible to pack racks with equipment requiring 30,000 watts per rack or more in a connected load. This compares to only 2,000 to 3,000 watts per rack a few years ago."

And energy costs are rising. HP engineering research estimates that for every dollar spent on information technology, a company can expect to spend the same or more to power and cool it. As companies add more performance, they can expect those costs to continue rising.”
(Internetnews.com, "Greener Systems an Unstoppable Trend," David Needle, December 27, 2006)

Organizations that are playing a major role in green computing include the Environmental Protection Agency and the Institute of Electrical and Electronics Engineers (see the article quoted in the answer to question 1).

3. What are the latest trends in green computing? What kind of impact are they having?

Answer: A few trends in green computing include purchasing green desktops (machines built to reduce power needs), deploying more efficient server computers, and increasing the use of virtualization to reduce the number of servers needed in data centers. 2007 saw the beginning of the green computing movement, so it's a bit early to determine the overall impact of all these initiatives. Much of the impetus behind the green computing movement is not necessarily to save the environment, although it will certainly help reduce the impact on the environment; rather, many companies are seeking ways to reduce power costs, and going green is a way to do that.

4. What can individuals do to contribute to the green computing movement? Is the movement worthwhile?

Answer: Individuals can contribute to the green computing movement by purchasing computers that have Energy Star ratings, turning off equipment they aren't using, recycling computer equipment, and supporting companies that are going green.

VIRTUALIZATION AND MULTICORE PROCESSORS

One way of curbing hardware proliferation and power consumption is to use virtualization to reduce the number of computers required for processing. Virtualization is the process of presenting a set of computing resources (such as computing power or data storage) so that they can all be accessed in ways that are not restricted by physical configuration or geographic location. Benefits of server virtualization include:

• Run more than one operating system at the same time on a single machine.
• Increase server utilization rates to 70 percent or higher.
• Reduce hardware expenditures: higher utilization rates translate into fewer computers required to process the same amount of work.
• Mask server resources from server users.
• Reduce power expenditures.
• Run legacy applications on older versions of an operating system on the same server as newer applications.
• Centralize hardware administration.

Multicore Processors

A multicore processor is an integrated circuit that contains two or more processor cores. This technology enables two processing engines with reduced power requirements and heat dissipation to perform tasks faster than a resource-hungry chip with a single processing core. Intel and Advanced Micro Devices (AMD) now make dual-core microprocessors and are introducing quad-core processors. Sun Microsystems sells servers using its eight-core UltraSPARC T1 processor.

5.4 CONTEMPORARY SOFTWARE PLATFORM TRENDS

What if you bought a beautiful new car with all the fanciest equipment inside, but when you tried to start the engine nothing happened? How can that be, you ask? The car cost a lot of money and it's brand new! However, as soon as you put some gasoline in the tank, it starts right up and you're moving down the road. You can have all the computer hardware money can buy, but if you don't have the right software, you can't do very much with the hardware and you've wasted a lot of money. Let's review some information about software platform trends that will help you get the most out of your hardware. There are five major themes in contemporary software platform evolution:

• Linux and open-source software
• Java
• Enterprise software
• Web services and service-oriented architecture
• Software outsourcing

LINUX AND OPEN-SOURCE SOFTWARE

Contemporary software platform trends include the growing use of Linux, open-source software, and Java; software for enterprise integration; and software outsourcing.
Open-source software is produced and maintained by a global community of programmers and is downloadable for free. Linux is a powerful, resilient open-source operating system that can run on multiple hardware platforms and is used widely to run Web servers.

Linux

In the early 1990s, a graduate student at the University of Helsinki in Finland wanted to build an operating system that anyone could download from the Internet, that no one would own, and that hundreds or thousands of people would work together to create, maintain, and improve. He began working on what is now known as Linux, a Unix-like operating system. He posted his program to a Web page and allowed anyone to change and improve the code.

“Linux has developed into one of the world’s most popular operating systems. By the end of 2000, it was powering 30% of the Web server market, about equal to Microsoft’s slice of that pie, according to consultancy NetCraft, which tracks server operating systems.” (BusinessWeek, Feb 22, 2001)

Experts predict its use will expand rapidly, since its small size and low cost make it ideal for information appliances. It's also less crash-prone than most other operating systems, a feature that makes it very attractive to companies running e-commerce Internet businesses.

Perhaps the success of open-source software is partly due to the very lack of bureaucratic organization, as we explained in Chapter 3. Open-source software has proven to be more secure than other leading software programs precisely because its code is so readily available. Security problems in proprietary software are usually discovered by those working inside the software manufacturer, an effort often constrained by the number of employees dedicated to the task, resource allocation, and organizational problems that don't confound the open-source software movement.
Open-source software isn't limited to Linux but includes applications such as the Mozilla Firefox Web browser (http://store.mozilla.org/product.php?code=MZ80006&catid=1) and low-cost office suite software such as Sun's StarOffice. Sun Microsystems, Inc., offers online office applications such as word processing and spreadsheets in StarOffice (www.sun.com/software/star/staroffice/5.2/get.html). You can download the program for a small fee from the Internet and then use any computer connected to a corporate network or an intranet to create your files. Your files are available to you wherever you are in the world, and to colleagues geographically separated from the main office. Files created using StarOffice are compatible with Microsoft Word. And the StarOffice suite is much cheaper than Microsoft Office.

SOFTWARE FOR THE WEB: JAVA AND AJAX

Java is a programming language that has come on strong in the last decade. What makes this language so enticing is that it is both operating-system-independent and processor-independent. This means you don't need to worry about compatibility between separate operating systems such as Windows, Macintosh, or UNIX: regardless of the hardware or software you use, this language fits them all. Many businesses and individuals have long lamented the closed systems that caused incompatibility between different platforms. It has been nearly impossible to share data between various hardware and software platforms: large mainframes couldn't pass information to small PCs without special programs, and data on individual PCs couldn't be passed to larger information systems. Java helps solve these problems. Java can be used to create miniature programs called "applets". Applets perform very small, specialized, one-at-a-time tasks. When a user wants to perform the task, the code is downloaded from the server where it's permanently stored and executed on the client computer.
When the task is completed, the applet is deleted from the client; the permanent copy remains on the server computer. In essence, you use an applet once and then throw it away. Using applets reduces storage needs on client computers and PCs. Again, it doesn't matter whether the client is a PC or a terminal attached to a network. In fact, Java applets are being used on handheld computers and on many other non-computer appliances. Java also reduces the "bloatware" problem of huge software application programs with more functions than the average person could ever hope to use. You don't need a large application program to do a simple task. If you want to calculate the monthly payments for a car loan, you can use a simple Java applet instead of a huge spreadsheet program. This becomes an even more important feature as more applications move to smaller computing devices that don't have the capacity to hold large software programs.

Ajax is a method of building interactive applications for the Web that process user requests immediately. Ajax combines several programming tools including JavaScript, dynamic HTML (DHTML), Extensible Markup Language (XML), cascading style sheets (CSS), the Document Object Model (DOM), and the XMLHttpRequest object. Ajax allows content on Web pages to update immediately when a user performs an action, unlike a traditional HTTP request, in which users must wait for a whole new page to load.

Web browsers were not created until the early 1990s and were first commercialized by Marc Andreessen, who cofounded Netscape, Inc. He actually created the original browser software while he was still a college student. The term browser came about because the software allows you to "browse" the various documents stored on the Internet. The other popular browser is Microsoft's Internet Explorer. If you are an America Online subscriber, you probably use the AOL browser. You can also download and install the Firefox Web browser for free from http://www.mozilla.com/firefox/.
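The car-loan calculation mentioned above is a good illustration of the applet idea: a tiny, single-purpose piece of code instead of a huge spreadsheet program. Here is a minimal sketch (written in Python rather than Java, purely for brevity; the function name and figures are our own):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard loan amortization formula: the kind of small,
    single-purpose task the chapter says suits an applet."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n      # interest-free loan
    return principal * r / (1 - (1 + r) ** -n)

# A $20,000 car loan at 6% annual interest over 5 years
print(round(monthly_payment(20_000, 0.06, 5), 2))
```

Like an applet, a snippet this small can be fetched on demand, run, and discarded; nothing approaching a full spreadsheet application is needed.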
WEB SERVICES AND SERVICE-ORIENTED ARCHITECTURE (SOA)

Web services refers to a set of loosely coupled software components that exchange information with each other using universal Web communication standards and languages. Some of the characteristics of Web services include:
• They can exchange information between two different systems regardless of the operating systems or programming languages on which the systems are based.
• They can be used to build open-standard Web-based applications linking systems of two different organizations.
• They can be used to create applications that link disparate systems within a single company.
• They are not tied to any one operating system or programming language.
• Different applications can use them to communicate with each other in a standard way without time-consuming custom coding.

It's becoming quite common for the average computer user to create Web pages using the Hypertext Markup Language (HTML). In fact, by using ubiquitous software applications such as Word 2003, Lotus 1-2-3, or PowerPoint, you can create a Web page as easily as you can a letter, chart, or database. Although you don't have to be an expert in HTML, you should be familiar with how the coding works. Using tags, you describe how you want text, graphics, tables, video, and sound displayed on the Web page or site. The language is one of the easiest to learn, and you can do it by getting a good book that explains how the code works. You can also access various free Web sites that will teach you the basics.

Embedding HTML in everyday applications further integrates the Internet into everything we do. As we use the Web for more applications, computer languages are evolving to keep up with new and innovative uses. HTML has worked well for displaying text and graphics. Extensible Markup Language (XML) is designed to describe and structure the data on a Web page or site and make the data more manageable.
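The distinction drawn above, with HTML tags describing how content should be displayed while XML tags describe what the data means, can be made concrete with a short sketch (using Python's standard-library XML parser; the product catalogue and its element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# HTML would mark up presentation, e.g. <b>Chardonnay</b> ("render in bold").
# XML instead labels meaning: each tag below says what the value *is*,
# which is what makes the data manageable by programs.
catalog_xml = """
<catalog>
  <product sku="W-1001">
    <name>Chardonnay</name>
    <price currency="CAD">12.99</price>
  </product>
</catalog>
"""

root = ET.fromstring(catalog_xml)
for product in root.findall("product"):
    name = product.findtext("name")
    price = float(product.findtext("price"))
    print(product.get("sku"), name, price)
```

Because the tags carry meaning rather than formatting, any program on any platform can pull the name or price out of this document without knowing how it will eventually be displayed.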
A white paper written by Jon Bosak, Sun Microsystems, explains XML: XML gives us a single, human-readable syntax for serializing just about any kind of structured data — including relational data — in a way that lets it be manipulated and displayed using simple, ubiquitous, standardized tools. The larger implications of a standard, easily processed serial data format are hard to imagine, but they are obviously going to have a large impact on electronic commerce. And it seems clear that electronic commerce is eventually going to become synonymous with commerce in general. XML can do for data what Java has done for programs, which is to make the data both platform-independent and vendor-independent. This capability is driving a wave of middleware XML applications that will begin to wash over us around the beginning of 1999. However, the ability of XML to support data and metadata exchange shouldn’t be allowed to distract us from the purpose for which XML was originally designed. The designers of XML had in mind not just a transport layer for data but a universal media-independent publishing format that would support users at every level of expertise in every language. XHTML (Extensible Hypertext Markup Language) combines HTML language with the XML language to create a more powerful language for building more useful Web pages. It’s all part of the evolution of the Internet. 
Four software standards and communication protocols provide easy access to data and information via Web services in the first layer:
• XML (eXtensible Markup Language): describes data in Web pages and databases
• SOAP (Simple Object Access Protocol): allows applications to pass data and instructions to one another
• WSDL (Web Services Description Language): describes a Web service so that other applications can use it
• UDDI (Universal Description, Discovery, and Integration): lists Web services in a directory so that they can be located

The second layer of Web service-oriented architecture consists of utilities that provide methods for:
• security, third-party billing, and payment systems
• transporting messages and identifying available services

The third layer comprises the Web services applications themselves, such as payment processing services. The distinct advantage of building Web services is their reusability: you can build one Web service that can be used by many different businesses. This kind of functionality promises to spawn a whole slew of new Internet-related development companies in the next few years as the idea takes hold. Rather than forcing companies to throw out old technology, Web services allow them to grab data out of their old systems more easily. The state of New Mexico, for example, is creating a Web portal that allows employees to see and tailor their personal information on a single Web page — everything from paycheques to retirement plans. In the background, it's using Web-service technology to tie together old software systems, including mainframes, that hadn't been able to talk before. (BusinessWeek, e.biz, March 18, 2002)

MASHUPS AND WIDGETS

A mashup is a Web page or application that integrates complementary elements from two or more sources. Mashups are often created by using a development approach called Ajax.
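To see why these layers allow such loose coupling, it helps to look at what actually travels between systems: a SOAP message is just XML wrapped in a standard envelope, so any platform that can read XML can participate. The sketch below hand-builds such an envelope in Python; the ProcessPayment operation and its parameters are invented for illustration, and in practice a Web-services toolkit would generate this from the service's WSDL description:

```python
import xml.etree.ElementTree as ET

# Standard SOAP 1.1 envelope namespace
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params):
    """Wrap an operation name and its parameters in a SOAP envelope.
    Illustrative only: real toolkits derive this from a WSDL file."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, operation)            # e.g. ProcessPayment
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)  # one element per parameter
    return ET.tostring(envelope, encoding="unicode")

msg = build_soap_request("ProcessPayment",
                         {"account": "12-345", "amount": "49.95"})
print(msg)
```

The receiving system needs no knowledge of the sender's operating system or programming language; it only needs to parse this standard XML, which is exactly the loose coupling described above.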
Web 2.0 (or Web 2) is the popular term for advanced Internet technology and applications including blogs, wikis, RSS, and social bookmarking. The two major components of Web 2.0 are the technological advances enabled by Ajax and other new applications such as RSS and Eclipse, and the user empowerment that they support. One of the most significant differences between Web 2.0 and the traditional World Wide Web (retroactively referred to as Web 1.0) is greater collaboration among Internet users, content providers, and enterprises.

SOFTWARE OUTSOURCING

Three external sources for software outsourcing are:
• software packages from a commercial vendor
• software services from an application service provider
• outsourcing custom application development to an outside software firm

In the past, most firms developed their own software in-house using expensive teams of programmers. Now, firms increasingly turn over their software development needs to outside developers, including enterprise software firms that will sell them prepackaged solutions customized to their needs. Firms are also increasingly outsourcing their software projects offshore to capitalize on low-wage regions such as India, China, Eastern Europe, Africa, and Latin America.

Software Packages and Enterprise Software

Rather than design, write, test, and maintain their own systems, many organizations choose to purchase software from companies that specialize in certain programs. Let's face it: there isn't much difference between the accounting needs and methodologies of a snack food manufacturer and those of a paper manufacturer. So why not purchase a prewritten, fully tested software program, regardless of the business you are in?

Application Service Providers (ASPs)

Midsize companies in need of sophisticated enterprise resource planning programs can rent only what they need and can afford through online providers.
Businesses can outsource their accounting needs to a Web-based service. Another plus for using an application service provider (ASP) is that its services are Web-based, making the user's files accessible from virtually any computer connected to the Internet. Road warriors love having instant access to documents from wherever they are. Workers can collaborate with others in distant offices through a Web-based ASP, and no one has to worry about whether their files are compatible with others' — they are.

There is some danger in outsourcing your information resources to an ASP. Remember, all your data are stored on another company's server computers and you have little control over them. What happens if the ASP goes out of business? "The closure in October 2000 of San Francisco's flashy Red Gorilla is a case in point. This ASP, which provided online billing and other services to lawyers and the like, closed its doors without warning and had no immediate contingency plan for its customers to retrieve their data. It went completely offline while arrangements were made for a customer transfer to OfficeTool.com. As this is written, customers have not been able to retrieve any of their data." (Forbes, November 27, 2000)

Software Outsourcing

Companies are discovering that it's cheaper and easier to hire third-party vendors for software-related tasks such as system maintenance, data management, and program development. Much as they've outsourced mundane and repetitive tasks, mainly to overseas locations, many companies are now outsourcing all kinds of software-related work. The Internet has made this option more viable than ever.

Bottom Line: The five major themes in contemporary software platform evolution — Linux and open-source software; Java; enterprise software; Web services and service-oriented architecture; and software outsourcing — are providing companies new options in IT infrastructure technologies.
WINDOW ON ORGANIZATIONS: SALESFORCE.COM: SOFTWARE AS A SERVICE GOES MAINSTREAM

TO THINK ABOUT QUESTIONS

1. What are the advantages and disadvantages of the software-as-a-service model?

Answer:
Advantages:
• Eliminates the need for large upfront capital investments in systems
• Eliminates lengthy implementations on corporate computers
• Low-cost subscriptions; no expensive licensing and maintenance fees
• No hardware for subscribers to purchase, scale, and maintain
• No operating systems, database servers, or application servers to install
• No consultants or additional staff required
• Accessible via a standard Web browser, with behind-the-scenes software updates
• Better scalability, eliminating the cost and complexity of managing multiple layers of hardware and software

Disadvantages:
• The audience for the AppExchange application platform may not be large enough to deliver the level of growth Salesforce wants
• The platform may not be attractive to larger companies for their application needs

2. What are some of the challenges facing Salesforce as it continues its growth? How well will it be able to meet those challenges?

Answer: Challenges include:
• Increased competition, both from traditional industry leaders and from new challengers hoping to replicate Salesforce's success
• Expanding its business model into other areas
• Ensuring the system is available 24/7 with no outages

Salesforce is answering the first two challenges by partnering with Google and combining its services with Gmail, Google Docs, Google Talk, and Google Calendar to allow its customers to accomplish more tasks via the Web. Salesforce.com and Google both hope that their Salesforce.com for Google Apps initiative will galvanize further growth in on-demand software. Salesforce opened up its Force.com application development platform to other independent software developers and listed their programs on its AppExchange.
Small businesses can go online and download software applications, some add-ons to Salesforce.com and others that are unrelated. To ensure system availability, Salesforce.com provides tools to assure customers about its system reliability and also offers PC applications that tie into its services so users can work offline.

3. What kinds of businesses could benefit from switching to Salesforce and why?

Answer: Small to medium-size businesses are probably the most likely ones to switch to Salesforce.com because of cost factors and the lack of in-house resources to provide the same level of computing capacity. Businesses that are trying to increase the sophistication of their computing capabilities could also benefit from switching to Salesforce as long as the two are compatible. Businesses that rely on smart customer management would benefit greatly from using the tools available at Salesforce.com. Also, companies that have small sales and marketing teams can benefit from the software-as-a-service business model.

4. What factors would you take into account in deciding whether to use Salesforce.com for your business?

Answer: Businesses should assess the costs and benefits of the service, weighing all people, organization, and technology issues. Do the software-as-a-service applications integrate well with the existing systems? Do they deliver a level of service and performance that's acceptable for the business? Does the SaaS fit with the business's overall competitive strategy and allow the company to focus on core business issues instead of technology challenges? In deciding whether to use Salesforce.com for your business, factors to consider include the specific needs of your business, scalability of the platform, ease of customization, integration capabilities with existing systems, and cost-effectiveness compared to alternatives.
Additionally, assessing the level of support and training available, as well as evaluating user feedback and industry reputation, is crucial for making an informed decision. MIS IN ACTION QUESTIONS Explore the Salesforce.com Web site. Go to the App Exchange portion of the site, and examine the applications available for each of the categories listed. Then answer the following questions: 1. What are the most popular applications on App Exchange? What kinds of processes do they support? Answer: The “Top Ten Installs in the last 30 days” (as of the time this was written) include calendar synchronization, browser buttons for Firefox, dashboards for several different analytical programs, search functions, AdWords for Google, and spreadsheets for collaboration. The processes supported by these applications include executive support system type functions (dashboards) and collaboration support functions like calendars and spreadsheets. 2. Could a company run its entire business using Salesforce.com and App Exchange? Explain your answer. Answer: Depending on the type of business, a company probably could run its entire operations using Salesforce.com and App Exchange. All four major functional areas of a business are supported: Sales and Marketing, Manufacturing and Production, Finance, and Human Resources. There are dozens of applications available to fully support all of these areas. It would be a matter of integrating the software from Salesforce.com and App Exchange with any existing legacy systems within the business. 3. What kinds of companies are most likely to use App Exchange? What does this tell you about how Salesforce.com is being used? (Answer copied directly from Salesforce.com Web site.) Answer: Take advantage of best practices and innovation in your industry using these industry specific applications. These tailored applications will empower you to streamline business processes, improve visibility and enhance productivity for your organization. 
• Communications: The applications in this subcategory allow managers to track and analyze sales information specific to the telecommunication, cable and satellite industries. • Financial Services: For financial services professionals, having easy access to the most up-to-date customer information is critical to achieving success. These pre-integrated applications enable finance professionals in banking, capital markets, insurance, mortgage lending, and wealth management to capture customer data, automate business processes, and extend the functionality of salesforce.com. • High-Tech: The fast paced world of High-Tech requires easy access to critical customer information, the ability to streamline processes and highly customizable systems. The pre-integrated applications in this section allow high tech management to extend the scope of their on-demand implementation far beyond CRM. • Media: Timely access to mission-critical information is essential for media and communications companies. These pre-integrated solutions enable sales, marketing and support professionals with data from best-of-breed systems driving their business operations. • Nonprofits: This industry solutions category lists apps that help nonprofit organizations with key business operations such as managing and tracking of volunteers and donors, managing donations (online and offline), as well as more specific solutions related to energy management and crisis monitoring and management. • Public Sector: For government agencies, the applications in this subcategory provide a host of solutions to streamline business processes and better manage constituent data. 5.5 MANAGEMENT ISSUES The speed and computing capacity of technology continues to advance at dizzying speeds and in ways we can hardly dream. Keeping up with all the advancements can be a major task in itself. 
The challenges of creating and managing a good IT infrastructure include:
• Dealing with scalability and technology change
• Management and governance
• Making wise infrastructure investments
• Coordinating infrastructure components

DEALING WITH PLATFORM AND INFRASTRUCTURE CHANGE

To be sure, it's extremely hard to figure out ahead of time how much computing capacity a company will need. It's like gazing into a crystal ball and trying to discern the future. Managers need to design scalability into their systems so that they don't under- or overbuild them. The idea is to initially build the system for what the company thinks it needs, but to design it in such a way that increasing capacity is fairly easy. If the system is more successful than originally thought, or as the number of users increases, capacity can be increased without having to start over from scratch.

MANAGEMENT AND GOVERNANCE

A standing issue between information systems managers and CEOs has been the question of who will control and manage the firm's IT infrastructure. Some of these issues involve:
• Should departments and divisions have the responsibility of making their own information technology decisions, or should IT infrastructure be centrally controlled and managed?
• What is the relationship between central information systems management and business unit information systems management?
• How will infrastructure costs be allocated among business units?

MAKING WISE INFRASTRUCTURE INVESTMENTS

IT infrastructure is a major investment for a firm. The question of how much a firm needs to spend on its IT infrastructure is not an easy one to answer. A delicate balance must be reached to ensure that the firm spends enough to provide business services and outperform its competitors without spending more than it needs to.
Competitive Forces Model for IT Infrastructure Investment Does your company spend too little on IT infrastructure thereby foregoing opportunities for new or improved products and services? Or does your company spend too much on IT infrastructure thereby wasting precious resources that could be utilized elsewhere? By answering the following questions, your company could align its IT spending with its needs. • Inventory the market demand for your firm’s services • Analyze your firm’s five-year business strategy • Examine your firm’s IT strategy, infrastructure, and cost for the next five years • Determine where your firm is on the bell curve between old technologies and brand new ones • Benchmark your service levels against your competitors • Benchmark your IT expenditures against your competitors Figure 5-12 illustrates a competitive forces model you can use to address the question of how much your firm should spend on IT infrastructure. Total Cost of Ownership of Technology Assets The cost issue is becoming more important to businesses and companies as computer technology and networks grow. Depending on the configuration of the network, a company can save or lose many dollars. What’s most important to remember is that the total cost of ownership (TCO) should extend past the hard dollars spent on hardware and software. The cost should incorporate such items as employee training, their ability to perform necessary functions given the network configuration, and lost productivity when the network is down. The TCO should also include the amount of money spent on communications wiring (telephone wires, fiber-optic cable, etc.) and security and access issues. By no means should managers simply count the dollars spent on the hardware and software; they need to also determine the cost to support and manage the system. SUMMARY 1. What is IT infrastructure, and what are its components? 
Answer: IT infrastructure is the shared technology resources that provide the platform for the firm's specific information system applications. IT infrastructure includes hardware, software, and services that are shared across the entire firm. Major IT infrastructure components include computer hardware platforms, operating system platforms, enterprise software platforms, networking and telecommunications platforms, database management software, Internet platforms, and consulting services and systems integrators. You can better understand the business value of IT infrastructure investments by viewing IT infrastructure as a platform of services as well as a set of technologies.

2. What are the stages and technology drivers of IT infrastructure evolution?

Answer: There are five stages of IT infrastructure evolution. IT infrastructure in the earliest stage consisted of specialized "electronic accounting machines," which were primitive computers used for accounting tasks. IT infrastructure in the mainframe era (1959 to present) consists of a mainframe performing centralized processing, networked to thousands of terminals, and eventually some decentralized and departmental computing using networked minicomputers. The personal computer era (1981 to present) in IT infrastructure has been dominated by the widespread use of standalone desktop computers with office productivity tools. The predominant infrastructure in the client/server era (1983 to present) consists of desktop or laptop clients networked to more powerful server computers that handle most of the data management and processing. The enterprise Internet computing era (1992 to present) is defined by large numbers of PCs linked into local area networks, and by the growing use of standards and software to link disparate networks and devices into an enterprise-wide network so that information can flow freely across the organization.

3. What are the current trends in computer hardware platforms?
Answer: The emerging trends in hardware platforms, including the mobile digital platform, grid computing, on-demand cloud computing, virtualization, and autonomic computing, demonstrate that, increasingly, computing is taking place over a network. Grid computing involves connecting geographically remote computers into a single network to create a computational grid that combines the computing power of all the computers on the network with which to attack large computing problems. Cloud computing is a model of computing in which firms and individuals obtain computing power and software over the Internet, rather than purchasing and installing the hardware and software on their own computers. In autonomic computing, computer systems have capabilities for automatically configuring and repairing themselves. Virtualization organizes computing resources so that their use is not restricted by physical configuration or geographic location. Server virtualization enables companies to run more than one operating system on the same machine at the same time. A multicore processor is an integrated circuit containing two or more processor cores, providing enhanced performance, reduced power consumption, and more efficient simultaneous processing of multiple tasks.

4. What are the current trends in software platforms?

Answer: Contemporary software platform trends include the growing use of Linux, open-source software, Java and Ajax, Web services, mashups and widgets, and software outsourcing. Open-source software is produced and maintained by a global community of programmers and is downloadable for free. Linux is a powerful, resilient open-source operating system that can run on multiple hardware platforms and is used widely to run Web servers. Java is an operating system- and hardware-independent programming language that is the leading interactive programming environment for the Web.
Ajax allows a client and server to exchange small pieces of data behind the scenes so that an entire Web page does not have to be reloaded each time the user requests a change. Web services are loosely coupled software components based on open Web standards that are not product-specific and can work with any application software and operating system. They can be used as components of Web-based applications linking the systems of two different organizations or to link disparate systems of a single company. Mashups and widgets are the building blocks of new software applications and services based on the cloud computing model. Companies are purchasing their new software applications from outside sources, including software packages, by outsourcing custom application development to an external vendor (which may be offshore), or by renting software services (SaaS).

5. What are the challenges of managing IT infrastructure and management solutions?

Answer: Major infrastructure challenges include dealing with infrastructure change, agreeing on infrastructure management and governance, and making wise infrastructure investments. Solution guidelines include using a competitive forces model to determine how much to spend on IT infrastructure and where to make strategic infrastructure investments, and establishing the total cost of ownership (TCO) of information technology assets. The total cost of owning technology resources includes not only the original cost of computer hardware and software but also costs for hardware and software upgrades, maintenance, technical support, and training.

KEY TERMS

The following alphabetical list identifies the key terms discussed in this chapter.

Ajax — development technique for creating interactive Web applications capable of updating the user interface without reloading the entire browser page.
Application server — software that handles all application operations between browser based computers and a company’s back-end business applications or databases. Application service provider (ASP) — company providing software that can be rented by other companies over the Web or a private network. Autonomic computing — effort to develop systems that can manage themselves without user intervention. Blade servers — entire computer that fits on a single, thin card (or blade) and that is plugged into a single chassis to save space and power and reduce complexity. Client — the user point of entry for the required function in client/server computing. Normally a desktop computer, workstation, or laptop computer. Client/server computing — a model for computing that splits processing between clients and servers on a network, assigning functions to the machine most able to perform the function. Edge Computing — model for distributing the computing load (or work) across many layers of Internet computers to minimize response time. Enterprise application integration (EAI) software — software that works with specific software platforms to tie together multiple applications to support enterprise integration. Extensible Markup Language (XML) — general purpose language that describes the structure of a document and supports links to multiple documents, enabling data to be manipulated by the computer. Used for both Web and non-Web applications. Grid computing — applying the resources of many computers in a network to a single problem. Hypertext Markup Language (HTML) — page description language for creating Web pages and other hypermedia documents. Java — programming language that can deliver only the software functionality needed for a particular task, such as a small applet downloaded from a network; it can run on any computer and operating system. 
Legacy Systems — a system that has been in existence for a long time and that continues to be used to avoid the high cost of replacing or redesigning it. Linux — reliable and compactly designed operating system that is an offshoot of Unix, that can run on many different hardware platforms, and that is available for free or at very low cost. Used as an alternative to Unix and Microsoft Windows NT. Mainframe — largest category of computer, used for major business processing. Mashup — composite software applications that depend on high-speed networks, universal communication standards, and open-source code and are intended to be greater than the sum of their parts. Middleware — software that connects two disparate applications, enabling them to communicate with each other and to exchange data. Minicomputers — middle-range computer used in systems for universities, factories, or research laboratories. Moore’s Law — assertion that the number of components on a chip doubles each year. Multicore processor — is an integrated circuit that contains two or more processors. Multitiered (N-tier) client/server architecture — client/server network in which the work of the entire network is balanced over several different levels of servers. Nanotechnology — technology that builds structures and processes based on the manipulation of individual atoms and molecules. On-demand computing — firm’s off-loading peak demand for computing power to remote, large-scale data processing centres, investing just enough to handle average processing loads and paying for only as much additional computing power as the market demands. Also called utility computing. Open-source software — software that provides free access to its program code, allowing users to modify the program code to make improvements or fix errors. Operating systems — the system software that manages and controls the activities of the computer. 
Outsourcing — the practice of contracting computer centre operations, telecommunications networks, or applications development to external vendors.
Scalability — the capability of a computer, product, or system to expand to serve a larger number of users without breaking down.
Server — computer specifically optimized to provide software and other resources to other computers over a network.
Service-oriented architecture (SOA) — software architecture of a firm built on a collection of software programs that communicate with each other to perform assigned tasks to create a working software application.
Simple Object Access Protocol (SOAP) — set of rules that allow Web services applications to pass data and instructions to one another.
Software package — a prewritten, pre-coded, commercially available set of programs that eliminates the need to write software programs for certain functions.
Storage area network (SAN) — a high-speed network dedicated to storage that connects different kinds of storage devices, such as tape libraries and disk arrays, so they can be shared by multiple servers.
Technology standards — specifications that establish the compatibility of products and the ability to communicate in a network.
Total cost of ownership (TCO) — designates the total cost of owning technology resources, including initial purchase costs, the cost of hardware and software upgrades, maintenance, technical support, and training.
Universal Description, Discovery, and Integration (UDDI) — a framework that enables a Web service to be listed in a directory of Web services so that it can be easily located by other organizations and systems.
Unix — operating system for all types of computers that is machine independent and supports multiuser processing, multitasking, and networking. Used in high-end workstations and servers.
Utility computing — model of computing in which companies pay only for the information technology resources they actually use during a specified time period. Also called on-demand computing or usage-based pricing.
Virtualization — the process of presenting a set of computing resources so that they can all be accessed in ways that are not restricted by physical configuration or geographic location.
Web browser — an easy-to-use software tool for accessing the World Wide Web and the Internet.
Web hosting service — company with large Web server computers used to maintain the Web sites of fee-paying subscribers.
Web server — software that manages requests for Web pages on the computer where they are stored and that delivers the page to the user’s computer.
Web services — set of universal standards using Internet technology for integrating different applications from different sources without time-consuming custom coding. Used for linking systems of different organizations or for linking disparate systems within the same organization.
Web Services Description Language (WSDL) — common framework for describing the tasks performed by a Web service so that the service can be used by other applications.
Windows — Microsoft family of operating systems for both network servers and client computers.
Wintel PC — any computer that uses Intel microprocessors (or compatible processors) and a Windows operating system.
REVIEW QUESTIONS
1. What is IT infrastructure and what are its components?
Define IT infrastructure from both a technology and a services perspective.
Answer:
• Technical perspective is defined as the shared technology resources that provide the platform for the firm’s specific information system applications. It consists of a set of physical devices and software applications that are required to operate the entire enterprise.
• Service perspective is defined as providing the foundation for serving customers, working with vendors, and managing internal firm business processes. In this sense, IT infrastructure focuses on the services provided by all the hardware and software. IT infrastructure is a set of firm-wide services budgeted by management and comprising both human and technical capabilities. List and describe the components of IT infrastructure that firms need to manage. Students may wish to use Figure 5-10 to answer the question. IT infrastructure today is composed of seven major components. • Internet Platforms – Apache, Microsoft IIS, .NET, UNIX, Cisco, Java • Computer Hardware Platforms – Dell, IBM, Sun, HP, Apple, Linux machines • Operating Systems Platforms – Microsoft Windows, UNIX, Linux, Mac OS X • Enterprise Software Applications – (including middleware), SAP, Oracle, PeopleSoft, Microsoft, BEA • Networking/Telecommunications – Microsoft Windows Server, Linux, Novell, Cisco, Lucent, Nortel, MCI, AT&T, Verizon • Consultants and System Integrators – IBM/KPMG, EDS, Accenture • Data Management and Storage – IBM DB2, Oracle, SQL Server, Sybase, MySQL, EMC Systems 2. What are the stages and technology drivers of IT infrastructure evolution? List each of the eras in IT infrastructure evolution and describe its distinguishing characteristics. Answer: Five stages of IT infrastructure evolution include: • General-purpose mainframe and minicomputer era (1959 to present): consists of a mainframe performing centralized processing that could be networked to thousands of terminals and eventually some decentralized and departmental computing using networked minicomputers. • Personal computer era (1981 to present): dominated by the widespread use of standalone desktop computers with office productivity tools. • Client/server era (1983 to present): consists of desktop or laptop clients networked to more powerful server computers that handle most of the data management and processing. 
• Enterprise computing era (1992 to present): defined by large numbers of PCs linked together into local area networks and growing use of standards and software to link disparate networks and devices into an enterprise-wide network so that information can flow freely across the organization.
• Cloud computing era (2000 to present): a model of computing where firms and individuals obtain computing power and software applications over the Internet, rather than purchasing their own hardware and software.
Define and describe the following: Web server, application server, multitiered client/server architecture.
Answer:
• Web server: software that manages requests for Web pages on the computer where they are stored and that delivers the page to the user’s computer.
• Application server: software that handles all application operations between browser-based computers and a company’s back-end business applications or databases.
• Multitiered client/server architecture: client/server network in which the work of the entire network is balanced over several different levels of servers.
Describe Moore’s Law and the Law of Mass Digital Storage.
• Moore’s Law: the number of components on a chip with the smallest manufacturing costs per component (generally transistors) had doubled each year. Moore later reduced the rate of growth to a doubling every two years.
• Law of Mass Digital Storage: the amount of digital information is roughly doubling every year. Almost all of this information growth involves magnetic storage of digital data, and printed documents account for only 0.003 percent of the annual growth. The cost of storing digital information is falling at an exponential rate of 100 percent a year.
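The doubling behaviour described by Moore’s Law and the Law of Mass Digital Storage can be sketched numerically. The starting figures below are illustrative only, not from the textbook:

```python
# Illustrative sketch of exponential doubling; starting values are hypothetical.

def after_doublings(start, n_doublings):
    """Value after n successive doublings."""
    return start * 2 ** n_doublings

# Moore's Law at Moore's revised rate (doubling every two years):
# a chip with 40 million transistors would, ten years (5 doublings) later,
# hold 40e6 * 2**5 transistors.
print(after_doublings(40_000_000, 5))   # 1280000000

# Law of Mass Digital Storage (information roughly doubling every year):
# 5 exabytes of stored data becomes 5 * 2**3 exabytes after three years.
print(after_doublings(5, 3))            # 40
```

The same function covers both laws because only the doubling period differs; the exponential shape is identical.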
Both of these concepts explain developments that have taken place in computer processing, memory chips, storage devices, telecommunications and networking hardware and software, and software design that have exponentially increased computing power while exponentially reducing costs.
Describe how network economics, declining communication costs, and technology standards affect IT infrastructure.
Network economics: Metcalfe’s Law helps explain the mushrooming use of computers by showing that a network’s value to participants grows exponentially as the network takes on more members. As the number of members in a network grows linearly, the value of the entire system grows exponentially and theoretically continues to grow forever as members increase.
Declining communication costs: the rapid decline in the costs of communication and the exponential growth in the size of the Internet are driving forces that affect IT infrastructure. As communication costs fall toward a very small number and approach zero, utilization of communication and computing facilities explodes.
Technology standards: growing agreement in the technology industry to use computing and communication standards. Technology standards unleash powerful economies of scale and result in price declines as manufacturers focus on products built to a single standard. Without economies of scale, computing of any sort would be far more expensive than is currently the case.
3. What are the current trends in computer hardware platforms?
Describe the evolving mobile platform, grid computing, and cloud computing.
Answer:
Mobile platform: more and more business computing is moving from PCs and desktop machines to mobile devices like cell phones and smartphones. Data transmissions, Web surfing, e-mail and instant messaging, digital content displays, and data exchanges with internal corporate systems are all available through a mobile digital platform.
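Metcalfe’s Law, described above under network economics, can be illustrated with a short calculation. Counting potential pairwise connections is one common proxy for a network’s value; the member counts here are arbitrary:

```python
# Metcalfe's Law sketch: a network's value grows roughly with the square of
# its membership (n * (n - 1) potential pairwise connections), while
# membership itself grows only linearly. Member counts are illustrative.

def network_value(members):
    """Potential pairwise connections, a simple proxy for network value."""
    return members * (members - 1)

for n in (10, 100, 1000):
    print(n, network_value(n))
# membership grows 10x at each step, but the value proxy grows roughly 100x
```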
Netbooks, small, low-cost, lightweight subnotebooks optimized for wireless communication and Internet access, are part of this mobile platform.
Grid computing: connects geographically remote computers into a single network to create a “virtual supercomputer” by combining the computational power of all computers on the grid.
Cloud computing: a model of computing where firms and individuals obtain computing power and software applications over the Internet, rather than purchasing their own hardware and software. Data are stored on powerful servers in massive data centres and can be accessed by anyone with an Internet connection and a standard Web browser.
Explain how businesses can benefit from autonomic computing, virtualization, and multicore processors.
Autonomic computing
Benefits of autonomic computing include systems that automatically:
• Configure themselves
• Optimize and tune themselves
• Heal themselves when broken
• Protect themselves from outside intruders and self-destruction
These capabilities reduce maintenance costs and reduce downtime from system crashes.
Virtualization
Benefits of server virtualization include:
• Run more than one operating system at the same time on a single machine.
• Increase server utilization rates to 70 percent or higher.
• Reduce hardware expenditures. Higher utilization rates translate into fewer computers required to process the same amount of work.
• Mask server resources from server users.
• Reduce power expenditures.
• Run legacy applications on older versions of an operating system on the same server as newer applications.
• Facilitate centralization of hardware administration.
Multicore processors
Benefits of multicore processors:
• Cost savings by reducing power requirements and hardware sprawl.
• Less costly to maintain as fewer systems need to be monitored.
• Performance and productivity benefits beyond the capabilities of today’s single-core processors.
• Able to handle the exponential growth of digital data and the globalization of the Internet.
• Able to meet the demands of sophisticated software applications under development.
• Run applications more efficiently than single-core processors, giving users the ability to keep working even while running the most processor-intensive task in the background.
• Able to increase performance in areas such as data mining, mathematical analysis, and Web serving.
4. What are the current trends in software platforms?
Define and describe open source software and Linux and explain their business benefits.
Answer:
Open-source software provides all computer users with free access to the program code so they can modify the code, fix errors in it, or make improvements. Open-source software is not owned by any company or individual. A global network of programmers and users manages and modifies the software. By definition, open-source software is not restricted to any specific operating system or hardware technology. Several large software companies are converting some of their commercial programs to open source.
Linux is the most well-known open-source software. It’s a UNIX-like operating system that can be downloaded from the Internet, free of charge, or purchased for a small fee from companies that provide additional tools for the software. It is reliable, compactly designed, and capable of running on many different hardware platforms, including servers, handheld computers, and consumer electronics. Linux has become popular during the past few years as a robust low-cost alternative to UNIX and the Windows operating system.
Thousands of open-source programs are available from hundreds of Web sites. Businesses can choose from a range of open-source software including operating systems, office suites, Web browsers, and games. Open-source software allows businesses to reduce the total cost of ownership.
It provides more robust software that’s often more secure than proprietary software.
Define Java and Ajax and explain why they are important.
Answer:
Java: Java is a programming language that delivers only the software functionality needed for a particular task. With Java, the programmer writes small programs called applets that can run on another machine on a network. Programmers can write programs that execute on a variety of operating systems and environments. Further, any program can be a series of applets that are distributed over networks as they are needed and as they are upgraded. Java is important because of the dramatic growth of Web applications. Java is an operating system-independent, processor-independent, object-oriented programming language that can run on multiple hardware platforms.
Ajax: Ajax is short for Asynchronous JavaScript and XML. It allows a client and server to exchange small pieces of data behind the scenes so that an entire Web page does not have to be reloaded each time the user requests a change. It is a Web development technique for creating interactive Web applications that makes it easier and more efficient for Web site users to complete forms and use other interactive features.
Define and describe Web services and the role played by XML.
Answer:
Web services offer a standardized alternative for dealing with integration across various computer platforms. Web services are loosely coupled software components based on XML and open Web standards that are not product specific and can work with any application software and operating system. They can be used as components of Web-based applications linking the systems of two different organizations or to link disparate systems of a single company. Web services are not tied to a particular operating system or programming language.
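The role of XML as the standard data-exchange format behind Web services can be shown with a minimal sketch using Python’s standard library. The order document and its element names are invented for illustration:

```python
# Minimal sketch of XML as a data-exchange format. The order document and
# its element names are hypothetical; any application on any platform can
# parse the same structure in the same way.
import xml.etree.ElementTree as ET

order_xml = """
<order id="1001">
  <customer>Foremost Composite Materials</customer>
  <item sku="CM-204" quantity="3">Carbon panel</item>
</order>
"""

order = ET.fromstring(order_xml)
print(order.get("id"))                      # 1001
print(order.find("customer").text)          # Foremost Composite Materials
print(order.find("item").get("quantity"))   # 3
```

Because the structure is self-describing, the receiving process needs no knowledge of how the sender generated the document, which is exactly what makes XML useful for linking disparate systems.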
Different applications can use them to communicate with each other in a standard way without time-consuming custom coding. XML provides a standard format for data exchange, enabling Web services to pass data from one process to another.
Businesses use Web services to tie their Web sites with external Web sites, creating an apparently seamless experience for users. The benefit derives from not having to recreate applications for each business partner or specific functions within a single company.
Define and describe software mashups and widgets.
Answer:
Mashups are new software applications and services created by combining different online software applications using high-speed data networks, universal communication standards, and open-source code. Entrepreneurs can create new applications and services by combining these sources. The idea is to take different sources and produce a new work that is “greater than” the sum of its parts. Web mashups combine the capabilities of two or more online applications to create a kind of hybrid that provides more customer value than the original sources alone.
Widgets are small software programs that can be added to Web pages or placed on the desktop to provide additional functionality. Web widgets run inside a Web page or a blog. Desktop widgets integrate content from an external source into the user’s desktop to provide services such as a calculator, a dictionary, or a display of current weather conditions.
Businesses benefit most from these new tools and trends by not having to re-invent the wheel. Widgets have already been developed by someone else, and a business can use them for its own purposes. Mashups let a business combine previously developed Web applications into new ones with new purposes. They don’t have to re-invent the previous applications from scratch; they merely use them in the new processes.
Name and describe the three external sources for software.
Software packages from a commercial software vendor: prewritten, commercially available sets of software programs that eliminate the need for a firm to write its own software program for certain functions, such as payroll processing or order handling.
Software-as-a-service: a business that delivers and manages applications and computer services from remote computer centres to multiple users using the Internet or a private network. Instead of buying and installing software programs, subscribing companies can rent the same functions from these services. Users pay for the use of this software either on a subscription or a per-transaction basis. The business must carefully assess the costs and benefits of the service, weighing all people, organizational, and technology issues. It must ensure it can integrate the software with its existing systems and deliver a level of service and performance that is acceptable for the business.
Outsourcing custom application development: an organization contracts its custom software development or the maintenance of existing legacy programs to outside firms, frequently firms that operate offshore in low-wage areas of the world. An outsourcer often has the technical and management skills to do the job better, faster, and more efficiently. Even though it’s often cheaper to outsource the maintenance of an IT infrastructure and the development of new systems to external vendors, a business must weigh the pros and cons carefully. Service level agreements are formal contracts between customers and service providers that define the specific responsibilities of the service provider and the level of service expected by the customer.
5. What are the challenges of managing IT infrastructure and management solutions?
Name and describe the management challenges posed by IT infrastructure.
Answer:
Creating and maintaining a coherent IT infrastructure raises multiple challenges, including:
• Making wise infrastructure investments: IT infrastructure is a major capital investment for the firm. If too much is spent on infrastructure, it lies idle and constitutes a drag on firm financial performance. If too little is spent, important business services cannot be delivered and the firm’s competitors will outperform the underinvesting firm.
• Coordinating infrastructure components: firms create IT infrastructures by choosing combinations of vendors, people, and technology services and fitting them together so they function as a coherent whole.
• Dealing with scalability and technology change: as firms grow, they can quickly outgrow their infrastructure. As firms shrink, they can get stuck with excessive infrastructure purchased in better times. Scalability refers to the ability of a computer, product, or system to expand to serve a larger number of users without breaking down.
• Management and governance: involves who will control and manage the firm’s IT infrastructure.
Explain how using a competitive forces model and calculating the TCO of technology assets help firms make infrastructure investments.
The competitive forces model can be used to determine how much to spend on IT infrastructure and where to make strategic infrastructure investments, starting out new infrastructure initiatives with small experimental pilot projects and establishing the total cost of ownership of information technology assets.
The total cost of owning technology resources includes not only the original cost of acquiring and installing hardware and software but also the ongoing administration costs for hardware and software upgrades, maintenance, technical support, training, and even utility and real estate costs for running and housing the technology.
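A TCO tally along the lines described above can be sketched in a few lines. All of the cost figures below are hypothetical, chosen only to show that the purchase price is a minority share of total cost:

```python
# Hypothetical TCO tally: the acquisition cost is only one line item among
# the ongoing costs discussed above. All figures are invented.

tco_per_desktop = {
    "hardware purchase": 1_000,
    "software licences": 400,
    "upgrades and maintenance": 350,
    "technical support": 500,
    "training": 250,
    "utilities and real estate": 200,
}

total = sum(tco_per_desktop.values())
print(total)                                          # 2700 per desktop
print(total * 30)                                     # 81000 for a 30-desktop fleet
print(tco_per_desktop["hardware purchase"] / total)   # purchase is ~37% of TCO
```

Even with invented numbers, the structure of the calculation matches the point of the TCO model: indirect and ongoing costs dominate the sticker price.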
The TCO model can be used to analyze these direct and indirect costs to help firms determine the actual cost of specific technology implementations.
DISCUSSION QUESTIONS
1. Why is selecting computer hardware and software for the organization an important business decision? What management, organization, and technology issues should be considered when selecting computer hardware and software?
Answer:
Because computer hardware and software significantly affect an organization’s performance, the selection of IT assets is critical to the organization’s operations and ultimate success. Issues include capacity planning and scalability, making decisions regarding the required computer processing and storage capabilities, computer and computer processing arrangements, the kinds of software and software tools needed to run the business, determining the criteria necessary to select the right software, the acquisition and management of the organization’s hardware and software assets, and what new technologies might be available and beneficial to the firm.
2. Should organizations use software service providers for all their software needs? Why or why not? What management, organization, and technology factors should be considered when making this decision?
Answer:
The answer to the first question depends heavily on the organization and its processing, storage, and business needs. When evaluating software service providers, the organization should examine such factors as availability and reliability, technology, fees and how the fees are assessed, and available applications. Managers should compare the costs and capabilities of using software service providers with the organization’s costs and capabilities of operating and owning its own hardware and software assets. The organization should examine how using the service will affect organizational culture and how using an outside vendor addresses organizational and business needs.
The technology factors include examining how well usage of the service fits with the firm’s IT infrastructure, as well as examining the appropriateness of using a software service provider to address the current problem.
COLLABORATION AND TEAMWORK: EVALUATING SERVER OPERATING SYSTEMS
Form a group with three or four of your classmates. One group should research and compare the capabilities and costs of Linux versus the most recent version of the Windows operating system for servers. Another group should research and compare the capabilities and costs of Linux versus UNIX. Each group should present its findings to the class, using electronic presentation software if possible.
Answer:
Answers for this project will vary as students will select different sources from which to gather the information. To complete the project they could easily obtain information by using the Web to gather data. Good sources to explore are:
• http://www.linux.com/
• http://www.microsoft.com/
• http://www.unix.org/
In addition to the above examples, students can use shopping bots to extract this type of information, or they can use print media, computer magazines, the university library, and computer science resource materials.
One group will research and compare Linux and the latest Windows server operating system, focusing on capabilities, performance, security, and licensing costs. Another group will examine the features, performance, and costs of Linux versus UNIX for server operations, highlighting compatibility, support, and scalability. Each presentation will provide a comprehensive analysis to aid in informed decision-making for server operating system selection.
LEARNING TRACK MODULES
How Computer Hardware and Software Work
Service-Level Agreements
The Open Source Software Initiative
Students will find Learning Track Modules on these topics on the MyMISLab for this chapter.
HANDS-ON MIS: PROJECTS
Management Decision Problems
1.
University of Vancouver Medical Center: demand for additional servers and storage technology was growing by 20 percent each year. UVMC was setting up a separate server for every application; servers and other computers were running different operating systems; and it was using technologies from many different vendors.
This case provides an excellent example of how a business can inadvertently create a quagmire with technology. UVMC should consider virtualization as a way to manage its server situation. Virtualization would allow the organization to consolidate many different applications on just a few servers. It could also allow the organization to run different operating systems on a single server. UVMC could consider outsourcing its IT infrastructure so it could concentrate on its core processes. Sometimes an organization must use third-party vendors who specialize in technology, rather than trying to do everything itself.
2. Qantas Airways: needs to keep costs low while providing a high level of customer service. Management had to decide whether to replace its 30-year-old IT infrastructure with newer technology or outsource it. What factors should be considered in the outsourcing decision? List and describe points that should be addressed in an outsourcing service level agreement.
Qantas should use the competitive forces model to help determine how much it should spend on its IT infrastructure. Then it should determine its total cost of ownership of technology assets. It should assess the costs and benefits of software-as-a-service outsourcing, weighing all the people, organizational, and technology issues, including the ability to integrate with existing systems and deliver a level of service and performance that is acceptable for the business. If it chooses to outsource its technology infrastructure, the service level agreement should define the specific responsibilities of the service provider and the level of service expected by Qantas.
The SLA specifies the nature and level of services provided, criteria for performance measurement, support options, provisions for security and disaster recovery, hardware and software ownership and upgrades, customer support, billing, and conditions for terminating the agreement. IMPROVING DECISION MAKING: USING A SPREADSHEET TO EVALUATE HARDWARE AND SOFTWARE OPTIONS Software skills: Spreadsheet formulas Business Skills: Technology pricing This project requires students to use their Web research skills to obtain hardware and software pricing information, and then use spreadsheet software to calculate costs for various system configurations. Answers may vary, depending on the time to access the Web sites of computer hardware and software vendors to obtain pricing information. The sample solution files provided here are for purposes of illustration and may not reflect the most recent prices for desktop hardware and software products. Students will need to use their Web browsing software to find pricing for these products on the Web. They can then enter this information in a spreadsheet and perform some simple calculations to determine prices for 30 desktop systems. An example file named Ch05_Evaluate_HWSW_Options.xls can be found in the Chapter 5 folder. IMPROVING DECISION MAKING: USING WEB RESEARCH TO BUDGET FOR A SALES CONFERENCE Software skills: Internet-based software Business Skills: Researching transportation and lodging costs The Foremost Composite Materials Company is planning a two-day sales conference for October 15–16, starting with a reception on the evening of October 14. The conference consists of all-day meetings that the entire sales force, numbering 125 sales representatives and their 16 managers, must attend. Each sales representative requires his or her own room, and the company needs two common meeting rooms, one large enough to hold the entire sales force plus a few visitors (200) and the other able to hold half the force. 
Management has set a budget of $85 000 for the representatives’ room rentals. The hotel must also have such services as overhead and computer projectors as well as business centre and banquet facilities. It also should have facilities for the company reps to be able to do work in their rooms and to enjoy themselves in a swimming pool or gym facility. The company would like to hold the conference in either Whistler, BC, or Montreal, QC. Foremost usually likes to hold such meetings in Hilton- or Marriott-owned hotels.
Use their sites (www.hilton.com and www.marriott.com) to select a hotel in whichever of these cities would enable the company to hold its sales conference within its budget. Link to the two sites’ home pages, and search them to find a hotel that meets Foremost’s sales conference requirements. Once you have selected the hotel, locate flights arriving the afternoon prior to the conference because the attendees will need to check in the day before and attend your reception the evening prior to the conference. Your attendees will be coming from Toronto (54), Vancouver (32), Quebec City (22), Edmonton (19), and Winnipeg (14). Determine the cost of each airline ticket from these cities. When you are finished, draw up a budget for the conference. The budget will include the cost of each airline ticket, the room cost, and $60 per attendee per day for food.
• What was your final budget?
• Which did you select as the best hotel for the sales conference and why?
The students will likely find hotels that interest them personally. In addition, the cities may not have Hiltons or Marriotts, so students should be encouraged to evaluate other hotels (e.g., Sheraton, Delta). The template that has been provided has a checklist for all of the hotel requirements to help keep them on track. You can show this in class or distribute it for your students’ use.
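The budget arithmetic for this project can be sketched as follows. The attendee counts and the $60-per-day food allowance come from the project description; the airfares, room rate, and night count are placeholders to be replaced with the students’ actual Web research:

```python
# Sketch of the conference budget arithmetic. Attendee counts and the food
# allowance are from the project; airfares, the room rate, and the number
# of nights are hypothetical placeholders.

attendees = {"Toronto": 54, "Vancouver": 32, "Quebec City": 22,
             "Edmonton": 19, "Winnipeg": 14}
airfare = {"Toronto": 450, "Vancouver": 250, "Quebec City": 500,
           "Edmonton": 300, "Winnipeg": 350}     # hypothetical return fares

room_rate = 180        # hypothetical nightly rate
nights = 3             # arrive Oct 14, depart after Oct 16
food_per_day = 60
days_of_food = 3

total_attendees = sum(attendees.values())        # 125 reps + 16 managers = 141
flights = sum(n * airfare[city] for city, n in attendees.items())
rooms = total_attendees * room_rate * nights
food = total_attendees * food_per_day * days_of_food

print(total_attendees, flights, rooms, food, flights + rooms + food)
print(rooms <= 85_000)   # check room rentals against the $85 000 budget
```

Swapping in real hotel and airfare quotes turns this into the spreadsheet the project asks for, with the final line serving as the budget check.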
They should also write a brief report detailing why they chose the hotel they did; price should not be the only issue. Several airline Web sites are available, and students will choose various ones based on their knowledge of airlines. Some will go directly to the airline site and others will go to discounters. Ask them to rate the use of the Web sites in their report as well.
CASE STUDY: AMAZON’S NEW STORE: UTILITY COMPUTING
1. What technology services does Amazon provide? What are the business advantages to Amazon and to subscribers of these services? What are the disadvantages to each? What kinds of businesses are likely to benefit from these services?
Answer:
Amazon provides cloud computing, also known as on-demand computing or utility computing. Similar to other utility providers like electric, water, and natural gas utilities, Amazon provides computing capacity to businesses that want to pay only for what they use. Amazon can generate extra revenue from other businesses by offering its excess capacity to those that need it. Like most companies, Amazon used only a small portion of its total computing capacity at any one time. Its infrastructure is considered by many to be among the most robust in the world.
Subscribers to the Simple Storage Service (S3) can use only what they need without having to purchase their own hardware and software. That reduces the total cost of ownership for small and medium-size businesses. The system is scalable and reliable for both Amazon and subscribers. The Elastic Compute Cloud (EC2) service enables businesses to utilize Amazon’s servers for computing tasks without the overhead costs. Risks associated with incorporating the technology are minimal for businesses; Amazon takes most of the risks. Companies may want to go with more established names in computing; Amazon is not known as a technology company, since its reputation is more as a retailer. It’s combating this perception by not requiring service contracts.
However, its competitors, such as IBM, HP, and Sun Microsystems, may follow Amazon's lead and offer utility computing without requiring service-level agreements. Some companies are wary of using a supplier that doesn't offer SLAs, which guarantee the availability of services in terms of time. The growth of Amazon Web Services (AWS) could be harmful to its Web services line as well as its retail line if the company doesn't position itself to handle a dramatic increase in demand on its infrastructure. Customers may experience outages in the service and have no recompense, since there are no service-level agreements, only Amazon's word that it will maintain 99.9 percent availability. Businesses large and small can benefit from using AWS. The service relieves small businesses of the TCO of having their own systems. AWS creates the opportunity for others to work at Web scale without making the mistakes that Amazon has already made and learned from. Large businesses can use AWS as an auxiliary unit without having to increase their hardware and associated TCO.

2. How do the concepts of capacity planning, scalability, and TCO apply to this case? Apply these concepts both to Amazon and to subscribers of its services.

Answer: Amazon must handle capacity planning and scalability not just for its own needs but for all of its subscribers. Overestimating capacity will create a drain on Amazon's financial assets; underestimating it will create shortages for its own business and its subscribers, and too many instances of non-availability will create the impression that Amazon can't manage the service. Providing scalability for such a large, diverse number of users without breaking down is a huge task. Amazon must bear the total TCO of its services, all the while ensuring it can profit from them. Subscribers benefit from not having to worry about these issues and from not bearing the brunt of TCO themselves.

3. Search the Internet for companies that supply utility computing. Select two or three such companies and compare them to Amazon. What services do these companies provide? What promises do they make about availability? What is their payment model? Who is their target client? If you were launching a Web startup business, would you choose one of these companies over Amazon for Web services? Why or why not? Would your answer change if you were working for a larger company and had to make a recommendation to the CTO?

Answer: Sun Microsystems offers utility computing through grid computing. It charges $1 per CPU-hour. It provides platforms for its target users in computational mathematics, computer-aided engineering, electronic design automation, financial services, and life sciences computing tasks. Software developers use Sun's Network.com service for building, testing, and deploying new applications for their customers. It promises 99.9 percent availability. Hewlett-Packard (HP) provides utility computing for PCs, server storage, mail and messaging, print, and centralized data centre infrastructure through its distributed grid technology. It targets small, medium, and large-sized companies for a variety of computing services. Costs were not available on its Web site; availability was listed as 99.9 percent. Amazon seems to be an easier service to incorporate into a startup business because it has been geared toward small and medium-sized businesses since its inception. It doesn't bring the same baggage to the table as the larger, more diverse companies do.

4. Name three examples each of IT infrastructure hardware components and software components that are relevant to this case. Describe how these components fit into or are used by Amazon's Web services and/or the customers that subscribe to these services.
Answer: Amazon's Web services use the following hardware components:
• client/server architecture, as the server
• grid computing
• distributed processing
• storage area networks
• virtualization
• multicore processors

Customers using Amazon's Web services use the following hardware components:
• client/server architecture, as the client
• distributed processing
• storage area networks

Amazon's Web services use the following software components:
• Linux and Unix operating system software
• Java, Ajax, and HTML, as the provider
• XML
• Software as a Service (SaaS), as the software provider

Customers using Amazon's Web services use the following software components:
• Windows or Mac OS operating system software
• Java, Ajax, and HTML, as the recipient
• Software as a Service (SaaS), as the software user
• mashups, widgets, and cloud computing, which could be used by customers

5. Think of an idea for a Web-based startup business. Explain how this business could utilize Amazon's S3 and EC2 services.

Answer: Students will present a variety of startup business ideas for this question. They should address the following components:
• Costs associated with S3 data storage
  o Estimates of how much data will be stored
  o Costs per gigabyte of data
• Access procedures for S3 data storage (they may have to research Amazon's site to determine what the processes are)
• Costs associated with EC2
  o Estimate the number of instance-hours the business will consume
  o Estimate the inbound and outbound data traffic
  o Estimate the AMI costs
• Access procedures for EC2
• Interfaces that may be required between the business and Amazon's services
• Processes that may be necessary in case of outages

One idea for a Web-based startup business is an online photo editing platform. This business could use Amazon's S3 service to store user-uploaded images securely and reliably.
Additionally, Amazon EC2 could be used to host the platform's servers, providing scalable computing power to handle varying levels of user traffic and demand.

6. What are the legal, ethical, and privacy issues for Canadian businesses that might want to use the AWS and S3 services offered by Amazon? How could that affect the potential for these services in Canada? Is there a solution to this problem?

Answer: Canadian companies are required to adhere to federal privacy legislation (PIPEDA; see www.priv.gc.ca) and must protect their customer data, while the Patriot Act in the United States allows the US government access to data files stored there. The legal, ethical, and privacy issues for Canadian businesses using Amazon's AWS and S3 services therefore include compliance with Canadian data protection laws, secure handling of sensitive information, and concerns about data sovereignty and jurisdiction. Failure to address these issues could lead to legal liabilities and reputational damage, limiting the potential for these services in Canada. Canadian companies will want to store data on servers located in Canada, so one solution is for Amazon to offer data centres located within Canada, ensuring compliance with local regulations and addressing data sovereignty concerns.

Solution Manual for Management Information Systems: Managing the Digital Firm
Kenneth C. Laudon, Jane P. Laudon, Mary Elizabeth Brabston
9780132142670, 9780133156843
