Centralization vs. Decentralization of Application Software

Whichever way the IT department chooses, the result should never lose sight of the user.
  1. Introduction
  2. Ultimate Goal: User Satisfaction
  3. Centralization/Decentralization Cycle
  4. Centralization Enablers
  5. Decentralization Enablers
  6. The Choice
  7. Conclusion
  8. References
  9. Authors
  10. Figures

Historically, information technology departments have cycled between centralized and decentralized application software distribution, although modular program design and enterprise management software may break that cycle. Meanwhile, IT departments that want to manage the distribution and configuration of software across their networks are searching for an acceptable balance of control, reliability, and speed. Distributing application files on individual PCs maximizes network performance, but makes it much more difficult to enforce configuration standards and maintain control. Placing application files in a few central locations gives an IT department significant control over software configuration but may degrade network performance and lead to user dissatisfaction.

As the components and software in corporate networks become increasingly complex, simplification of their management and administration becomes essential. The network-attached PC has a high cost of ownership; CIO Magazine has estimated the cost to be $10,000 per desktop per year [3, 4]. Most of that cost goes to maintenance rather than to installation or equipment purchases. These costs can be cut by reducing the number of hours spent on network administration tasks like implementing new software, distributing new software versions, applying patches, and troubleshooting problems with the individual software applications installed on each workstation.

Centralization of application software is one approach corporate IT departments turn to in order to achieve this cost reduction. Major companies, including Hallmark, Hollywood Video, and Lexmark, report up to a 50% reduction in the total cost of ownership as a result of centralization and standardization in a thin-client environment [1, 5]. Unfortunately, a centralized scheme may not be appropriate in all organizations; distributed schemes are sometimes preferable. The best solution for a specific IT department depends on several factors, including bandwidth availability, application modularity, and the uniformity of an organization’s workstation configurations.

Here, we offer guidelines for evaluating whether centralization or decentralization of application software is most appropriate for a particular organization. We describe the advantages of both centralization and decentralization, as well as the cycle motivating IT departments to move from centralized schemes to decentralized schemes (and often back again). We explore the hardware and software solutions enabling both centralization and decentralization. Finally, we present a framework for evaluating whether centralization or decentralization is most appropriate for a particular organization.

Ultimate Goal: User Satisfaction

Whether an IT department adopts a centralized or decentralized software distribution scheme, its ultimate goal is to maximize user satisfaction. This goal is achieved by providing a combination of reliability and flexibility appropriate to the needs of the user community. Centralization and decentralization differ in how that goal is achieved. In a centralized scheme, the application software resides at a central location (or several central locations, depending on the size of the installed client base). In a decentralized scheme, the application software resides on each of the client machines.

Issues influencing centralization. In a centralized scheme, the minimum possible number of application files (such as word processing and spreadsheet programs) and operating system files (such as those for Windows 98, 2000, and Me) resides on the client. Network centralization is characterized by six principal advantages, as described here.

Easier-to-enforce uniform standards. Enforcement of a consistent operating system, hardware driver files, or software versions is easier in a centralized software scheme. The diversity in workstation configurations can be minimized by restricting the options available in network-based configuration files.

Easier workstation support, repair, and maintenance. Reduced diversity in the individual workstation configurations simplifies the task of keeping all workstations in a corporate network as homogeneous as possible. In addition, files kept in a central location can be protected from unintended modifications. This protection and relative isolation reduce the chance of accidental deletion and corruption of files essential to the operation of an application.

Easier software installation, upgrades, and patches. Centralizing application software reduces the number of copies claiming precious storage and bandwidth on the corporate network. Fewer copies mean fewer technicians required to update, upgrade, and fix software problems, because changes have to be made in fewer places.

Lower support costs. Greater control over workstation configuration means fewer support technicians required to maintain the installed base of workstations.

Improved service to end users. Users should experience fewer disruptions and more timely fixes. Because they automatically receive the latest version whenever they access an application, they never have to wait for a technician to show up for a new installation or to install a new version.

Economies of scale for hardware. Nearly every large organization has file servers for data storage, complete with fault tolerance and the ability to provide a comprehensive backup of user data. These centralized servers simplify the task of increasing networkwide disk storage; it is more cost-effective to increase network storage than to continuously upgrade disk storage for each client on the network.

Tools and architectures supporting centralized administration of networks are increasingly available. Universal access to network services is available through a variety of directory services products, including Active Directory from Microsoft and Directory Services for NetWare from Novell. A centralized software scheme fits well with the centralization of network administration, leading to the consolidation of network maintenance in a single location. Network and software administration tools together allow a single support technician to manage and maintain most of the application software from the central location.

Issues influencing decentralization. In a decentralized scheme, application files reside on each of the client workstations; software application decentralization is characterized by three principal advantages.

Minimal bandwidth requirements. Placing application software on each workstation means it does not have to be retrieved from a central file server each time it is run. Because local application storage limits the movement of application software through the network, it requires significantly less bandwidth than a centralized scheme. If the current network infrastructure does not provide sufficient bandwidth to its clients to support a centralized application scheme, decentralization represents a potential alternative.

Less-restrictive software design requirements. Because the executable files for almost all current application software are large, placing these files on the server exacerbates existing bandwidth problems or creates new ones. The only way to avoid network bottlenecks without increasing bandwidth is to design highly modular applications. Local storage of application files eliminates this design constraint.

More end-user choice regarding workstation configuration. In organizations with computers configured for use in a variety of tasks and no significant numbers of any particular configuration, it may not be feasible to enforce standards. A decentralized approach makes it easier to accommodate the needs of individual users. Because software installations are separate, one workstation’s configuration presents no risk of interfering with the successful operation of others.

The choice between centralization and decentralization involves assessment of how important the advantages of each scheme are to a particular organization. For example, if reliability and maintainability are most important, centralization may be the better choice. If configuration flexibility and user choice are most important, decentralization may be the better choice.

Centralization/Decentralization Cycle

Now we consider bandwidth and modularity issues, as well as how the increased size and sophistication of application software can lead to an ongoing cycle between centralized and decentralized application schemes. While directory services products provide a means to centralize network administration, efforts to centralize application software have been less successful. Centralized access to application software is not a new concept, having been touted since the 1980s as a major advantage of local-area networks. In fact, since the proliferation of mainframe computers in the 1960s, IT departments have struggled over whether to centralize or decentralize application software (see Figure 1). The history of the LAN in a typical IT department often looks something like this:

  • PCs are isolated, each with its own copy of the operating system and its own set of software.
  • The LAN allows the IT department to place data in a central location—the file server—so many users can access it while security is enforced and comprehensive periodic backups are performed.
  • Application software is moved to the file server, and all but the most basic operating system files and the software required to attach to the network are moved to central file servers.
  • Increasing file size in applications creates network bottlenecks; software application load times increase to unacceptable levels, first with Windows, then with the applications themselves.
  • The full operating system moves back to the local machine, but application software is left on the file server. Applications become larger and larger, making network response unacceptable.
  • Application software moves back to the local workstation. File servers provide such network services as authentication and print services, but only user data files are kept on the server.
  • The configuration of each workstation is ultimately quite different from how it began. The IT department finds itself back where it started, with copies of the same software loaded on many workstations across its network. All the support issues resolved by moving applications to the server return when they are moved back to individual workstations.

Moreover, applications and operating systems are much more complex today than they were in the early- to mid-1990s. For example, the directory on a PC that contains DOS 6.22 has about 100 files, the directory containing Windows 3.11 has about 450 files, the Windows 95 directory has about 1,200 files, and the Windows 2000 directory has about 5,000 files. It follows that more user support is required for Windows 2000 than for DOS 6.22, and that ever more IT staff must be hired to support these decentralized systems.

IT departments appear to be facing a dilemma. If they move applications locally to each PC’s hard drive, administrative headaches quickly ensue. If they move applications to a server, the server and the network infrastructure are bogged down by traffic (see Figure 2). To break this cycle and determine whether to adopt a centralized or decentralized strategy for software distribution, it is necessary to examine the enablers of each option, and then develop a framework from which to make a decision.

Centralization Enablers

The primary barrier to network centralization is limited bandwidth, for which two approaches offer remedies. The first is to provide enough network bandwidth to push capacity beyond the point at which bottlenecks occur. The second is to increase the modularity of the applications, reducing the amount of data crossing the network at any point in time and thus the bandwidth required.

Bandwidth and network hardware. The most vexing problem of network centralization is the potential for network bottlenecks. The two topologies with the largest installed base—16Mb Token Ring and 10Mb Ethernet—do not deliver acceptable response time when 100 clients simultaneously attempt to download an 8.4MB word processing application (such as Microsoft Word).
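
A back-of-the-envelope calculation makes the bottleneck concrete (assuming the nominal bandwidth is fully shared among the clients and ignoring protocol overhead and collisions, which in practice make matters worse):

$$ t = \frac{100 \times 8.4\,\mathrm{MB} \times 8\,\mathrm{bits/byte}}{10\,\mathrm{Mb/s}} = \frac{6{,}720\,\mathrm{Mb}}{10\,\mathrm{Mb/s}} \approx 672\,\mathrm{s} \approx 11\,\mathrm{minutes} $$

Eleven minutes to load a single application is far outside any acceptable response time.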

This inability to deliver an acceptable response time is not in itself a permanent problem. New technologies, including 100Mb Ethernet, Gigabit Ethernet, Asynchronous Transfer Mode, and Fiber Distributed Data Interface, offer much greater bandwidth. However, with the exception of 100Mb Ethernet (also called Fast Ethernet), these technologies are too expensive to be practical for widespread implementation. Prices will most certainly drop significantly in the near future; standard 10Mb Ethernet cards, which sold for more than $400 in the early 1990s, now go for about $20. However, the problem inherent in waiting for new technologies is that IT departments wind up “chasing bandwidth.” Bandwidth increases are inevitably consumed by the ever-larger files produced by each new generation of applications.

Modularity and the Java platform. Another significant development over the past six years is the still-emerging Java standard. By providing a mechanism for easy development and distribution of small application components, Java represents a potential solution for implementing a centralized network. Java has proved itself in the Web environment. The applets required to run graphical user interfaces are relatively small (less than 100KB), easy to load across a network, and put little strain on limited bandwidth.
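
To make the scale concrete, consider a minimal applet of the sort described; this is a hypothetical sketch, not taken from any product, but a class like it compiles to well under 1KB:

```java
import java.applet.Applet;
import java.awt.Graphics;

// A minimal applet that displays a short menu. The compiled .class
// file is only a few hundred bytes, so fetching it from a central
// server on each use places little strain on network bandwidth.
public class MenuApplet extends Applet {
    private final String[] items = { "New", "Open", "Save" };

    // The browser calls paint() whenever the applet must be redrawn
    public void paint(Graphics g) {
        for (int i = 0; i < items.length; i++) {
            g.drawString(items[i], 10, 20 * (i + 1));
        }
    }
}
```

Because each such piece is fetched from the server only when needed, only the code actually used crosses the network.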

Is Java something truly new or simply a continuation of the long-running centralization/decentralization cycle? Java applications are placed on a central server and downloaded to client workstations each time they are run. This arrangement is similar to what IT departments were doing with standard executable files in the late 1980s, before those files grew too large to leave on a file server. Today, load times for Java applications are acceptable because the majority of Java applets perform simple tasks, such as displaying a menu or scrolling text across the screen. If tomorrow’s hypothetical “Microsoft Word for Java” application is 8.4MB (the size of Microsoft Word 2000), will the IT department have gained anything by deploying Java applications? Is it just a matter of time before Java applications are so large they, too, have to be loaded locally on each workstation?

An early test of Java’s potential for enabling modular application software design appears to have been a failure. In 1999, Lotus Development discontinued sales of its eSuite. The eSuite product design was based on the principle that most users’ needs could be met with a subset of the capabilities normally provided by a full set of desktop productivity applications. That is, eSuite was designed as a series of small pieces, each providing specific functions to the user, with individual pieces downloaded from the central server only when needed. As long as only a few pieces had to be downloaded to each user, bandwidth and performance were not a problem. Unfortunately, many pieces had to be downloaded to each user, thus defeating the design’s main purpose.

Decentralization Enablers

Efforts to make the alternate scheme of locally installed software more viable show progress. These solutions enforce a standard “model” configuration on each workstation on a network. Although slight deviations from the standard may occur, mechanisms are put in place as needed to bring the configurations back in line automatically.

Software distribution solutions. Packages for software distribution management, including Microsoft’s Systems Management Server and Symantec’s Norton Administrator, deliver new software installations to each client automatically. If an individual workstation has a problem with its installation of a particular package, the package can be “redelivered” to the workstation. These software management systems allow applications to remain distributed on each client machine while still enforcing uniformity.

Self-repairing software. Microsoft Office 2000 can check whether any of its components are missing and automatically reinstall them from the network. If an end user or a system administrator accidentally deletes an essential file, the entire package need not be reinstalled.
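
Office 2000 implements this through the Windows Installer service; the following is only a minimal sketch of the general idea, with hypothetical paths, file names, and manifest:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Sketch of self-repair: before launch, verify that every file in the
// application's manifest exists locally, and restore any missing file
// from a central server share. All names and paths are hypothetical.
public class SelfRepair {
    private static final Path LOCAL_DIR  = Paths.get("C:/Apps/WordProc");
    private static final Path SERVER_DIR = Paths.get("//fileserver/install/WordProc");
    private static final String[] MANIFEST = { "wordproc.exe", "spell.dll", "grammar.dll" };

    public static void repairIfNeeded() throws IOException {
        for (String name : MANIFEST) {
            Path local = LOCAL_DIR.resolve(name);
            if (!Files.exists(local)) {
                // Only the missing component is recopied, not the whole package
                Files.copy(SERVER_DIR.resolve(name), local,
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        repairIfNeeded();
    }
}
```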

Operating system-based workstation restrictions. Beyond automated correction of software problems after they occur, changes to the workstation configuration can be prevented in the first place. For example, Windows 2000 can restrict its file system and registry so users cannot install new software or make significant changes to the configurations of their workstations. Some packages, including Fortres Grand’s Fortres 101, can place such restrictions on the Windows 98, 2000, and Me operating systems.

Unfortunately, these decentralization enablers can be problematic. A 1998 InformationWeek article cited a 1997 study by the Gartner Group consulting firm that found that three years after being purchased, “70% of enterprise management packages are neither fully implemented nor meeting the goals of their users” [2]. This less-than-perfect performance attests to the difficulty of supporting differentiated software configurations. An implementation may be undermined by permitting differences in individual PC configurations—an important reason for deciding on a decentralized scheme in the first place.

The Choice

Determining whether centralization or decentralization should prevail in a particular organization depends on future developments in software and hardware technology. Network administrators have to balance performance and reliability. Centralization offers greater reliability, though often at the expense of performance; distributed strategies offer greater performance, though often at the expense of reliability.

Figure 3 is a framework for deciding whether to centralize application software; the three factors in the decision tree are the degree of modularity of the software applications, the network’s bandwidth capacity, and the feasibility of a uniform software configuration.

Application modularity. Increased software modularity reduces the size of each individual piece of an application that must be transferred from server to client. This modular structure reduces the amount of data passing through the network at any single moment. If an organization’s software is highly modular, it can be more readily put in a central location and accessed from multiple clients. If a software package consists of large application components, downloading them to multiple workstations is more likely to create a network bottleneck.

Bandwidth capacity of existing infrastructure. An IT department should also examine its network infrastructure. If application software is centralized and not modular, large files must be transferred from the server to the client; the performance of the network greatly depends on its bandwidth capacity. In networks with low bandwidth capacity, the centralization of large non-modular applications is likely to create a network bottleneck. In networks with high bandwidth capacity, even low application modularity may not create a network bottleneck.

Feasibility of a uniform software configuration. If an organization’s workstations perform tasks so different from one another that they require completely different software configurations, the advantages of centralizing applications are largely negated. As the number of software applications on the network increases, so does the likelihood of a compatibility conflict between two or more of the packages. For example, it may be extremely difficult to make multiple versions of a single application work together, due to dynamic-link library (DLL) conflicts. Under these conditions, it may be easier for the IT department to load specialized software applications on the local workstations that require them, leaving the widely used applications on the network.

Figure 3 shows how software modularity, infrastructure bandwidth, and feasibility of configuration uniformity can be used to determine whether an IT department should consider a centralized or decentralized application strategy. If modularity, bandwidth capacity, and feasibility of a uniform configuration are relatively high, the physical conditions are ideal for a centralized strategy. If all three are relatively low, the physical conditions are ideal for a decentralized strategy.

Ideal physical conditions rarely exist. Most organizations are somewhere between the two extremes, and it is not possible to say definitively which strategy is best, although Figure 3 provides some broad guidelines. For example, where application modularity is high and configuration uniformity is also high, it may be unnecessary to have high bandwidth capacity in order to choose centralization. The degrees of the individual characteristics determine which solution is most appropriate.
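
Although Figure 3 itself is not reproduced here, its broad guidelines as described above can be summarized in a short sketch; the binary inputs and the handling of mixed cases are our reading of the text, not a definitive encoding of the figure:

```java
// Sketch of the Figure 3 decision logic described in the text.
// The mixed-case rules reflect the article's broad guidelines only.
public class DistributionChoice {
    enum Level { LOW, HIGH }

    static String choose(Level modularity, Level bandwidth, Level uniformity) {
        // All three high: ideal physical conditions for centralization
        if (modularity == Level.HIGH && bandwidth == Level.HIGH
                && uniformity == Level.HIGH) {
            return "centralize";
        }
        // All three low: ideal physical conditions for decentralization
        if (modularity == Level.LOW && bandwidth == Level.LOW
                && uniformity == Level.LOW) {
            return "decentralize";
        }
        // Example from the text: high modularity plus high configuration
        // uniformity can justify centralization even without high bandwidth
        if (modularity == Level.HIGH && uniformity == Level.HIGH) {
            return "centralize";
        }
        // Everything else: weigh the degrees of the individual factors
        return "judgment call";
    }

    public static void main(String[] args) {
        System.out.println(choose(Level.HIGH, Level.LOW, Level.HIGH)); // centralize
    }
}
```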

Conclusion

The issue of whether or not to centralize application software on a network is an important consideration affecting nearly every large IT department. Historically, IT departments have cycled between centralization and decentralization. Today, however, modular software development paradigms, increasing bandwidth, and enterprise management software may be able to break the cycle and give IT the ability to choose definitively which solution is best for a particular environment.

We outlined three important physical factors—modularity, bandwidth, and feasibility of a uniform configuration—influencing the decision. Technologies are being developed that push in both directions. Java and other relatively new application development platforms facilitate centralization by increasing the potential for software modularity. The latest enterprise management packages, on the other hand, facilitate decentralization through their potential for automating maintenance of distributed applications.

An IT decision-maker should weigh the advantages of both centralization and decentralization in order to determine the most appropriate solution. The framework in Figure 3 should be valuable to IT managers seeking to balance control, reliability, and speed—the factors that ultimately determine user satisfaction.

Figures

Figure 1. The course of decentralization, 1960-2001.

Figure 2. Cycling between low and high centralization.

Figure 3. Decision tree: Which software configuration strategy is best?

References

    1. Anthes, G. TCO tales. Computerworld (Mar. 2, 1998); www.computerworld.com/cwi/story/0,1199,NAV47_STO30015,00.html.

    2. Gillooly, C. Enterprise management disillusionment. InformationWeek (Feb. 16, 1998); www.informationweek.com/669/69iudis.htm.

    3. Strassman, P. The real costs of PCs. Computerworld (Jan. 13, 1997); www.computerworld.com/cwi/story/0,1199,NAV47_STO1091,00.html.

    4. Wheatley, M. Every last dime. CIO Magazine (Nov. 15, 2000); www.cio.com/archive/111500_dime.html.

    5. Wheatley, M. The cutting edge. CIO Magazine (June 1, 1998); www.cio.com/archive/060198_tco.html.
