
Client-Server - What is It?

Draft 2/20/96

"Client-Server" Nice Concept, Too Bad It Became a Buzz Word

The term "client-server", like so many technical concept names, has become distorted and abused by excessive use in product promotion. To get attention, every product must now be proclaimed as an "object-oriented", "client-server", "LAN" technology. This seriously confuses any discussion of those topics. You may remember when "LAN" meant "Local Area Network", a kind of local communication infrastructure to be contrasted to "WAN", a "Wide Area Network". Then suddenly there was "Novell LAN", "Microsoft LAN Manager", and even "UNIX LAN", which were in fact Network Operating Systems not LAN's at all. And so it is with "client-server".

So let's go back to basics to see if the term "client-server" can be salvaged to have some concrete meaning in our technology dialogs. Let's start by separating "client-server" from its supporting network technology concepts.

Back to Basics - the "Network" Concept

A network is a shared, switchable, data communication utility.

The services made possible by a computer network are based on the network's ability to communicate between programs anywhere on the network. Hence multiple resources are accessible regardless of their location.

Modern workstations and services can each use multiple, simultaneous connections. They can exchange single messages, or make and drop data stream connections as needed. Network technology makes these large numbers of connections to multiple destinations more convenient and more cost-effective, and consequently possible instead of impractical. Networking technology does not so much make existing services more cost-effective as remove the cost obstacle that had prevented new services.

Network technology is now common in smart buildings, security systems, and aircraft and automobile systems. The reason for using network technology for these is the same as for data networks: communication between multiple devices separated by some distance.

Back to Basics - the Network "Physical" Infrastructure

Virtual connections on a shared, physical infrastructure provide the necessary multiple data connections.

The physical infrastructure is the real, physical basis of the network communication. Communication is accomplished by signaling over actual, real-world paths or "wires" consisting of copper wires, glass fibers, connectors, and various electronic components. More of these "wires" cost more; each physical attachment costs more; higher performance costs more; and ongoing maintenance also has a cost.

The campus network infrastructure uses star wiring from the workspace to wiring closets, bus or point-to-point wiring between wiring closets, and a token ring backbone. These each use switchable, "wire" sharing technologies, such as Ethernet, LocalTalk, PPP, and FDDI. The structure trades off performance, maintainability, and cost, and is updated as the physical technologies improve. Dial-up Home IP connections are essentially a variation of star wiring.

Back to Basics - the Network "Virtual" Infrastructure

The virtual connections on the shared, physical infrastructure provide the necessary multiple data connections.

A virtual connection is a data transport service provided by the network between two nodes. There is little cost for more connections; however, there is some cost to set up and maintain the software.

Data networking uses software-directed switching and routing of data packets communicated over the shared, physical infrastructure. This technology exchanges the increasing cost of additional physical "wires" for the small cost of additional software connections.

The primary virtual transport infrastructure on campus and on the Internet is TCP/IP. The campus network also routes Novell SPX/IPX and Apple AppleTalk.
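
As a concrete sketch of the virtual connection idea (added here for illustration; the host names are hypothetical), the following Python fragment opens several TCP connections over the same physical interface. Each additional connection costs only a little software, not another wire.

    import socket

    # Hypothetical hosts; each connection is a separate virtual connection
    # carried over the same shared, physical infrastructure.
    hosts = ["server-a.example.edu", "server-b.example.edu", "server-c.example.edu"]

    connections = []
    for host in hosts:
        # Setting up another connection is just a little more software.
        connections.append(socket.create_connection((host, 80), timeout=5))

    for conn in connections:
        print("connected to", conn.getpeername())
        conn.close()   # dropping a connection is equally cheap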

So Far No Client-Server

So far we have built up the basic idea of the data communication network, but it is just a raw, peer-to-peer, data transport infrastructure. So in a sense we are saying that "client-server" is not really a networking technology! And that is correct. Client-server is actually a collection of related, higher level, software ideas that make use of networking technology.

Client-Server - a Software Technology

Throughout computing we encounter the general idea of one piece of software requesting service from another: main programs calling subprograms, programs requesting service from the operating system, and so on. This is often described in terms of "layering": a higher-level program calls on a lower-level program for service, and that lower-level program calls on a yet lower level. The lowest levels are usually the operating system or subsystem APIs. The programmer or operating system designer decides what functions are best done at each level and what level is the most effective place to do each piece of the computing work. The lowest levels usually do the most common or shared functions, and the higher levels do the more task-specific functions.
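
A tiny Python sketch of that layering (added here; the function names are invented for the example), in which the task-specific layer calls a shared library layer, which in turn calls the operating-system level:

    def os_write(path, data: bytes):
        """Lowest level: the operating system API does the raw I/O."""
        with open(path, "wb") as f:
            f.write(data)

    def write_text_file(path, text: str):
        """Middle level: a shared library routine handles the encoding details."""
        os_write(path, text.encode("utf-8"))

    def save_report(title, lines):
        """Highest level: the task-specific work of formatting the report."""
        body = title + "\n" + "\n".join(lines) + "\n"
        write_text_file(title.lower().replace(" ", "_") + ".txt", body)

    save_report("Monthly Summary", ["item one", "item two"])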

And so it is with "client-server". It is just the technology of dividing function and workload between a workstation or "client" computer and a service or "server" computer. Client-server is the modern version of the old design question of where each function belongs, with the design elements spread over a network rather than assembled in a central mainframe. The parts of the software communicate over a data network, using application-level protocols. Each part of the application can be specialized for the job it is doing, and consequently overall performance is enhanced. Typically the client initiates a request and the server responds to it.
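
A minimal client-server exchange might look like the following Python sketch (added for illustration; the port number and the one-line text protocol are invented). The server waits for a request; the client initiates one and reads the reply.

    import socket, threading

    HOST, PORT = "127.0.0.1", 5050          # hypothetical address for the example

    srv = socket.create_server((HOST, PORT))

    def serve_one_request():
        conn, _ = srv.accept()              # the server responds to a request...
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(("uppercased: " + request.upper()).encode())

    threading.Thread(target=serve_one_request, daemon=True).start()

    # ...and the client initiates it, using a trivial application-level protocol.
    with socket.create_connection((HOST, PORT)) as cli:
        cli.sendall(b"hello server")
        print(cli.recv(1024).decode())      # -> uppercased: HELLO SERVER

    srv.close()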

An example is a database management system (DBMS) where the functions are split into a "front-end" that interacts with the user to enter data, ask questions of the data, and write reports, and a "back-end" that stores the data, controls access to data, protects data, and makes necessary changes to the data. The main advantages of the client-server model are less network traffic, better performance, greater flexibility in the application, availability of data, and better control of access to the data.
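
As a toy illustration of the split (added here; the table and function names are invented), only the selected rows cross the boundary between the two halves, rather than the whole table:

    EMPLOYEES = [                      # lives with the server ("back-end")
        {"name": "Ada",   "dept": "ENG", "salary": 81000},
        {"name": "Grace", "dept": "ENG", "salary": 93000},
        {"name": "Linus", "dept": "OPS", "salary": 70000},
    ]

    def back_end_query(dept):
        """Back-end: stores the data, applies the selection, controls access."""
        return [row for row in EMPLOYEES if row["dept"] == dept]

    def front_end_report(dept):
        """Front-end: interacts with the user and formats the answer."""
        rows = back_end_query(dept)    # only the matching rows cross the "network"
        for row in rows:
            print(f"{row['name']:<8} {row['salary']:>7}")

    front_end_report("ENG")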

So Why Client-Server Now? - The Personal Computer Workstation

The importance of the client-server design ideas and the supporting network infrastructure is closely linked to the personal computer workstation as the successor to the shared-mainframe terminal.

The main advantage of the personal computer is that its local processor and tightly coupled devices can interact more effectively with its user than a remotely connected mainframe can. Other advantages are local autonomy, guaranteed response time, constant compute power, linear growth of cost, etc.

The central mainframe excelled in communication, resource sharing, and common support focus. The network infrastructure extends these advantages to the personal computer workstation.

So Why Client-Server Now? - The Network is the System

The "system" is now the composite of the workstation and all services available to that workstation.

This changes the role of both the "personal computer" and the "central mainframe". Neither can claim the prize alone. The "personal computer" becomes the workstation - the basic tool of the knowledge worker. It no longer functions entirely by itself but rather becomes the agent for the user in the system of network resources. The "central mainframe" becomes a server, a service provider in the system of network resources. It no longer controls the user interface directly but rather provides services to its clients.

Follow the Yellow Brick Road

So now let's follow these technologies from their dark beginnings, through the present, and on into the near future. Since at any particular time, such as the present, we will be using technologies that span from the recent past to the near future, this will help us understand how long to hold on to the past and what to do now that will continue to have value into the near future.

The Yellow Computer System

Computer technology provides three general services: processing, storage, and input-output-control.

We can use a computer system to process or analyze data into new forms. This uses the processing units of the system known as the Central Processing Unit, microprocessor, etc. A processing unit takes in data, converts/analyzes/decides about that data, and generates a result as its output. There are general purpose processors and special purpose processors.

We can use a computer system to store and retrieve data. Computer "mass storage" file systems keep databases, programs, text, pictures, video, sound, measurements, etc. The primary storage method of the past was paper (books, sheets, drawings, maps, chart recordings, photographs) kept in libraries, office files, and museums.

We can use a computer system to monitor data and events in the "real world" and to effect control of aspects of that world. Industrial control and automation, smart buildings, experiment data gathering and control, security and safety monitoring, automobile engines, VCRs, etc. These commonly involve "real world" interfaces called "analog to digital" input and "digital to analog" output, that is, sensors and actuators.

So What's the Connection?

The units of a computer system are connected to each other for purposes of communication and control. The processor, storage, and input-output-control unit interconnections can be classified by method as specific, point-to-point, or shared bus. They can also be classified by the speed and distance over which they can operate.

As the price of the three system elements plummeted and performance increased, we at first used larger and larger units, then more and more units. Interconnection and interoperation of the units then became the design problem. The length of a connection is inversely related to its speed at constant cost. The optimal combination of the units and their interconnection changes over time due to purpose and relative cost.

Processors first came packaged as a "mainframe". Disk and input-output were connected directly to and close to the processor using one-of-a-kind designs. Since the mainframe was marketed as the biggest and the fastest, the no-compromise designs tended to continue as "closely coupled" systems with one-of-a-kind design. "Networking" evolved here primarily to connect input-output devices such as terminals, card readers, and printers when the customers were no longer allowed in the computer room. As mainframes became bigger and faster, operating systems went from one job at a time, to multiple jobs at a time, and eventually to multi-user and multitasking operation. At first this was to share the processor, then to share the storage.

The "minicomputer" also started as a one-of-a-kind design with closely coupled disk and input-output, but almost immediately became "modular", built up from common subunits. This required interconnection using standardized electronic and physical interface designs called 'backplanes", "busses", and cabling systems. Minicomputers were still closely coupled systems, and soon the minicomputer user was also pushed out of the computer room and the systems went from single job to multiuser, multitasking, and shared disk. Later, minicomputers began to exchange data over direct point-to-point connections with other minicomputers and mainframes.

Then "Varooom...", the smallest became the biggest. The microcomputer one-of-a-kind designs fell almost instantly to the modular model from the minicomputer experience. The two main threads were the Intel 8080 systems with the S-100 card bus, and the Apple II. The S-100 bus thread was open architecture and driven by many small computer companies and their S-100 bus third party add-on companies. The Apple bus was controlled by the Apple company. The S-100 thread followed the minicomputer trends, pushed its users away from the system box, and went from single user to multiuser, multitasking, shared disk (MPM and UNIX). Apple did not join this trend.

But something new happened with microcomputers. Mass storage was still very expensive relative to the cost of the processor package. This accelerated the local networking trend. By the end of the 1970s, third parties were providing shared disk and printers via LANs! This trend came to full implementation in small companies, while the Fortune 500 companies and universities were still closeted with their mainframes and timeshared minicomputers.

Another trend that evolved primarily with microcomputers was file transfer over dialup connections (XMODEM and later Kermit) and central shared mail, file, and information repositories called Bulletin Boards.

So What is the Connection? - Squeeze It and Stretch It.

So we see two major trends in operation. Squeezing system components closer and closer together for faster performance, and separating system components farther apart for convenience and accessibility.

How Do We Move Forward? - Capacity, Speed, Reliability, Accessibility

More capacity or speed uses larger units or more units. More reliability uses duplicate units. Accessibility separates units with longer interconnections, but conflicts with performance, which prefers close connections.

How Do We Move Forward? - Mass Storage

Let's look at mass storage, or disk, to see these trends in action, since mass storage has evolved through more of the possible styles.

Disks can have larger capacity by becoming bigger (more recording surface, more platters) or more dense (more recording in a given space). Since bigger makes access slower, the more successful trend for single units has been greater density. Larger capacity can also be had with more disk units. One aspect of RAID (Redundant Array of Inexpensive Disks) is just more disk units, closely coupled to act as a single disk. But larger capacity can also be had by distributing disk units on a network.

Disks can have faster access by rotating the platter faster and by having more read heads. Faster access can also be had by taking data from multiple disks at the same time (RAID striping). But performance for an end user can be improved by delivering only the exact required data from a server disk (the client-server "back-end").
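
A toy sketch of striping (added for illustration; the "disks" here are just in-memory byte arrays) shows how consecutive stripe units land on different units, which real hardware can then read at the same time:

    STRIPE = 4          # bytes per stripe unit
    N_DISKS = 3

    def stripe(data: bytes):
        """Split data round-robin into N_DISKS chunks of STRIPE bytes each."""
        disks = [bytearray() for _ in range(N_DISKS)]
        for i in range(0, len(data), STRIPE):
            disks[(i // STRIPE) % N_DISKS] += data[i:i + STRIPE]
        return disks

    def unstripe(disks, length):
        """Reassemble by reading the stripe units back in the same order."""
        out = bytearray()
        offsets = [0] * N_DISKS
        d = 0
        while len(out) < length:
            out += disks[d][offsets[d]:offsets[d] + STRIPE]
            offsets[d] += STRIPE
            d = (d + 1) % N_DISKS
        return bytes(out)

    data = b"the quick brown fox jumps over the lazy dog"
    assert unstripe(stripe(data), len(data)) == data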

Single disks can have more reliability through limited redundancy (sector checks). Or multiple disks can be used: partial, tightly-coupled redundancy (RAID ECC), or complete duplication (mirror, duplex) whether tightly-coupled or networked. This may soon be combined with encryption for data authentication.
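
A minimal sketch of the parity idea behind RAID-style redundancy (added here; the block contents are made up) shows how one lost unit can be rebuilt from the surviving units plus a parity unit:

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                out[i] ^= b
        return bytes(out)

    data_disks = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]
    parity_disk = xor_blocks(data_disks)          # written alongside the data

    # Simulate losing disk 1, then rebuild it from the survivors plus parity.
    survivors = [data_disks[0], data_disks[2], parity_disk]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == data_disks[1]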

However, taking best advantage of each of these possibilities requires coordinating these mass storage techniques with the application and system software design.

How Do We Move Forward? - Processing

The story is pretty much the same for processing.

Processors can have more capacity or speed by making units that run faster ("clock speed") and by increasing the size of data that can be operated on in a single clock time (16-bit, 32-bit, 64-bit, 128-bit, ...). Multiple processing units can also run in parallel on the same task (pipelining), or in parallel on different parts of the task (multiprocessing). Processors running in parallel on different parts of the task can be closely coupled (Dual Processing, Multiprocessing) or loosely coupled over a network (distributed processing, client-server). Whether closely or loosely coupled, the processors can be specialized or general.
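
To make the "different parts of the task" case concrete, here is a small Python sketch (added for illustration; the work function is invented) that splits one computation across several local processes. A distributed or client-server version would hand the same parts to other machines over the network instead.

    from multiprocessing import Pool

    def part_sum(chunk):
        """Each worker handles its own part of the task."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        numbers = list(range(1_000_000))
        chunks = [numbers[i::4] for i in range(4)]       # four parts of the task
        with Pool(processes=4) as pool:
            partials = pool.map(part_sum, chunks)        # run on separate processors
        print(sum(partials))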

A modern PC has one or two general processors (CPUs) closely coupled with several specialized processors (serial port UART, Ethernet controller chip).

Processors can have more reliability through limited redundancy (memory and address bus parity checking) or complete duplication (two Pentiums can run in lock step).
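
A tiny sketch of the parity-checking idea (added for illustration): one extra bit stored per word lets the hardware detect a single flipped bit.

    def parity_bit(word: int) -> int:
        """Return 1 if the count of 1-bits in the word is odd, else 0."""
        return bin(word).count("1") % 2

    stored_word = 0b10110010
    stored_parity = parity_bit(stored_word)

    corrupted = stored_word ^ 0b00000100            # simulate a single-bit error
    print(parity_bit(corrupted) != stored_parity)   # True: the error is detected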

... Etc., Etc. ...
[This article was never completed]

Cool Verbiage

  • The Fat Client model has problems with software and configuration distribution.
  • The Fat Server model has performance and versatility problems.
  • Hidden Ideas: Standardized clients and standardized servers and services.
  • The Old Storage Pyramid is Still The Point
