

Back to square one. We must first review the various forms of concurrency, to understand how the evolution of our field requires most software developers to make concurrency part of their mindset.


More and more, we want to use the formidable amount of computing power available around us; less and less, we are willing to wait for the computer (although we have become quite comfortable with the idea that the computer is waiting for us). So if one processing unit cannot deliver the result we need quickly enough, we will want to rely on several units working in parallel. This form of concurrency is known as multiprocessing.

Spectacular applications of multiprocessing have involved researchers relying on hundreds of computers scattered over the Internet, at times when their (presumably consenting) owners did not need them, to solve computationally intensive problems such as breaking cryptographic algorithms. Such efforts do not just apply to computing research: Hollywood's insatiable demand for realistic computer graphics has played its part in fueling progress in this area; for example the preparation of the movie Toy Story, one of the first to involve artificial characters only (only the voices are human), relied at some point on a network of more than one hundred high-end workstations --- more economical, it seems, than one hundred professional animators.

Multiprocessing is also ubiquitous in high-speed scientific computing, to solve ever larger problems of physics, engineering, meteorology or economics.

More routinely, many computing installations use some form of load balancing: automatically dispatching computations among the various computers available at any particular time on the local network of an organization.
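The simplest dispatching policy is round-robin: each incoming computation goes in turn to the next machine in the pool. The following is a minimal sketch of that idea; the machine and job names are hypothetical, and a real load balancer would also weigh each machine's current load:

```python
import itertools

# Hypothetical machines available on the organization's local network.
machines = ["host-a", "host-b", "host-c"]

# Round-robin dispatch: cycle endlessly through the available machines.
dispatch = itertools.cycle(machines)

# Assign four incoming computations in turn to the machines.
assignments = {job: next(dispatch) for job in ["job-1", "job-2", "job-3", "job-4"]}
print(assignments)
```

With three machines and four jobs, the fourth job wraps around to the first machine again.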

Multiprocessing is also part of the computing architecture known as client-server computing, which assigns various specialized roles to the computers on a network: the biggest and most expensive machines, of which a typical company network will have just one or a few, are "servers" handling shared databases, heavy computations and other strategic central resources; the cheaper machines, ubiquitously located wherever the end users are, handle decentralizable tasks such as the human interface and simple computations; they forward to the servers any task that exceeds their competence.
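The division of labor just described can be sketched in a few lines: the client handles a trivial computation locally and forwards, over a socket, a task that exceeds its competence to the server. This is only an illustration under assumed details; the "heavy" task (summing a range of integers) stands in for a real shared resource such as a database query:

```python
import socket
import threading

def server(listener):
    # The server plays the role of the big central machine: it accepts
    # one request and performs the heavy computation on the client's behalf.
    conn, _ = listener.accept()
    with conn:
        n = int(conn.recv(64).decode())
        conn.sendall(str(sum(range(n))).encode())

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # pick any free local port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# Client side: trivial work stays local...
local_result = 2 + 2
# ...while a task beyond the client's competence is forwarded to the server.
with socket.create_connection(("127.0.0.1", port)) as conn:
    conn.sendall(b"1000")
    remote_result = int(conn.recv(64).decode())

print(local_result, remote_result)
```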

The current popularity of the client-server approach is a swing of the pendulum away from the trend of the preceding decade. Initially (nineteen-sixties and seventies) architectures were centralized, forcing users to compete for resources. The personal computer and workstation revolution of the eighties was largely about empowering users with resources heretofore reserved to the Center (the "glass house" in industry jargon). Then users discovered the obvious: a personal computer cannot do everything, and some resources must be shared. Hence the emergence of client-server architectures in the nineties. The inevitable cynical comment --- that we are back to the one-mainframe-many-terminals architecture of our youth, only with more expensive terminals now called "client workstations" --- is not really justified: the industry is simply searching, through trial and error, for the proper tradeoff between decentralization and sharing.


The other main form of concurrency is multiprogramming, which involves a single computer working on several tasks at once.

If we consider general-purpose systems (excluding processors that are embedded in an application device, be it a washing machine or an airplane instrument, and single-mindedly repeat a fixed set of operations), computers are almost always multiprogrammed, performing operating system tasks in parallel with application tasks. In a strict form of multiprogramming the parallelism is apparent rather than real: at any single time the processing unit is actually working on just one job; but the time to switch between jobs is so short that from the outside we can believe that they proceed concurrently. In addition, the processing unit itself may do several things in parallel (as in the advance fetch schemes of many computers, where each clock cycle loads the next instruction at the same time as it executes the current one), or may actually be a combination of several processing units, so that multiprogramming becomes intertwined with multiprocessing.
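This apparent parallelism can be observed with ordinary threads: under CPython's global interpreter lock the two tasks below effectively share a single processing unit, yet the scheduler switches between them so that both appear to run at once. The task names and step counts are illustrative only:

```python
import threading

results = []
lock = threading.Lock()

def task(name, steps):
    # Each task records its progress; the scheduler interleaves the tasks.
    for i in range(steps):
        with lock:
            results.append((name, i))

threads = [threading.Thread(target=task, args=(n, 5)) for n in ("mail", "editor")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both jobs ran to completion even though, at any single time,
# only one of them was actually executing.
print(len(results))
```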

A common application of multiprogramming is time-sharing, allowing a single machine to serve several users at once. But except in the case of very powerful "mainframe" computers this idea is considered much less attractive now than it was when computers were a precious rarity. Today we consider our time to be the more valuable resource, so we want the system to do several things at once just for us. In particular, multi-windowing user interfaces allow several applications to proceed in parallel: in one window we browse the Web, in another we edit a document, in yet another we compile and test some software. All this requires powerful concurrency mechanisms.

Providing each computer user with a multi-windowing, multiprogramming interface is the responsibility of the operating system. But increasingly the users of the software we develop want to have concurrency within one application. The reason is always the same: they know that computing power is available in bountiful supply, and they do not want to wait idly. So if it takes a while to load incoming messages in an e-mail system, you will want to be able to send an outgoing message while this operation proceeds. With a good Web browser you can access a new site while loading pages from another. In a stock trading system, you may at any single time be accessing market information from several stock exchanges, buying here, selling there, and monitoring a client's portfolio.
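The e-mail scenario gives a feel for intra-application concurrency in code. In this sketch (the message contents and the delay are stand-ins for real network traffic), loading incoming messages runs in a background thread while the application goes on serving the user, collecting the result only when it is needed:

```python
import concurrent.futures
import time

def load_incoming():
    # Stands in for a slow network operation fetching new mail.
    time.sleep(0.1)
    return ["message 1", "message 2"]

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(load_incoming)   # start the slow operation
    sent = "outgoing message sent"        # meanwhile, the user is not kept waiting
    inbox = future.result()               # collect the incoming mail when ready

print(sent)
print(len(inbox))
```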

It is this need for intra-application concurrency which has suddenly brought the whole subject of concurrent computing to the forefront of software development and made it of interest far beyond its original constituencies. Meanwhile, all the traditional applications remain as important as ever, with new developments in operating systems, the Internet, local area networks, and scientific computing --- where the insatiable quest for speed demands ever higher levels of multiprocessing.

