
As usual, this discussion will not throw a pre-cooked answer at you, but instead will carefully build a solution from a detailed analysis of the problem and an exploration of possible avenues, including a few dead ends. Although necessary to make you understand the techniques in depth, this thoroughness might lead you to believe that they are complex; that impression would be mistaken, since the concurrency mechanism on which we will finally settle is in fact characterized by almost incredible simplicity. To avoid this risk, we will begin by examining a summary of the mechanism, without any of the rationale.

If you hate "spoilers", preferring to start with the full statement of the issues and to let the drama proceed to its dénouement step by step and inference by inference, ignore the one-page summary that follows and skip directly to the next section.

The extension covering full-fledged concurrency and distribution will be as minimal as it can get starting from a sequential notation: a single new keyword --- separate. How is this possible? We use the fundamental scheme of O-O computation: feature call, x.f (a), executed on behalf of some object O1 and calling f on the object O2 attached to x, with the argument a. But instead of a single processor that handles operations on all objects, we may now rely on different processors for O1 and O2 --- so that the computation on O1 can move ahead without waiting for the call to terminate, since another processor handles it.

Because the effect of a call now depends on whether the objects are handled by the same processor or different ones, the software text must tell us unambiguously what the intent is for any x. Hence the need for the new keyword: rather than just x: SOME_TYPE, we declare x: separate SOME_TYPE to indicate that x is handled by a different processor, so that calls of target x can proceed in parallel with the rest of the computation. With such a declaration, any creation instruction create x.make (...) will spawn off a new processor --- a new thread of control --- to handle future calls on x.
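In Eiffel-like notation, the declaration and creation just described might look as follows. This is only a sketch; the class and feature names (LAUNCHER, WORKER, process_job) are invented for illustration:

```eiffel
class LAUNCHER feature

   worker: separate WORKER
         -- Object handled by a different processor than the current one.

   launch
         -- Start a computation that proceeds in parallel.
      do
         create worker.make
            -- Spawns off a new processor to handle future calls on `worker'.
         worker.process_job
            -- Asynchronous call: execution continues here without
            -- waiting for the call to terminate.
      end

end
```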

Nowhere in the software text should we specify which processor to use. All we state, through the separate declaration, is that two objects are handled by different processors, since this radically affects the system's semantics. Actual processor assignment can wait until run time. Nor do we settle too early on the exact nature of processors: a processor can be implemented by a piece of hardware (a computer), but just as well by a task (process) of the operating system, or just a thread of such a task. Viewed by the software, "processor" is an abstract concept; you can execute the same concurrent application on widely different architectures (time-sharing on one computer, distributed network with many computers, threads within one Unix or Windows task...) without any change to its source text. All you will change is a "Concurrency Configuration File" which specifies the last-minute mapping of abstract processors to physical resources.
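Purely as an illustration of the idea (the actual format of the file is defined later in the chapter; the layout, keywords and host names below are all invented), such a configuration file might map abstract processors to physical resources along these lines:

```
-- Hypothetical sketch of a Concurrency Configuration File:
-- abstract processors on the left, physical resources on the right.
processor_1: thread               -- a thread within the current task
processor_2: process "appl2"      -- a separate operating-system task
processor_3: host "akhmatova"     -- a remote computer on the network
```

The point is that none of this appears in the source text: moving processor_3 from a remote host to a local thread requires changing only this file.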

We need to specify synchronization constraints. The conventions are straightforward:

  • No special mechanism is required for a client to resynchronize with its supplier after a separate call x.f (a) has gone off in parallel. The client will wait when and if it needs to: when it requests information on the object through a query call, as in value := x.some_query. This automatic mechanism is called wait by necessity.

  • To obtain exclusive access to a separate object O2, it suffices to use an entity a attached to it as an argument to the corresponding call, as in r (a).

  • A routine precondition involving a separate argument such as a causes the client to wait until the precondition holds.

  • To guarantee that we can control our software and predict the result (in particular, rest assured that class invariants will be maintained), we must allow the processor in charge of an object to execute at most one routine at any given time.

  • We may, however, need to interrupt the execution of a routine to let a new, high-priority client take over. This will cause an exception, so that the spurned client can take the appropriate corrective measures --- most likely retrying after a while.

This is all there is to the mechanism, which will enable us to build the most advanced concurrent and distributed applications through the full extent of O-O techniques, from multiple inheritance to Design by Contract --- as we will now study in detail, forgetting for a while all that we have read in this short (but essentially complete) preview.

(Note: A complete summary appears in 28.11.)

