One synchronization rule remains to be seen. It will address two questions at once:

  • How can we make a client wait until a certain condition is satisfied, as in conditional critical regions?

  • What is the meaning of assertions, in particular preconditions, in a concurrent context?

A buffer is a separate queue

To study what happens to assertions, it is interesting to take a closer look at a notion that is ubiquitous in concurrent applications (and has already appeared informally several times in this chapter): bounded buffers. Bounded buffers allow different components of a concurrent system to exchange data, produced by some and consumed by others, without forcing each producer that has generated an object to wait until a consumer is ready to use it, and conversely. Instead, communication occurs through a shared structure, the buffer; producers deposit their wares into the buffer, and consumers get their material from it. In a bounded implementation the structure can only hold a certain number maxcount of items, and so it can get full. But waits will only occur when a consumer needs to consume and the buffer is empty, or when a producer needs to produce and the buffer is full. In a well-regulated system such events will be much less frequent than with unbuffered communication, and their frequency will decrease as the buffer's capacity grows. True, a new source of delays arises because buffer access must be exclusive: at most one client may at any one time be performing a deposit (put) or retrieval (item, remove) operation. But these are very simple and fast operations, so any resulting wait is typically short --- much shorter than if producers and consumers had to wait directly for each other.

In most cases the time sequence in which objects have been produced is relevant to the consumers, so the buffer must maintain a first-in, first-out policy (FIFO): an object deposited before another must be retrieved before it. The behavior is similar to that of train cars being added at one end of a single track and removed at the other end:

Although we need not preoccupy ourselves with the details here, a typical implementation can use an array representation of size capacity = maxcount + 1, managed circularly; the integer oldest will be the index of the oldest item, and next the index of the position to be used for inserting the next item that comes in. We can picture the array as being bent around into a donut so that positions 1 and capacity are conceptually adjacent:

The procedure put used by a producer to add an item x will be implemented as

representation.put (x, next); next := (next \\ capacity) + 1

where \\ is the integer remainder operation; the query item used by consumers to obtain the oldest element simply returns representation @ oldest (the array element at index oldest); and procedure remove simply executes oldest := (oldest \\ capacity) + 1. One array entry, shaded in gray on the figure, is always kept free; this makes it possible to distinguish between the test for empty, expressed as next = oldest, and the test for full, expressed as (next \\ capacity) + 1 = oldest.
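The circular-array arithmetic can be sketched in Python (not Eiffel; the names representation, next, oldest, maxcount and capacity mirror the text, `%` stands in for Eiffel's `\\` remainder operator, and the unused slot 0 of the Python list lets us keep the text's 1-based indices):

```python
class BoundedQueueArray:
    """Circular-array bounded queue following the chapter's scheme:
    capacity = maxcount + 1 slots, one slot always kept free so that
    the 'empty' and 'full' tests remain distinguishable."""

    def __init__(self, maxcount):
        self.maxcount = maxcount
        self.capacity = maxcount + 1
        # Index 0 is unused, mimicking the text's 1-based array.
        self.representation = [None] * (self.capacity + 1)
        self.next = 1     # index of the position for the next insertion
        self.oldest = 1   # index of the oldest item

    def empty(self):
        # Test for empty: next = oldest
        return self.next == self.oldest

    def full(self):
        # Test for full: (next \\ capacity) + 1 = oldest
        return (self.next % self.capacity) + 1 == self.oldest

    def put(self, x):
        assert not self.full()              # precondition: not full
        self.representation[self.next] = x
        self.next = (self.next % self.capacity) + 1

    def item(self):
        assert not self.empty()             # precondition: not empty
        return self.representation[self.oldest]

    def remove(self):
        assert not self.empty()             # precondition: not empty
        self.oldest = (self.oldest % self.capacity) + 1
```

With maxcount = 2 the queue indeed accepts two items before reporting full, and the permanently free slot keeps the empty test (next = oldest) from ever coinciding with the full test.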

The structure, with its FIFO policy, and the circular array representation, are of course not concurrency-specific: what we have is simply a bounded queue similar to many of the structures studied in preceding chapters. Writing the corresponding class is trivial; here is a short form of the class (main features only, header comments removed, principal assertion clauses only):

class interface BOUNDED_QUEUE [G] feature
  empty, full: BOOLEAN

  put (x: G)
    require
      not full
    ensure
      not empty

  remove
    require
      not empty
    ensure
      not full

  item: G
    require
      not empty

end -- class interface BOUNDED_QUEUE

Obtaining from this description a class describing bounded buffers is about as simple as we could dream:

separate class BOUNDED_BUFFER [G] inherit
  BOUNDED_QUEUE [G]
end

The separate qualifier applies only to the class where it appears, not its heirs. So a separate class may, as here, inherit from a non-separate one, and conversely. The convention is the same as with the other two qualifiers applicable to a class: expanded and deferred. The three properties --- separate, expanded, deferred --- are mutually exclusive, so at most one of the qualifiers may appear before the keyword class.

We see once again the fundamental simplicity of concurrent O-O software development, and the smooth transition from sequential to concurrent concepts, made possible in particular by the method's focus on encapsulation. A bounded buffer (a notion for which the concurrency literature often presents complicated descriptions) is nothing else than a bounded queue made separate.

Preconditions under concurrent execution

Let us look at a typical use of a bounded buffer buffer by a client, for example a producer that needs to deposit a certain object y using the procedure put. Assume that buffer is an attribute of the enclosing class, having been declared, for some type T which is also the type of y, as buffer: BOUNDED_BUFFER [T].

The client may for example have initialized buffer to a reference to the actual buffer passed by its creation procedure, using the business card scheme suggested earlier: make (b: BOUNDED_BUFFER [T]; ...) is do ...; buffer := b; ... end

Because buffer, being declared of a separate type, is a separate entity, any call of the form buffer.put (y) is a separate call and has to appear in a routine of which buffer is an argument. So we should instead use put (buffer, y) where put (a routine of the client class, not to be confused with the put of BOUNDED_BUFFER, which it calls) is declared as

put (b: BOUNDED_BUFFER [T]; x: T) is
    -- Insert x into b. (First attempt.)
  do
    b.put (x)
  end

Well, this is not quite right. Procedure put of BOUNDED_BUFFER has a precondition, not full. Since it does not make sense to try to insert x into b if b is full, we should mimic this precondition for our new procedure in the client class:

put (b: BOUNDED_BUFFER [T]; x: T) is
    -- Insert x into b.
  require
    not b.full
  do
    b.put (x)
  end

Better. How can we call this procedure with a specific buffer and x? We must make sure, of course, that the precondition is satisfied on input. One way is to test:

if not full (buffer) then put (buffer, x) end                -- [PUT1]

but we could also rely on the context of the call as in

remove (buffer); put (buffer, x)                -- [PUT2]

where the postcondition of remove includes not full. (Example PUT2 assumes that its initial state satisfies the appropriate precondition, not empty, for remove itself.)

Is this going to work? The answer, disappointing in light of the earlier comments about the unpredictability of bugs in concurrent systems, is maybe. Between the test for full and the call to put in the PUT1 variant, or between remove and put in PUT2, any other client may have interfered and made the buffer full again. This is the same flaw that required us, earlier on, to provide an object reservation mechanism through encapsulation.

We could try encapsulation again by writing PUT1 or PUT2 as a procedure to which buffer will be passed as argument, giving for PUT1:

put_if_possible (b: BOUNDED_BUFFER [T]; x: T) is
    -- Insert x into b if possible; otherwise set was_full to true.
  do
    if full (b) then
      was_full := True
    else
      put (b, x); was_full := False
    end
  end
But this does not really help me as a client. First, having to check was_full on return is a nuisance; then, what do I do if it is true? Try again, probably --- but with no more guarantee of result. What I probably want is a way to execute put when the buffer is indisputably non-full, even if I have to wait for this to be the case.
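The put_if_possible scheme amounts to a non-blocking "try" operation. A Python sketch (not Eiffel; the class name TryBuffer is invented for illustration, and an explicit lock stands in for the exclusive buffer access the text assumes) shows both what encapsulation buys --- the test and the insertion happen atomically, so no other client can fill the buffer in between --- and the residual nuisance: the caller must still inspect was_full and decide what to do next.

```python
import threading
from collections import deque

class TryBuffer:
    """Bounded buffer offering only a non-blocking put_if_possible."""

    def __init__(self, maxcount):
        self._items = deque()
        self._maxcount = maxcount
        self._lock = threading.Lock()
        self.was_full = False   # outcome of the last put_if_possible

    def put_if_possible(self, x):
        # Holding the lock makes the test-then-insert sequence atomic:
        # no other client can make the buffer full between the test
        # for full and the insertion, unlike the unencapsulated PUT1.
        with self._lock:
            if len(self._items) >= self._maxcount:
                self.was_full = True
            else:
                self._items.append(x)
                self.was_full = False
```

Even so, on finding was_full true the client can only retry, with no guarantee of eventual success; that is exactly the limitation that motivates waiting semantics.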

The precondition paradox

The situation we have just uncovered is disturbing because it seems to invalidate, in a concurrent context, the basic methodological guideline for getting software right: Design by Contract. With a queue, that is to say in sequential computation, we have been used to precisely defined specifications of mutual obligations and benefits:

  • Client. Obligation (satisfy precondition): only call put (x) on a non-full queue. Benefit (from postcondition): get a new, non-empty queue with x added.

  • Supplier. Obligation (satisfy postcondition): update the queue to add x and ensure it is not empty. Benefit (from precondition): processing is protected by the assumption that the queue is not full.

Implicit behind such contracts is a no hidden clause principle: the precondition is the only requirement that a client must satisfy to get served. If you call put with a non-full queue, you are entitled to the routine's result, as expressed by the postcondition.

But in a concurrent context, with a separate supplier such as a BOUNDED_BUFFER, things are rather distressing for the client: however hard we try to please the supplier by ensuring its stated precondition, we can never be sure to meet its expectations! The suppliers, however, still need the precondition to execute correctly. For example the body of routine put in class BOUNDED_QUEUE (which is the same as the routine in BOUNDED_BUFFER) will most likely not work unless full is guaranteed to be false.

To summarize: suppliers cannot do their work without the guarantee that the precondition holds; but for separate arguments the clients are unable to ensure these preconditions. This may be called the concurrent precondition paradox.

There is a similar postcondition paradox: on return from a separate call to put, we cannot any more be sure that not empty and other postcondition clauses hold for the client. These properties are satisfied just after the routine's termination; but some other client may invalidate them before the caller gets restarted. Because the problem is even more serious for preconditions, which condition the correct execution of suppliers, the rest of the discussion mainly considers preconditions.

The paradoxes only arise for separate formal arguments. For a non-separate argument --- in particular for an expanded value such as an integer --- we can continue to rely on the usual properties of assertions. But this is not much consolation.

Although this has not yet been widely recognized in the literature, the concurrent precondition paradox is one of the central issues of concurrent O-O software construction, and the futility of trying to retain habitual assertion semantics is one of the principal factors distinguishing concurrent computation from its sequential variants.

The precondition paradox may also arise in situations that are not ordinarily thought of as involving concurrency, such as accessing a file. This is explored in exercise 28.6.

The concurrent semantics of preconditions

To resolve the concurrent precondition paradox we assess the situation through three observations:

  1. Suppliers still need the preconditions to protect their routine bodies.

  2. Clients cannot rely any more on the usual (sequential) semantics of preconditions.

  3. Because each client may be vying with others for resource access, a client may be prepared to wait before it gets its resources --- if this guarantees correct processing after the wait.

The conclusion is almost inescapable: we still need preconditions, if only for the suppliers' sake, but they must be given a different semantics. Instead of being a correctness condition, as in the sequential context, a precondition applying to a separate argument will be a wait condition. This will apply to what we may call "separate precondition clauses": any precondition clause involving a call whose target is a separate argument. A typical separate precondition clause is not b.full for put.

Here is the rule:

Separate call semantics Before it can start executing the routine's body, a separate call must wait until every object attached to a separate argument is free, and every separate precondition clause is satisfied.

A separate object is free if it is not being used as an actual argument of a separate call (implying that no routine is being executed on it).

With this rule the above version of put in a client class achieves the desired result:

put (b: BOUNDED_BUFFER [T]; x: T) is
    -- Insert x into b.
  require
    not b.full
  do
    b.put (x)
  ensure
    not b.empty
  end

A call of the form put (buffer, y), from a producer client, will wait until buffer is free (available) and not full. If the buffer is free but full, the call cannot be satisfied; but some other client, a consumer, may get access to it (since the precondition of interest to consumers, not b.empty, will be satisfied in this case); after such a client has removed an item, making the buffer non-full, the producer client can now have its call executed.

It is the implementation's responsibility to define how to manage requests from various separate clients and, in particular, which client to let through if two or more satisfy the conditions of the above rule (free objects, preconditions satisfied). The implementation may provide library mechanisms for tuning this policy to each application's needs.
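The wait-condition semantics can be emulated in Python (not Eiffel/SCOOP; threading.Condition stands in for the run-time scheduling of separate calls, and the class and method names are illustrative): each routine body waits, with exclusive access, until its separate precondition clause holds, instead of failing when it does not.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Bounded buffer whose preconditions act as wait conditions:
    put waits until not full, remove_item waits until not empty."""

    def __init__(self, maxcount):
        self._items = deque()
        self._maxcount = maxcount
        self._cond = threading.Condition()

    def put(self, x):
        with self._cond:
            # Precondition 'not full' treated as a wait condition.
            self._cond.wait_for(lambda: len(self._items) < self._maxcount)
            self._items.append(x)
            self._cond.notify_all()   # wake clients waiting on 'not empty'

    def remove_item(self):
        # Combines the chapter's item and remove into one exclusive access.
        with self._cond:
            # Precondition 'not empty' treated as a wait condition.
            self._cond.wait_for(lambda: len(self._items) > 0)
            x = self._items.popleft()
            self._cond.notify_all()   # wake clients waiting on 'not full'
            return x
```

wait_for releases the underlying lock while waiting and rechecks the condition on wake-up, so the precondition is guaranteed to hold at the moment the body actually executes --- the property the concurrent precondition paradox took away from the plain check-then-call pattern.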

Be sure to note that the special semantics of preconditions as wait conditions only applies to what we have called separate precondition clauses, that is to say, clauses involving a condition of the form b.some_property where b is a separate argument. A non-separate clause, such as i >= 0 where i is an integer, or b /= Void even if b is separate (this does not involve a call on b), will keep its usual correctness semantics since the concurrent precondition paradox does not apply in such cases: if the client ensures the stated condition before the call, it will still hold when the routine starts; if the condition does not hold, no amount of waiting would change the situation.

Assertions, sequential and concurrent

The idea that assertions, and in particular preconditions, may have two different semantics --- sometimes correctness conditions, sometimes wait conditions --- may have surprised you. But there is no way around it: the sequential semantics is inapplicable in the case of separate precondition clauses.

One possible objection must be addressed. We have seen that a mere compilation switch can turn run-time assertion checking on or off. Is it not dangerous, then, to attach that much semantic importance to preconditions in concurrent object-oriented systems? No, it is not. The assertions are an integral part of the software, whether or not they are enabled at run time. Because in a correct sequential system the assertions will always hold, we may turn off assertion checking for efficiency if we think we have removed all the bugs; but conceptually the assertions are still there. With concurrency the only difference is that certain assertions --- the separate precondition clauses --- may be violated at run time even for a correct system, and serve as wait conditions. So the assertion monitoring options must not apply to these clauses.

A validity constraint

To avoid deadlock situations, a validity constraint is required on precondition and postcondition clauses. Assume we permitted routines of the form

f (x: SOME_TYPE) is
  require
    some_property (separate_attribute)
  do
    ...
  end

where separate_attribute is a separate attribute of the enclosing class. Nothing in this example, save separate_attribute, need be separate. The evaluation of f's precondition, either as part of assertion monitoring for correctness, or as a synchronization condition if the actual argument corresponding to x in a call is itself separate, could cause blocking if the attached object is not available.

This is not acceptable and is prohibited by the following rule:

Assertion argument rule If an assertion contains a function call, any actual argument of that call must, if separate, be a formal argument of the enclosing routine, if any.

States and transitions

The following figure summarizes some of the preceding discussion by showing the various possible states in which objects and processors may be, and how they will change state as a result of calls.

A call is successful if the handler of its target is idle or suspended, all its non-void separate arguments are attached to free objects, and the corresponding separate precondition clauses, if any, are true. Note that this makes the definitions of object and processor states mutually dependent.
