This site contains older material on Eiffel. For the main Eiffel page, see http://www.eiffel.com.

28.9 EXAMPLES

To illustrate the mechanism, here now are a few examples chosen from diverse backgrounds --- from traditional concurrent programming examples through large-scale multiprocessing to real-time applications.

The dining philosophers

[Figure: The Philosophers' Spaghetti Plate]

Dijkstra's "dining philosophers", an artificial example meant to illustrate the behavior of operating system processes vying for shared resources, is an obligatory part of any discussion on concurrency. Five philosophers around a table spend their time thinking, then eating and so on. To eat the spaghetti, each needs access to the fork immediately to his left and to his right --- creating contention and possible deadlock.

The following class (definitely not what people mean by "spaghetti code") describes the philosopher's behavior. Thanks to the mechanism for reserving objects through separate arguments, there is essentially (in contrast with most of the solutions in the literature) no specific synchronization code:


separate class PHILOSOPHER creation
  make
inherit
  GENERAL_PHILOSOPHER;
  PROCESS
    rename setup as getup undefine getup end
feature {BUTLER}
  step is
      -- Perform a philosopher's tasks.
    do
      think;
      eat (left, right)
    end
feature {NONE}
  eat (l, r: separate FORK) is
      -- Eat, having grabbed l and r.
    do
      ...
    end
end -- class PHILOSOPHER

The entire synchronization requirement is embodied by the call to eat, which uses arguments left and right representing the two necessary forks, thus reserving these objects.

The simplicity of this solution comes from the mechanism's ability to reserve several resources through a single call having several separate arguments, here left and right. If we restricted the separate arguments to at most one per call, the solution would use one of the many published algorithms for getting hold of two forks one after the other without causing deadlock.

The principal procedure of class PHILOSOPHER does not appear above since it comes from PROCESS: procedure live, which as given in PROCESS (see page 933) simply executes from setup until over loop step end, so all we need to redefine here is step. I hope you will enjoy the renaming of setup as getup --- denoting the philosopher's initial operation.

Thanks to the use of multiple object reservation through arguments, the solution described here does not produce deadlock, but it is not guaranteed to be fair; some of the philosophers can conspire to starve the others. Here too the literature provides various solutions, which may be integrated into the above scheme.
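For readers who want to experiment with the idea outside SCOOP, here is a rough Python analogue (all names are invented for the sketch): a single table lock stands in for the mechanism's atomic reservation of both separate arguments, so a philosopher takes both forks in one step or not at all, which rules out the circular wait.

```python
import threading

class Philosopher(threading.Thread):
    # The 'table' lock emulates SCOOP's atomic reservation of all
    # separate arguments: both forks are taken in one step or not at all.
    table = threading.Lock()

    def __init__(self, left, right, meals):
        super().__init__()
        self.left, self.right, self.meals = left, right, meals
        self.eaten = 0

    def run(self):
        for _ in range(self.meals):
            self.think()
            # Reserve both forks atomically, as eat (left, right) does.
            with Philosopher.table:
                self.left.acquire()
                self.right.acquire()
            try:
                self.eaten += 1          # "eat"
            finally:
                self.left.release()
                self.right.release()

    def think(self):
        pass

n, meals = 5, 100
forks = [threading.Lock() for _ in range(n)]
diners = [Philosopher(forks[i], forks[(i + 1) % n], meals) for i in range(n)]
for p in diners: p.start()
for p in diners: p.join()
print([p.eaten for p in diners])   # → [100, 100, 100, 100, 100]
```

As in the text, this is deadlock-free but not fair; a released fork may repeatedly go to the same neighbor.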

To avoid confusion of genres, the concurrency-independent features of a philosopher have been kept in a class GENERAL_PHILOSOPHER:



class GENERAL_PHILOSOPHER creation
  make
feature -- Initialization
  make (l, r: separate FORK) is
      -- Define l as left and r as right forks.
    do
      left := l; right := r
    end
feature {NONE} -- Implementation
  left, right: separate FORK;
      -- The two required forks
  getup is
      -- Take any necessary initialization action.
    do ... end;
  think is
      -- Any appropriate action or lack thereof.
    do ... end;
end -- class GENERAL_PHILOSOPHER

The rest of the system simply takes care of initialization and of describing the auxiliary abstractions. Forks have no immediately relevant properties:

class FORK end

A butler is used to set up and start a session:

class BUTLER creation
  make
feature
  count: INTEGER;
    -- The number of both philosophers and forks
  launch is
      -- Start a full session.
    local
      i: INTEGER
    do
      from i := 1 until i > count loop
        launch_one (participants @ i); i := i + 1
      end
    end
feature {NONE}
  launch_one (p: PHILOSOPHER) is
      -- Let one philosopher start his actual life.
    do
      p.live
    end;
  participants: ARRAY [PHILOSOPHER];
  cutlery: ARRAY [FORK]

feature {NONE} -- Initialization
  make (n: INTEGER) is
      -- Initialize a session with n philosophers.
    require
      n >= 0
    do
      count := n;
      create participants.make (1, count); create cutlery.make (1, count);
      make_philosophers
    ensure
      count = n
    end;
  make_philosophers is
      -- Set up philosophers.
    local
      i: INTEGER; p: PHILOSOPHER; left, right: FORK
    do
      from i := 1 until i > count loop
        left := cutlery @ i;
        right := cutlery @ ((i \\ count) + 1);
        create p.make (left, right);
        participants.put (p, i);
        i := i + 1
      end
    end
invariant
  count >= 0; participants.count = count; cutlery.count = count
end

Note how launch and launch_one, using a pattern discussed in the presentation of wait by necessity ("A multi-launcher", page 958), rely on the property that the call p.live will not cause waiting, allowing the loop to proceed to the next philosopher.

Making full use of hardware parallelism

The following example illustrates how to use wait by necessity to draw the maximum benefit from any available hardware parallelism. It shows a sophisticated form of load balancing in which we offload computation to many different computers on a network. Thanks to the notion of processor, we can rely on the concurrency mechanism to choose these computers automatically for us.

The example itself --- computing the number of nodes in a binary tree --- is of little practical value, but illustrates a general scheme that may be extremely useful for large, heavy computations such as those encountered in cryptography or advanced computer graphics, for which developers need all the resources they can get, but do not want to have to take care manually of the assignment of abstract computing units to actual computers.

Consider first a class extract that does not involve concurrency:

class BINARY_TREE [G] feature
  left, right: BINARY_TREE [G];
  ... Other features ...
  nodes: INTEGER is
      -- Number of nodes in this tree
    do
      Result := node_count (left) + node_count (right) + 1
    end
feature {NONE}
  node_count (b: BINARY_TREE [G]): INTEGER is
      -- Number of nodes in b
    do
      if b /= Void then Result := b.nodes end
    end
end -- class BINARY_TREE

Function nodes uses recursion to compute the number of nodes in a tree. The recursion is indirect, through node_count.

In a concurrent environment offering many processors, we could offload all the separate node computations to different processors. Declaring the class as separate, replacing nodes by an attribute and introducing procedures does the job:

separate class BINARY_TREE1 [G] feature
  left, right: BINARY_TREE1 [G];
  ... Other features ...
  nodes: INTEGER;
  update_nodes is
      -- Update nodes to reflect the number of nodes in this tree.
    do
      nodes := 1;
      compute_nodes (left); compute_nodes (right);
      adjust_nodes (left); adjust_nodes (right)
    end

feature {NONE}
  compute_nodes (b: BINARY_TREE1 [G]) is
      -- Update information about the number of nodes in b.
    do
      if b /= Void then
        b.update_nodes
      end
    end;
  adjust_nodes (b: BINARY_TREE1 [G]) is
      -- Adjust number of nodes from those in b.
    do
      if b /= Void then nodes := nodes + b.nodes end
    end
end -- class BINARY_TREE1

The recursive calls to compute_nodes will now be started in parallel. The addition operations wait for these two parallel computations to complete.

If an unbounded number of CPUs (physical processors) are available, this solution seems to make the best possible use of the hardware parallelism. If there are fewer CPUs than nodes in the tree, the speedup over sequential computation will depend on how well the implementation allocates CPUs to the (virtual) processors.
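The scheme of update_nodes — launch the two subtree computations, then wait before adding — can be imitated with plain Python threads (a sketch, not SCOOP; the class and helper names are made up):

```python
import threading

class BinaryTree:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right
        self.nodes = 0

    def update_nodes(self):
        # Counterpart of update_nodes in BINARY_TREE1: start the two
        # subtree counts concurrently, then wait before adding them up.
        workers = [threading.Thread(target=t.update_nodes)
                   for t in (self.left, self.right) if t is not None]
        for w in workers: w.start()
        for w in workers: w.join()      # the additions must wait
        self.nodes = 1 + sum(t.nodes for t in (self.left, self.right)
                             if t is not None)

def full(depth):
    # Build a full binary tree with 2**depth - 1 nodes.
    return BinaryTree(full(depth - 1), full(depth - 1)) if depth > 0 else None

t = full(4)
t.update_nodes()
print(t.nodes)   # → 15
```

Here the join calls make the "wait by necessity" explicit; in SCOOP the wait happens implicitly when b.nodes is queried.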

The presence of two tests for vacuity of b may appear unpleasant. It results, however, from the need to separate the parallelizable part --- the procedure calls, launched concurrently on left and right --- from the additions, which by nature must wait for their operands to become ready.

An attractive property of the solution is that it ignores the practical problem of assigning the actual computers. The software just allocates processors as it needs them. (This is done in the creation instructions, not shown, which will appear in particular in the insertion procedure: to insert a new element into a binary tree you create a new node through create new_node.make (new_element), which here, new_node being of the separate type BINARY_TREE1 [G], will allocate a new processor to it.) The mapping of these virtual processors to the available physical resources is entirely automatic. (See "Mapping the processors: the Concurrency Control File", page 942.)

Locks

Assume you want to allow a number of clients (the "lockers") to obtain exclusive access to certain resources (the "lockables") without having to enclose the exclusive access sections in routines. This will provide us with a semaphore-like mechanism. Here is a solution:


class LOCKER feature
  grab (resource: separate LOCKABLE) is
      -- Request exclusive access to resource.
    require
      not resource.locked
    do
      resource.set_holder (Current)
    end
  release (resource: separate LOCKABLE) is
    require
      resource.is_held (Current)
    do
      resource.release
    end
end
class LOCKABLE feature {LOCKER}
  set_holder (l: separate LOCKER) is
      -- Designate l as holder.
    require
      l /= Void
    do
      holder := l
    ensure
      locked
    end;
  locked: BOOLEAN is
      -- Is resource reserved by a locker?
    do
      Result := (holder /= Void)
    end;
  is_held (l: separate LOCKER): BOOLEAN is
      -- Is resource reserved by l?
    do
      Result := (holder = l)
    end;

  release is
      -- Release from current holder.
    do
      holder := Void
    ensure
      not locked
    end
feature {NONE}
  holder: separate LOCKER
end

Any class describing resources will inherit from LOCKABLE. The proper functioning of the mechanism assumes that every locker performs sequences of grab and release operations, in this order. Other behavior will usually result in deadlock; this problem was mentioned in the discussion of semaphores as one of the major limitations of this technique. But we can once again rely on the power of object-oriented computation to enforce the required protocol; rather than trusting every locker to behave, we may require lockers to go through procedure use in descendants of the following class:

deferred class LOCKING_PROCESS feature
  resource: separate LOCKABLE;
  use is
      -- Make disciplined use of resource.
    require
      resource /= Void
    do
      from create lock; setup until over loop
        lock.grab (resource);
        exclusive_actions;
        lock.release (resource)
      end;
      finalize
    end;
  set_resource (r: separate LOCKABLE) is
      -- Select r as resource for use.
    require
      r /= Void
    do
      resource := r
    ensure
      resource /= Void
    end

feature {NONE}
  lock: LOCKER;
  exclusive_actions is
      -- Operations executed while resource is under exclusive access.
    deferred
    end;
  setup is
      -- Initial action; by default: do nothing.
    do
    end;
  over: BOOLEAN is
      -- Is locking behavior finished?
    deferred
    end;
  finalize is
      -- Final action; by default: do nothing.
    do
    end
end -- class LOCKING_PROCESS

An effective descendant of LOCKING_PROCESS will effect exclusive_actions and over, and may redefine setup and finalize. Note that it is desirable to write LOCKING_PROCESS as a descendant of PROCESS. (Exercise 28.7.)

Whether or not we go through class LOCKING_PROCESS, a grab does not take away the corresponding lockable from all possible clients: it only excludes other lockers observing the protocol. To exclude any possible client from accessing a resource, you must enclose the operations accessing the resource in a routine to which you pass it as argument.

Note that routine grab of class LOCKER is an example of what has been called the business card scheme: passing to resource a reference to the Current locker, which the resource will keep as a separate reference.

Based on the pattern provided by these classes, it is not difficult to write classes implementing semaphores under their various forms. Note that object-oriented mechanisms should help us help the users of our classes avoid the classic danger of semaphores: executing a reserve on a resource and forgetting to execute the corresponding free. If we provide a high-level behavior class, which includes a reserve for each free and leaves the intermediate operations deferred, developers inheriting from that class will be sure to follow the right pattern. (See exercise 28.8.)
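A high-level behavior class of this kind is easy to approximate in other languages; the Python sketch below (names invented) pairs every grab with its release in one routine, so a client cannot forget the release even if its exclusive actions fail:

```python
import threading

class Lockable:
    def __init__(self):
        self._mutex = threading.Lock()
        self.holder = None            # counterpart of LOCKABLE.holder

class Locker:
    # use() plays the role of LOCKING_PROCESS.use: the client supplies
    # the exclusive actions, and grab/release pairing is enforced here.
    def use(self, resource, exclusive_actions):
        with resource._mutex:         # grab
            resource.holder = self
            try:
                exclusive_actions()
            finally:
                resource.holder = None    # release, even on failure

r = Lockable()
log = []
Locker().use(r, lambda: log.append(r.holder is not None))
print(log, r.holder)   # → [True] None
```

As in the text, this excludes only clients observing the protocol; direct access to the resource is not prevented.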

Coroutines

Although not truly concurrent, at least not in its basic form, our next example is essential as a way to test the general applicability of a concurrent mechanism.

The first (and probably the only) major programming language to include coroutines was also the first object-oriented language, Simula 67; we will study its coroutine mechanism as part of the presentation of Simula. That discussion will also present some examples of the practical use of coroutines.

Coroutines emulate concurrency on a sequential computer. They provide a form of functional program unit ("functional" as opposed to "object-oriented") that, although similar to the traditional notion of routine, provides a more symmetric form of communication. With a routine call, there is a master and a slave: the caller starts a routine, waits for its termination, and picks up where it left off; the routine, however, always starts from the beginning. The caller calls; the routine returns. With coroutines, the relationship is between peers: coroutine a gets stuck in its work and calls coroutine b for help; b restarts where it last left off, and continues until it is its turn to get stuck or it has proceeded as far as needed for the moment; then a picks up its computation. Instead of separate call and return mechanisms, there is a single operation, resume c, meaning: restart coroutine c where it was last interrupted; I will wait until someone resumes me.

This is all strictly sequential and meant to be executed on a single process (task) of a single computer. But the ideas are clearly drawn from concurrent computation; in fact, an operating system that provides such schemes as time-sharing, multitasking (as in Unix) and multithreading, mentioned at the beginning of this chapter as providing the appearance of concurrency on a single computer, will internally implement them through a coroutine-like mechanism.

Coroutines may be viewed as a boundary case of concurrency: the poor man's substitute for concurrent computation when only one thread of control is available. It is always a good idea to check that a general-purpose mechanism degrades gracefully to boundary cases; so let us see how we can represent coroutines. The following two classes will achieve this goal.


separate class COROUTINE creation
  make
feature {COROUTINE}
  resume (i: INTEGER) is
      -- Wake up coroutine of identifier i and go to sleep.
    do
      actual_resume (i, controller)
    end
feature {NONE} -- Implementation
  controller: COROUTINE_CONTROLLER;
  identifier: INTEGER;
  actual_resume (i: INTEGER; c: COROUTINE_CONTROLLER) is
      -- Wake up coroutine of identifier i and go to sleep.
      -- (Actual work of resume).
    do
      c.set_next (i); request (c)
    end;
  request (c: COROUTINE_CONTROLLER) is
      -- Request eventual re-awakening by c.
    require
      c.is_next (identifier)
    do
      -- No action necessary
    end
feature {NONE} -- Creation
  make (i: INTEGER; c: COROUTINE_CONTROLLER) is
      -- Assign i as identifier and c as controller.
    do
      identifier := i;
      controller := c
    end
end -- class COROUTINE

separate class COROUTINE_CONTROLLER feature {NONE}
  next: INTEGER
feature {COROUTINE}
  set_next (i: INTEGER) is
      -- Select i as the identifier of the next coroutine to be awakened.
    do
      next := i
    end;
  is_next (i: INTEGER): BOOLEAN is
      -- Is i the index of the next coroutine to be awakened?
    do
      Result := (next = i)
    end
end -- class COROUTINE_CONTROLLER

One or more coroutines will share one coroutine controller (created through a "once" function not shown here). Each coroutine has an integer identifier. To resume a coroutine of identifier i, procedure resume will, through actual_resume, set the next attribute of the controller to i, and then block, waiting on the precondition next = j, where j is the coroutine's own identifier. This ensures the desired behavior. (Exercise 28.10.)
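Outside a concurrent language, the same single-thread behavior can be emulated with Python generators (a loose analogue: next plays the role of resume, and the schedule list plays the controller's next attribute):

```python
# Each coroutine is a generator; "resume" is emulated by calling next()
# on the chosen generator, so control transfers explicitly and only one
# coroutine runs at a time, as with a single CPU.
trace = []

def coroutine(name, steps):
    for k in range(steps):
        trace.append((name, k))
        yield                      # give control back ("resume" another)

a, b = coroutine("a", 3), coroutine("b", 3)
# A tiny controller: this schedule decides who is awakened next,
# like COROUTINE_CONTROLLER's next attribute.
for g in [a, b, a, b, a, b]:
    next(g, None)
print(trace)
```

The interleaving is fully deterministic: a and b alternate step by step, each restarting exactly where it was last interrupted.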

Although it looks like normal concurrent software, this solution is organized so that (if all coroutines have different identifiers) at most one coroutine may proceed at any time, making it useless to allocate more than one physical CPU. (The controller could actually make use of its own CPU, but its actions are so simple as not to warrant it.)

The recourse to integer identifiers is necessary since giving resume an argument of type COROUTINE, a separate type, would cause deadlock. In practice, you should probably use unique declarations to avoid having to choose the values manually. This use of integers also has an interesting consequence: if we allow two or more coroutines to have the same identifier, then with a single CPU we obtain a non-deterministic mechanism: a call to resume (i) will cause the restarting of any coroutine whose identifier has value i. With more than one CPU a call resume (i) will allow all coroutines of identifier i to proceed in parallel.

So the above scheme, which for a single CPU provides a coroutine mechanism, doubles up in the case of several CPUs as a mechanism for controlling the maximum number of processes of a certain type that may be simultaneously active.

An elevator control system

The following example shows a case where object-orientation and the mechanism defined in this chapter can be used to achieve a pleasantly decentralized event-driven architecture for a real-time application.

The example describes software for an elevator control system, with several elevators serving many floors. The design below is somewhat fanatically object-oriented in that every significant type of component in the physical system --- for example the notion of individual button in an elevator cabin, marked with a floor number --- has an associated separate class, so that each corresponding object such as a button has its own virtual thread of control (processor). The benefit is that the system is entirely event-driven; it does not need to include any loop for examining repeatedly the status of objects, for example whether any button has been pressed.

The class texts below are only sketched, but provide a good idea of what a complete solution would be. In most cases the creation procedures have not been included.

Class MOTOR describes the motor associated with one elevator cabin, and the interface with the mechanical hardware:

separate class MOTOR feature {ELEVATOR}
  move (floor: INTEGER) is
      -- Go to floor; once there, report.
    do
      "Direct the physical device to move to floor";
      signal_stopped (cabin)
    end;
  signal_stopped (e: ELEVATOR) is
      -- Report to e that the cabin has stopped.
    do
      e.record_stop (position)
    end
feature {NONE}
  cabin: ELEVATOR;
  position: INTEGER is
      -- Current floor level
    do
      Result := "The current floor level, read from physical sensors"
    end
end

The creation procedure of this class must associate an elevator, cabin, with every motor. Class ELEVATOR includes the reverse information through attribute puller, indicating the motor pulling the current elevator.

The reason for making an elevator and its motor separate objects is to reduce the grain of locking: once an elevator has sent a move request to its motor, it is free again, thanks to the wait by necessity policy, to accept requests from buttons either inside or outside the cabin. It will resynchronize with its motor upon receipt of a call to procedure record_stop, through signal_stopped. Only for a very short time will an instance of ELEVATOR be reserved by a call from either a MOTOR or a BUTTON object.


separate class ELEVATOR creation
  make
feature {BUTTON}
  accept (floor: INTEGER) is
      -- Record and process a request to go to floor.
    do
      record (floor);
      if not moving then process_request end
    end
feature {MOTOR}
  record_stop (floor: INTEGER) is
      -- Record information that elevator has stopped on floor.
    do
      moving := False; position := floor; process_request
    end
feature {DISPATCHER}
  position: INTEGER;
  moving: BOOLEAN
feature {NONE}
  puller: MOTOR;
  pending: QUEUE [INTEGER];
      -- The queue of pending requests
      -- (each identified by the number of the destination floor)
  record (floor: INTEGER) is
      -- Record request to go to floor.
    do
      "Algorithm to insert request for floor into pending"
    end;
  process_request is
      -- Handle next pending request, if any.
    local
      floor: INTEGER
    do
      if not pending.empty then
        floor := pending.item;
        actual_process (puller, floor);
        pending.remove
      end
    end;

  actual_process (m: separate MOTOR; floor: INTEGER) is
      -- Direct m to go to floor.
    do
      moving := True; m.move (floor)
    end
end
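The decoupling of cabin and motor can be mimicked with one thread per motor and a request queue (a Python sketch with invented names, not the SCOOP semantics): move returns at once, so the elevator object stays available to buttons while the motor works.

```python
import queue
import threading

class Motor(threading.Thread):
    # The motor runs on its own "processor" (thread); move requests are
    # queued, so the caller is free again as soon as it has posted one.
    def __init__(self):
        super().__init__(daemon=True)
        self.requests = queue.Queue()
        self.stops = []

    def run(self):
        while True:
            floor = self.requests.get()
            if floor is None:            # shutdown sentinel
                break
            self.stops.append(floor)     # "physical move", then report

    def move(self, floor):
        self.requests.put(floor)         # returns immediately

m = Motor()
m.start()
for f in (3, 7, 1):
    m.move(f)          # the elevator is not blocked while the motor works
m.requests.put(None)
m.join()
print(m.stops)   # → [3, 7, 1]
```

In the book's design the report back goes through record_stop; here it is condensed into the stops list.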

There are two kinds of button: floor buttons, which passengers press to call the elevator to a certain floor, and cabin buttons (inside a cabin) which passengers press to request a move to a certain floor. The corresponding requests are different: a request from a cabin button is directed to the cabin to which the button belongs, whereas a floor button request may be handled by any elevator. So a floor button request will be sent to a dispatcher object, which will poll the various elevators and select the one which can best handle the request. (The precise selection algorithm is left unimplemented below since it is irrelevant to this discussion; the same applies to the algorithm used by elevators to manage their pending queue of requests in class ELEVATOR above.)

Class FLOOR_BUTTON assumes that there is only one button on each floor. It is not difficult to update the design to support two buttons, one for up requests and the other for down requests.

It is convenient although not essential to have a common parent BUTTON for the classes representing the two kinds of button. Remember that the features exported by ELEVATOR to BUTTON are, through the standard rules of selective information hiding, also exported to the two descendants of this class.

separate class BUTTON feature
  target: INTEGER
end
separate class CABIN_BUTTON inherit BUTTON feature
  cabin: ELEVATOR;
  request is
      -- Send to associated elevator a request to stop on level target.
    do
      actual_request (cabin)
    end;
  actual_request (e: ELEVATOR) is
      -- Get hold of e and send a request to stop on level target.
    do
      e.accept (target)
    end
end

separate class FLOOR_BUTTON inherit
  BUTTON
feature
  controller: DISPATCHER;
  request is
      -- Send to dispatcher a request to stop on level target.
    do
      actual_request (controller)
    end;
  actual_request (d: DISPATCHER) is
      -- Send to d a request to stop on level target.
    do
      d.accept (target)
    end
end

The question of switching button lights on and off has been ignored. It is not hard to add calls to routines which will take care of this.

Here finally is class DISPATCHER. To refine the algorithm that selects an elevator in procedure accept, it will need to access the attributes position and moving of class ELEVATOR, which in the full system should be complemented by a boolean attribute going_up. Such accesses will not cause any problem as the above design ensures that ELEVATOR objects never get reserved for a long time.

separate class DISPATCHER creation
  make
feature {FLOOR_BUTTON}
  accept (floor: INTEGER) is
      -- Handle a request to send an elevator to floor.
    local
      index: INTEGER; chosen: ELEVATOR;
    do
      "Algorithm to determine which elevator should handle the request for floor";
      index := "The index of the chosen elevator";
      chosen := elevators @ index;
      send_request (chosen, floor)
    end

feature {NONE}
  send_request (e: ELEVATOR; floor: INTEGER) is
      -- Send to e a request to go to floor.
    do
      e.accept (floor)
    end;
  elevators: ARRAY [ELEVATOR];
feature {NONE} -- Creation
  make is
      -- Set up the array of elevators.
    do
      "Initialize array elevators"
    end
end

A watchdog mechanism

Together with the previous one, the following example helps show the applicability of the mechanism to real-time problems. It also provides a good illustration of the concept of duel.

We want to enable an object to perform a call to a certain procedure action, with the provision that the call will be interrupted, and a boolean attribute failed set to true, if the procedure has not completed its execution after t seconds. The only basic timing mechanism available is a procedure wait (t), which will execute for t seconds.

Here is the solution, using a duel. A class that needs the mechanism should inherit from class TIMED and provide an effective version of the procedure action which, in TIMED, is deferred. To let action execute for at most t seconds, it suffices to call timed_action (t). This procedure sets up a watchdog (an instance of class WATCHDOG), which executes wait (t) and then interrupts its client. If, however, action has been completed in the meantime, it is the client that interrupts the watchdog.

deferred class TIMED inherit
  CONCURRENCY
feature {NONE}
  failed: BOOLEAN; alarm: WATCHDOG;

  timed_action (t: REAL) is
      -- Execute action, but interrupt after t seconds if not complete.
      -- If interrupted before completion, set failed to true.
    do
      set_alarm (t); action; unset_alarm (t); failed := False
    rescue
      if is_concurrency_interrupt then failed := True end
    end;
  set_alarm (t: REAL) is
      -- Set alarm to interrupt current object after t seconds.
    do
        -- Create alarm if necessary:
      if alarm = Void then create alarm end;
      yield; actual_set (alarm, t)
    end;
  unset_alarm (t: REAL) is
      -- Remove the last alarm set.
    do
      demand; actual_unset (alarm); wait_turn
    end;
  action is
      -- The action to be performed under watchdog control
    deferred
    end
feature {NONE} -- Actual access to watchdog
  ... Procedures actual_set and actual_unset, left to the reader ...
feature {WATCHDOG} -- The interrupting operation
  stop is
      -- Empty action to let watchdog interrupt a call to timed_action
    do
      -- Nothing
    end
end -- class TIMED


separate class
  WATCHDOG
feature {TIMED}
  set (caller: separate TIMED; t: REAL) is
      -- Set an alarm after t seconds for client caller.
    do
      yield; wait (t);
      demand; caller.stop; wait_turn
    rescue
      if is_concurrency_interrupt then early_termination := True end
    end;
  unset is
      -- Remove alarm.
    do
      early_termination := False
    end
feature {NONE}
  early_termination: BOOLEAN;
end -- class WATCHDOG

In class WATCHDOG, the boolean attribute early_termination and the rescue clause are not actually needed but have been added for clarity.
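The effect of timed_action can be approximated in Python (a sketch; Python threads cannot be interrupted the way a duel interrupts a processor, so here the caller merely stops waiting and records the failure):

```python
import threading

def timed_action(action, t):
    # Analogue of TIMED.timed_action: run 'action' on a worker thread
    # and report failure if it has not completed after t seconds.
    # (We cannot kill the worker, as a real watchdog duel would;
    # we only stop waiting for it.)
    done = threading.Event()

    def body():
        action()
        done.set()

    threading.Thread(target=body, daemon=True).start()
    return not done.wait(timeout=t)     # True means "failed"

quick_failed = timed_action(lambda: None, 1.0)
slow_failed = timed_action(lambda: threading.Event().wait(5), 0.1)
print(quick_failed, slow_failed)   # → False True
```

The Event plays the part of the alarm being unset in time; the timeout on wait plays the watchdog's wait (t).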

Accessing buffers

As a last example, let us return to the example of bounded buffers used several times in the presentation of the mechanism. We have seen that the class could be declared as just separate class BOUNDED_BUFFER [G] inherit BOUNDED_QUEUE [G] end, assuming the proper sequential BOUNDED_QUEUE class.

To use a call such as q.remove on an entity q of type BOUNDED_BUFFER [T], you must enclose it in a routine using q as formal argument. It may be useful for that purpose to provide a class BUFFER_ACCESS that fully encapsulates the notion of bounded buffer; application classes may inherit from BUFFER_ACCESS. There is nothing difficult about this class, but it provides a good example of how we can encapsulate separate classes, directly derived from sequential ones such as BOUNDED_QUEUE, so as to facilitate their direct uses by concurrent applications.


indexing
  description: "Encapsulation of access to bounded buffers"
class BUFFER_ACCESS [G] feature
  put (q: BOUNDED_BUFFER [G]; x: G) is
      -- Insert x into q, waiting if necessary until there is room.
    require
      not q.full
    do
      q.put (x)
    ensure
      not q.empty
    end;
  remove (q: BOUNDED_BUFFER [G]) is
      -- Remove an element from q, waiting if necessary
      -- until there is such an element.
    require
      not q.empty
    do
      q.remove
    ensure
      not q.full
    end;
  item (q: BOUNDED_BUFFER [G]): G is
      -- Oldest element not yet consumed
    require
      not q.empty
    do
      Result := q.item
    ensure
      not q.full
    end
end
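In Python, queue.Queue gives the same encapsulation directly: the preconditions not q.full and not q.empty of BUFFER_ACCESS become wait conditions, since put and get block until they hold. A minimal counterpart (remove here returns the removed element, merging the book's item and remove):

```python
import queue

class BufferAccess:
    # put waits while the buffer is full; remove waits while it is
    # empty -- blocking analogues of the require clauses in BUFFER_ACCESS.
    def put(self, q, x):
        q.put(x)

    def remove(self, q):
        return q.get()

buf = queue.Queue(maxsize=2)      # a bounded buffer of capacity 2
access = BufferAccess()
access.put(buf, "a")
access.put(buf, "b")
first, second = access.remove(buf), access.remove(buf)
print(first, second)   # → a b
```

The FIFO discipline matches the bounded-queue behavior: the oldest element not yet consumed is removed first.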
