28.1 A SNEAK PREVIEW

As usual, this discussion will not throw a pre-cooked answer at you, but will instead carefully build a solution from a detailed analysis of the problem and an exploration of possible avenues, including a few dead ends. Although necessary to make you understand the techniques in depth, this thoroughness might lead you to believe that they are complex; that would be inexcusable, since the concurrency mechanism on which we will finally settle is in fact characterized by almost incredible simplicity. To avoid this risk, we will begin by examining a summary of the mechanism, without any of the rationale. If you hate "spoilers", preferring to start with the full statement of the issues and to let the drama proceed to its dénouement step by step and inference by inference, ignore the one-page summary that follows and skip directly to the next section.

The extension covering full-fledged concurrency and distribution will be as minimal as it can get starting from a sequential notation: a single new keyword, separate. How is this possible? We use the fundamental scheme of O-O computation: the feature call x.f (a), executed on behalf of some object O1 and calling f on the object O2 attached to x, with the argument a. But instead of a single processor that handles operations on all objects, we may now rely on different processors for O1 and O2, so that the computation on O1 can move ahead without waiting for the call to terminate, since another processor handles it.

Because the effect of a call now depends on whether the objects are handled by the same processor or by different ones, the software text must tell us unambiguously what the intent is for any x. Hence the need for the new keyword: rather than just x: SOME_TYPE, we declare x: separate SOME_TYPE to indicate that x is handled by a different processor, so that calls of target x can proceed in parallel with the rest of the computation.
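The two forms of declaration just contrasted can be sketched as follows (SOME_TYPE and the names x, y, f, a are the placeholders used in the discussion above; this shows the notation only, not a complete class text):

```
x: SOME_TYPE
        -- Ordinary declaration: calls of target x are handled
        -- by the same processor as the rest of the computation.

y: separate SOME_TYPE
        -- Calls of target y may be handled by a different
        -- processor; a call such as y.f (a) can then proceed
        -- in parallel with the instructions that follow it.
```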
With such a declaration, any creation instruction create x.make (...) will spawn off a new processor, a new thread of control, to handle future calls on x.

Nowhere in the software text should we specify which processor to use. All we state, through the separate declaration, is that two objects are handled by different processors, since this radically affects the system's semantics. Actual processor assignment can wait until run time. Nor do we settle too early on the exact nature of processors: a processor can be implemented by a piece of hardware (a computer), but just as well by a task (process) of the operating system, or just a thread of such a task. Viewed by the software, "processor" is an abstract concept; you can execute the same concurrent application on widely different architectures (time-sharing on one computer, a distributed network with many computers, threads within one Unix or Windows task...) without any change to its source text. All you will change is a "Concurrency Configuration File", which specifies the last-minute mapping of abstract processors to physical resources.

We will also need to specify synchronization constraints; the conventions are straightforward.
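The creation convention just stated can be sketched as follows (the class and feature names other than x and make are illustrative placeholders, and the argument lists are elided as in the text above):

```
x: separate SOME_TYPE

start
        -- Set a new computation going on its own processor.
    do
        create x.make (...)
            -- Spawns off a new processor, a new thread of
            -- control, to handle future calls on x.
        ...
            -- The current processor continues here; it need not
            -- wait for operations on x to complete.
    end
```

Note that nothing in this text says which physical resource will implement the new processor; that mapping is deferred to the Concurrency Configuration File at run time.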
This is all there is to the mechanism, which will enable us to build the most advanced concurrent and distributed applications through the full extent of O-O techniques, from multiple inheritance to Design by Contract --- as we will now study in detail, forgetting for a while all that we have read in this short (but essentially complete) preview. (Note: A complete summary appears in 28.11.)