This site contains older material on Eiffel. For the main Eiffel page, see

A Really Good Idea

by Bertrand Meyer

A variant of this article appeared in Computer (IEEE), as part of the Component and Object Technology department, in the December 1999 issue.

As this is the final installment of the "Component and Object Technology" department I will try to come back to the source, object-oriented development, and reflect on its contribution and future.

Wise people have at various times predicted or even announced the end of objects. As early as 1990 articles were appearing on the "Object Winter" theme, patterned after the   "AI Winter"  reported to have followed  the initial excitement over Artificial Intelligence. The theme has gained new vigor in the past few months. In Software Development (July 99, page 33) a review by Alan Zeichik of Clemens Szyperski's Component Software book states: "Whether we like it or not, in most situations, object-oriented programming has not succeeded in fostering code reuse, except in the most limited way". The lead article in Embedded Systems Journal's August 1999 issue was titled "Nuts to OOP" (see for comments and a link to the article).

Reports of its death are greatly exaggerated

Recently, IEEE Software has been the place of choice for death notices, with such pronouncements by its then editor-in-chief, Al Davis, as "We are all now witnessing the fall of the Object era" (Predictions and Farewell, vol. 15 no. 4, July-August 98, pages 6-9) or, in his final interview (15/6, Nov.-Dec. 98, A Golden Thread in Software's Tapestry):

"When I started consulting ten years ago, all my customers wanted to hear the word `object'. Now, none do, and I find that clients are much happier when they don't", followed by the final blow: "I think `object' has now gone the way of `structured'".

The comparison with structured programming is appropriate: in that case too a set of simple but profound conceptual principles enjoyed partial success — passage of some of the ideas into the fabric of daily software development, to the point of becoming so "obvious" that many practitioners do not realize that the ideas were once new and controversial — as well as a form of degradation, coming in part from the hijacking of the name, "structured" or "object-oriented", to describe mere graphical conventions for describing system structures, useful in themselves but a far cry from the intellectual discipline of the original ideas.

We should not underestimate the success part. David Taylor  wrote in Computer (May 1999, page 50) that

"by the year 2000 no one would talk about objects any more because the technology would be so thoroughly absorbed into the mainstream that no one would think to mention it".

This may be premature by a few years but is definitely the trend. All major developments in the software world integrate O-O aspects, or at least claim to do so. Almost all recent programming languages are O-O in some ways; even good old Fortran, in its latest version, has some timid support for data abstraction. This may be what Al Davis's consulting clients really mean: we know all about objects, don't bother us with this. But it's a sign of success, not rejection.

It's the only game in town

Whatever reservations anyone may have about some or other aspect of current object technology, it is still true that, as Grady Booch noted a few years ago, we don't really know any better; when it comes to building complex, evolutionary mission-critical systems, O-O solutions are our best bet. Nothing else has come to challenge them.

Nothing, not even Component-Based Development. As has been clear in this column, with all my enthusiasm for component-based development, leading in particular to the column's broadening of scope last year (from "Object Technology" to "Component and Object Technology") and to the special section on CBD (co-edited with Christine Mingins, in last July's issue), I find absurd the claim that (in Alan Zeichik's phrase) "objects are tired, components are wired". Components, to misquote Clausewitz, are just the pursuit of objects through other means. Components assume object technology; they use object technology; they promote object technology.

Every one of the currently prominent CBD approaches is directly rooted in objects, be it CORBA from the "Object Management Group", Enterprise Java Beans, or COM with its direct reliance on the "Vtables" of C++.

One can understand the Buzz of the Year phenomenon, if only as a way for consultants to renew their claims to expertise - we in the software field don't have pre-season sales and post-season discounts, so we need ever new ways to drum up business. There's nothing wrong with that; let's just not take it too seriously.

David Taylor, in the article already cited, noted that "Even Object Magazine changed its name to Component Software to maintain its cutting edge". That was quite amusing, since a couple of months after Taylor's article Component Software announced that it was merging with Application Development magazine, retaining the latter publication's title. Does this mean that components are out and we are now back to applications? Probably not. A marketing strategy is not a  technology trend.

Teaching those who believe they already know

The "We know all about objects, what else is new" attitude cited by Al Davis is indeed widespread. In my experience it is largely unjustified. While many engineers and managers are familiar with the basic goals of object technology, only a minority has really understood the deeper concepts and started to apply them thoroughly. This can make life tough for object technology consultants and instructors: as every parent and educator knows, it is impossible to teach people something when they think they already know it. (This may be the only absolute case of "learning disability".) I find that general intellectual sympathy with the principles of information hiding, data abstraction, taxonomy, reuse, systematic software construction - an attitude found fairly universally today - is not a good predictor of whether a person will actually apply these principles in software development.

To anyone who has the opportunity of peeking at the way software is routinely written in companies large and small these days, the myth that object technology is now passé and we should move to something else sounds absurd. Not that the picture is doom and gloom; I disagree with those (often vocal in the pages of IEEE Software) who claim that we have made no progress at all in the past 30 years. We have better tools, better practices, a generally more serious attitude. But most of the industry is far from having integrated in its daily practice the deeper principles of object technology, and, more generally, many of the principles of modern programming methodology.

The next 99 software disasters

Perhaps the most striking example of what we still have to learn is the success of the so-called "windowing" Y2K technique. I don't have any actual statistics, but informal enquiries suggest it's one of the most commonly used "solutions" (note the quotes) to the Y2K issue. Windowing means that you don't touch files and databases using 2-digit dates; you just choose a pivot date, say 1960, and hack the programs so that whenever they use a 2-digit date code xy they do something like

    if xy < 60 then
      "Understand this as the date 20xy"
    else
      "Understand this as the date 19xy"


("60" is just an example, as there can be no universal pivot date: an airline's Frequent Flyer System doesn't need to go back any earlier than 1970, but the airline's pension program may have to deal with people born in 1910.)

Now this is really clever. First we make the programs even more complicated than before, with all kinds of spurious tests, not to mention the possibility of added bugs. Second, we have just pushed the problem further, creating potential Y2K-like disasters for the next 99 years: since there is no universal pivot, every system has its own time bomb, ticking down to its own specific self-destruction deadline. Every year from now on (well, maybe not the next two or three - or has anyone chosen 02 as a pivot?) will be the opportunity for a mini-Y2K.
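To make the ticking bomb concrete, here is a minimal sketch of the windowing hack in C++ (the function name is mine; the pivot 60 is just the example used above):

```cpp
// Illustrative windowing "fix". The pivot is this article's example value;
// real systems each pick their own, which is exactly the problem.
const int PIVOT = 60;

// Expand a 2-digit year code: below the pivot means 20xy,
// at or above it means 19xy.
int expand_year(int xy) {
    return (xy < PIVOT) ? 2000 + xy : 1900 + xy;
}
```

With this pivot, the code reads 59 as 2059 but 60 as 1960: the moment the system's data reaches the years 2060 and beyond, the same misinterpretation returns.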

There is no excuse for such nonsense. It passes on to our successors the same calamity that our predecessors (in some cases ourselves) inflicted on us. But they at least had the excuse that it was the first time, no one knew, we were all learning. This time there is no such excuse; we should know better.

Lesson not heeded?

The Y2K mess as a whole is evidence that, for all the talk about objects having become mainstream, we still have a long way to go, and not only regarding the more advanced parts of object technology.

In this Department's column about the topic (Christopher Creel, Bertrand Meyer and Philippe Stephan, The Opportunity of a Millennium, Computer, volume 30, no. 11, November 1997, pages 137-138), we pointed out that the millennium problem was the opportunity for a generalized opening-up and cleaning-up of major software systems. All signs indicate that this has happened only in a minority of enlightened companies. Others have simply patched their code and are hoping for the best (including the hope that the patching process will not have introduced too many new bugs). For all these companies, object technology is still in the future.

Getting it right

What object technology? This is where we must again take a serious look at the supposed arguments against object technology. (You can find a set of links in the "pros and cons" section of the Cetus O-O links.)

Although the criticism is officially directed at object technology, what it really addresses in many cases is C++. In an early article about object technology, Bjarne Stroustrup mocked the pseudo-syllogism "Ada is good; Ada is object-oriented; therefore object-oriented is good". What we have seen more recently is the reverse: "C++ is object-oriented; C++ is bad; therefore object-oriented is bad". This was very much the assumption behind an indictment of objects by Les Hatton (Does OO sync with how we think?, IEEE Software, 15-3, May-June 1998, pages 46-54), which Richard Wiener criticized in the same issue, taking Hatton to task for equating O-O and C++: "With extreme discipline programmers can use C++ as an OO language, but more often than not they use it as an extended C" (Watch Your Language, IEEE Software, 15-3, May-June 1998, pages 55-56).

The point here is not to start a "language war", especially since I am associated with another O-O language, Eiffel (more appropriately characterized as a method). But it is legitimate for those of us who have been pushing full-fledged object technology to refuse to let that approach be blamed for the limitations (perceived or real) of others whose application of O-O principles is highly debatable.

If this is O-O...

Take, from one of the most frequently used C++ introductory textbooks (Ivor Horton, Beginning Visual C++ 6, Wrox, page 227), the following program for removing spaces from a string:

	//function to eliminate blanks from a string
	void eatspaces (char * str) {
		int i = 0;  /* "copy to" offset within string */
		int j = 0;  /* "copy from" offset within string */
		while ((*(str + i) = *(str + j++)) != '\0')
			if (*(str + i) != ' ') i++;
	}

If this is O-O development, then I am ready to enlist in the anti-object battalion. I also start believing Hatton's reports of unchanged or decreased productivity and reusability. But of course this kind of style is the opposite of all that a serious object-oriented developer would do; it uses pointers and side effects throughout, mixes queries and commands (asking a question shouldn't change the answer!), and shows no attempt at abstraction. A boolean "expression" like (( *(str + i) = *(str + j++)) != '\0') is an open invitation to bugs and maintenance nightmares. (If you haven't been initiated into the great secrets of life, here's what the expression "means": str is the start of the string. str + i is its i-th position. *(str + i) is the value at position i. j++ has the same value as j, except that evaluating this "expression" also increases the value of j by 1, afterwards, in contrast with ++j, which increments first. ( *(str + i) = *(str + j++)) is an assignment of what comes after = to what comes before, so it replaces the i-th value of the string; but it is also an expression, whose value is what is being assigned, so that the whole enclosing expression actually returns true if and only if this newly assigned value is not equal to '\0' which, as every kindergarten student knows by now, is the special character marking the end of any well-behaved C++ string. Wow!)

To me, this can't have anything to do with object technology or even with any modern view of software development. That the example is "in-the-small" doesn't matter: the in-the-large aspects of programming rely on the lower-level parts, and you can't get them right unless you get the small things right too.
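For contrast, here is one way (my sketch, not anything from the textbook) to write the same operation in a disciplined style: a side-effect-free query, with no pointer arithmetic and no assignment buried inside a boolean expression:

```cpp
#include <string>

// Return a copy of s with all spaces removed: a pure query,
// leaving its argument untouched (commands and queries stay separate).
std::string without_spaces(const std::string& s) {
    std::string result;
    for (char c : s) {
        if (c != ' ')
            result += c;  // keep every non-space character
    }
    return result;
}
```

A maintainer can read this version in one pass; nothing in it changes state as a hidden by-product of asking a question.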

Cases that give everyone a bad name

This may be the most serious problem of assessing the contributions of object technology: making sure that we indeed judge the technology — not partial, incomplete or even flawed implementations, which can give the whole field a bad name. The problem is not just C++. While recognizing the major contributions of Java and UML, I have described elsewhere some of the serious problems I see in both approaches; I have also discussed the dangers of mixing the object paradigm with foreign ideas (such as entity-relationship modeling, or the C style of development) that, although respectable on their own, clash with object technology.

My own work has been based on a more systematic application of O-O principles. Let me illustrate the contrast through two examples (both discussed in Object-Oriented Software Construction, 2nd edition, Prentice Hall).

Many approaches still allow a direct field assignment of the form x.field = value. This is fine in non-O-O development, where you are dealing with structures, but completely incompatible with the O-O view that we are building little machines, each with its official control panel (the class interface) serving as the obligatory path to the internals. Direct field assignment means that users of the machine can circumvent that interface and start rearranging the innards of the machine directly. This is simply not acceptable in modern software technology. The management solution - forcing all attribute declarations to be private - is unrealistic since it would lead to lots of useless "get functions".
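As a sketch of the "control panel" view (the class and its names are hypothetical, mine), the only way to change the machine's state is through a command that can defend the machine's integrity; direct writes to the field do not compile:

```cpp
// Hypothetical illustration: the balance is reachable only through the
// official interface, so the class can guarantee it never goes negative.
class ACCOUNT {
public:
    int balance() const { return balance_; }  // query: no side effect
    void deposit(int amount) {                // command: guards the rule
        if (amount > 0)
            balance_ += amount;               // reject nonsensical deposits
    }
private:
    int balance_ = 0;  // clients cannot write a.balance_ = ... directly
};
```

Note that deposit is not a blind "set function": it enforces a rule, which is precisely what a useless get/set pair fails to do.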

This brings us to the second example: the Uniform Access Principle. It's at first a small notational issue, but (like many other problems related to data abstraction and information hiding) it can take on gigantic proportions if not observed properly. The principle simply states that if a module, the "client", is accessing a property managed by another module, the "supplier", it should not matter to the client whether the supplier keeps the property stored or computes it on demand. So if I write my_house_loan.monthly_interest I should not have to know whether monthly_interest is an attribute, stored with every instance of the HOUSE_LOAN class, or a function, computed from some formula associated with the class. The purpose is obvious: keeping modules independent from each other's implementation decisions, and hence from variations in each other's implementations. Yet most current approaches force a different notation for attribute access and function call. (This point is discussed further in the January 2000 Eiffel column of JOOP.)
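A C++-style notation cannot express the principle fully, but a class can at least refuse to expose the choice. In this sketch (class and member names are mine, echoing the my_house_loan example above), the client always writes the same call, and the supplier stays free to switch between storing and computing:

```cpp
// Hypothetical illustration of Uniform Access in spirit: clients call
// monthly_interest() and never learn whether it is stored or computed.
class HOUSE_LOAN {
public:
    HOUSE_LOAN(double principal, double annual_rate)
        : principal_(principal), annual_rate_(annual_rate) {}

    // Today computed on demand; tomorrow it could return a cached field
    // instead, with no change to any client.
    double monthly_interest() const {
        return principal_ * annual_rate_ / 12.0;
    }
private:
    double principal_;
    double annual_rate_;
};
```

In Eiffel the client notation my_house_loan.monthly_interest is literally identical for an attribute and a function, which is the principle in full.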

The importance of being dogmatic

These comments may appear dogmatic. After all, one may ask, doesn't every bit help? Do we have to apply every tenet of the O-O canon, chapter and verse? I would tend to answer yes. It pays to be dogmatic here. It's hard to be "a little bit object-oriented". Little violations beget huge disasters. Y2K is the most visible example. As was pointed out in the November 1997 column, the year-2000 "bug" is not about the alleged stupidity of coding dates on two digits — a mere implementation choice similar to countless ones that programmers make all the time. It is about information hiding, or rather the lack thereof: thoughtlessly spreading the consequences of such a choice across millions (globally, billions) of lines of code, instead of having in each program a single point of access for all date-related computations. Like many other object-oriented principles, information hiding is, fundamentally, a very simple idea. No rocket science required; just seriousness, professionalism, care and thoroughness.
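The information-hiding cure for the date problem can be sketched in a few lines (class, names and pivot are illustrative, not from any real system): all date interpretation funnels through one module, so a change of representation is a change in one place, not across millions of lines:

```cpp
#include <string>

// Illustrative single point of access for date-related computations.
// If the representation (or the pivot) must change, it changes here only.
class DATES {
public:
    // Interpret a 2-digit year code; the pivot is an internal decision,
    // invisible to every client of the class.
    static int year_of(const std::string& two_digit_code) {
        int xy = std::stoi(two_digit_code);
        return (xy < pivot_) ? 2000 + xy : 1900 + xy;
    }
private:
    static constexpr int pivot_ = 60;  // illustrative value only
};
```

A program built this way would have faced Y2K as a one-class maintenance task rather than a global search through its entire code base.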

Remember this the next time you feel the urge to use a direct field assignment x.field = value. You may feel that in this case there's nothing wrong; you know exactly what you are doing, there is no possible precondition or invariant violation, it's a mere assignment and doesn't have to notify any observers and so on. And you may even be right - for the moment. But that's not an excuse. Think of the consequences of that violation, magnified by the number of cases in which the matter arises, the size of the project, its duration, the number of people who may have to use or take over your work. And apply the rules. Better yet, use an environment that's not just object-oriented in name, and forces you to apply the rules.

Being a hedgehog and a fox

The rules are not everything, but they are part of the approach. Object technology has a little of the fox as well as the hedgehog, in Isaiah Berlin's metaphor: the hedgehog knows one big thing; the fox knows many small things. Like the fox, we must know and apply many small things — all the rules. We also know, not one big thing, but in fact a few big things, from data abstraction to inheritance to Uniform Access to Command-Query Separation to Design by Contract: the simple yet profound intellectual principles that this column has (I hope) occasionally been able to touch on and which, more than anything else, define the approach. Like structured programming before it, and in spite of being subject, like structured programming, to the vulgarization and watering down that inevitably go with hype and success, object technology is defined by a few seminal ideas - a dozen or two altogether - that radically change one's view of software development. And it is defined, too, by the patient and obstinate application of little rules.

Successful object technology is a mix of the two. We must understand the intellectual principles for all their worth, letting them permeate our entire approach to software development, never losing sight of the bigger challenges. And we must also be foxes, never relenting in our application of the rules, however elementary and mundane.

All the evidence of our work over the past two decades, reinforced by the observation of large projects developed by our customers in the most demanding mission-critical contexts, supports the view that if you do follow this strategy you will indeed get the advertised benefits. As Roger Osmond put it in a TOOLS USA keynote a few years ago: "Object technology brings you more than the hype would have you believe". Productivity, extendibility, reuse, reliability: you can achieve all of this. But only a minority of the industry has tried seriously and without compromise. The experience of others - those who went at it half-heartedly - is not a good argument to discredit the technology.

So my answer to the O-O critics, which will also serve as a conclusion to the Component and Object Technology Department, can only be the application to Object Technology of Gandhi's retort when he was asked for his thoughts about Western civilization: It would be a good idea.