
You can write, but can you type?

Bertrand Meyer

        The original version of this article appeared in JOOP (Journal of Object-Oriented Programming), vol. 1, no. 6, March-April 1989, pages 58-67. © 101 Communications, included with permission.

The subject of this month's column is a language issue, an important one for anyone preoccupied with software reliability: the notion of type in an object-oriented context. We'll see how this issue is addressed in Eiffel and examine some of the rationale behind the language's type system.

Explicit types existed in programming languages long before object-oriented programming became popular, of course; such languages as Algol 60, Pascal, Modula-2 or Ada are commonly referred to as strongly (or ``statically'') typed. Lisp is a representative of the other camp, that of ``untyped'' languages, more properly characterized (as will be seen) as dynamically typed. Other languages stand somewhere in-between, such as C, which seems to have succeeded in combining the worst of both worlds.

Object-oriented programming makes the whole typing issue much more exciting than in classical languages by opening the way for a notion of type that is both more powerful and more flexible.

Why type?

To understand what typing means in an object-oriented context, and particularly in Eiffel, it is useful to first take a broader look at the role of types in programming, starting at the hardware level.

In most hardware environments, all operations apply to arguments which are of a universal, ``untyped'' form: plain sequences of zeros and ones. The same 32-bit sequence may be usable for example by a LOAD operation as representing an address, by an INTEGER_ADD as an integer, by a FLOATING_ADD as a real number, by a MOVE_STRING as four ASCII characters, and by an OR as a sequence of boolean values.

This feature of computers is not inevitable; nor is it universal, as shown by the existence of ``tagged architectures'' in which each value contains some indication of its intended use. If this indication uniquely defines the type of each such value among a set of possible types (such as integer, address etc.), we can say that the values are dynamically typed. But fully tagged architectures have not been very successful. The trend in hardware seems actually the reverse: making the machinery efficient by having it operate on data whose structure is as generic as possible.

This is fine as long as nobody makes any mistakes. But what happens if, as a result of a programming error, the hardware is asked to perform an operation that conceptually does not make sense, such as the integer addition of two addresses, or a floating-point operation applied to integer arguments? Two cases arise:

  • The machinery may be able to carry out the operation. This is what happens on most machines in the address addition example: addresses being represented in the same way as integers, the operation will be performed without the slightest hint of protest. (The only abnormal thing that might happen is integer overflow.)
  • The hardware may be unable to perform the operation because the operands have inappropriate values. For example, most computers require floating-point operands to be in normalized form. An attempt to apply a floating-point operation to an integer argument will usually trigger a hardware failure (``trap'') -- unless the binary representation of the integer value happens to satisfy the normalization constraint. If, in another example, a LOAD operation is erroneously applied to an arbitrary integer which it attempts to interpret as an address, the operation may fail if the result of this interpretation does not represent a valid address.

Of these two cases the second is of course the favorable one: in the first, an erroneous computation will improperly proceed as if everything were fine.

Both cases, however, correspond to abnormal behavior resulting from programming errors. The remedy must also be found at the programming level; it involves associating a type with every entity of the program text (variable, routine argument and the like). By matching the type of an entity with the operations attempted on this entity, the tools of the software development environment may be able to detect inconsistencies and avoid execution of illegal operations.

Three attitudes may be taken by the environment with respect to type errors:

  • No typing (the hopeful approach): ignore the problem. The environment does not perform any type checking and simply generates operations to be carried out by the hardware, whether or not they make sense. If they don't, one of the two behaviors described above will result.
  • Dynamic typing (the bold approach): wait until run-time. Each entity has a type; the environment checks the applicability of every operation at run-time, just before the operation is carried out by the hardware. The effect is equivalent to that of working with a fully tagged architecture, where each value would carry a tag representing its type and the machinery would check operand tags before it carries out an operation; the only difference is that the tagging and the checking are performed by software rather than hardware means.
  • Static typing (the prudent approach): check before executing. Each entity has a type; the environment includes tools to check the applicability of every operation statically, that is to say without any need for execution. Often these tools are part of the compiler; however they can also be separate facilities, known as static analyzers.

Various intermediate combinations of these levels are possible. All three may make sense depending on the context.
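The dynamic attitude is easy to experience directly in a dynamically typed language. The following sketch, in Python rather than Eiffel (Python being a convenient executable stand-in; the helper name `attempt` is illustrative only), shows each operation being checked just before it is carried out, so that an inconsistent operand combination is reported instead of silently producing nonsense:

```python
# A rough illustration of the "dynamic typing" attitude: every value
# carries its type at run-time, and each operation is checked just
# before it is performed.

def attempt(operation):
    """Run an operation; report a type error instead of corrupting data."""
    try:
        return ("ok", operation())
    except TypeError as exc:
        return ("type error", str(exc))

print(attempt(lambda: 3 + 4))    # consistent: two integers
print(attempt(lambda: "3" + 4))  # inconsistent: string plus integer
```

The static attitude would instead reject the second operation before any execution takes place.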

``No typing'' is appropriate for a low-level language used as target language for a compiler: if proper checking has already been performed on program text written in a high-level language, and a compiler then translates the text into a lower-level formalism, it is usually not necessary to perform any new checking on the result of this translation (which may itself be executed by further compilation or by direct interpretation).

Dynamic typing is appropriate in an environment meant for experimentation or software prototyping, where the prime goal is to try out things and see the results as quickly as possible, without having a compiler or static analyzer stand between you and the execution of your latest idea -- but run-time type checking is still needed because if some inconsistency ensues (perhaps this latest idea was not that great after all) you want to know what happened.

Finally, static typing is appropriate if correctness is a prime concern and you want to put all odds in your favor before you ever attempt to execute your software. This is the case in ``production'' environments where it is essential to produce software that is as error-free as humanly possible. Studies have shown how crucial it is to remove bugs early in the software lifecycle; Boehm (Software Engineering Economics, Prentice-Hall, 1981) mentions that an error found when the software is already in operation may cost 100 to 1000 times more to correct than if it had been detected at the earliest stage. Static typing is the approach followed in Eiffel.

The language and the environment

Discussions of programming issues often confuse programming languages and the tools that are used to implement them in a specific context. Typical of this confusion are the expressions ``compiled language'' and ``interpreted language''. Lisp is not any more an interpreted language (Lisp compilers exist, although most implementations use interpreters) than Eiffel is a compiled language (the language is certainly compilable, but may be interpreted as well). We must be careful not to introduce similar confusions when discussing typing.

The three policies just introduced were defined with respect to a software environment -- a combination of a programming language and supporting tools. If we restrict the discussion to languages, these notions still make sense. We may say that a language is statically typed if it is possible to produce tools that will perform full type checking of programs. It is dynamically typed if it is not statically typed in the preceding sense, but the type consistency of every operation may be fully checked at run-time before the operation is applied. It is untyped if none of the preceding applies.

The above characterization of Eiffel as statically typed refers to both the language and to the tools: Eiffel as a language is meant to be statically typed, and the compiler does perform the necessary checks.

Explicit typing

In a statically or dynamically typed environment every entity has an associated type which is checked either statically (by mere analysis of the software text, without need for execution) or dynamically (at run-time). How is this type determined? The most common method in typed environments, also used in Eiffel, is explicit typing: every program entity is identified by the programmer as having a specific type. For variables or routine parameters, this involves a declaration, of the form (using the Eiffel syntax)

	entity: TYPE

which is a way for the programmer to tell the world: ``I want entity to represent objects of type TYPE. I promise to be good and only apply legal TYPE operations to it. Should I ever falter in this resolve, the (static or dynamic) checker will be entitled to pass the harshest punishment on me''.

For some entities declarations may not be needed because the type is self-evident (or ``manifest'') from the name of the entity. This is true of constants of basic types: 33 has the manifest type integer. The grandparents of some JOOP readers may remember another example, the default declaration of Fortran variables; a Fortran variable is understood as representing integers if its name begins with any of the letters IJKLMN, reals otherwise. This is still explicit typing, relying on general language rules rather than specific declarations.

There is also another typing method: implicit typing. In this approach, initially introduced for the ML language (see Robin Milner, A Theory of Type Polymorphism in Programming, Journal of Computer and System Sciences, 17, pp. 348-375, 1978), explicit type declarations are avoided; instead, the type checking mechanism attempts to infer the type of each entity from the way the entity is used in the program text. For example, a variable used only in integer operations will be inferred to be of type integer. If a given entity is used in operations that have inconsistent type requirements, then the checking mechanism will report an error.
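The flavor of implicit typing can be conveyed by a toy sketch, here in Python for concreteness (the function `infer` and its representation of ``uses'' are hypothetical simplifications; real ML-style inference works by unification over type expressions):

```python
# A toy version of implicit typing: infer each variable's type from the
# operations it appears in, and flag variables used inconsistently.

def infer(uses):
    """uses: list of (variable, required_type) pairs gathered from the
    program text, one per operation in which the variable appears."""
    inferred = {}
    errors = []
    for var, required in uses:
        known = inferred.setdefault(var, required)
        if known != required:
            errors.append(var)
    return inferred, errors

# 'n' appears only in integer contexts; 'x' in both integer and boolean ones.
types, errors = infer([("n", "INTEGER"), ("n", "INTEGER"),
                       ("x", "INTEGER"), ("x", "BOOLEAN")])
print(types["n"])   # inferred as INTEGER
print(errors)       # 'x' is used inconsistently
```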

The implicit approach has been applied in particular to functional languages, where the types of interest are higher-level function types, such as INT x BOOL -> (INT -> BOOL); an object of this type is a function taking two arguments of types integer and boolean, and returning a result which is itself a function that takes an integer argument and returns a boolean result.

In a typed environment, the implicit/explicit distinction is orthogonal to the static/dynamic distinction; for example, you could in principle have implicit dynamic typing, although this does not seem to be a popular combination.

Implicit typing provides an unobtrusive approach to type checking; the aim is to obtain the advantages of typing (rejection of inconsistent programs) without the chores (necessity to declare all program entities). From a software engineering standpoint, however, the result may not be worth it. In explicitly typed languages, the declarations should not be just viewed as a burden on the programmer. They provide two major benefits beyond enabling static type checks:

  • Declarations are useful to the program writer because they force him to introduce some redundancy (for example when you use a variable as operand of an and after having declared it as boolean, you have said twice that its value is expected to be boolean). Redundant expression -- expressing the same idea in two or more different ways -- is an excellent aid for finding errors and inconsistencies, even without any software tool to perform the checking. This may be likened to standard methods in elementary arithmetic, such as ``casting out nines'' (checking quickly the consistency of the result modulo 9) when performing a multiplication.
  • Declarations are also of great help to the program reader because they provide formalized documentation about the intended use of an entity. This is in contrast with implicit typing: no doubt in a properly designed language the compiler may be able to infer the type of each entity correctly; but this also means that the human reader must mentally perform the same inferences.

The second benefit is particularly important if we consider types as they exist in modern programming languages: not just integers, booleans and reals as in the simple examples sketched above, but models of possibly complex objects from some external reality -- airplanes, documents, power plants, computer systems and so on. Then it becomes essential to tell the program reader whether a certain variable whose values will be [integer, string] pairs is of type BOOK (publication year, title) or PERSON (age, name), even though the internal structure may be the same.
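The point is easy to demonstrate in any language that lets you name structurally identical types differently. A small Python sketch (the field values are invented for illustration):

```python
# Two types with the same internal structure -- [integer, string] pairs --
# but entirely different meanings; the declared type documents the intent.
from typing import NamedTuple

class Book(NamedTuple):
    publication_year: int
    title: str

class Person(NamedTuple):
    age: int
    name: str

b = Book(1989, "Object-oriented column")
p = Person(1989, "Object-oriented column")

print(tuple(b) == tuple(p))      # same underlying representation...
print(type(b) is type(p))        # ...but distinct declared types
```

The declaration, not the bit layout, tells the reader what the entity means.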

The goals of typing

The preceding discussion sets the background for the discussion of types in specific programming languages and Eiffel in particular. Before proceeding, let's sum up some of the most important implications of this discussion.

First, typing is in a way a superfluous notion: every program that can be written using types can also be written in a totally untyped language. This means that if you never make any mistakes when writing software, and never have to read others' software (or your own after a while), then you may safely stop reading this column here. The advantages to be expected from types are essentially in these two areas:

  • Software correctness.
  • Readability, with the ensuing benefits (ease of maintenance, modification, debugging etc.).

Second, as noted, the notion of typing spans both languages and supporting tools. We must consider two separate issues when assessing a given environment: Is the language typed (that is to say, ``typable'') statically or dynamically? Then, if the answer to the first question is yes, do the tools of the environment actually perform all the theoretically possible static or dynamic checks?

The third observation is that static typing is always a pessimistic policy. A statically typed environment will reject some ``potential software'', that is to say some texts which we would have been allowed to execute, or attempt to execute, in a less demanding environment, and which might in fact execute properly in some cases. A trivial example is a program extract of the form

	if user_input = yes then
		x := 0
	else
		x := True
	end;
	x := x + 1

which will be rejected because x may receive a boolean value but then participates in an integer addition. If, however, the user_input for a particular run of the program happens to be yes, no harm will result. In static typing, we reject a program as soon as we fear anything could go wrong; this means that we may reject a program that would have worked in some cases, or even in most cases. This is the price to pay for obtaining reliability through consistency checking.
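Transcribed into a dynamically checked language the extract runs without complaint when the input is yes. A Python rendition (names invented; note that Python's booleans are themselves integers, so here even the bad branch proceeds silently, an instance of the first, unfavorable behavior discussed earlier):

```python
# The rejected extract, as executable Python. A static checker must
# refuse it because x *may* become a boolean; a dynamic environment
# happily runs the harmless case.

def run(user_input):
    if user_input == "yes":
        x = 0
    else:
        x = True
    return x + 1  # only meaningful when x is an integer

print(run("yes"))  # the case that would have worked: 1
print(run("no"))   # True + 1 == 2 in Python: the error goes unnoticed
```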

The situation may be compared to the use of driving regulations. I (generally) stop at red traffic lights even though in some cases driving through would be patently safe -- on a clear day when there is no other vehicle around. But having a simple, universal set of rules makes life safer and easier for everyone; the price to pay is that in some cases you can't do something even though it seems to make sense.

The issue is whether the limitations are bearable in practice: are we actually losing anything? In other words, does any of the potential software texts that a statically typed environment will reject describe computations that cannot be expressed with comparable simplicity and efficiency by type-safe software texts? Answers to this question, in the context of a particular statically typable language, can only be given by analyzing real applications written in this language.

The last important point to remember from the preceding discussion is that the whole type issue remains rather boring as long as we only deal with predefined types such as integers, reals and the like. Things get interesting when we get powerful type construction mechanisms as they have been made available in Algol W and subsequent languages such as Pascal, Algol 68, C -- and, of course, Simula 67 and its successors.

Types in an object-oriented context

In fact it would hardly be an exaggeration to say that even though discussions of typing have gone on for a long time, everything before object-oriented programming is a prelude and that only in the object-oriented context do types take on their real significance. Three key observations may be made in support of this claim in the Eiffel context:

  • The existence of a clear and powerful basis for defining the notion of type.
  • The possibility to define type hierarchies through multiple inheritance.
  • The small number of conceptually important operations (essentially two).

Let's consider these points in turn. Object-oriented programming (at least in its typed brand exemplified by Eiffel) directly rests on a theory of types: data abstraction. An abstract data type is a type defined by the list of applicable operations and the formal properties of these operations. The basic Eiffel structure, the class, is the direct correspondent of the abstract data type at the design and implementation level. (Eiffel also includes a few basic types, such as integer, character and the like. But the really important and original features of the type system apply to types defined by classes; so the discussion will be limited to these.)

Besides the availability of a sound theoretical basis, what opens the way to a whole new view of types is the existence of type hierarchies, defined through multiple inheritance. This makes it possible to define a new class as a specialization of one or more existing classes. For example, in the hierarchy illustrated by figure 1, TRAIN inherits from VEHICLE and AMORTIZABLE_EQUIPMENT. This means that the objects described by class TRAIN (the ``instances'' of this class) have some properties that they share with all objects of type VEHICLE, and some that they share with all objects of type AMORTIZABLE_EQUIPMENT. Trains have further specializations, such as PASSENGER_TRAIN and FREIGHT_TRAIN. The inheritance mechanism makes it possible to have a really interesting and meaningful type system. Individual types are not defined in isolation from each other, but instead by a constant process of specialization and combination.

The third basic reason why object-oriented programming makes typing so much more interesting is the simplicity of the conceptual framework, at least in Eiffel. Excluding standard operations (addition of integers and the like) and the three control structures (sequence, loop and conditional), Eiffel really offers two operations only: assignment and remote feature application.

  • Assignment is present because Eiffel is an imperative language. For entities x and y, you can execute
    	x := y

    which makes x refer to the same object as y, or makes x void if y was void. (In Eiffel the value of an entity such as x is not directly an object but a reference to a potential object; a reference that is not actually associated with an object is said to be void.) We shall see below what static restrictions should be imposed on such an operation in a typed context.
  • Remote feature application (called ``message passing'' in some object-oriented languages) is written as
    	x.f (...)

    and means: apply feature f to the object associated with x, if any. A feature is either an attribute (data item associated with the object) or a routine (computation to be performed using the object); in the second case it may have parameters. If an object applies a feature of its own class to itself we may talk about ``local feature application'' and the notation f (...) suffices; however this is simply a special case of remote feature application.

Remote feature application is essentially the only way to actually perform computations on objects of class types in the Eiffel context. This simplicity makes it possible to give a precise definition of what the goals of typing should be in this context:

Object-oriented typing constraint

The goal of static typing is to be able to guarantee, by mere static analysis of the text of a correct class or set of classes, that for every remote feature application x.f (...) any object associated with x at run-time will be equipped with at least one adequate feature (attribute or routine) to handle the computation of f.

This definition illustrates again that static typing is not necessarily needed by everyone. If your aim is to try out ideas and you are prepared to have executions that fail once in a while, you may not want to perform any static analysis. If, on the other hand, you want to write production-quality software and reliability is a prime concern (two of the assumptions behind the use of Eiffel), then you had better check beforehand. If x represents some kind of train and you execute

	x.emergency_stop

then run-time is definitely too late to check whether a routine emergency_stop is available for x.

The basic rules

How do we achieve the typing constraint? We must be able to check statically whether an operation is applicable to an object. Since all objects are instances of classes, the first step is pretty obvious: use explicit typing. In Eiffel every entity must be unambiguously declared as being of a certain type. For example, we may declare

	t, t1: TRAIN
	pt: PASSENGER_TRAIN

Since each class introduces a set of features, static type checking will ensure that an operation such as pt.f is rejected unless f is a feature applicable to objects of type PASSENGER_TRAIN. Because of inheritance, however, this does not necessarily mean that f should be one of the features declared in class PASSENGER_TRAIN: it could be a feature inherited from an ancestor. Any feature declared in a class will be potentially available to all the descendants of that class.

For example class TRAIN could introduce an attribute number_of_cars, which is then automatically part of PASSENGER_TRAIN and FREIGHT_TRAIN.
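Feature inheritance is readily mimicked in any object-oriented language; a minimal Python analogue of the train classes (constructor argument invented for the sketch):

```python
# number_of_cars, introduced in Train, is automatically a feature of
# every descendant; nothing needs to be redeclared.

class Train:
    def __init__(self, number_of_cars):
        self.number_of_cars = number_of_cars  # an attribute, i.e. a feature

class PassengerTrain(Train):
    pass

class FreightTrain(Train):
    pass

pt = PassengerTrain(8)
print(pt.number_of_cars)          # inherited from Train
print(isinstance(pt, Train))      # PassengerTrain is a descendant of Train
```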

This gives the first type rule:

Type rule 1

For a remote feature application x.f to be correct, where x is declared of type C, f must be a feature declared in an ancestor of C.

Two important comments on this rule. First, the ``ancestors'' of a class are defined as the class itself, its parents, grandparents and so on (``descendant'' is the reverse notion); so f could of course be declared in C itself. Second, the condition given is necessary for x.f to be correct, but not sufficient. In particular, Eiffel supports information hiding: only those features declared as exported may be used in a remote feature application; so another necessary condition is that f should be exported by C. I will discuss information hiding, the export mechanism and its relation to inheritance in another installment of this column.

To get the full picture, we must take yet another aspect into account: polymorphism. This is essential for achieving the flexibility of object-oriented programming. The idea is simple: allow entities to refer to objects of not just one type, as in classical typed languages, but more than one. Polymorphism occurs through assignment, and also through parameter passing. In the above example, we could have an assignment of the form

	t := pt

where t, of type TRAIN, is assigned a reference to an object of type PASSENGER_TRAIN.

Clearly, to satisfy the typing constraint, we must restrict the extent of permissible polymorphism. The second basic type rule follows:

Type rule 2

For an assignment x := y to be correct, the type of x must be an ancestor of the type of y.

The rule also applies to passing y as actual argument to a routine of which x is a formal argument.
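Since Python has no built-in static checker, the rule can only be simulated at run-time, but the simulation makes the direction of the rule concrete (the helper `checked_assign` is hypothetical, standing in for what an Eiffel compiler verifies statically):

```python
# Type rule 2 simulated with isinstance: x := y is legal only if the
# type of x is an ancestor of the type of y.

class Train: pass
class PassengerTrain(Train): pass

def checked_assign(target_type, value):
    """Allow the assignment only if value's type is a descendant of target_type."""
    if not isinstance(value, target_type):
        raise TypeError("source type is not a descendant of the target type")
    return value

t = checked_assign(Train, PassengerTrain())       # legal: Train is an ancestor
try:
    checked_assign(PassengerTrain, Train())       # illegal the other way round
    rejected = False
except TypeError:
    rejected = True
print(rejected)
```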

Dynamic binding

Object-oriented techniques provide an extra degree of flexibility through dynamic binding. Dynamic binding is what lies behind the apparently surprising phrase used in the above formulation of the Typing Constraint: for x.f to be correct, there should be at least one version of f available. How could there be more than one?

The answer is that a feature may be redefined. You can for example have a routine stop in class TRAIN, and a different implementation of this routine for PASSENGER_TRAIN (a passenger train is perhaps supposed to stop more smoothly). So after the execution of

	if "Some run-time condition" then
		t := pt
	else
		t := ft
	end
Here t, although statically declared as a TRAIN, may dynamically refer to an object of type PASSENGER_TRAIN or FREIGHT_TRAIN. In the former case two versions of routine stop are available for the execution of a call of the form

	t.stop

Dynamic binding, as implemented in Eiffel, means that the choice is made on the basis of the dynamic form of t -- the passenger train version will be applied in the case mentioned.

Dynamic binding (which has far-reaching consequences on the architecture of software systems) is sometimes confused with dynamic typing. Here we have the combination of static typing and dynamic binding. The two techniques are complementary; given a remote feature application such as t.stop, then

  • Static typing means the ability to guarantee that at least one implementation exists for stop, regardless of what may happen to t through polymorphism.
  • Dynamic binding means that if there is more than one implementation, the best one will be automatically selected -- ``best'' in the sense of being directly adapted to the run-time form of t.

This is -- I believe -- the right combination.
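The combination can be demonstrated in Python, whose method dispatch is dynamically bound (the class and routine names follow the article's example; the random condition stands in for ``some run-time condition''):

```python
# Dynamic binding: the version of stop that runs is selected from the
# run-time form of t, not from its static declaration.
import random

class Train:
    def stop(self):
        return "generic stop"

class PassengerTrain(Train):
    def stop(self):                # redefined: a smoother stop
        return "smooth stop"

class FreightTrain(Train):
    pass                           # inherits the Train version

t = PassengerTrain() if random.random() < 0.5 else FreightTrain()
# t is "a Train" as far as any static view goes; the call dispatches
# on the object actually attached to t at run-time.
print(t.stop())
```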


Genericity

The rules given above may appear sufficient to make an object-oriented language type-safe. But in practice they are not. The reason is that in any real use of object-oriented programming you will want to rely on classes describing general data structures, such as lists, queues, binary trees etc., sometimes called ``container classes''. In Eiffel these are provided by the Basic Library, a repository of reusable classes.

The typing requirement means that you will statically want to guarantee the consistency of any such data structure: a certain binary tree should contain only trains, or only books, but not a mixture of both types of objects. This appears to make it impossible to declare a general class BINARY_TREE: if we write a routine of this class, say insert, to be called under the form

	bintree.insert (value)
	-- (In practice the routine may need further arguments)

then we really don't know how to declare value.

It does not help to assume that the language's type system is ``rooted'', that is to say includes a class called something like ANY from which all classes are assumed to be descendants. (The Eiffel type system, by the way, is not rooted.) Declaring value of type ANY would mean that no type check is possible on binary trees: you have simply relaxed the requirements on binary trees to the point where a binary tree can contain a mixture of objects of arbitrary types. This means that even if the language is officially typed you lose all the benefits of typing in this case since the elements appearing in a binary tree are all assumed to be of the blandest, most general possible type.

This solution is unsatisfactory, but at least it does not violate the type system of the language; it simply uses it in a degraded form where every combination is legal. A much worse solution is to allow circumventing the type system at will by using ``casts'' in the C tradition. A cast is a forced type conversion: you would declare the type of value in class BINARY_TREE using a ``neutral'' type such as integer (in the C world, programmers would use the rather bizarre CHAR*, meaning, of all things, pointer to characters); then for actual calls of insert you would force the type system to accept your calls by ``casting'' the actual argument of insert to type integer. Clearly, an environment in which this kind of game can be played makes a mockery of any notion of static typing. Better have dynamic typing or none at all.

Stating the problem of container data structures in a typed environment fairly naturally suggests a solution: parameterized classes. This is known as genericity and is supported in Eiffel. (The only other well-known object-oriented language that has a similar facility is DEC's Trellis-Owl.) The idea is to allow classes declared under the form

	class BINARY_TREE [T] ...

where the generic parameter T represents any type. In class BINARY_TREE, the argument of insert will be declared to be of type T.

There can be more than one generic parameter, as in a hash table implementation by a class

	class HASH_TABLE [T, U] ...

where the first parameter represents the type of table elements and the second represents the type of the keys used to index elements.

In the declaration of an entity whose type is a generic class, actual types must be provided for the generic parameters, as in

	tb: BINARY_TREE [TRAIN];
	bb: BINARY_TREE [BOOK]

Genericity then makes type checking palatable in practice by reconciling the type constraints with the flexibility needed for general container classes. If b is of type BOOK and t of type TRAIN, then it will be permitted to write remote feature applications of the form

	tb.insert (t)
	bb.insert (b)

but not, for example, bb.insert (t). To complete the picture, we add the inheritance-based brand of polymorphism: it is clear from the preceding discussion that whenever an entity of type TRAIN is expected, an entity of any descendant type will also be acceptable. So we can also write

	tb.insert (pt)

where pt is of type PASSENGER_TRAIN.

These facilities are so essential that one cannot seriously claim to have a typed environment for object-oriented programming unless genericity is supported.
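Genericity has since found its way into several languages; Python's typing notation offers a rough analogue (a sketch only, with a trivial insert that merely records values; a checker such as mypy, not the interpreter itself, would perform the static rejection):

```python
# BINARY_TREE [T] rendered as a Python Generic class.
from typing import Generic, List, TypeVar

T = TypeVar("T")

class BinaryTree(Generic[T]):
    def __init__(self):
        self._values: List[T] = []

    def insert(self, value: T):
        self._values.append(value)

class Train: pass
class PassengerTrain(Train): pass
class Book: pass

tb: BinaryTree[Train] = BinaryTree()
bb: BinaryTree[Book] = BinaryTree()

tb.insert(Train())           # legal
tb.insert(PassengerTrain())  # legal too: a descendant of Train
bb.insert(Book())            # legal
# bb.insert(Train())         # a static checker would reject this line
print(len(tb._values))
```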

In practice, one more facility is needed: constrained genericity, which can be illustrated by considering not just binary trees but binary search trees (where nodes are ordered according to some comparison criterion). The generic parameter T to BINARY_TREE represents an arbitrary type. In cases such as binary search trees, however, we need to assume the availability of some operations on this type. For binary search trees, each tree node has a value which must be comparable to other such values; this requires that operations such as greater_than and equal be applicable to entities of type T. But in object-oriented programming, a type described by its operations is precisely what a class expresses. In Eiffel the corresponding facility is known as constrained genericity. Class COMPARABLE of the Basic Eiffel Library describes comparable objects with the routines mentioned above (greater_than etc.); the declaration of BINARY_SEARCH_TREE will begin with

	class BINARY_SEARCH_TREE [T -> COMPARABLE] ...

where the arrow, read as ``should inherit from'', indicates constrained genericity. Then the declaration of an entity representing a binary search tree, such as

	employee_table: BINARY_SEARCH_TREE [EMPLOYEE] ...

is only valid if the actual generic parameter, here EMPLOYEE, is a descendant of COMPARABLE. Here this assumes that a comparison criterion, with the associated operations, is available on instances of class EMPLOYEE; this is the condition for storing such instances in binary search trees.

Similarly, we should probably constrain the second generic parameter of HASH_TABLE above so as to require that a hash operation, abstractly described in class HASHABLE, be applicable to it:

	class HASH_TABLE [T, U -> HASHABLE] ...

Constrained genericity is the last ingredient needed for a very powerful concoction: flexibility through polymorphism, redefinition and dynamic binding, reconciled with type-safety thanks to (possibly constrained) genericity and the inheritance-based typing rules.

Type redefinition

To completely cover the Eiffel type system would go beyond the scope of this column. As a conclusion, however, I do want to briefly address one further issue, especially since it has prompted some disagreement among people who have looked carefully at Eiffel from a theoretical standpoint. (See in particular recent discussions in the comp.lang.eiffel network newsgroup.)

The issue has to do with type redefinition and arises in various contexts. The simplest is perhaps redefinition of the type of a routine argument.

To see it on a simple example, assume that because of a heinous crime you have been sentenced to live for a year without access to either genericity or the classes of the Basic Library, most of which are generic anyway (the judge was humane, however, and stopped short of barring you from using Eiffel altogether). This is a doubly sad predicament, but staying away from genericity will help keep the example small. To manipulate lists of trains, you cannot rely on the list classes from the Basic Library and you must write a specific class TRAIN_LIST, with a routine

	insert (t: TRAIN)
		-- There might be other arguments,
		-- such as the insertion position

Next you need lists of passenger trains. Again, because you cannot use generic library classes, this will have to be a new class, say PASSENGER_TRAIN_LIST, which inherits from TRAIN_LIST. In this class, insert will need to be redefined so as to take PASSENGER_TRAIN arguments:

	insert (pt: PASSENGER_TRAIN)

This form of redefinition is permitted in Eiffel; the rule is that when you redefine an entity (such as a routine argument or an attribute) in a descendant class, the new type must itself be a descendant of the original type, as here. This rule is the basis for a mechanism that appears specific to Eiffel, anchored declaration (which we won't cover here).

This rule makes the task of type checking more difficult. For example, the following extract is incorrect:

	tl: TRAIN_LIST; pl: PASSENGER_TRAIN_LIST
	t1, t2: TRAIN; pt1: PASSENGER_TRAIN
	...
	tl.insert (t1)
	pl.insert (pt1)
	-- Everything fine so far
	tl := pl
	tl.insert (t2)
	-- This is what makes type checking more difficult

The last instruction inserts an arbitrary train (not necessarily a PASSENGER_TRAIN) into the list; however this list is still known under the name pl, that is to say it may still be handled as a passenger train list.

Such examples have led some people to think that the Eiffel rule for redefining the type of routine arguments and attributes is ``wrong'' and that, in fact, the reverse rule should be applied (a rule whereby in a redefinition the original type may only be replaced by an ancestor type). This is indeed the rule used in some formal semantics models of inheritance (see Luca Cardelli, A Semantics of Multiple Inheritance, in Semantics of Data Types, edited by Kahn and Plotkin, Springer-Verlag, 1984).

In practice, however, the type rule of Eiffel, known as covariant redefinition, is the one that seems appropriate. It does make type checking more difficult and has given rise to extensive discussions that are beyond the scope of this article.