http://qs321.pair.com?node_id=318257

This meditation is intended as an antidote to the over-enthusiasm for all things OO that I see in some people. I have no intention of suggesting that OO is not very useful. However, it is a limited approach to modelling the real world in code, and it is worthwhile to understand some of those limitations.
I started out intending to respond somewhat grumpily to Re: Often Overlooked OO Programming Guidelines, which stakes out the opposite extreme position. In particular it says that There is simply no such thing as "useless OO", and the basic points used to support this are:
  1. Everything in the real world is an object (class is the collective abstraction of object).
  2. Programming is the way used to resolve real world problems.
  3. In order to be able to resolve the problem, especially through a machine, you need a way to (observe and ) describe the entities concerned, and the process to resolve the problem.
  4. OO is one way to describe real world (or to be precise, to perceive it and then describe it.)
I disagree to a greater or lesser extent with all 4 claims. Here are some of my underlying views:

  • The world is not really made of objects: This is true both in a trivial and a profound way.

    First the trivial. Let's take something simple, like day and night. They are different, aren't they? Well, where in the evening do you divide the one from the other? There is no intrinsically clear line. And if you can travel in a jet, the boundary between today and tomorrow becomes confusing - you can get from today to tomorrow without experiencing night by flying around the world (or by standing at the right latitude). The world is full of vague boundaries like this, things that merge into each other. In fact if you examine even simple things, like a chair and yourself, at a microscopic enough level, there is always a fuzzy boundary between "things". And frequently the more that you know about them, the harder it becomes to say what they are.

    Now for the profound. The world that we are interested in is largely constructed of artificial social constructions. Speaking for myself, the vast majority of professional code that I have written has involved "fake" things like money (most of which doesn't physically exist), debts, holidays, contracted permissions, etc. In other words I'm dealing with "things" whose only reality is convention. Conventions whose intrinsic non-reality is demonstrated when they change over time, or depending on location, causing no end of headaches for software maintainers.
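
    A tiny sketch of the kind of headache I mean (the data and names here are invented for illustration): code that models a holiday has nothing physical to check against, only a table of conventions that shifts underneath it.

        use strict;
        use warnings;

        # Whether a date is a holiday is pure convention: it varies by
        # jurisdiction and changes from year to year, so yesterday's
        # correct table is tomorrow's bug.  (Partial, invented data.)
        my %holidays = (
            'US-2003' => ['2003-01-01', '2003-07-04'],
            'US-2004' => ['2004-01-01', '2004-07-05'],  # July 4 observed on Monday the 5th
        );

        sub is_holiday {
            my ($country, $date) = @_;
            my ($year) = $date =~ /^(\d{4})/;
            my $table  = $holidays{"$country-$year"}
                or die "no convention recorded for $country in $year\n";
            return grep { $_ eq $date } @{$table};
        }

        print is_holiday('US', '2004-07-05') ? "holiday\n" : "working day\n";

    There is no fact of the matter that the code could appeal to; when the convention moves, the code is simply wrong until a maintainer chases it.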

  • Figuring out the correct relationships between things is both arbitrary and hard: Mental models embodied in programs contain a mixture of things (of varying degrees of artificiality) that the program is about, and made-up concepts and parts internal to the programming system. When you set out to write a program it is not at all obvious which real things need to be included, in what detail, what they are (confusion over that leads to a lot of spec questions!), and so on. It gets even more arbitrary when you start to organize your program and decide whether you are going to, say, use a formal MVC approach with invented Controllers and Views in addition to the Models of real things.

    The fact that different teams of competent programmers can come up with different designs to tackle the same problem demonstrates the arbitrariness of these choices. Anyone who has had a design fall apart on them is painfully aware of how hard it is to come up with good choices.

    If it were as simple as saying that there is a simple, obvious reality that we just have to describe accurately, then we would do much better at software engineering than we do.

  • Even when relationships are clearly understood, it is not always clear how to capture them with OO: This is put better in The Structure and Interpretation of Computer Programs than I can put it myself. (For those who don't know, SICP is a true classic.) As a footnote there puts it,
    Developing a useful, general framework for expressing the relations among different types of entities (what philosophers call "ontology") seems intractably difficult. The main difference between the confusion that existed ten years ago and the confusion that exists now is that now a variety of inadequate ontological theories have been embodied in a plethora of correspondingly inadequate programming languages. For example, much of the complexity of object-oriented programming languages -- and the subtle and confusing differences among contemporary object-oriented languages -- centers on the treatment of generic operations on interrelated types.
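
    To make that concrete, here is a minimal sketch in Perl (the package names and layout are my own invention, not from SICP): with single dispatch, an operation like addition over several numeric types ends up in a hand-written type table that every new type forces you to revisit.

        use strict;
        use warnings;

        package MyRational;
        sub new {
            my ($class, $num, $den) = @_;
            return bless { num => $num, den => $den }, $class;
        }

        package MyComplex;
        sub new {
            my ($class, $re, $im) = @_;
            return bless { re => $re, im => $im }, $class;
        }

        package main;

        # Neither type "owns" mixed-type addition, so the cross-type
        # logic lives here, outside both classes.  Add a third numeric
        # type and every branch needs rethinking.
        sub add {
            my ($x, $y) = @_;
            if (ref($x) eq 'MyRational' and ref($y) eq 'MyRational') {
                return MyRational->new(
                    $x->{num} * $y->{den} + $y->{num} * $x->{den},
                    $x->{den} * $y->{den},
                );
            }
            for ($x, $y) {    # coerce any rational up to complex
                $_ = MyComplex->new($_->{num} / $_->{den}, 0)
                    if ref($_) eq 'MyRational';
            }
            return MyComplex->new($x->{re} + $y->{re}, $x->{im} + $y->{im});
        }

        my $sum = add(MyRational->new(1, 2), MyComplex->new(3, 4));  # 1/2 + (3+4i)

    The awkwardness is exactly what the footnote names: the generic operation on interrelated types belongs to no single class, so each OO dialect invents its own partial answer (coercion, multimethods, roles, ...).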

  • The definition of OO is unclear: Do we allow single dispatch? Multiple dispatch? Single inheritance? Multiple inheritance? Do we have prototype-based inheritance? Some class-based model? Something more sophisticated (like Perl 6's roles)? Is everything an object? Do we call it object-oriented if you have lots of accessor methods?

    For every one of these choices I can name languages and OO environments that made that choice. I can name ones that didn't. I can find people who argue that each option is the "right" choice. Yet these choices profoundly alter what it means to be "object oriented". They alter the kinds of strategies that you can use. And, as indicated in the SICP quote, each combination is unsatisfactory in some ways.

    Yet despite this, you can find plenty of people who are quick to argue that something is "not real OO".
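
    As one illustration of how much these choices matter, here is a sketch (class names invented for the purpose) of the "double dispatch" contortion that a single-dispatch object system forces on you when an outcome depends on the types of two objects at once:

        use strict;
        use warnings;

        # Single dispatch picks a method by the type of one object only,
        # so deciding rock-paper-scissors takes two chained calls:
        # $x->beats($y) dispatches on $x, then bounces to a method
        # dispatched on $y.  ($other->beaten_by_X answers "does X beat $other?")
        package Rock;
        sub new                { return bless {}, shift }
        sub beats              { my ($self, $other) = @_; return $other->beaten_by_rock }
        sub beaten_by_rock     { 0 }
        sub beaten_by_paper    { 1 }
        sub beaten_by_scissors { 0 }

        package Paper;
        sub new                { return bless {}, shift }
        sub beats              { my ($self, $other) = @_; return $other->beaten_by_paper }
        sub beaten_by_rock     { 0 }
        sub beaten_by_paper    { 0 }
        sub beaten_by_scissors { 1 }

        package Scissors;
        sub new                { return bless {}, shift }
        sub beats              { my ($self, $other) = @_; return $other->beaten_by_scissors }
        sub beaten_by_rock     { 1 }
        sub beaten_by_paper    { 0 }
        sub beaten_by_scissors { 0 }

        package main;
        print Rock->new->beats(Scissors->new) ? "rock wins\n" : "rock does not win\n";

    In a multiple-dispatch system the same decision is one generic routine dispatched on both argument types at once; whether the version above counts as "real OO" depends entirely on which definition you started from.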

And now allow me to address each of the original points in turn:
  1. Everything in the real world is an object (class is the collective abstraction of object). I think that I've argued above that the real world isn't made of objects. And further, the "world" inside of our programs necessarily has a lot of stuff in it which has very little to do with the real world.
  2. Programming is the way used to resolve real world problems. First, my experience is that programming is more about communication and understanding between people than about what the program is supposed to do. Second, programs deal with a world at several degrees of remove. Third, I find that it is better for programs to provide tools, not solutions. Oh, computers can apply the simple rules, but you have to leave complex problem solving to people. We're better at it.
  3. In order to be able to resolve the problem, especially through a machine, you need a way to (observe and ) describe the entities concerned, and the process to resolve the problem. An ongoing tension in good design is how much you can leave out of the model. For example, look at spreadsheets. Myriads of problems have been effectively solved with spreadsheets (often by people who didn't know that they were programming), even though spreadsheets are innately horrible at really modelling any of the entities which those problems were about.
  4. OO is one way to describe real world (or to be precise, to perceive it and then describe it.) This I mostly agree with. But I would point out that every variation of OO is a different way to describe things (both real and invented), and I also claim that none of those ways are completely satisfactory.
And to address the point that started all of this, anyone who really believes that There is simply no such thing as "useless OO". should read Design Patterns Considered Harmful. Yes, adding OO can not only be useless, it can be actively counter-productive.

Disclaimer:

When I first understood OO I had a reaction that has been confirmed over time.

My background is in mathematics. Mathematicians can be broadly divided into people inclined towards algebra versus analysis. It is hard to make this division by specialty and subspecialty, but very few mathematicians have any problem telling you which side of the divide they are on.

Let me broadly describe each. Analytically inclined mathematicians like to form mental models of the topic at hand; from that intuitive understanding it becomes clear how to produce possibly long chains of calculations leading to results. Algebraically inclined mathematicians are more inclined towards abstracting out sequences of operations which have no intuitive meaning in themselves, but whose analogs have proven themselves useful in the past. This is not a question of ability. Any mathematician is competent at both kinds of thought, but will generally find one or the other far more congenial.

That said, my first reaction to OO was, "I bet that this really appeals to algebraically inclined people." This impression has been strengthened over time (and several people familiar with both fields have agreed with me).

My personal inclination was towards analysis...

UPDATE: VSarkiss corrected me on the title of SICP (programs, not programming). Fixed.