http://qs321.pair.com?node_id=695773

In the last couple of years there seems to be an increasing interest in (or hype around) run-time 'reflection' and/or run-time 'introspection'. (Are they the same thing?) I've been googling the terms and reading essentially randomly chosen articles and snippets, and opinion seems to range from its being the best thing since sliced bread to the most evil coding technique yet devised.

Update: corrected typos, including the one stvn pointed out.

There seems to be a lot of discussion of what it can do--in terms of "you can determine the type of something at runtime"--but little on why you would want or need to. And why must it be deferred to run-time?

I'm having trouble seeing past the Ruby/Objective-C thinking that says: we can do it, and not many can, so let's make much of it, as a desirable or even necessary feature.

Premise: There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking.

Counter arguments?

Preferably of the form: I use it to do X, because it's easier/quicker/cleaner/safer/more intuitive/more maintainable than the other (please specify) approach I thought about/read about/tried.

Pointers to existing discussion (of concrete uses) would also be appreciated. Thanks.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re: Runtime introspection: What good is it?
by stvn (Monsignor) on Jul 06, 2008 at 04:05 UTC
    In the last coule of years there seems to be an increasing interest (or hype) regarding run-time 'relection' and/or run-time 'introspection'. (Are they the same thing?).

    I think you mean "reflection" and yes, they are basically the same thing. Introspection tends to be more about "reading" and reflection tends to include "writing" or runtime generation of "things", but that is not a strict definition in any way.
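
    In Perl terms, a rough illustration of that split (a sketch; the class and method names are made up):

        use Scalar::Util 'blessed';

        my $obj = bless {}, 'My::Base';               # a stand-in object for the sketch

        # "reading" -- introspection
        print ref $obj, "\n";                         # 'My::Base': what is it blessed into?
        print "object\n"   if blessed($obj);          # is it an object at all?
        print "base\n"     if $obj->isa('My::Base');  # does it inherit from this class?
        print "can save\n" if $obj->can('save');      # does it have this method? (not yet)

        # "writing" -- reflection in the generative sense
        {
            no strict 'refs';
            *{'My::Base::save'} = sub { return "saved" };   # install a method at run time
        }
        print $obj->save, "\n";                       # and now it can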

    There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking.

    First of all, this is silly; it is simply not that black-and-white a problem. Some problems can be solved very elegantly in dynamic languages like Perl/Python/Ruby, but are painful or just impossible in a stricter language like Java/Haskell/etc., and vice versa.

    There seems to be a lot of discussion of what it can do--in terms of "you can determine the type of something at runtime"--but little on why you would want or need to? And why it must be deferred to run-time?

    Let's take a real-world, highly useful use of runtime introspection: a basic clone function (incomplete and simplified, of course).

    sub clone {
        my ($value) = @_;
        if (ref $value) {
            if (blessed $value) {
                if ($value->can('clone')) {
                    return $value->clone;
                }
                else {
                    die "Sorry, object is not cloneable";
                }
            }
            else {
                if (ref $value eq 'ARRAY') {
                    return [ map { clone( $_ ) } @$value ];
                }
                elsif (ref $value eq 'HASH') {
                    return { map { $_ => clone( $value->{$_} ) } keys %$value };
                }
                else {
                    die "Sorry, can't clone that $value";
                }
            }
        }
        else {
            return $value;
        }
    }
    Every single one of those calls to ref and to can is runtime type introspection. Could you defer that to compile-time? Sure, but only by adding more facilities to the language to support that. Here is a Perl6-ish version of the above code using multi-methods:
    multi sub clone (Object where { $_->can('clone') } $value) {
        return $value->clone;
    }
    multi sub clone (Scalar $value) {
        return $value;
    }
    multi sub clone (ArrayRef $value) {
        return map { clone($_) } @$value;
    }
    multi sub clone (HashRef $value) {
        return map { $_ => clone( $value->{$_} ) } keys %$value;
    }
    Now the compiler could aggressively compile your program to the point whereby it could find every point at which clone was called, look at what $value contains, and "unroll" that code, thereby making all the decisions at compile time. But that is a lot of work and a lot of static analysis on the compiler's part, and that kind of analysis only works if the language you're analyzing has a strong theoretical foundation. For instance, for the above multi-method code to work, all those types must be part of the same type "set", and in order for the compiler to actually generate efficient code you must provide a branch for each member of that set, which I clearly am not doing. Maybe my compiler could infer those missing conditions? Well, that's nice assuming it got them right, but how can I be sure? If you have ever really tried to hack with Haskell or OCaml you will know exactly what I am talking about.

    Okay, so my point here is that you have a waterbed. If you push it down on one side (moving the runtime checks to compile time), it pushes up on the other side (now your compiler is much more complex and you have a type system to fight with). There is no one right solution to this problem; it is a balance that has to be struck by the compiler writer/language designer as to where they want their complexity to be.

    -stvn
      Virtual functions are not generally thought of as introspection. There's a fairly conventional notion of "C++ without RTTI" that most people would not consider to be introspection, pretty much because there's no standard/portable/easy interface for runtime querying and possibly modification of metaobjects. I don't think BrowserUk is muddying the waters in using this common notion.
Re: Runtime introspection: What good is it?
by Joost (Canon) on Jul 06, 2008 at 13:13 UTC
    Premise: There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking.

    Except that compile-time decision taking can only be done at compile time. And many of the languages that have no, or only limited, introspection also don't have a compiler available at run time.

    Once your framework becomes generic enough, it can become too constricting to force a static interface on its components. And the static interface may even turn out to be slower than the one that uses introspection and related techniques.

    Consider, for example, an OO-relational mapping layer where the fields are all accessible via methods (or public properties, if you must):

    my $object = ....;
    $object->field1 = 2;
    my $d = $object->field1;
    # ...
    In many languages, this is much faster than the alternative:
    my $object = ....;
    $object->set("field1",2);
    my $d = $object->get("field1");
    But when you do need to have generic routines (for instance, to print the contents of some object), you need some functionality like:
    for my $field ($object->fields) {
        print $object->$field();
    }
    Otherwise you're stuck with implementing all the "reflection" stuff in the API, which generally means the API will be ugly and slow.
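
    For what it's worth, with hash-based objects that fields method can itself be one line of introspection (a sketch, assuming every hash key is a field):

        sub fields {
            my $self = shift;
            return keys %$self;    # hash-based objects: every key is a field
        }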

    Aside: all of this was already well understood and implemented in the 70s with Smalltalk. Why the static OO guys seem to think runtime inspection, "duck typing" and meta programming is something newfangled and scary is beyond me.

      Why the static OO guys seem to think runtime inspection, "duck typing" and meta programming is something newfangled and scary is beyond me.

      Java dragged C++ kicking and screaming to the easy half of Lisp, circa 1972.

        Now that should someday be a chapter's introductory quote in a programming book yet to be written.

Re: Runtime introspection: What good is it?
by syphilis (Archbishop) on Jul 06, 2008 at 14:00 UTC
    There seems to be a lot of discussion of what it can do--in terms of "you can determine the type of something at runtime"--but little on why you would want or need to?

    One of the nice things about being able to "determine the type of something at runtime" is that it enables one to change the "type" on the fly - which is precisely what perl does wrt numification of strings, where a "string" (PV) type is changed to IV, UV or NV as needed.
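
    You can actually watch perl do this with Devel::Peek (a minimal demonstration; the exact output varies by perl version):

        use Devel::Peek;

        my $n = "42";      # starts life as a plain PV (string)
        Dump $n;           # shows only a PV slot
        my $sum = $n + 0;  # numeric context forces numification
        Dump $n;           # $n is now a PVIV: an IV slot has been added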

    Regarding perl, I've also found that being able to determine the "type of something at runtime" has ramifications for operator overloading. That is, operator overloading often benefits from being able to distinguish between object, PV, IV, UV and NV. (It may, in some cases, even *rely* on being able to make that distinction.)
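
    For example, an overloaded + often has to ask what it was handed before it can decide what to do (a sketch; BigNum and its internals are invented):

        package BigNum;
        use Scalar::Util qw( blessed looks_like_number );
        use overload
            '+'  => \&_add,
            '""' => sub { ${ $_[0] } };

        sub new { my ($class, $n) = @_; return bless \$n, $class }

        sub _add {
            my ($self, $other) = @_;
            if ( blessed($other) && $other->isa('BigNum') ) {
                return BigNum->new( $$self + $$other );   # object + object
            }
            elsif ( looks_like_number($other) ) {
                return BigNum->new( $$self + $other );    # object + IV/UV/NV
            }
            die "can't add '$other' to a BigNum";         # a PV that isn't a number
        }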

    Whether these are *good* things is not something I want to argue about - though obviously there's plenty of support for this behaviour ... or it wouldn't exist to begin with.

    (I should also add that I'm not really familiar with "reflection" and "introspection". Apologies if I've missed the mark.)

    Cheers,
    Rob

      Whilst selecting a codepath depending upon the (sub)type of a scalar at runtime is definitely a form of introspection, the fact that it is under compiler/interpreter control, rather than programmer control, makes it a somewhat different animal from the norm.

      Indeed, I would say that run-time dispatching upon the base type (HASH/ARRAY/SCALAR/CODE etc.) of a reference is likewise something of a 'special case'.

      The types of reflection that I'm more intrigued by the need for are those provided by the use of UNIVERSAL::ISA and UNIVERSAL::can and similar. These seem to be used to provide for 'generic programming' a la C++-style templating solutions. In simplistic terms, as a substitute for providing essentially copy&paste dedicated methods, or resorting to MI and/or deep inheritance trees.

      An alternative to introspection for dynamic languages is compile-time code generation.
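
      For example, accessors can be generated while the program is still compiling, so that no introspection is needed later (a sketch; Point and its fields are invented):

          package Point;

          BEGIN {                                # runs at compile time
              no strict 'refs';
              for my $field (qw( x y )) {
                  *{"Point::$field"} = sub {
                      my $self = shift;
                      $self->{$field} = shift if @_;
                      return $self->{$field};
                  };
              }
          }

          sub new { my $class = shift; return bless { @_ }, $class }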


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Tsk tsk, you can't declare special cases that favor your side of the argument this late in the game.

        Whilst selecting a codepath depending upon the (sub)type of a scalar at runtime is definitely a form of introspection, the fact that it is under compiler/interpreter control, rather than programmer control, makes it a somewhat different animal from the norm

        First of all, it happens at runtime and it is introspection, so therefore it is runtime introspection. It may seem that the programmer is not explicitly asking for it to be done (as in my example above with ref/can), but the programmer's choice of language features has caused the interpreter to do runtime introspection just the same. Ignorance of the underlying technique used by the language to accomplish the things you ask for does not make it any less what it is, which is runtime introspection.

        Indeed, I would say that run-time dispatching upon the base type (HASH/ARRAY/SCALAR/CODE etc.) of a reference is likewise something of a 'special case'

        Nope, wrong, no special cases allowed; you made a pretty clear statement, let's not muddy it up.

        Even that aside, let's take the case of OCaml. OCaml is a very strongly typed language, and the OCaml compiler spends quite a lot of time rigorously analyzing the code to make sure it is well typed and optimizing it as much as possible. Why does it spend all this time? Well, because once compilation is finished, OCaml discards pretty much all of the type information. Yes, that means ref ocaml_variable == 'ARRAY' is completely impossible to do.

        So (IMO anyway) it is not fair to add a special case here, because this is just simply not possible in other languages which don't have runtime introspection available. If this were some kind of universal language feature, then maybe, but it is not.

        The types of reflection that I'm more intrigued by the need for are those provided by the use of UNIVERSAL::ISA and UNIVERSAL::can and similar. These seem to be used to provide for 'generic programming' a la C++-style templating solutions. In simplistic terms, as a substitute for providing essentially copy&paste dedicated methods, or resorting to MI and/or deep inheritance trees.

        Yes, these things are far more exciting; however, I fail to see how they are any different from the clone example I provided above. Sure, if everything is an object, then I can use polymorphism instead of manual type introspection to fake polymorphism, but just as I said above, just because the runtime system is doing it for you and you are not explicitly asking for it doesn't make it any less what it is.

        An alternative to introspection for dynamic languages is compile-time code generation.

        Well, no, that is not 100% true. Compile-time code generation has its limits; some things just simply cannot be known at compile time (user input, information from the network, etc), and depending on what you are doing with those things, you can't always generate enough code to handle all those cases. And even if you could, it is likely that runtime introspection would be simpler code and quite possibly more efficient as well. Generating many reams of code to replace a simple bit of introspection seems a silly tradeoff to me.

        As I said above, the best of both worlds is the ideal. Some things are better expanded at compile time, while others are better introspected at runtime. Anyone who has spent any time writing code in very strongly typed languages like Ada/Haskell/OCaml/etc. will have encountered inflexibility that has caused them to have to write complex code for the compiler that could be solved simply and elegantly at runtime. And anyone who has spent any time writing code in more dynamic languages like Perl/Python/Ruby knows that sometimes you have to write silly and inefficient code to do something that the compiler really ought to be able to figure out on its own.

        -stvn
Re: Runtime introspection: What good is it?
by moritz (Cardinal) on Jul 06, 2008 at 17:36 UTC
    Premise: There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking.
    sub what_does_it_return {
        return rand(1) > 0.5 ? [] : {};   # bad example, but illustrates the essence
    }

    Sometimes your data comes from the outside (for example through serialization), and if you don't know its structure in advance, you have three choices:

    1. Press everything into the same structure (for example through normalization, and then store everything in SQL-like tables)
    2. Create wrapper classes that let you use a fixed static type
    3. Create dynamic classes at run time, and work with reflection / run-time inspection

    Number one is unhandy and ugly, and number two basically means building a (primitive) meta-object protocol that allows run time inspection on top of the existing object model.

    If the object model of your programming language is powerful enough, there's no need for that. You can just use native objects for data with a format that's not known at compile time. Which implies less code to write, specifically one abstraction layer that you don't need anymore.

      1. Use compile-time overloading.

        Eg.

        use classA;
        use classH;

        sub what_does_it_return {
            return rand(1) > 0.5
                ? bless( [], 'classA' )
                : bless( {}, 'classH' );
        }

        my $o = what_does_it_return( ... );
        $o->method( ... );

      Voilà. No introspection needed.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.

        Sigh. Okay, let's try to type your code, because if we don't check your type usage at compile time then it could just randomly blow up at runtime, which is not acceptable in most cases (and in the worst case could open your code to exploits).

        sub what_does_it_return () return classA | classH {
            return rand(1) > 0.5
                ? bless( [], 'classA' )
                : bless( {}, 'classH' );
        }
        So, assuming our language allows it, we can say that your function will return a type "union", meaning a value of either a classA or a classH type. So now let's try to use the variable that is returned.
        my classA | classH $x = what_does_it_return();
        So, the $x variable must be typed the same as the return value of the function, so it is again a type union. So now let's use $x.
        $x->something()
        Okay, so that's fine assuming that classA and classH both respond to the same methods, right? Nope, it is not. Let's look at the definition of classA::something.
        package classA;

        sub something (classA $self) { ... }
        Our $x is of type classA | classH, not classA, so it does not pass the type constraint for its own method. This can't be good.

        But wait, maybe you made classA and classH both derive from the same superclass; let's call it classX. Let's re-type your function now:

        sub what_does_it_return () return classX {
            return rand(1) > 0.5
                ? bless( [], 'classA' )
                : bless( {}, 'classH' );
        }
        Does this buy us anything? Nope, failed again, because there is no "something" method in classX, so the compiler generates an error when you try to do this:
        my classX $x = what_does_it_return();
        $x->something();   # compile-time blowup, classX doesn't have the "something" method!

        Okay, so maybe you don't care about types; does this invalidate my points? Nope, because your code still makes the assumption that classA and classH are 100% interchangeable in all cases, that they can be easily substituted for one another anywhere in your code and Just Work. This is an ideal case, and one that only works in very restricted cases, pretty much in trivial programs only. The moment you bring in outside code that knows nothing of the interchangeability of classA and classH, you open yourself up to a lot of errors, errors that will almost certainly happen at runtime (remember, you gave up compile-time type checking already).

        -stvn

        It should be noted that this code:

        $o->method( ... );
        requires runtime introspection to work.

        Perl does not determine the method being called at compile time; instead it will look up the package which has a method named "method" via a depth-first search through @ISA. The -> is an operator, and that operator is a function just like any other function, either built-in or defined by you. By calling the code of that operator you are explicitly telling Perl to do some runtime introspection to find and execute a method for you. So, hmm, I think maybe that saying:

        Voilà. No introspection needed.
        is not quite correct.
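
        For illustration, the lookup that -> performs is roughly equivalent to this hand-rolled walk (a sketch of the semantics, not perl's actual C implementation):

            sub find_method {
                my ($class, $name) = @_;
                no strict 'refs';
                return \&{"${class}::$name"} if defined &{"${class}::$name"};
                for my $parent ( @{"${class}::ISA"} ) {    # depth-first
                    my $code = find_method( $parent, $name );
                    return $code if $code;
                }
                return undef;    # caller then falls back to AUTOLOAD, or dies
            }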

        -stvn
        That way you add "alien" methods to your data structures, which are the wrappers I wrote about.

        The problem with wrappers is that they aren't very re-usable. In the example above you assumed that the sub what_does_it_return is in your own code base, or easily overridable. That's not necessarily the case. So you have to hope that whoever wrote the module you're using has built proper, reusable and maintainable wrappers.

        In contrast when you have introspection built into your language's OO model, you can just use that, and it will work fine.

        Another problem with overloading is that it doesn't scale very well. If you have a million data objects, why should you overload them all if you just want to call a method on one of them?

        Both of my points have in common that you need to plan in advance. You need to know or guess how the result objects of, for example, a de-serializer will be used. But you can't, because in the general case the one who uses your module is cleverer than you, and more creative, and most of all he's crazier. He'll get some weird ideas of what to do, and there won't be wrappers in place, so he'll have to resort to something different.

        So runtime introspection is at once lazy and humble - lazy because you don't try to guess in advance all use cases, and humble because you don't think you can guess them all.

        That's the reason why Perl 6 has a pluggable object model that allows introspection - if something is missing or gone wrong, it's relatively easy to add, fix or replace afterwards.

Re: Runtime introspection: What good is it?
by Anonymous Monk on Jul 06, 2008 at 02:17 UTC

    The project I am leading at this point is a combination of C# and Java. We use reflection (or introspection) to make code generic. This saves project effort big time.

    You said: "...There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking...". Yes and no; some of the reflection code can be really smart, but when you do it at compile time, it ends up as ... something like a long line of if/elsif.

    The downside I suspect is performance. That still waits to be seen as the project goes on. Nowadays you don't care whether it is slow, you care whether it is bearable.

      C# and Java? What Joy! Have fun.
        Hi. Been working on that reply since Jul 06, 2008? :)
Re: Runtime introspection: What good is it?
by Pic (Scribe) on Jul 06, 2008 at 18:21 UTC
    Premise: There's nothing that can be done with run-time introspection that cannot be done (better) by compile-time decision taking.

    I believe that JIT is a counter-argument to your premise. It's not strictly relevant to the kind of code you're interested in, but the kind of optimizations JIT does (which are a great performance boon) simply cannot be done compile-time.

      Sure they can -- just not as well.

      Okay, you can't use the polymorphic inline cache strategy at compile-time, but you can predict which variant will get called the most and emit instructions to redispatch if necessary. (I don't know of any JITs which recompile the dispatch when the call characteristics change, but I can imagine that it's possible. Factor may; I think I read something about that.)

Re: Runtime introspection: What good is it?
by dragonchild (Archbishop) on Jul 07, 2008 at 02:20 UTC
    Please create a system that fulfills all of the following requirements:
    1. The system must make decisions based on a series of rules.
    2. These rules must be changeable on the fly.
    3. The set of acceptable rule formats must be changeable on the fly. In other words, a new type of rule must be addable during run-time.
    This is not an academic exercise - it's the very essence of a trading system. It also happens to be a subset of the requirements for Prolog (and similar languages).

    Implementing #1 is easily done with a set of if-statements if you can assume a set of rules known at compile-time. Implementing #2 is easily done with a data-driven set of functions if you can assume a set of rule formats known at compile-time. #3 is the sticky wicket.

    If you allow for run-time introspection, then you can easily build this using function factories. I would be very interested in hearing a solution that is not implemented on top of some form of run-time introspection. These systems tend to be written in a language that provides run-time introspection (either to the programmer or to the compiler). If they aren't, then the programmers tend to write an interpreter which, within it, provides run-time introspection. I haven't heard of a system that meets all three requirements and doesn't use run-time introspection as a key piece to solve the problem.
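
    To make "function factories" concrete, here is roughly the shape of such a system (a sketch; the rule formats and field names are invented):

        # each entry compiles a rule specification into a closure
        my %rule_makers = (
            threshold => sub {
                my ($field, $limit) = @_;
                return sub { my $tick = shift; $tick->{$field} > $limit };
            },
        );

        # requirement 3: a brand-new rule *format*, added at run time
        $rule_makers{pattern} = sub {
            my ($field, $re) = @_;
            return sub { my $tick = shift; $tick->{$field} =~ $re };
        };

        my $rule = $rule_makers{pattern}->( 'symbol', qr/^AAPL/ );
        print "fire!\n" if $rule->( { symbol => 'AAPL US', last => 185.2 } );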


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
        In other words, write an interpreter, which is the alternative I spoke of to languages with runtime introspection. Or, uncharitably, create a language with runtime introspection when forced to host in a language without it.

        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: Runtime introspection: What good is it?
by philcrow (Priest) on Jul 07, 2008 at 19:27 UTC
    I've been producing and consuming YAML lately. If you have objects serialized to text, you need some form of introspection to reconstruct them with a generic parser. The YAML parser does not know or care about the code in those other modules. It just assumes that the names in the input correspond to classes currently loaded by whoever called the parser. It relies on the language's introspection system to convert the strings in the YAML into calls to constructors.
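
    The heart of that is only a few lines (a sketch; the node layout and attribute names are invented):

        # $node has just been parsed from the YAML stream
        my $class = $node->{class};                            # e.g. 'My::Invoice'
        die "unknown class $class" unless $class->can('new');  # introspection
        my $obj = $class->new( %{ $node->{attributes} } );     # string -> constructor call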

    I'd be happy to hear of a generic parsing method for this type of introspection, as it could make my code more efficient. But I think giving up introspection would require giving up genericity.

    Phil

    The Gantry Web Framework Book is now available.

      First. Thanks for being the first to post a real application.

      I'm going to assume that what you are doing is passing blessed handles to live (populated) instances of hash- (or array-) based objects to yaml, and having it return a string that contains:

      1. The name of the class;
      2. The attribute name/current value pairs of the instance.

      In essence, exactly the same as passing a hash to Data::Dumper, Data::Dump, Data::Dumper::Serial.

      This is a convenient side-effect of using Perl's hashes as the basis of OO. And it's not something that I would want to give up. Indeed, it's one of two primary reasons for my eschewing InsideOut implementations. Forcing the user to hack the source on those rare occasions when it is necessary to look inside an object simply isn't worth the loss of convenience, all the extra work, or the abysmal performance.

      But, it is only a convenience. It is perfectly possible to serialise (say) an inside-out object, or, as I once did for an application that needed millions of instances and where I needed to save space, a blessed scalar containing a packed string of the instance data. You simply have to provide toString() and fromString(), or STORABLE_freeze() and STORABLE_thaw(), methods in each of your classes. Extra work, but perfectly doable.
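
      For the packed-scalar case, the pair of Storable hooks might look like this (a sketch; the field layout is invented):

          package PackedPoint;
          # instances are blessed scalars holding packed data, not hashes

          sub new {
              my ($class, $x, $y) = @_;
              my $data = pack 'l2', $x, $y;
              return bless \$data, $class;
          }

          sub STORABLE_freeze {
              my ($self, $cloning) = @_;
              return $$self;                    # the packed string *is* the state
          }

          sub STORABLE_thaw {
              my ($self, $cloning, $frozen) = @_;
              $$self = $frozen;                 # restore the packed string
          }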

      With things like Moose and Class::Std (if you're that way inclined), or many of the other OO frameworks, these can (or could) even be generated for you from the class definition.

      So, whilst convenient, useful and very usable, this use case doesn't contradict my premise.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        I left out one detail -- sort of on purpose, for which I apologize. My recent work was in Java where I needed the reflection package to have a hope of completing the task. Our goal is to be able to generate and use the YAML data in java for day to day work while turning to perl to handle disasters in our clients.

        My definition of introspection or reflection, which I take as synonyms, is anything a Java programmer would need the reflection package for. Perl provides many different syntactic ways to reach the same effects, I call all of those reflection. Thus, in my book, using a string to bless an object into someone else's class is reflection.

        The most popular reflection users these days, then, are Object Relational Mappers (ORMs). They need to manufacture classes at run time, and for this they need the language's reflection system.

        Phil

        The Gantry Web Framework Book is now available.
Re: Runtime introspection: What good is it?
by sgifford (Prior) on Jul 09, 2008 at 04:06 UTC
    Here are some of the things I've used run-time reflection of some kind for, in various languages:
    • When how to perform an operation depends on the type of more than one thing. For example, if you have a Shape class and want to implement Shape::intersection to compute the intersection of two shapes. Dispatching based on one of the shapes is easy; $shape1->intersection($shape2) will find the type of $shape1 and dispatch appropriately. But unless the other shape's type is known at compile-time, there will need to be some kind of run-time type information available to know how to compute the intersection. If $shape2 is coming from a database or the network, its type certainly won't be available at compile-time.
    • Loading things at run-time. For example, I wrote a streaming database system in Java that started by reading a configuration file, then loaded the classes named there. It was best to look at the class as soon as it was loaded and make sure it was a type I could deal with, so more information was available for error reporting.
    • Automatically providing information for things like GUI editors and SOAP interface generators. It is convenient for these systems to be able to walk a class's methods to see what it can do, to make that functionality available over a network connection or a GUI.
    • Displaying information. I have a program that manages several different types of objects. I just added on a GUI displayer, and the easiest way to do that was to write a little display module, and have it say "if this is type 1, display it this way; if it's type 2, display it this way; etc...". It could have been done through subclassing, but it would have been much more complicated.
    • Working around quirky behavior. I may know that a particular class will handle a particular situation incorrectly, so I can detect that class at runtime and try to work around the problem. For example, if you know a particular class has trouble with unicode, maybe you strip it out before sending it to that class.

      This got rather long. Testimony to the thoughtfulness of your use cases; thank you again.

      Here are some of the things I've used run-time reflection of some kind for, in various languages:
      • When how to perform an operation depends on the type of more than one thing. For example, if you have a Shape class and want to implement Shape::intersection to compute the intersection of two shapes. Dispatching based on one of the shapes is easy; $shape1->intersection($shape2) will find the type of $shape1 and dispatch appropriately. But unless the other shape's type is known at compile-time, there will need to be some kind of run-time type information available to know how to compute the intersection. If $shape2 is coming from a database or the network, its type certainly won't be available at compile-time.

        (Starting with the bit I've emboldened) Sorry, but the second half of that sentence is obscuring the situation.

        You will have to know what type of shape it is, in order to instantiate the object. True regardless of whether you have:

        • A generic Shape class that (say) represents all shapes as lists of vertices. In which case you would call a single constructor for all shapes; everything coming from the DB or network will be of type Shape, and that will be known at compile time.
        • Specialised (sub)classes for each type of shape (Rect, Circle, Polygon). In this case, something within the data input will identify what shape this set of data represents, and you will then call the particular constructor for that shape. And all of those constructors will be known at compile time.

        This is not introspection. Because you cannot introspect an object until it exists. And you cannot construct it into existence if you need to introspect it to do so.

        This is simply data-driven code. You read the data, inspect a field within that data, and then dispatch to a constructor based upon what you see there. This can be done in good ol' introspection-less C, something like:

        char * buffer = malloc( ... );
        read( source, buffer );
        int type = buffer[ 0 ];
        void *o;
        switch( type ) {
            case RECT:    o = makeRectFromString( buffer );    break;
            case ELLIPSE: o = makeEllipseFromString( buffer ); break;
            ...
        }

        And if you can do it this way in C, you can do it in any other language, including those that support reflection.

        For your intersection problem, any language that supports method overloading, C++, Java etc., will allow you to code methods within your Rect subclass with signatures of:

        class Rect;
        bool intersect( Rect* );
        bool intersect( Ellipse* );
        bool intersect( Polygon* );
        ...

        So that invoking someRect->intersect( someShape ); will get to the right code without introspection.

        In Perl, which doesn't support method overloading, you would have to dispatch internally from the (single) intersect method, but that's a Perl OO-model limitation.


      • Loading things at run-time. For example, I wrote a streaming database system in Java that started by reading a configuration file, then loaded the classes named there. It was best to look at the class as soon as it was loaded and make sure it was a type I could deal with, so more information was available for error reporting.

        This is the 'plug-in' scenario. Loading a class at run-time from a filename read at run-time. This part is data-driven. It cannot be reflection, since there is nothing in memory upon which to reflect.

        For the "make sure it was a type I could deal with" part of the equation, if all plug-ins are derived from a base class, then the only check required is to verify that the class loaded is derived from that base class.

        This can be done through Java.Lang.Class or Java.Beans.Introspector. But, as I found in several sources whilst refreshing my latent memories of these, there are costs:

        The Costs of Usage

        Reflection and Introspection are powerful tools that contribute to the flexibility provided by the Java language. However, these APIs should be used only as needed and after taking into account the costs associated with their usage:

        • Reflection and Introspection method calls have a substantial performance overhead.
        • Using reflection makes the code much more complex and harder to understand than using direct method calls.
        • Errors in method invocation are discovered at runtime instead of being caught by the compiler.
        • The code becomes type-unsafe.

        What's the alternative? try{ ... } catch{ ... }

        Once the class is loaded, try instantiating a (minimal) instance, and exercising the required methods. Catch and report any errors. You can even check that the loaded class methods return sensible values--and that's something that no amount of reflection can do. All your validation is performed immediately after loading.
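
        In Perl 5 terms, that validate-on-load approach might look like (a sketch; the plugin name and its describe method are invented):

            my $class = 'My::Plugin::FromConfig';
            eval {
                (my $file = "$class.pm") =~ s{::}{/}g;
                require $file;                      # load at run time
                my $probe = $class->new;            # minimal instance
                defined $probe->describe
                    or die "describe() returned nothing useful";
            };
            warn "plugin rejected at load time: $@" if $@;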

        The rest of your main application can be written against the plug-in base class. As you can pass or use an instance of a derived class anywhere you can use an instance of its base class, your main application will be written in terms of direct method calls, and be compile-time type-checked. No further need for run-time type checks or try/catching.

        You simply define all your main application method calls as taking instances of the base class, and pass instances of the run-time loaded derived plug-in class.

        The advantages are all the opposites of the costs listed above. And, as you do not need to use reflection, you can use this technique in any language that supports runtime loading of classes and exception handling. Eg. Perl 5.


      • Working around quirky behavior. I may know that a particular class will handle a particular situation incorrectly, so I can detect that class at runtime and try to work around the problem. For example, if you know a particular class has trouble with unicode, maybe you strip it out before sending it to that class.

        I think what you are saying here is that you can use reflection to construct a workaround for the quirk, not detect the need for it. If you know enough to write code to use the reflection APIs, you know enough to write a non-reflective solution--if that is possible. The question then becomes: is a non-reflective solution possible? And that will depend very much on what the quirk is, and what language you're using.

        But the classic solution to this is to construct a subclass that inherits from the quirky class and override the troublesome methods. For your unicode-unaware example, you convert from unicode on the entry to the subclass methods, call the superclass method, and convert any returned strings back to unicode on the way out.
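
        In Perl, that wrapper might look like this (a sketch; Quirky and its process method are invented, with Latin-1 standing in for whatever the class can actually handle):

            package Quirky::Latin1Safe;
            use parent -norequire, 'Quirky';
            use Encode qw( encode decode );

            sub process {
                my ($self, $text) = @_;
                my $bytes  = encode( 'iso-8859-1', $text );   # downgrade on the way in
                my $result = $self->SUPER::process( $bytes );
                return decode( 'iso-8859-1', $result );       # upgrade on the way out
            }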

      • Displaying information. I have a program that manages several different types of objects. I just added on a GUI displayer, and the easiest way to do that was to write a little display module, and have it say "if this is type 1, display it this way; if it's type 2, display it this way; etc...". It could have been done through sub-classing, but it would have been much more complicated.
      • Automatically providing information for things like GUI editors and SOAP interface generators. It is convenient for these systems to be able to walk a class's methods to see what it can do, to make that functionality available over a network connection or a GUI.

        If you code each class with a (Java-style) toString() method, this problem is 'solved'.

        Yes, I know that doesn't solve the problem for third party classes that either fail to define a toString() method, or define one and return "Verboten!Keep your nose out sucker!", or worse :)

        More seriously, if I supply symbol files for libraries to my symbolic debugger, then it can decode and display the contents of structs and the like. Similarly, the built-in GUI editor in the only IDE I ever spent any serious time with (PWB) utilised those same symbol files, plus other compiler-generated files, to do some pretty remarkable things, including single-stepping the code backward as well as forward.

        But I will admit that these are both good use cases for reflection. The questions it leaves me with are:

        1. Is it necessary or desirable to load up every class, of every application, with all the overhead that reflection entails, in order to meet these use cases?
        2. Or would it be better to follow the PWB practice of placing this information in an ancillary file during compilation, and only loading it for those (few) use cases that need it?

          Or in the case of a dynamic language where compile-time is the early stages of run-time, have a run-time switch on the interpreter that enables reflection? Or perhaps a per-class or per-module pragma that enables it?

      (My) conclusions

      1. I have no doubt that all of these use cases could be met using a language (like C) that has no introspection capabilities, if the need was strong enough. It might entail using compiler generated ancillary files, or in effect, constructing a limited reflection capability.

      2. As I think I've shown, three of the five above can not only be met using standard OO mechanisms; I would say that those three would be better met that way.

      3. The other two, (grouped together and shown last), are strong use cases for reflection, where that capability is available.

        Though whether they are common enough to warrant the inclusion of introspection in a language is open to debate. If the reflection is done, as in Java, by (as I understand it) pulling apart the bytecode at run-time, the costs when it is not used are negligible.

        But in languages or frameworks where introspection must be supported through the retention of compile-time parsing tables or the construction of code-tree decorations, the costs in terms of space and time can be significant enough that it should only be done on demand, through the use of compile-time switches or pragmas, not by default.

      4. In general, I think that just as inheritance was overused and abused in the pioneering days of OO, and still is by novice users, so introspection is equally easily abused. And 5 or 10 years from now, it will probably have gained as bad a rep, when people step back from its rising popularity on the basis of RoR and resurgent interest in Objective-C.

        Like any technology, used sparingly with understanding of the costs, it will serve some use cases in ways that nothing else can.

      But if it is overused, just because it can be, to solve at run-time use cases that are better served by compile-time solutions, I see it suffering a backlash as the maintenance costs become evident.

      I hope that if anyone read this far, they will find some of this as useful as I have.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        Hi BrowserUK,

        Glad to be able to make you think a bit! Let me try to respond to a few of your comments. But first, we should be clearer about our definitions of reflection. I'm including in my definition finding the type of an object (using Java.Lang.Class in Java or typeid in C++ or ref in Perl), and testing whether an object inherits from another object (using instanceof in Java or dynamic_cast in C++ or isa in Perl). If you don't consider these reflection, and are only thinking of getting the methods and member variables of a class, some of my examples don't really use reflection.

        You will have to know what type of shape it is, in order to instantiate the object
        Right, this part doesn't use reflection.
        For your intersection problem, any language that supports method overloading, C++, Java etc., will allow you to code methods within your Rect subclass with signatures of:
        class Rect;
        bool intersect( Rect* );
        bool intersect( Ellipse* );
        bool intersect( Polygon* );
        ...
        So that invoking someRect->intersect( someShape ); will get to the right code without introspection.
        This is called "dynamic dispatch" IIRC, and is the behavior I would like and expect, but unfortunately not the behavior exhibited by either Java or C++. Both determine which overload of a function/method to call at compile-time, and if all you know about the object is that it's a Shape*, it will always call the overload for that type. For example, in C++:
        #include <iostream>
        using namespace std;

        class Shape {
        public:
            virtual ~Shape(){};
        };
        class Polygon : public Shape { };
        class Circle  : public Shape { };

        void ShowType(Shape *obj)   { cout << "Shape"   << endl; }
        void ShowType(Polygon *obj) { cout << "Polygon" << endl; }
        void ShowType(Circle *obj)  { cout << "Circle"  << endl; }

        int main() {
            Shape *shape;
            shape = new Polygon();
            ShowType(shape);
            shape = new Circle();
            ShowType(shape);
        }
        outputs:
        Shape
        Shape

        To make it work, we have to use runtime reflection:

        void ShowType(Shape *obj) {
            if (dynamic_cast<Polygon*>(obj))
                ShowType(dynamic_cast<Polygon*>(obj));
            else if (dynamic_cast<Circle*>(obj))
                ShowType(dynamic_cast<Circle*>(obj));
            else
                cout << "Shape" << endl;
        }
        outputs:
        Polygon
        Circle

        This is why the distinction between knowing the type at compile-time versus runtime is important; if the compiler knows the type it can call the correct overload, otherwise it will not.


        For the "make sure it was a type I could deal with" part of the equation, if all plug-ins are derived from a base class, then the only check required is to verify that the class loaded is derived from that base class.
        Right, but IIRC the class loading code returns a Class and you have to use runtime type checking to determine if it is the right sort of class.
        Once the class is loaded, try instantiating a (minimal) instance, and exercising the required methods. Catch and report any errors. You can even check that the loaded class methods return sensible values--And that's something that no amount of reflection can do. All your validation is performed immediately after loading.
        While this is possible, it is error-prone, and violates the concept of putting things in exactly one place. If I add a new method to my class, I have to remember to go add a check for that method to all places where it is loaded dynamically. If it is loaded dynamically from 3rd party code, I have to notify those parties to check for this new method. The maintenance cost is much higher than the runtime cost of doing this check, IMHO.

        I think what you are saying here is that you can use reflection to construct a workaround for the quirk, not detect the need for it
        I'm actually simply saying to detect a need for it, by looking at the type of the object.
        the classic solution to this is to construct a subclass that inherits from the quirky class and override the troublesome methods
        Easier said than done if your code is a library being used by other programs who are creating the object in question and passing it in. All users of your code would have to switch to your subclass, then switch back when the bug in the original code is fixed, which is a maintenance nightmare.

        As far as the cost of keeping type information and reflection information around, I'm not quite sure what the cost is in different languages. I think it's quite low for C++ RTTI, and I think Perl has to keep that information around anyways, so it's also quite low. But I haven't really seen reflection overused; it seems to be just inconvenient enough that it doesn't get used unless there is a genuine need.

Re: Runtime introspection: What good is it?
by tilly (Archbishop) on Jul 11, 2008 at 17:47 UTC
    Multiple thoughts. First of all Why monkeypatching is destroying Ruby has a decent, if obvious, discussion of the dangers of run-time code that modifies existing run-time code. Certainly dynamic run-time generation of code has dangers which need to be clearly understood and avoided.

    Secondly there are often conflicts between different forms of run-time dynamic stuff. For instance see Why breaking can() is acceptable where I try to explain the conflicts between how Perl defines UNIVERSAL::can and AUTOLOAD. So you can't really use the dynamic code introspection in Perl unless you know that other techniques are not being used, or at least have an idea how they might impact you.
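
    The classic demonstration of that conflict fits in a few lines (a sketch):

        package Lazy;
        our $AUTOLOAD;

        sub AUTOLOAD {                        # handles *any* method at run time...
            my $self = shift;
            (my $name = $AUTOLOAD) =~ s/.*:://;
            return if $name eq 'DESTROY';
            return "handled $name";
        }

        package main;
        my $obj = bless {}, 'Lazy';
        print $obj->frobnicate, "\n";                  # works: "handled frobnicate"
        print $obj->can('frobnicate') ? "yes" : "no";  # "no" -- can() can't see AUTOLOAD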

    Thirdly I'm going to dispute that it is better to do things at compile time than at run time. And the reason why is that at run time you have more information than you do at compile time. For example you have information about what code paths you will and will not execute, and so don't waste time dealing with what you don't need. Yes, I am talking about JIT, but JIT goes a lot further than most people realize. Dynamic Languages Strike Back has a lot to say on this topic that you might like. In particular if you combine introspection with aggressive JITing, you get the opportunity to achieve more aggressive optimizations than you could afford to do at compile time. Why? Well the problem is that at compile time there is no end to the number of combinations you might have to worry about, and if you try to optimize all of them you wind up with an extremely large executable that hits performance problems because it takes up too much memory. But when you go JIT you can see the 2-4 combinations that really get used and optimize those.

    Of course using that as an argument for using introspection in Perl is seriously disingenuous since Perl 5 does not do JIT and is unlikely to ever do JIT. :-)

    Now where have I, personally, done stuff at runtime using things like introspection and reflection? Truthfully, not often. But when I've done it, it has been useful. For example in one place in my current reporting framework I have a way that objects from lots of modules can be passed into a particular method in another module. There are several useful methods that they might implement. When I load the first module I don't know what others might exist and I don't know what will be passed in, so I leave the decision as to whether to call the method to run time where I check that the method exists by calling can, and then do one thing if it does and another if it doesn't.
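
    The shape of that run-time decision is roughly (a sketch; the method and helper names are invented):

        if ( my $method = $obj->can('extra_summary') ) {
            $report->add( $obj->$method() );          # the object provides the optional hook
        }
        else {
            $report->add( default_summary($obj) );    # fall back to generic behaviour
        }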

    Were there other ways to accomplish the same thing? Of course. But it seemed to me that the best way to do that was at run time since at compile time I simply did not have sufficient information to know what might be passed in. In another language at compile time there would have been more information and it could have been done then. However it is in the nature of the beast that this method is only called once per program run. Unless you want to add a separate compile phase (which introduces its own overheads and problems), doing this at compile time instead of run time gains you nothing and would require more overhead. So it stands as a counterexample to your thesis that it is always better to do things at compile time.

      1. JIT.

        I'm going to reject JIT as a counter argument to my premise on the basis that:

        • If you do what JIT does at compile-time, it isn't Just In Time.

          Java bytecode is frequently compiled on a different platform to where it is run. It's not practical to translate to machine code for an unknown (number of) target platform(s).

        • What JIT does is not under the control of the (application) programmer.

          Whilst it is possible to adjust one's application programming style to gain (more) benefit from JIT on a specific platform, and a particular implementation of the runtime on that platform, generically, JIT is beyond the control of the application programmer.

      2. ... so I leave the decision as to whether to call the method to run time where I check that the method exists by calling can, and then do one thing if it does and another if it doesn't.

        This is the 'plug-in' scenario.

        You could also do:

        sub Another::Module::particularMethod {
            my $o = shift;
            ...
            eval {
                $o->method( ... );
            };
            if( $@ =~ q[^Can't locate object method "method"] ) {
                do{ oneThing() };
            }
            else {
                do{ anotherThing() };
            }
            ...
        }

        Still a run-time decision. But, it can be done this way in any language that supports exceptions. No need for the inclusion of RTTI tables, or picking apart the bytecode.

        Is there any advantage to doing it this way?

        I think yes. Just because a class has a method named X, doesn't mean that X is what you think it is.

        1. That it takes the same number of parameters as you're expecting.
        2. Or the same types of parameters you're expecting.

          With some reflection APIs (eg. Java), you can discover both of these. At a considerable cost of decompiling the byte code at run-time. And at the further considerable cost of programming the logic in your code, to iterate the known public methods, with the particular name you're interested in and then check the number, and types of the parameters they expect, and the type they return.

          But even then, having done all of that discovery, you still don't know whether it:

        3. Will actually implement the same semantics as you want it to.

        Even after you've been through the laborious process of run-time discovery, when you (or whomever) eventually get around to invoking the method, it may still raise an exception--either an 'expected' one due to bad input, or an unexpected one due to its semantics being entirely different from what you are hoping for. Ie. instead of calculating some statistics, it tries to wipe your hard drive.

        So, when you eventually do get around to calling the method, you're going to have to wrap the call in an exception handler anyway. So why not skip all the slow, laborious and run-time costly discovery, and just try invoking it?

        Simpler (less), clearer (it worked or it didn't; rather than: it might work(or not), it still might work(or not); it still might work(or not); it worked(or not)) code.

        Same final effect.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
        I'm going to reject your rejection of JIT on the basis of the fact that there are things you can do with JIT that you simply cannot do at compile time. And furthermore, while it is true that what JIT does is beyond the control of the programmer, deliberately taking advantage of its full capabilities is not.

        A runtime type check followed by a runtime branching operation is exactly the kind of code that JIT can optimize away if you have a good JIT system.

        However I reiterate that JIT is a red herring in the case of languages like Perl that don't have it.

        Going to your exception solution, that solution has a major drawback. There are lots of possible reasons why there could be an exception, and your code has swallowed most of them. Easily fixable, granted, but not without adding more code and obscuring what is going on. And it is easy for a programmer to forget that they need to do that - I've seen many forget exactly that, including you just now.

        Not to mention the fact that if Perl made a minor change to its error message, then your code would break. Not that Perl is likely to do that, but they haven't promised they won't, and they have documented how UNIVERSAL::can works.

        Furthermore your criticisms strike me as unrealistic. If I define a plugin API, I expect to have things passed into it that are designed to be plugins. Yes, it is possible (but unlikely if you use descriptive method names, which I try to) that some random module might implement methods named the same as what I expect in my plugins. But if so then it still doesn't matter because no sane programmer is going to be passing it into my module as a plugin. (I can't solve the problem of insane programmers, and I refuse to try.)

        Thus trying to use something that isn't a plugin as a plugin is not a problem that I'm going to waste code protecting against.

        Now we have the problem of dealing with a badly designed plugin that doesn't do what it is supposed to do. Before you even consider doing that, you need to understand your problem domain. My problem domain is that I am writing plugins for use in my own module. If the plugin doesn't do what it is supposed to, that is a bug that I will fix. There is, therefore, no need for me to protect against that case. The same would apply for many of us.

        A problem domain that more closely mirrors what you're saying is one where you're writing a popular application which random third parties will add plugins to. But even there you can defend the position that it is the responsibility of the plugin author to make sure they follow your API, and not yours to code against the possibility that they didn't.

Re: Runtime introspection: What good is it?
by Anonymous Monk on Jul 07, 2008 at 01:58 UTC
    You know what? You don't need while, for, blah... with if and goto you can do all those. So?

      True. It was called Fortran IV (amongst other names). It was very simple to learn, very fast to compile and produced blindingly fast execution. It also had functions and even "exception handling", and allowed you to write well abstracted and well structured code.

      Its problem was that it didn't enforce those. So, without mind-numbing discipline, it became too easy to write complex, unwieldy, tangled, spaghetti-like balls of string.

      Even with good discipline, each programmer tended to code each of those missing control structures in different ways. Even the simplest if X then Y else Z endif structure became:

            if not X goto 10
            do Y
            goto 20
      10    do Z
      20    ...

      Nest a few of those and see what I mean about spaghetti. Another term was double-negative coding. With each programmer reinventing each of the common control structures in their own coding style, over and over again, maintenance became a nightmare. Throw in a few bugs, the inevitable design changes and programmer turnover, and you can quickly see how things evolve. Add to that static memory allocation and common blocks, and the need for something better is obvious.

      ...

      That's a counter argument. Sadly missing in this thread.


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Runtime introspection: What good is it?
by sundialsvc4 (Abbot) on Jul 08, 2008 at 18:52 UTC

    This is one of those things where, “if you need it, nothing else will do.”

    “Compile-time determinations” are exactly that: they occur when the source-code is compiled and are forever-after fixed into the resulting executable. This is very efficient (and therefore, very desirable), if you know that the thing you are dealing with truly will not change. But if there is a possibility that the thing is external to you, and therefore “beyond your control, or at least your timetable,” compile-time binding is fairly useless.

      “Compile-time determinations” are exactly that: they occur when the source-code is compiled and are forever-after fixed into the resulting executable.

      You're ignoring the flexibility of what constitutes "compile-time" in dynamic languages like Perl.

      See also the subthread starting at Re: Runtime introspection: What good is it? which discusses how compiled to binary (static) languages achieve dynamic language-like flexibility through the use of parsers without giving up their compile-time type correctness, or succumbing to building a run-time eval capability.

      I must admit, I'm surprised to find you, as a "planning is paramount" advocate, coming down on the side of making ad-hoc codepath decisions at run-time :)


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Runtime introspection: What good is it?
by gaal (Parson) on Jul 10, 2008 at 15:49 UTC
    RPC is easier to implement with introspection, for reasons similar to the YAML example.
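
    For instance, the core of a naive RPC dispatcher is little more than (a sketch; decode_request and the response helpers are invented):

        my ($method, @args) = decode_request($socket);    # hypothetical wire decoding
        if ( my $code = $handler->can($method) ) {        # introspection: is it exposed?
            send_response( $socket, $handler->$code(@args) );
        }
        else {
            send_error( $socket, "no such method: $method" );
        }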