PerlMonks  

Re^3: Multilevel flexibillity

by adrianh (Chancellor)
on Jun 25, 2003 at 16:10 UTC [id://268894]


in reply to Re: Re: Multilevel flexibillity
in thread Multilevel flexibillity

Agree completely.

However, I don't think that's the position being put forward by Abigail-II (it's not how I'm reading it anyway).

The "problem" with flexible architectures is that they are sometimes misapplied to simple problems. Having an infrastructure more complex than the domain requires adds overhead and complexity that causes more problems than it solves.

A three-tier XML/XSL based system is fine and dandy if you need to run a multi-platform, multi-lingual e-commerce system with workflow management and a CMS. You need the infrastructure to separate concerns and make the system comprehensible and maintainable.

If you just need to print "hello world" it's overkill - and carrying around the unnecessary infrastructure overhead makes the code harder to understand and maintain than a simpler system.

Over the last couple of years I've been getting more and more enthusiastic about agile development methodologies like extreme programming, where you only add infrastructure at the point when you need it.

As weirdly counterintuitive as this initially seemed, it works amazingly well in my experience. If you keep your code tight and well factored, adding the complexity when you need it isn't hard. By avoiding the complexity until you need it, you get to work with a smaller and simpler codebase, which lets you develop faster.

Replies are listed 'Best First'.
Re: Re^3: Multilevel flexibillity
by tilly (Archbishop) on Jun 25, 2003 at 16:44 UTC
    I agree with you (and Abigail-II) that attempting to build flexibility when you don't need it is not a good idea. Never would have thought of disagreeing with that.

    I am just saying that flexibility and complexity have a more complex relationship than a simple trade-off. If you attempt to achieve flexibility by embedding decisions everywhere in switches, I guarantee it will always cost you. But I have seen many cases where you can both simplify code and make it more flexible at the same time. Furthermore I think it is important to point this out, because in these cases programmers often have trouble seeing the possibility, since the choices involved seem counter-intuitive.
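    To make the switch-vs-dispatch point concrete (a hypothetical sketch of my own, not tilly's actual get() rewrite):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Flexibility via embedded switches: every new format means
# editing this function (hypothetical example).
sub render_switch {
    my ($format, $data) = @_;
    if    ($format eq 'text') { return "$data\n" }
    elsif ($format eq 'html') { return "<p>$data</p>\n" }
    else  { die "unknown format '$format'" }
}

# The same behaviour as a dispatch table: shorter, and new
# formats can be plugged in from outside without touching it.
my %renderer = (
    text => sub { "$_[0]\n" },
    html => sub { "<p>$_[0]</p>\n" },
);

sub render {
    my ($format, $data) = @_;
    my $r = $renderer{$format} or die "unknown format '$format'";
    return $r->($data);
}

# A "plugin" is just another entry in the table:
$renderer{csv} = sub { join(',', split ' ', $_[0]) . "\n" };

print render(html => 'hello');   # prints <p>hello</p>
```

    The dispatch-table version is both less code and more flexible: the set of decisions lives in data rather than control flow, which is exactly the kind of simplification-plus-flexibility being described.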

    For a concrete example, take a look at Re: Re (tilly) 6: To sub or not to sub, that is the question? and compare the original and my rewritten version of get(). The rewrite is both shorter and more flexible. Furthermore with no visible code it manages to add a number of features that the author wanted.

      This is an attempt to formalize the argument of Abigail-II. It obviously has a flaw, but it can be a starting point for further analysis. First we need to define the complexity of a design: I would take the Kolmogorov complexity (in Perl), i.e. the character count of the shortest Perl program complying with it. For the definition of a design I would take an additional set of rules that the program has to comply with.

      Now it is certain that a problem without any additional requirements on the solution program is of no greater complexity than one with additional requirements. This of course holds when the requirement is the plugin architecture.
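      Spelled out in the standard notation (my formulation, not in the original post): for a universal machine U,

```latex
K_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}

% Programs meeting specification S plus requirement R form a subset
% of those meeting S alone, so the shortest survivor cannot shrink:
P(S \wedge R) \subseteq P(S)
\quad\Longrightarrow\quad
\min_{p \in P(S \wedge R)} |p| \;\ge\; \min_{p \in P(S)} |p|
```

      This is the monotonicity claim above: restricting the set of acceptable programs can only raise (or leave unchanged) the minimal program length.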

      The problem is whether the Kolmogorov complexity is really the complexity perceived by humans.

        Considering the difficulty humans have in telling whether they have the shortest solution (see your average golf game for evidence, or the articles that can be found here for the theory), it is clear that Kolmogorov complexity is not the complexity perceived by humans. Underscoring that is the fact that the things which bring you closer to that ideal solution often make the code harder to understand, not easier.

        Furthermore, you are attempting to specify the complexity of the design used to satisfy the requirements in terms of the requirements given. But haven't you ever seen two pieces of code, designed to do the same thing, of vastly different complexity?

        My understanding of the issue is rather different. Mine is shaped by an excellent essay by Peter Naur (the N in BNF) called Programming as Theory Building, which I thought was available online (I read it in Agile Software Development). It isn't, so allow me to summarize it and then return to my understanding.

        Peter's belief is that the programmer, in the act of programming, creates a theory about how the program is to work, and the resulting code is a realization of that theory. Furthermore he submits (and provides good evidence) that much of the way that other programmers do and do not successfully interact with the code may be understood in terms of how successful they are at grasping the theory of the code and working within it. For instance, a programmer without the theory will not be able to predict the correct behaviour. The programmer with it will find that obvious, and will also have no trouble producing and interpreting the relevant piece of documentation. The mark of failure, of course, is when the maintenance programmer does not grasp the theory, has no idea how things are to work, and fairly shortly manages to get it to do various new things, yes, but leaves the design as a piece of rubble. Therefore one of the most important software activities has to be the creation and communication of these theories. How it is done in any specific case need not matter; that it is done is critical.

        So a program is a realization of its design, which functions in accord with some theory, and the theory needs to be possessed by developers (and to some extent users) to understand what the program is supposed to do, and how to maintain it. How does this shed light on the problem of a plug-in architecture?

        It is simple. A program with an internal plug-in architecture is a program whose theory embodies a generalization. Adding the generalization takes work, yes. But with the right generalization, many things that your theory must account for become far easier to say. (The wrong generalization, on the other hand...?) If you have enough special cases that are simplified, the generalization pays for itself, and being able to find and then work with such generalizations is key to being able to work efficiently. Just as a self-extracting zip can be shorter than the original document: there is overhead to including the decompression routine, but it saves you so much that you win overall.

        Of course I am describing what can happen, if things turn out right. Generalizations are not always good. To the contrary when used by overenthusiastic people, they often become conceptual overhead with little return. (You are in a maze of twisty APIs, all alike...)
