http://qs321.pair.com?node_id=221083


in reply to On Flyweights... (with sneaky segue to data modeling)

...which leads to inefficient SQL and poor system performance. This pattern results in code of the form:
    $family->new_address($house_number, $postcode);

    # ... somewhere deep inside the family object ...
    foreach my $person (@{$self->{members}}) {
        $person->_new_address($house_number, $postcode);
    }
Not the best example in the world, but I hope you get the general idea: SQL that would normally be constructed with joins becomes decomposed and inefficient. And before anyone jumps up and down pointing out that they'd never implement code this way, I'll say this: most of the Monks who frequent and contribute to the Monastery are what I would consider (at least) above-average programmers. In fact, Monks who've been here a while (even just lurking) will have picked up enough ideas, tips and tricks to stand head and shoulders above their cow-workers (or cow-students), and, indeed, would never write decomposed SQL like the example. But if such object patterns are implemented, others will use them to write exactly that kind of decomposed database access. I know; I've seen it happen at almost every code shop I've ever worked in.
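For illustration, here is a hypothetical _new_address that each member object might implement (the names and schema are invented for the example). Every call issues its own statement, so updating a family of N members costs N round trips instead of one set-based statement:

    sub _new_address {
        my ($self, $house_number, $postcode) = @_;
        $self->{dbh}->do(
            'UPDATE person SET house_number = ?, postcode = ? WHERE person_id = ?',
            undef,
            $house_number, $postcode, $self->{person_id},
        );
    }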

I don't believe there is a definitive way around this problem, but if one treats the tables as the classes and the rows as the objects, and groups the SQL according to the needs of the business logic to create "aggregated" methods, there will be much less scope for less able coders to write decomposed methods. I realise that's a lot of forethought before coding begins (sorry about that), but it pays off in the end when building scalable, performant systems.
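As a minimal sketch of such an "aggregated" method, assuming a hypothetical person table keyed by family_id and a DBI handle stashed in $self->{dbh}:

    sub new_address {
        my ($self, $house_number, $postcode) = @_;

        # One set-based UPDATE covers the whole family, instead of the
        # per-member loop shown above.
        $self->{dbh}->do(
            'UPDATE person SET house_number = ?, postcode = ? WHERE family_id = ?',
            undef,
            $house_number, $postcode, $self->{family_id},
        );
    }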

rdfield

Replies are listed 'Best First'.
Re: Re: On Flyweights... (with sneaky segue to data modeling)
by herveus (Prior) on Dec 19, 2002 at 13:40 UTC
    Howdy!

    Implementation of the normalized data model does not demand that consequence. If you are using a single RDBMS as your back-end store, you can craft code to take advantage of SQL joins. On the other hand, one also needs to consider the load to be supported. The failure to take advantage of the efficiencies the RDBMS might offer may be overshadowed by other efficiency gains found in not depending on the back-end. One could conceivably have different entities stored using different mechanisms. DBI can make that practical.
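    As a minimal sketch of crafting code to take advantage of SQL joins through DBI (the DSN, credentials and the family/person schema here are assumptions for illustration only):

        use strict;
        use warnings;
        use DBI;

        # Assumed connection details and schema, purely illustrative.
        my $dbh = DBI->connect('dbi:Pg:dbname=example', 'user', 'secret',
                               { RaiseError => 1 });

        my $family_id = 42;

        # One joined query returns every member together with the shared
        # address, rather than one query per person object.
        my $sth = $dbh->prepare(q{
            SELECT p.name, f.house_number, f.postcode
              FROM person p
              JOIN family f ON f.family_id = p.family_id
             WHERE f.family_id = ?
        });
        $sth->execute($family_id);
        while (my ($name, $house_number, $postcode) = $sth->fetchrow_array) {
            # ... build or refresh the in-memory objects here ...
        }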

    My main focus is on the data model itself. For many problems, a good data model is critical and will make the implementation choices clearer. Certainly, the design does not stop there; one has to consider whether the persistence mechanism(s) are homogeneous or heterogeneous.

    Class::DBI has appealed to me as a very handy layer to put between a collection of objects and the data-store. Your mileage may vary.
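    For what it's worth, a minimal Class::DBI sketch of that layer (the connection details and the family/person schema are assumed for illustration):

        package My::Family;
        use base 'Class::DBI';

        # Assumed DSN and schema, purely illustrative.
        My::Family->connection('dbi:Pg:dbname=example', 'user', 'secret');
        My::Family->table('family');
        My::Family->columns(All => qw/family_id house_number postcode/);

        package My::Person;
        use base 'Class::DBI';

        My::Person->connection('dbi:Pg:dbname=example', 'user', 'secret');
        My::Person->table('person');
        My::Person->columns(All => qw/person_id name family_id/);
        My::Person->has_a(family_id => 'My::Family');

        # Declared after both classes exist in this file.
        My::Family->has_many(members => 'My::Person');

        package main;

        # Rows come back as objects; the mapping layer writes the SQL.
        my $family = My::Family->retrieve(42);
        print $_->name, "\n" for $family->members;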

    yours,
    Michael

Re^2: On Flyweights... (with sneaky segue to data modeling)
by adrianh (Chancellor) on Dec 19, 2002 at 11:23 UTC

    I agree that a one-to-one mapping between objects and tables is not always a good idea (not to say it's always a bad idea either - sometimes there is a one-to-one between tables and business objects).

    However, I don't think that was the point herveus was making. We're not talking about the process of mapping relational databases to objects.

    herveus was saying that there can be useful insights in looking at class/object hierarchies in the context of database design and normalisation.

    In particular, refactoring an object hierarchy with lots of objects with duplicate state, to one with fewer objects that share state (aka flyweight pattern) is basically normalisation under another name.
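    A tiny sketch of that correspondence, with hypothetical class and field names: an Address flyweight keyed by its values, so many Person objects share one instance, much as a normalised schema shares one address row.

        use strict;
        use warnings;

        package Address;

        # Flyweight pool: one shared instance per distinct
        # (house_number, postcode) pair, analogous to factoring the
        # repeated address columns out into their own table.
        my %pool;

        sub new {
            my ($class, $house_number, $postcode) = @_;
            my $key = join "\0", $house_number, $postcode;
            return $pool{$key} ||= bless {
                house_number => $house_number,
                postcode     => $postcode,
            }, $class;
        }

        package main;

        my $addr1 = Address->new(42, 'AB1 2CD');
        my $addr2 = Address->new(42, 'AB1 2CD');
        print "shared\n" if $addr1 == $addr2;   # same object, state stored once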

    That's how I read it anyway :-)


    Update: Judging by the reply, it looks like I was misinterpreting herveus, hence the stricken text.