Re^4: Maybe database tables aren't such great "objects," after all ...

by DStaal (Chaplain)
on Mar 28, 2011 at 12:50 UTC [id://895897]


in reply to Re^3: Maybe database tables aren't such great "objects," after all ...
in thread Maybe database tables aren't such great "objects," after all ...

In addition to ELISHEVA's excellent post: one of the basic capabilities of a database is the ability to perform actions atomically across multiple tables and/or records when needed. If your database can't do that, you need to look for a different database. Most databases also let multiple records in a table be updated at the same time by different processes/queries, without one update getting in the way of the other. (Subject to volume, structure, and resource limitations. There are a couple of low-end databases that don't offer that, and they may still be useful in some situations as long as you are aware of the limitation.)
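
A minimal sketch of that kind of atomic, multi-record change using Perl's DBI; the accounts table, the DSN, and the credentials are placeholders for illustration, not anything from this thread:

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details; any real DSN and credentials will differ.
    my $dbh = DBI->connect( 'dbi:Pg:dbname=example', 'user', 'password',
        { RaiseError => 1, AutoCommit => 1 } );

    # Move an amount between two hypothetical accounts atomically:
    # either both rows change, or neither does.
    sub transfer {
        my ( $dbh, $from, $to, $amount ) = @_;

        $dbh->begin_work;    # suspend AutoCommit for this transaction
        eval {
            $dbh->do( 'UPDATE accounts SET balance = balance - ? WHERE id = ?',
                undef, $amount, $from );
            $dbh->do( 'UPDATE accounts SET balance = balance + ? WHERE id = ?',
                undef, $amount, $to );
            $dbh->commit;
        };
        if ($@) {
            my $err = $@;
            $dbh->rollback;    # undo any partial work
            die "transfer failed, rolled back: $err";
        }
    }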

So, yes, that would be an ignorance-of-the-tools issue: the tools should be able to handle those situations when used correctly.

Replies are listed 'Best First'.
Re^5: Maybe database tables aren't such great "objects," after all ...
by jordanh (Chaplain) on Mar 28, 2011 at 17:16 UTC
    It still seems to me that sometimes you'd want to operate on a number of objects inside a single transaction and other times in different transactions.

    Explicitly committing at a high level like that would expose the nature of the underlying database at that level.

    I think ELISHEVA makes a good point. To design these things, you'd need someone with excellent DB and Object knowledge, but I'm wondering if you might also need that kind of knowledge to use the Objects effectively.

    Update: I'm probably talking nonsense here. I can't really think of where you wouldn't want to encapsulate the commits with the updates, although perhaps you could get some efficiencies by deferring them across object references. That could be built into the toolkit, too, if you were clever.
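
    One rough sketch of how a toolkit could defer commits across several object updates, as speculated above. The UnitOfWork class and its in_transaction method are invented for illustration; only the DBI transaction calls are real:

        package UnitOfWork;    # hypothetical toolkit class, not a real CPAN module
        use strict;
        use warnings;

        sub new {
            my ( $class, $dbh ) = @_;
            return bless { dbh => $dbh, depth => 0 }, $class;
        }

        # Nestable scope: only the outermost scope begins and commits the
        # database transaction, so several object saves share one commit.
        sub in_transaction {
            my ( $self, $code ) = @_;
            $self->{dbh}->begin_work if $self->{depth} == 0;
            $self->{depth}++;
            my $ok = eval { $code->(); 1 };
            $self->{depth}--;
            if ( !$ok ) {
                my $err = $@;
                $self->{dbh}->rollback if $self->{depth} == 0;
                die $err;
            }
            $self->{dbh}->commit if $self->{depth} == 0;
            return 1;
        }

        1;

    With something like that, $uow->in_transaction( sub { $order->save; $line_item->save } ) would let both saves share a single commit, while nested scopes simply join the outer transaction.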

      It still seems to me that sometimes you'd want to operate on a number of objects inside a single transaction and other times in different transactions.

      Yes, you will. And sometimes that will mean you change a single record in a single table, and sometimes it will mean you change a dozen records in a dozen tables. My statement and yours do not map to each other.

      Stop thinking of the data in the database as 'objects'. It's not. It's data. It knows nothing about what code should operate on it. Organize it as data, in the best form to store and retrieve it.

      And code your objects to get their data from the data store (possibly through a database API of some sort) and use it in the most effective way for the program. Write the API to handle the transfer in a way that's not visible to the (object) programmer.
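
      As a rough sketch of that separation: the UserStore/User names, the users table, and its columns below are invented for illustration, and only the DBI calls are real. The object code never touches the database handle; the store does.

          package UserStore;    # hypothetical data-access layer
          use strict;
          use warnings;

          sub new {
              my ( $class, $dbh ) = @_;
              return bless { dbh => $dbh }, $class;
          }

          # The object programmer asks for a user by id; how the row is
          # fetched (joins, caching, and so on) stays hidden in this layer.
          sub load_user {
              my ( $self, $id ) = @_;
              my $row = $self->{dbh}->selectrow_hashref(
                  'SELECT id, name, email FROM users WHERE id = ?', undef, $id );
              return $row ? User->new(%$row) : undef;
          }

          sub save_user {
              my ( $self, $user ) = @_;
              $self->{dbh}->do(
                  'UPDATE users SET name = ?, email = ? WHERE id = ?',
                  undef, $user->name, $user->email, $user->id );
          }

          package User;    # plain object; it knows nothing about storage
          sub new   { my ( $class, %data ) = @_; bless {%data}, $class }
          sub id    { $_[0]->{id} }
          sub name  { $_[0]->{name} }
          sub email { $_[0]->{email} }

          1;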

      Then maybe you'll stop getting hung up on the idea that you are storing your objects in a database.

      Will there be limits? All of this is abstraction. Eventually the data is stored as a series of bits. Eventually the program runs as imperative assembly code. Stress the abstractions hard enough and you'll run into places where they can't hide what they are abstracting. That's the nature of abstractions. But don't confuse one layer of abstraction in one domain with a different layer of abstraction in a different domain just because they are both abstractions.