
Mixing Mysql and Perl

by SavannahLion (Pilgrim)
on Jan 09, 2004 at 22:40 UTC ( [id://320243] : perlquestion )

SavannahLion has asked for the wisdom of the Perl Monks concerning the following question:

Usually, when I wrote SQL queries, they were pretty simple: very small scripts that I would run in specific situations.
A long time ago, I tried to write a database application utilizing MySQL, but that failed :( Mostly due to my inability to configure MySQL at the time. I was eventually forced to use Access for that application.

I'm now trying again, this time with Perl. And I've got the MySQL server running quite nicely :)
The question I have at hand is: what is the practice for separating the MySQL statements from the Perl script itself? I remember from an early SQL class that there is a practice of separating such a program into three modules: DB backend, UI frontend, and... er... glue(?). Change the backend, and the frontend stays the same. And vice versa.

Searching around the web, I don't find very many examples of this practice, if any at all. Are there any examples showing the separation between the three modules in a practical example?

The book I have from the SQL class discusses it on an academic level, which is rather useless. I understand where you draw the line. What I don't understand is the fine working details. For example, I have Table A. Do I create multiple functions to extract only the necessary information I want from Table A? Or is it easier to create one universal function to pull the table rows, use the "glue" to extract the elements I want, and take a performance hit?

Thanks for your patience.

Prove your knowledge @ HLPD

Replies are listed 'Best First'.
Re: Mixing Mysql and Perl
by Ovid (Cardinal) on Jan 09, 2004 at 23:18 UTC

    Basically, you're describing a multi-tiered application with clearly defined responsibilities for each tier. While there are many ways to go about this, one of the most popular is to use an object persistence framework. One that is fairly easy to use is Class::DBI. This module allows you to create objects that model your database tables. You can read a short tutorial on it, if you're interested.

    It's important to remember, though, that this is not always a good fit for your code. Objects are hierarchical in nature; MySQL is not (this is sometimes referred to as the "object-relational impedance mismatch"). Class::DBI sidesteps this issue by ignoring it and leaving it up to the programmer to deal with. However, it's dead simple to use and easy to integrate with an existing database, and I'm a big fan of it.

    Tangram is another object persistence framework that attempts to properly map the hierarchical nature of objects to databases. The documentation describes various strategies that Tangram supports, and their pros and cons. Here's an interesting quote from the documentation:

    One of the paramount issues about mapping inheritance is how well the mapping supports polymorphism. Any Object-Oriented persistence facility that deserves its name needs to allow the retrieval of all the Fruits, and return a heterogeneous collection of Apples, Oranges and Bananas. Also, it must perform this operation in an efficient manner. In particular, polymorphic retrieval should not cost one SELECT per retrieved object.

    The issues it describes are ones that Class::DBI has, unless the programmer takes care to work around them.

    Though I understand that Tangram can be fitted to an existing database, this is probably not optimal. Instead, you define a class hierarchy and allow Tangram to design the database for you. It seems pretty clean, and Tangram is now being actively developed once again, so if you see older information on it, be aware that it may be out of date.

    For more information, there is an excellent introduction to the pros and cons of many Perl OO persistence strategies; in particular, it will tell you whether or not MySQL is supported.

    Update: I almost forgot to mention Pixie. From the docs:

    my $pixie = Pixie->new->connect(
        'dbi:mysql:dbname=test',
        user => $user,
        pass => $pass,
    );

    # Save an object
    my $cookie = $pixie->insert($some_object);
    undef($some_object);

    # Get it back
    my $some_object = $pixie->get($cookie);

    Pixie basically says "forget about a framework, I just want to store an object." And that's what it does. Thus, you don't need to worry about designing the database. Simply use your objects and store and retrieve them as needed. No muss, no fuss. I've not used it, but it sounds very interesting.



      Pixie basically says "forget about a framework, I just want to store an object." And that's what it does. Thus, you don't need to worry about designing the database. Simply use your objects and store and retrieve them as needed. No muss, no fuss. I've not used it, but it sounds very interesting.
      The problem with stuff like that is that your objects and your tables don't always coincide. Most of the time they don't. For instance, say you have 4 or 5 types of users that all inherit from the same base object. Their base data may be the same, but the extended part of it may be stored differently. You wouldn't store ALL the data in one table, and you may not wish to have certain columns lying about if your user types are grossly different.

      Pixie may be great for simple things, but if things get complex, modules like Pixie lead you into an "interface represents your database model" design, which is a pitfall.

      Play that funky music white boy..

        sporty wrote: The problem with stuff like that is that your objects and your tables don't always coincide.

        sporty, have you ever looked at Pixie? I don't know what you mean by "objects and tables don't always coincide". The Pixie philosophy is simple: we want to persist objects. And that's all it does. There's no complicated framework or worrying about how the tables relate. You store objects. You retrieve objects. That's it.

        You also wrote "interface represents your database model". The underlying database is completely independent of the object interface and, in fact, that was one of the design goals. Pixie doesn't care what your interface is. However, I could be misunderstanding what your concerns are, so if you care to elaborate, I'd love to hear what you have to say.

        Disclaimer: I've not used Pixie either. I toss it out there because Piers Cawley (who's done a lot of work on Pixie) recommended it and I have a lot of respect for him.



Re: Mixing Mysql and Perl
by neuroball (Pilgrim) on Jan 09, 2004 at 22:59 UTC

    I am not all that sure, but you might mean MVC, which stands for Model-View-Controller.

    You can find more information about MVC on the web.

    Now to your question about the how-to. There Is More Than One Way To Do It. I normally have a lot of predefined SQL strings with placeholders saved in strings, which I then feed to subs that do the work. E.g., one sub that inserts, updates, and deletes, and another sub that selects.
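    A minimal sketch of that pattern, for the record: the table, column names, and sub names below are made up, and in-memory SQLite (DBD::SQLite) stands in for MySQL so the snippet is self-contained. With MySQL the DSN would be something like "dbi:mysql:database=test" instead.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory SQLite stands in for MySQL here, just to keep the
# example self-contained; the table is hypothetical.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)');

# Predefined SQL strings with placeholders, all kept in one place.
my %sql = (
    add_user => 'INSERT INTO users (name) VALUES (?)',
    get_user => 'SELECT id, name FROM users WHERE name = ?',
);

# One sub for statements that modify data...
sub modify {
    my ($which, @args) = @_;
    $dbh->do($sql{$which}, undef, @args);
}

# ...and another for selects, returning the row as a hashref.
sub fetch_row {
    my ($which, @args) = @_;
    return $dbh->selectrow_hashref($sql{$which}, undef, @args);
}

modify('add_user', 'Alice');
my $row = fetch_row('get_user', 'Alice');
print "$row->{id}: $row->{name}\n";
```

    Because all the SQL lives in %sql, pointing the script at a different database (or reworking a query) means touching one spot instead of hunting through the code.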

    You also might want to take a look at a rather unacademic guide to SQL.


Re: Mixing Mysql and Perl
by exussum0 (Vicar) on Jan 09, 2004 at 22:49 UTC
    Borrowing from abstractions I've seen before...

    You have an object that represents something. Let's say cars. So let's say you want to get the VIN, the year... various data on the car. Your car object would have a fetch method (car->fetch()). Your fetch method would contain a CarDao object (carDao->fetch()). Both would return a CarDto object that contains the data you expect back (carDto->getVin(), carDto->getYear()). If there were interfaces in Perl, CarDao and Car would share the same interface.

    Ideally, you can have your sql statements in scalars, near the top of your object, private and do prepares on them so that you have the sql in one place for that object.

    In practice, for large projects, this works nicely. If you change your db from MySQL to Oracle (or to SOAP outside your network), you only have to rewrite your Dao objects. If you change the return type, you change your Dto objects and whatever refers to them for their data.
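    A rough sketch of the Car/CarDao/CarDto split in plain Perl, assuming the shape described above: the VIN and the stored data are invented, and an in-memory hash stands in for the DBI calls a real CarDao would make.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# CarDto: a plain data holder with accessors.
package CarDto;
sub new     { my ($class, %f) = @_; return bless {%f}, $class }
sub getVin  { $_[0]->{vin} }
sub getYear { $_[0]->{year} }

# CarDao: the data-access object. The hash below stands in for a
# real database; a real CarDao would run DBI queries against MySQL,
# with the SQL kept in scalars near the top and prepared once.
package CarDao;
my %fake_table = ( '1HGCM82633A004352' => { year => 2003 } );
sub new { bless {}, shift }
sub fetch {
    my ($self, $vin) = @_;
    my $row = $fake_table{$vin} or return undef;
    return CarDto->new( vin => $vin, year => $row->{year} );
}

# Car: the domain object. Its fetch delegates to the DAO, so swapping
# MySQL for Oracle (or SOAP) only means rewriting CarDao.
package Car;
sub new {
    my ($class, $vin) = @_;
    return bless { vin => $vin, dao => CarDao->new }, $class;
}
sub fetch { my $self = shift; return $self->{dao}->fetch($self->{vin}) }

package main;
my $dto = Car->new('1HGCM82633A004352')->fetch;
print $dto->getVin, " is a ", $dto->getYear, " model\n";
```

    The calling code only ever sees Car and CarDto, which is exactly what makes the backend swappable.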

    For small and some medium projects, it's overkill. When working with a team, it feels right too.

    Play that funky music white boy..
Re: Mixing Mysql and Perl
by soon_j (Scribe) on Jan 09, 2004 at 23:03 UTC
    There are no definite rules for implementing MySQL in Perl, at least from a programmer's point of view. Whatever best fits your style, do it. Whether you create multiple functions or simply make a universal one is completely up to you, as long as it does the same thing. In Perl they say "there is more than one way to do it"... and it's definitely true. If you don't want to take a significant performance hit, you may as well study techniques for optimizing MySQL queries, like making wise use of indexes. Till then... you are what you do. Keep it that way if you do it best.
Re: Mixing Mysql and Perl
by borisz (Canon) on Jan 10, 2004 at 13:21 UTC
    What you are looking for is Apache::PageKit, where your content, code, and view are separated (MVC). It works perfectly with most databases, and an easy example ships with the source. Look at the example source in Apache-PageKit-1.12/eg/ while browsing the example site to hear the ahhh's and ohh's.
Re: Mixing Mysql and Perl
by mattr (Curate) on Jan 10, 2004 at 15:35 UTC
    For a long time I rolled my own subroutines based on reading the DBI manual over and over, but I've had a good experience with Class::DBI recently, which gives you some object orientation. Some people swear by DBIx modules, which I haven't tried.

    It does seem a useful thing to put the SQL all in one place and be sure a table schema change won't fry you (i.e. you could access fields via a hash of column names). Often I have a routine for picking up one field as a scalar and one row as a hash. Stash that SQL away! Some people put sql into separate resource files or dictionaries, that's another way and it separates it even more. If you are just starting I'd say read the DBI manual a few times, otherwise you will not know the basis of most Perl database code.
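    Those two routines might look something like this sketch: the cds table and its contents are made up, and in-memory SQLite (DBD::SQLite) stands in for MySQL so the example is self-contained.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory SQLite as a stand-in for MySQL; hypothetical table and data.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE cds (id INTEGER PRIMARY KEY, artist TEXT, title TEXT)');
$dbh->do(q{INSERT INTO cds (artist, title)
           VALUES ('Bach', 'Goldberg Variations')});

# One field as a scalar...
sub field {
    my ($sql, @args) = @_;
    my ($val) = $dbh->selectrow_array($sql, undef, @args);
    return $val;
}

# ...and one row as a hash keyed by column name, so a schema change
# that reorders columns won't fry you.
sub row {
    my ($sql, @args) = @_;
    return $dbh->selectrow_hashref($sql, undef, @args);
}

my $artist = field('SELECT artist FROM cds WHERE id = ?', 1);
my $cd     = row('SELECT * FROM cds WHERE id = ?', 1);
print "$artist: $cd->{title}\n";
```

    Keeping the SQL at the call sites of two small helpers like these is a middle ground between raw DBI everywhere and a full persistence framework.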

    To answer the question about Table A, the only thing I think is going to give you a massive performance gain is to use GROUP BY if you need to do a big request. Otherwise get a row at a time and read fields off as you like. The other way is to use an object-oriented module and if I remember correctly Class::DBI lets you tell which fields must always be fetched and which are to be done lazily.
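    For instance, a single GROUP BY request replaces one query per group; this sketch again uses in-memory SQLite in place of MySQL, with an invented orders table.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory SQLite as a stand-in for MySQL; hypothetical table.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE orders (customer TEXT, amount INTEGER)');
$dbh->do(q{INSERT INTO orders VALUES ('alice', 10)});
$dbh->do(q{INSERT INTO orders VALUES ('alice', 20)});
$dbh->do(q{INSERT INTO orders VALUES ('bob', 5)});

# One aggregate query instead of a SELECT per customer.
my $totals = $dbh->selectall_arrayref(
    'SELECT customer, SUM(amount) FROM orders
     GROUP BY customer ORDER BY customer'
);
for my $r (@$totals) {
    print "$r->[0]: $r->[1]\n";
}
```

    Letting the database do the aggregation is almost always cheaper than fetching every row and summing in Perl.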

    Another thing to consider is object persistence (maybe overkill for you now), which is provided as mentioned by some frameworks; SPOPS comes to mind. These things usually let you forget about loading and saving objects; it is all transparent, theoretically. Anyway, what you use and how much you want these things to stay in the background is a matter of style.

    Anyway the DB layout has little to do with the UI as far as I can see, it is based on the information design or underlying metaphors you have set out in advance. In fact I wouldn't do UI design until after that stage. For the UI anyway I would recommend HTML::Template because it has always been great for me.

    I'd like to mention also SQLite which seems to be useful and is a fast SQL db in a file.

    Anyway listening to your needs I would probably recommend at this stage learning the DBI and rolling a couple utility routines using it, and as far as UI goes use a template system.

Re: Mixing Mysql and Perl
by blokhead (Monsignor) on Jan 10, 2004 at 06:17 UTC
    <plug type=shameless> For what it's worth, I've written and recently released Class::Tables, which you might find useful. It's aimed to be really really really easy to use. You access the database via persistent object classes (same as Class::DBI), but with a lot less fuss and setup on the Perl side of things (at the cost of supporting more complex table relationships). I'd love to hear any feedback, too. </plug>

    I'd recommend you take a look at both Class::DBI and Class::Tables and see which suits you better.


Re: Mixing Mysql and Perl
by l3nz (Friar) on Jan 10, 2004 at 14:11 UTC
    In my experience, the best way to go is to keep the complexity as low as possible while maintaining full focus on what you want to do. Will you work alone on the project, or will more people work on it? Will you port the project to a different database back-end in the future? How big is the project? How complex is its data set?

    I usually go for the easiest thing that will nicely match the problem space. This very often means avoiding multiple layers of abstractions and objects, unless the project is very big and/or you're prepared to document it all in an apt way. Don't go for a solution that makes solving the problem harder, and try to keep separate things that can live without one another. Don't go for a solution that everybody is chatting about, or that you read books about, just because it's fashionable or academically "in".

    One of the reasons why Unix is still around is that it makes it simple to create a "solution toolbox" that something else can then use... so it makes solving the "big" problem a task of solving smaller ones and then gluing it all together. Re-use what you can; you won't be the first person ever to try to solve the problem you're facing.

    Don't go for performance at first; unless your design is infeasible, CPU cycles cost less than your own time. And half-solved computing messes tend to get worse with time. Design for simplicity and understandability, play with the problem space, then decide. Perl can be a very good tool for this.

    Just my euro 0.02... :-)

Re: Mixing Mysql and Perl
by ellem (Hermit) on Jan 10, 2004 at 02:08 UTC
    This book might be of some use.
    There's more than one way to do it, but only some of them actually work.
Re: Mixing Mysql and Perl
by TomDLux (Vicar) on Jan 11, 2004 at 03:06 UTC

    The DBI module isolates you to a large extent from the differences between databases. DBI::Class, as well as some other modules, furthers that isolation.

    HTML::Template or Template::Toolkit or similar modules separate the presentation structure from the logic.


      s/DBI::Class/Class::DBI/, yes?
Re: Mixing Mysql and Perl
by diskcrash (Hermit) on Jan 12, 2004 at 04:29 UTC
    Yo SavannahLion,

    In past projects I have considered a few guidance issues:

    1. Where there is the luxury of being able to separate parts of the engine (as in real-time applications), the first Perl task grabs data from a raw data source, munges it into useful fields, formats rows, and inserts the simplest possible rows into the MySQL table. This first part often has nasty time-dependent characteristics, so in past apps I had to keep the processing light. The second Perl task comes along at some leisure, reads the rows, formats graphics and reports, or sorts data, and builds up output for web pages or other MySQL tables. So here the time cycles drove the architecture and divided the Perl/MySQL tasks.

    2. The second design driver was failure modes. If a task or system fails, can the rest of the system still carry forward to a degree and provide some value until another part is restored? So one task might live on machine 1, write data to machine 2, and a task on machine 3 presents it. If 1 fails, users can at least get to existing data. If 3 fails, 1 is still recording data, even if it is not available for presentation. You might provide redundant "2" machines so there is no single point of failure. Professional systems use message-queueing tools to make this process even more reliable.

    3. The final reason for task separation is scalability. You might be able to add additional systems for back/middle/front ends that divide and process the traffic as demand grows. If you start "divided", it's easier to grow like parts in parallel.

    Let us know what else you discover - also code it and try it!


Re: Mixing Mysql and Perl
by paulbort (Hermit) on Jan 12, 2004 at 19:53 UTC
    I've been writing a Perl app (part CGI, part back-end) with lots of SQL mixed in, and our project has pretty much settled into using straight DBI with a hash to contain all the prepared statements, like this:
    use DBI;
    use DBD::whatever;  # I use Pg, you use MySQL, same idea

    my $cn = DBI->connect( 'YOUR PARMS HERE' )
        or die "Couldn't Connect!";

    my %st;
    $st{'GetThings'} = $cn->prepare(<<ENDSQL);
    SELECT Thing_One, Thing_Two
    FROM Things
    WHERE Thing_Name = ?;
    ENDSQL
    $st{'AddThing'} = $cn->prepare(<<ENDSQL);
    INSERT INTO Things ( Thing_One, Thing_Two, Thing_Name )
    VALUES ( ?, ?, ? );
    ENDSQL

    sub closedb {
        foreach my $stmt ( keys %st ) {
            $st{$stmt}->finish;
        }
        $cn->disconnect;
    }

    # Main Code
    $st{'GetThings'}->execute('Albert');
    # fetch* here.
    # ...
    # All Done, clean up now:
    &closedb;
    __END__

    Then you can easily copy or re-arrange SQL, and just call &closedb when you're finished with db access in the program. It's not OO, it's probably not even a polite way to use global variables (nothing is passed in to &closedb), but it's easy to read, and fast to code, and I like it.

    If you want to try different SQL, just copy the statement you want to mess with, change the key on one copy to something else (eg, 'DoNotLoseOriginal'), and rock.

    Spring: Forces, Coiled Again!