speeding up web development

by stonecolddevin (Parson)
on Jun 23, 2008 at 20:58 UTC ( [id://693599] )

stonecolddevin has asked for the wisdom of the Perl Monks concerning the following question:

Howdy all,

I'm sure this question has been asked time and time again, but I've found that this process is somewhat of a stair-step one, needing improvements and optimizations as one goes along.

I'm looking to speed up the time it takes me to develop web applications (ranging in scale from small-medium to large and beyond, hopefully). As I've mentioned, stair-stepping seems to be the appropriate approach, gaining experience and extremely valuable knowledge the whole way.

I've already taken to Catalyst as my web framework darling, but there are still a number of customizations I find myself having to do with each application I build. I'm attempting to put together a small "library" or "engine" that I can consistently re-use per project with minimal alterations. However, building this codebase

  1. Seems slightly redundant when using a framework that is designed to speed things up
  2. Takes a good deal of time in itself to put together
  3. Can become difficult to manage, and quite pasta-esque if not put together correctly in the first place, thereby completing a fairly vicious cycle

I come to you monks asking for wisdom on how I can speed this process up. "This process" is the one of creating a consistent codebase to reuse from client to client and site to site, including figuring out which tests I need to implement in the foundation code and in the final product. I think a large factor here is also database "migrations": I've taken a look at DBIx::Class::Schema::Versioned, but I'd really like to find or put together something that's a common ground between RoR's database handling and Jifty's database schema control.
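For context, here's roughly what the DBIx::Class::Schema::Versioned setup looks like, going by its documentation; the schema name, result classes, and upgrade path here are all hypothetical:

    package MyApp::Schema;
    use base qw/DBIx::Class::Schema/;

    our $VERSION = 0.001;   # bump this on every schema change

    __PACKAGE__->load_classes(qw/User Page/);             # hypothetical result classes
    __PACKAGE__->load_components(qw/Schema::Versioned/);
    __PACKAGE__->upgrade_directory('/path/to/upgrades/');

    1;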

I have a feeling this node will have multiple parts, but I've got to get my shovel in the ground somewhere and decide how far I need to dig after that.

Thanks all!

UPDATE: as I suspected it would, this got the gears turning and produced more questions in my head :-)

First of all, I'd also like to ask about a fairly system-independent (and host-independent) caching method. Cache::Memcached is super awesome, but I don't know how many shared hosts will support that. What's next best? Cache::File or some such? And that begs yet another question: what's to be cached? Google Ads surely, mostly static front/splash pages, login pages, the obvious, but what else can I cram into that little treasure box to speed things up?

I'd also like to address the idea of extensibility through something like JSON (SOAP isn't so good nowadays, I hear), but do I do it with AJAX? RESTfully? Or, as I mentioned, plain JSON? So many to choose from; how do I decide?

WARNING: NOT NECESSARILY PERL SPECIFIC. I'd also like to mention that I've got an OK method for updates, one that at least gives me the stability of rolling back to a previous version if something breaks: I keep a central SVN repo for a given project, check it out onto my local box and onto the host (given they support SVN, which brings up another point), and run svn update on the production host when updates are running smoothly. But what if the production host doesn't allow/support SVN? Do I keep a whole separate "stable/production" directory on my local box that I svn update as I would on the production box, and just FTP the files over?

Please forgive the incompleteness of this thread, I'm coming up with new questions and ideas as things tumble around in my head.

UPDATE: I'm all about learning and posting what I've learned, and I came across this reply and parent node. I liked the idea.

meh.

Re: speeding up web development
by perrin (Chancellor) on Jun 23, 2008 at 21:56 UTC
    A couple of answers:

    Catalyst doesn't really do anything application-specific. It maps URLs to method calls, period. The rest is up to you, so it makes perfect sense that you will develop your own set of reusable code for building your sites. It's a positive thing and you shouldn't try to avoid it.
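    For instance, here's a minimal controller showing that URL-to-method mapping (the package and action names are made up):

        package MyApp::Controller::Greet;
        use strict;
        use warnings;
        use base 'Catalyst::Controller';

        # A request for /greet/hello is dispatched straight to this method
        sub hello : Local {
            my ( $self, $c ) = @_;
            $c->response->body('Hello from Catalyst');
        }

        1;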

    There is no such thing as automatic migration between versions of a database schema, and there never will be. If you change the name of a column, how will any program figure that out? It won't, unless you tell it. You can use the various diffing tools like SQL::Translator (that's what DBIx::Class uses) to generate a starting point, but I've always found it simpler and more foolproof to just use direct SQL. If you search this site, you'll find discussions on how to keep track of the SQL scripts that upgrade your database, and how to test them.
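    As a starting-point sketch of that diffing approach (the file names and the MySQL dialect are made up):

        use SQL::Translator;
        use SQL::Translator::Diff;

        # Parse the old and new schema definitions
        my $old = SQL::Translator->new( parser => 'MySQL', producer => 'MySQL' );
        $old->translate('schema-v1.sql') or die $old->error;

        my $new = SQL::Translator->new( parser => 'MySQL', producer => 'MySQL' );
        $new->translate('schema-v2.sql') or die $new->error;

        # Emit ALTER statements as a starting point -- review them by hand!
        print SQL::Translator::Diff::schema_diff(
            $old->schema, 'MySQL',
            $new->schema, 'MySQL',
        );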

    The cache framework you're looking for is CHI. What should you cache? Things that are slow. If nothing seems slow, don't cache. It's always a headache.
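    A minimal CHI sketch along those lines, using the File driver since it needs no daemon (the cache path and the slow routine are made up):

        use CHI;

        # File-backed cache: no memcached daemon required, so it works on shared hosts
        my $cache = CHI->new(
            driver   => 'File',
            root_dir => '/tmp/myapp-cache',
        );

        my $html = $cache->get('front_page');
        unless ( defined $html ) {
            $html = render_front_page();   # your slow page-building routine
            $cache->set( 'front_page', $html, '10 minutes' );
        }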

    Your updating/svn question deserves a whole separate thread, but it has also been discussed before on the site, so I'd suggest some reading first.

      CHI looks a lot like Cache::Cache. From what I can see, though, its goal is abstracted caching (by that I mean, "any" type of caching through this one module)? Is that close to correct?

      I appreciate the to-the-point-ness of this thread; my big issue has been digging through all the cruft of "advice" I've found in the past.

      I agree entirely with regard to Catalyst being a url->method-call mapper. My question was geared more toward whether to spend my time building a fairly customized code library of my own, or to keep things general and use "standardized", CPAN-approved pieces, if that makes any sense. I think it probably comes down to the needs of each application, or at least an average of each application's needs, so as to incorporate all that's needed. I doubt anyone can answer that one way, but I'm by no means a well-versed web developer, and I want to make sure my practices stay practical and don't build bad habits into my future work.

      UPDATE: Durr, of course CHI looks like Cache::Cache. It's intended to be an evolution of it.

      I'd also like to clarify what I meant by migrations; I'm sure I'll find something on here to do the dirty work I'm speaking of. I'm quite intrigued by how little SQL is directly involved in Ruby on Rails migrations: you don't have to open up a database editor and get dirty with SQL. God knows I love SQL, but I can only do so much at once without throwing things across the room. Hopefully this clarifies my wants a little.

      meh.
Re: speeding up web development
by dragonchild (Archbishop) on Jun 24, 2008 at 13:57 UTC
    With regard to deployment, this is what I have done in the past. Everything here is scripted, and it's expected that the script will run from the box being deployed TO.
    1. There is a directory that everything is deployed to. Call it /prod. Everything happens here.
    2. There are two subdirectories - /environment and /application.
    3. /environment is the environment in which /application runs. Within it are subdirectories that have a unique name. I use the timestamp of creation in YYYYMMDDHHMISS form. You can do whatever.
    4. Within each subdirectory is a complete environment. A copy of Perl, all modules, and everything else you'd depend on /usr/bin/perl to provide. If you're paranoid enough, it should also contain its own copies of the various libraries, like libz, libjpeg, libpng, etc. You build one of these every time the environment changes.
    5. /application is the actual application deployment. Within it are subdirectories that have a unique name; I use the same scheme as for /environment. There is also a current softlink that points to the current deployment.
    6. When you deploy the application, you hardcode the environment that this specific deployment uses. Once it's deployed AND TESTED, you flip the current softlink over to it and restart the application (bounce Apache, etc).
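    A minimal sketch of the softlink flip in step 6 (paths follow the layout above; the checkout and test steps are elided):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX qw(strftime);

        my $stamp   = strftime( '%Y%m%d%H%M%S', localtime );
        my $release = "/prod/application/$stamp";

        # ... deploy and TEST the release under $release first ...

        # Flip 'current' atomically: build a new link, then rename it over the old one
        symlink( $release, '/prod/application/current.tmp' ) or die "symlink: $!";
        rename( '/prod/application/current.tmp', '/prod/application/current' )
            or die "rename: $!";

        # Bounce the app server so it picks up the new code
        system( 'apachectl', 'graceful' ) == 0 or die 'restart failed';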

    This system, as complicated as it sounds, was designed with the following in mind:

    • You never touch the system perl. Ever.
    • If something goes wrong with anything, you can rollback to last known good. This includes bad CPAN modules.
    • You know exactly when things were deployed and how.
    • You know every single dependency.
    • You can build on one machine and, if your machines are homogeneous, rsync to the others.
    • You can even deploy this over NFS.

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      This is a great plan! If you ever decide to publish the scripts supporting it, please pass me a note :)

        Better yet, why not collaborate on getting it published? Maybe refine it for public use on the way if necessary?

        meh.
        Perl::Install is 2/3 of the heavy lifting.

        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

      That's certainly thorough.

      May I ask what the point is of rebuilding everything in the environment directory? Unless I'm being naive, I see another layer of possible error there. Please enlighten me if I'm mistaken, though; this looks like a promising method!

      meh.
        First off, you only update the environment when there's a change to the environment. For example, if a module's required version goes up, or if you now need a new module, or (and this is kinda weird until you think about it) if you no longer need a module. As for why, there are several reasons:
        • To make sure that each environment version is completely self-contained.
        • To make sure that each environment version is correctly built against itself.

        But, the most important reason is that you have a "last known good" to fall back to when something breaks. This means that you don't alter an environment after it's been deployed against. Otherwise, you don't have anything stable.


        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: speeding up web development
by zby (Vicar) on Jun 23, 2008 at 21:17 UTC
    Ad. "Seems slightly redundant when using a framework that is designed to speed things up" - Catalyst tries to be a framework designed to speed things up - but it was nowhere stated that it is a complete and closed codebase. You are invited to publish your expansion code and make it more complete :)

      That is an excellent point, and it brings me to this: how can I optimize the time I spend customizing Catalyst to my own needs, so that I'm not redoing things that have already been done (and doing them in a less-tested, shoddier fashion), and actually get some work done instead of spending all day implementing sub X in the time I could have an entire site/skeleton up and going?

      I realize development always takes time, but I'd like to use what's already been done to tie what I use on a normal basis together with Catalyst, thus eliminating at least a small bit of brain hemorrhaging :-) (I liken it to LEGOs: put the blocks together, as opposed to supergluing a bunch of plastic together to make the block.)

      meh.
Re: speeding up web development
by gloryhack (Deacon) on Jun 24, 2008 at 06:57 UTC
    Regarding "But, what if the production host doesn't allow/support SVN?":

    If you're on a Linux workstation, you can use either afuse (in userspace) or shfs (in kernelspace) to mount the remote system via SSH. Then just check out to that mounted location, and away you go. I do this sort of thing a few times a week. It's also very handy for running diffs without downloading, and for doing quick edits of remote files in your local editor. I'd hate to be without this functionality.
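    As a sketch of that workflow scripted from Perl, with sshfs standing in for the afuse/shfs tools mentioned above (host and paths are made up):

        use strict;
        use warnings;

        # Mount the remote docroot over SSH, check out into it, then unmount
        system( 'sshfs', 'deploy@prod.example.com:/var/www/myapp', '/mnt/prod' ) == 0
            or die 'mount failed';
        system( 'svn', 'checkout', 'http://svn.example.com/myapp/trunk', '/mnt/prod' ) == 0
            or die 'checkout failed';
        system( 'fusermount', '-u', '/mnt/prod' ) == 0
            or die 'unmount failed';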

      Or you can svn export the files and then transfer them with scp/sftp/ftp (but there's no excuse for sending unencrypted passwords over the wire in 2008, so no ftp).
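      A rough sketch of that export-then-push approach, using Net::SFTP::Foreign as one way to script the transfer (repo URL, host, and paths are made up):

          use strict;
          use warnings;
          use Net::SFTP::Foreign;

          # Export a pristine tree (no .svn directories), then push it over SFTP
          system( 'svn', 'export', 'http://svn.example.com/myapp/trunk', 'deploy' ) == 0
              or die 'svn export failed';

          my $sftp = Net::SFTP::Foreign->new( 'prod.example.com', user => 'deploy' );
          $sftp->die_on_error('SFTP connection failed');
          $sftp->rput( 'deploy', '/var/www/myapp' );   # recursive upload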

      /J

        ++ for this.

        I don't think I'd have an issue sftping from my svn box to the destination production box. I think I could whip something up to do this for me if I got bored.

        meh.
