Re: So, what *is* the best way to deliver dynamic content nowadays?
by arhuman (Vicar) on Jan 04, 2002 at 20:32 UTC
- To monolith-ize or not?
It's a matter of taste, but I personally like the (usual) idea of several scripts using common libraries.
(database access, presentation, system interaction...)
By the way, I see no additional difficulty in managing cookies/sessions via several scripts instead of one.
(did I miss something?)
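As a minimal sketch of that shared-library idea (the module name, cookie name, and id scheme are all made up for illustration), every script can pull its session handling from one common module:

```perl
# MySite/Session.pm -- hypothetical shared module used by every CGI script
package MySite::Session;
use strict;
use warnings;

# Read the session id out of the raw Cookie header, or mint a new one.
sub session_id {
    my $cookies = $ENV{HTTP_COOKIE} || '';
    return $1 if $cookies =~ /\bsession_id=([\w-]+)/;
    return sprintf '%08x%08x', time, $$;   # fresh id for first-time visitors
}

# Build a Set-Cookie header line so the id follows the user across scripts.
sub set_cookie_header {
    my ($id) = @_;
    return "Set-Cookie: session_id=$id; path=/\r\n";
}

1;
```

Each script then just calls MySite::Session::session_id(), so the session logic is identical whether you have one monolithic script or twenty small ones.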
- To XML-ize or not?
By making a clear distinction between the data and the presentation, XML is a "must have" nowadays; furthermore, many emerging technologies use it as an input/output format, or at least as a pivot format.
One advantage I grant XML over standard templating systems is that XML provides you with more tools (parsers, checkers, transformers...) than any templating system.
It would be cool to use a DTD to validate (for a particular browser) the HTML code
produced via XSLT transformation from XML...
The question is more "how to do it?"
The article gives an interesting way; AxKit may be a good option too.
Anyway, I'm sure some people here will give you the best advice on XML.
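If you do go the XSLT route, the Perl side can be quite small. Here is a sketch assuming the XML::LibXML and XML::LibXSLT modules are installed; the document and stylesheet are toy inline strings, where a real setup would load the stylesheet from disk once and reuse the parsed object:

```perl
use strict;
use warnings;
use XML::LibXML;
use XML::LibXSLT;

my $xml = '<page><title>Hello</title></page>';

# A toy stylesheet: turn <title> into an <h1>.
my $xsl = <<'XSL';
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html><body><h1><xsl:value-of select="title"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>
XSL

my $parser     = XML::LibXML->new;
my $xslt       = XML::LibXSLT->new;

# Parse the stylesheet once; reuse $stylesheet for every request.
my $stylesheet = $xslt->parse_stylesheet( $parser->parse_string($xsl) );

my $result = $stylesheet->transform( $parser->parse_string($xml) );
print $stylesheet->output_string($result);
```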
- (not quite as perl-ish) MySQL or Postgres?
Postgres! That is, if your goal is to do things the right way.
Even though (or perhaps because?) I've been using MySQL for two years at the office, I'd recommend MySQL only when speed is REALLY needed.
Foreign keys, triggers, views... make a DBA's life so much easier.
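To make those features concrete, here is a hypothetical schema sketch (table and column names are invented) showing a foreign key and a view, two of the things MySQL lacked at the time; in practice you would feed these statements to Postgres through a DBI handle with $dbh->do($_) for @ddl:

```perl
use strict;
use warnings;

# Hypothetical DDL: a foreign key and a view, features Postgres gives you
# out of the box.  Run each statement via DBI: $dbh->do($_) for @ddl;
my @ddl = (
    q{CREATE TABLE authors (
          id   SERIAL PRIMARY KEY,
          name TEXT NOT NULL
      )},
    q{CREATE TABLE posts (
          id        SERIAL PRIMARY KEY,
          author_id INTEGER NOT NULL REFERENCES authors(id),
          body      TEXT
      )},
    q{CREATE VIEW post_counts AS
          SELECT a.name, COUNT(p.id) AS posts
          FROM authors a LEFT JOIN posts p ON p.author_id = a.id
          GROUP BY a.name},
);

print "$_;\n\n" for @ddl;
```

The REFERENCES clause means the database itself refuses an orphaned post, instead of every script having to re-check it.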
"Only Bad Coders Code Badly In Perl" (OBC2BIP)
Delivering Dynamic Content
by dbrunton (Novice) on Jan 05, 2002 at 07:23 UTC
To monolith-ize or not?
The short answer: yes. Some parts of our development want to be monolithic,
and some don't. It's like Perl: some parts of Perl want to be OO, and some
parts simply do not. mod_perl was a piece
of this answer for our project (it's pretty monolithic), but it was too hard to
make it spit out HTML all by itself. HTML::Mason pretty much solved that problem
for us, and then we were left with the single remaining problem of how to
represent the underlying data. Which is what your next two questions were
about, so I'll stop at that for a minute.
To XML-ize or not?
This was an easy one for me, and I hope I can infect you with the zeal of the
newly converted. If you can find a way to have your database do the things
databases tend to be good at (e.g. indexing, search, etc.), and to keep XML
around because it lets you have arbitrarily deeply nested data structures and
other such niceties that Perl REALLY LIKES, this seems like an ideal
solution to me.
After playing with an early version of Kwin Kramer's XML::Comma, I liked it so much that I tracked
him down and took a solemn oath to forever uphold the honor of XML::Comma and
to never use PHP again. Or something. It really is insanely cool.
Check out this post or just read my short summary:
XML::Comma uses XML for two different purposes. First, it stores documents in
untyped XML. Second, it uses a special flavor of XML to specify a single API
for indexing (I'm using MySQL), storage (XML on the filesystem in my
case), inverted indexing (MySQL again in my case), compression (gzip for me),
and encryption (HMAC plus symmetric-key encryption here), and it is capable of
extending this interface to everything but washing your dirty undies.
What this means for me is that when I use the Comma API to create a document
a la:
use XML::Comma;

my $doc = XML::Comma::Doc->new( type => 'Foo' );
$doc->name()->first('David');
$doc->name()->last('Brunton');
$doc->phone()->area_code('509');
$doc->phone()->num('5551212');
$doc->store('main');
That last $doc->store() gives me (based upon my XML DocumentDefinition) a file
on the filesystem that is gzipped, encrypted, HMAC'ed, sorted, and generally
kept track of. It gives me corresponding fields in my database that are typed
and indexed according to my specifications. It creates an inverted index for
my search engine. And it does it many thousands of times every second even on
my desktop machine, not to mention the quad Athlons we deploy on.
I highly recommend checking it out. I did. But be careful. I started out
just playing with it in my spare time, and now it's become my full time job to
play with it ;)
MySQL or Postgres?
I use Comma with MySQL. We've done some benchmarking of Postgres, but haven't
figured out a good reason (performance or features) to switch yet. Besides, I
think XML::Comma is still Postgres/MySQL agnostic, so barring the use of
any of the arcane indexing stuff, I think I could probably switch without much
effort.
Re: So, what *is* the best way to deliver dynamic content nowadays?
by lachoy (Parson) on Jan 04, 2002 at 23:37 UTC
This sounds like an excellent fit for OpenInteract. (I'm a little biased, but still...) It provides much of the infrastructure you need for web applications -- authentication/authorization, security, centralized URL->action-mapping, easy data access (DBI, LDAP, ...), presentation framework using Template Toolkit, etc.
Since you're using a relatively low-powered machine, it would probably be good to spend a few minutes looking at the common mod_perl usage of lightweight proxy servers sitting in front of heavyweight mod_perl servers. This way you don't have to start many mod_perl servers that eat up your memory.
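That proxy arrangement is usually a handful of mod_proxy directives in the slim front-end server (ports and paths below are placeholders, not taken from any particular setup):

```apache
# Front-end httpd: no mod_perl loaded, serves static files itself
# and hands dynamic URLs to the heavyweight back-end on port 8080.
ProxyPass        /app/ http://localhost:8080/app/
ProxyPassReverse /app/ http://localhost:8080/app/
```

The big mod_perl children then spend their time generating pages instead of spoon-feeding bytes to slow clients.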
As for databases, PostgreSQL is more featureful than MySQL, easy to set up (as long as you're comfortable with ./configure; make; make install), updated often, low maintenance, and well supported with not only DBD::Pg but also ODBC and JDBC drivers. I don't even use MySQL anymore unless someone specifically requests it.
Chris
M-x auto-bs-mode
Re: So, what *is* the best way to deliver dynamic content nowadays?
by perrin (Chancellor) on Jan 04, 2002 at 23:37 UTC
I don't think there's much memory tradeoff between CGI::Application and separate modules. You should be preloading all of this in startup.pl so it gets shared. Personally, I like to separate functionality into separate modules according to major site functions. For example, I might have a module for updating your user settings, and another one for browsing one of the data types published on the site. I also model the data objects as separate classes, so these modules I'm talking about are really just the "controller" part of a model-view-controller design.
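A stripped-down sketch of that controller idea (all names are hypothetical; in a real mod_perl setup each controller would be a module preloaded from startup.pl, with the model classes loaded the same way so the compiled code is shared across children):

```perl
# MySite::Control::UserSettings -- one controller per major site function.
package MySite::Control::UserSettings;
use strict;
use warnings;

# Model classes would be use'd here and preloaded at server startup, e.g.:
# use MySite::User;   # hypothetical model class

my %dispatch = (
    show   => \&show,
    update => \&update,
);

# Map an action name to a handler; this is the "controller" of MVC.
sub handler {
    my ($action, %args) = @_;
    my $code = $dispatch{$action} or return error_page("unknown action");
    return $code->(%args);
}

sub show   { my %args = @_; return "settings page for $args{user}" }
sub update { my %args = @_; return "updated $args{user}" }
sub error_page { my ($msg) = @_; return "error: $msg" }

1;
```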
As for XML, I have found uses for it in data exchange but not in page generation. I just can't see the compelling argument for it. TT2 templates are easier to write than XSL. It's trivial to write a template that spits out an XML version of your data for a feed. Validation - well, TT will not compile an invalid template either. I can't see a good reason to use the much slower XML approach, except maybe to plump up your resume.
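The "XML feed from a template" point really is about that trivial. A sketch, assuming the Template Toolkit module is installed (the data and template are made up):

```perl
use strict;
use warnings;
use Template;

# The same data that fills an HTML page can fill an XML feed template.
my $template = <<'TT';
<items>
[% FOREACH item IN items -%]
  <item><title>[% item.title %]</title></item>
[% END -%]
</items>
TT

my $tt   = Template->new;
my $vars = { items => [ { title => 'First post' }, { title => 'Second' } ] };
$tt->process( \$template, $vars, \my $output ) or die $tt->error;
print $output;
```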
Hi Perrin,
Have you benchmarked recent XSLT solutions for this preconceived "slowness"? They are very, very fast these days...
As far as the advantages, I'll paraphrase Robin Berjon on this... The advantage he sees in using XSLT is that it's a very well-thought-out system, with a lot of history (DSSSL) behind it. With XSLT you aren't constrained by flat structures like many templating systems are (and by that I include hierarchies where you have to "manually" iterate over the tree). With XSLT you just declare what the bits of your data are meant to look like, and run it.

But it's more than that... With XSLT you are not creating text. That is what every single other Perl templating solution does: it takes some data and a template and generates text. This does not lead to efficient pipelining (a technique that most people in the Perl world aren't really familiar with, because of this weakness, but a very useful technique nonetheless). With XSLT you take a data structure, and a template which describes a new data structure, and you get a data structure at the end of it. Not text (and not really XML either - you have to think outside of that box :-)
Oh, and one other benefit: multiple vendors.
Matt (with a v.fat C.V.) ;-)
I haven't benched the latest round of XSLTers, but the performance of XSLT has historically been pretty abysmal. Most of the time people (read "Cocoon") say they have solved the problem by caching the output, but not all output can be cached. I doubt that XSLT can be very fast when actually parsing XML (for the data and the style sheet), so presumably the biggest speed wins would come from caching the stylesheet as Perl code, and from generating the data in a Perl data structure that the XSLT processor understands instead of actual XML (thus skipping the parsing step). Are we there yet? And if we are, aren't we kind of re-inventing the wheel? Perl doesn't need XML to make generic data structures.
Your point about not being constrained to a linear mode is actually one of the things I hold against XSLT, because all of the HTML monkeys I know like to think about page design in a linear mode. They don't want to specify a style for the product price; they want to write some HTML and say "the price goes here." It's just more intuitive to non-programmer types.
I can see value in pipelining for working on data, but I would do all of that data mangling before I get to a page generation stage, so the template tool itself doesn't need to support it.
Anyway, I have happily used XML for other things and I don't usually take stabs at XSLT, but since he asked for opinions... I try to keep my preferences from coloring my templating guide too much (which now desperately needs an update, with new versions of AxKit, TT, Apache::ASP, etc. out).
Re: So, what *is* the best way to deliver dynamic content nowadays?
by sparkyichi (Deacon) on Jan 04, 2002 at 23:21 UTC
Re: So, what *is* the best way to deliver dynamic content nowadays?
by markjugg (Curate) on Jan 06, 2002 at 02:39 UTC
My two cents:
I use CGI::Application in combination with Postgres and I'm very happy with them both. I like the organization that CGI::App encourages. In that framework, it's easy to create a single code line that powers multiple sites at the same time with different template sets and parameters. Because a maximum amount of code is in modules, potential re-use is maximized.
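That "one code line, many sites" trick boils down to a tiny instance script per site. Here is a self-contained toy stand-in for the pattern (MySite::App and its behavior are invented for illustration; in real CGI::Application the constructor takes the same TMPL_PATH and PARAMS arguments and run() dispatches to a run mode):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy stand-in for the CGI::Application pattern: all real code lives in
# one shared class; each site supplies only its own parameters.
package MySite::App;   # hypothetical shared application class

sub new {
    my ($class, %params) = @_;
    return bless { %params }, $class;
}

sub run {
    my ($self) = @_;
    # In CGI::Application this would dispatch to a run mode and fill a
    # template found under $self->{TMPL_PATH}; here we just render a line.
    return "Welcome to $self->{PARAMS}{site_name}\n";
}

package main;

# site_a.cgi and site_b.cgi differ only in these few arguments:
print MySite::App->new(
    TMPL_PATH => '/var/www/site_a/templates',
    PARAMS    => { site_name => 'Site A' },
)->run;
```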
I like Postgres because it's feature-rich. I've also used MySQL, and found that it's often much faster to code a solution using Postgres. The one thing I hear in favor of MySQL is a "speed difference". In my real-world experience of using Postgres for dozens of projects at a website development firm, the speed has always been good. I'd throw mod_perl at a project before I would switch to MySQL. :)
You can see an example of CGI::App and Postgres in action in my Cascade project, a content management system.
-mark
Re: So, what *is* the best way to deliver dynamic content nowadays?
by mpeppler (Vicar) on Jan 04, 2002 at 21:53 UTC
MySQL or Postgres...
I'd use Postgres (well - actually I'd use the free Sybase 11.0.3.3#6 release if this is on linux :-)
Michael