diego_de_lima has asked for the wisdom of the Perl Monks concerning the following question:
Hi Monks!
I'm probably facing a simple problem that many of you have already solved a long time ago: how to store and read the many translations of a piece of software.
Our application is a Web 2.0 application running under Linux/Apache/mod_perl/Postgres, with BerkeleyDB for caching etc. It has about 1000 different expressions (from 10 to 200 characters each), and we are planning to add multiple-language support (each user who logs in can choose their preferred language, just like phpPgAdmin).
So my question is: what is the best way to store all this data?
My first choice was to keep it in simple .pm files, in hashref structures, which is the fastest way to access the data. But holding multiple languages simultaneously under mod_perl would increase memory consumption: each child would keep a hash of thousands of expressions in memory, when each request may use only a few of them.
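A minimal sketch of this first approach, one .pm file per language; the package name and message IDs here are hypothetical, chosen only for illustration:

```perl
# One module per language, each holding a hashref that maps message
# IDs to translated strings. Loading the module pulls the whole
# lexicon into memory -- fast lookups, but under mod_perl every child
# process keeps a full copy per loaded language.
# (Package name and keys below are made up for the example.)
package MyApp::L10N::pt_br;
use strict;
use warnings;

our $LEXICON = {
    'login.welcome'  => 'Bem-vindo',
    'login.password' => 'Senha',
};

sub lexicon { return $LEXICON }

1;

# Lookup at request time:
package main;
my $msg = MyApp::L10N::pt_br::lexicon()->{'login.welcome'};
```

Lookups are plain hash accesses, so this is as fast as it gets; the cost is resident memory, which is exactly the concern above.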
My second choice was to have it cached in a BerkeleyDB file (which I already use for other caches), tied to a hash. This would solve the Apache/mod_perl memory problem, but is it fast enough?
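A sketch of the tied-hash variant using DB_File (which ships with Perl when the Berkeley DB library is available); the file path and keys are invented for illustration:

```perl
# Translations live in a Berkeley DB file on disk, tied to a hash.
# Each lookup fetches only the requested key, so a mod_perl child
# never holds the full lexicon in memory; Berkeley DB's own page
# cache keeps hot keys fast after warm-up.
# (Path and keys below are made up for the example.)
use strict;
use warnings;
use Fcntl qw(O_RDWR O_CREAT);
use DB_File;

my $db_path = '/tmp/l10n_pt_br.db';

my %msg;
tie %msg, 'DB_File', $db_path, O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "Cannot tie $db_path: $!";

# Populated once, e.g. at deploy time:
$msg{'login.welcome'} = 'Bem-vindo';

# Per-request reads go through the tie, not a resident Perl hash:
my $greeting = $msg{'login.welcome'};

untie %msg;
```

Reads through the tie interface are slower than a native hash lookup, but for ~1000 short strings the working set fits easily in Berkeley DB's cache, so warm reads should be close to in-memory speed.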
So, before I begin working: what do you monks do to solve this? One of my two ideas, or something different?
Thanks!
Diego de Lima
2006-02-09 Retitled by planetscape, as per Monastery guidelines
Original title: 'Multi-languare web app'
Replies are listed 'Best First'.
Re: Multi-language web app
by thedoe (Monk) on Jan 23, 2006 at 17:18 UTC
Re: Multi-language web app
by ruoso (Curate) on Jan 23, 2006 at 17:18 UTC
Re: Multi-language web app
by Tanktalus (Canon) on Jan 23, 2006 at 17:31 UTC
    by diego_de_lima (Beadle) on Jan 23, 2006 at 18:29 UTC
Re: Multi-language web app
by pKai (Priest) on Jan 23, 2006 at 18:12 UTC