PerlMonks |
Mesmerizing Monks,
I use Template::Toolkit to serve dynamically generated web pages. My basic set-up is a MySQL db, a bunch of T::T templates, and a Perl script to extract the data, populate the T::T variables, and spit out the appropriate page. I'm about to start a new website and plan to use T::T and a db again, BUT this time search engine ranking will be important, whereas it never has been for the private, internal sites I usually work on. So I'm thinking about how using T::T might affect this.

For example, if a page is called via a form that uses POST to my_central_script.pl, then every page on the site has the same url, www.my_domain.com/cgi-bin/my_central_script.pl. Not very search engine friendly, not to mention that I doubt spiders are going to submit my forms to find these pages. This can be corrected by using either GET for forms or plain links, so that http://www.my_domain.com/cgi-bin/my_tt_script.pl?a_id=xxx&b_id=yyy is the url of that page. Question is, what does this do to search engine indexing, if anything? Do the major search engines like this kind of url?

Now, the data in the db will change only very rarely, and there will be only a few hundred (ok, under 1,000 anyway) possible combinations of data. That is, under 1,000 possible distinct web pages. So I'm wondering about the urls' impact on that. For this site, I could, in theory, use T::T as a generator of pre-formatted static html pages: enter the data into the db, make the templates, write a script that cycles through all possible combinations of data, and store the output pages as static html files. Perhaps the search engines prefer nice, ordinary urls like http://www.my_domain.com/aid_xxx_bid_yyy.htm, with links on them to the other pages.
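To make the static-generation idea concrete, here is a minimal sketch of such a script. The template name (page.tt), the output directory (htdocs/), the filename scheme, and the hard-coded id lists are all assumptions for illustration; in practice the ids would come out of the MySQL db via DBI rather than being listed in the script.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical id lists; in a real script these would be fetched from
# the db (e.g. with DBI) rather than hard-coded.
my @a_ids = (1, 2, 3);
my @b_ids = (10, 20);

# Map an (a_id, b_id) pair to a spider-friendly static filename,
# e.g. (3, 7) becomes "aid_3_bid_7.htm".
sub static_filename {
    my ($a_id, $b_id) = @_;
    return "aid_${a_id}_bid_${b_id}.htm";
}

sub generate_all {
    my ($a_ids, $b_ids) = @_;
    require Template;    # Template Toolkit, loaded at run time
    my $tt = Template->new({ INCLUDE_PATH => 'templates' })
        or die Template->error;
    for my $a_id (@$a_ids) {
        for my $b_id (@$b_ids) {
            my %vars = (
                a_id  => $a_id,
                b_id  => $b_id,
                # ordinary links to the sibling pages, so spiders can
                # crawl the whole site from any one page
                links => [ map  { static_filename($a_id, $_) }
                           grep { $_ != $b_id } @$b_ids ],
            );
            # The third argument to process() sends output to a file
            # instead of STDOUT.
            $tt->process('page.tt', \%vars,
                         'htdocs/' . static_filename($a_id, $b_id))
                or die $tt->error;
        }
    }
}

# Run the full generation only when explicitly asked for, so the script
# can be loaded and poked at before the templates and db are in place.
generate_all(\@a_ids, \@b_ids) if $ENV{GENERATE_PAGES};
```

Re-running this after each (rare) db change would regenerate all of the under-1,000 pages, and the server then hands spiders plain .htm files with ordinary urls.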
So, what is the best way to use T::T when SEO is important? Thanks.

Time flies like an arrow. Fruit flies like a banana.