How to handle encodings?

by DreamT (Pilgrim)
on Mar 06, 2009 at 15:35 UTC

DreamT has asked for the wisdom of the Perl Monks concerning the following question:

Hi!
I'm building a piece of software with the usual three components:

- Perl source code
- A MySQL database
- HTML templates

All these components have their "own" character sets (Perl uses its internal character set, MySQL uses its own collation, and the HTML templates have their declared character sets). Of course we're using the same character set everywhere (currently latin1).

But suppose that we make third-party connections to other systems that use other character sets, or the HTML templates "need" to have another character set. This can lead to problems when POSTing data to the Perl source code, or when INSERTing data into the MySQL database.

One way is of course to convert the data when needed, but I'm looking for a more standardized way to handle "foreign" character sets. So, my questions are:

- How do you handle this?
- Is it a good idea to convert incoming data to Perl's internal format when processing it, and convert back when printing/storing the processed data?
- Does the format of the file containing the Perl code itself matter?
- I've looked at the Encode::Guess module; is this an option for deciding the format of the incoming data?
My concluding question: What is the best way to deal with different character sets in a system?

Replies are listed 'Best First'.
Re: How to handle encodings?
by moritz (Cardinal) on Mar 06, 2009 at 15:46 UTC
    - How do you handle this?

    I keep everything in UTF-8, since it's universal and understood by nearly every program.

    - Is it a good idea to convert incoming data to Perl's internal format when processing it, and convert back when printing/storing the processed data?

    Yes, it's the way to go IMHO. You can use IO layers and Encode to do it for you.
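
    A minimal sketch of both approaches, assuming a Latin-1 input file and UTF-8 output; the file names and sample bytes are made up:

        use strict;
        use warnings;
        use Encode qw(decode encode);

        # Let PerlIO layers recode for you: bytes are decoded to Perl's
        # internal character strings on read and encoded again on write.
        open my $in,  '<:encoding(ISO-8859-1)', 'input.txt'  or die $!;
        open my $out, '>:encoding(UTF-8)',      'output.txt' or die $!;
        print {$out} $_ while <$in>;
        close $in;
        close $out;

        # Or recode by hand with Encode, e.g. for data arriving over a socket:
        my $octets = "Fr\xE9d\xE9ric";               # Latin-1 bytes
        my $chars  = decode('ISO-8859-1', $octets);  # now a character string
        my $utf8   = encode('UTF-8', $chars);        # UTF-8 bytes to print or store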

    - Does the format of the file containing the Perl code itself matter?

    If there are string constants in that file, and you concatenate them with the data, it does matter. So you should decode these string constants (or keep the Perl files in UTF-8 and add use utf8;, which does the decoding for you).
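
    A tiny illustration of that, assuming the source file itself is saved as UTF-8:

        use strict;
        use warnings;
        use utf8;                              # string literals are decoded as UTF-8
        use open qw(:std :encoding(UTF-8));    # standard handles expect UTF-8 too

        my $greeting = 'Grüße';                # a 5-character string, not 7 raw bytes
        print length($greeting), "\n";         # prints 5
        print "$greeting\n";                   # displays correctly on a UTF-8 terminal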

    I've looked at the Encode::Guess module; is this an option for deciding the format of the incoming data?

    No. Guessing the encoding is not reliable, and you should avoid it whenever possible. Make sure that all your interfaces have a way to specify the encoding.
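
    For the POST scenario in the original question, one hedged sketch of "specifying the encoding" is to honour the charset declared in the Content-Type header; the fallback default below is just an assumption:

        use strict;
        use warnings;
        use Encode qw(decode);

        # Read the raw POST body from STDIN, as a plain CGI handler would.
        my $raw_body = do { local $/; <STDIN> };

        # Prefer the charset declared in the Content-Type header over guessing;
        # fall back to a documented project default (an assumption here).
        my ($charset) = ($ENV{CONTENT_TYPE} // '') =~ /charset="?([-\w]+)/i;
        $charset ||= 'ISO-8859-1';
        my $body = decode($charset, $raw_body);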

    My concluding question: What is the best way to deal with different character sets in a system?

    Keep all data internally in a consistent format, and recode at the boundary between what you consider "internal" and "external". The internal encoding should be a Unicode encoding (like UTF-8 or UTF-16LE/BE) so that you won't have any information loss during recoding. Unicode aims for round-trip conversions between non-Unicode charsets and Unicode, and for all common encodings it pretty much succeeds.
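
    For the MySQL side of the question, a sketch of what that boundary can look like with DBD::mysql; the DSN, credentials and table are placeholders:

        use strict;
        use warnings;
        use utf8;
        use DBI;

        # mysql_enable_utf8 makes DBD::mysql exchange UTF-8 with the server,
        # so Perl's character strings are recoded at the database boundary.
        my $dbh = DBI->connect(
            'DBI:mysql:database=mydb;host=localhost',   # placeholder DSN
            'user', 'secret',                           # placeholder credentials
            { RaiseError => 1, mysql_enable_utf8 => 1 },
        );

        my $sth = $dbh->prepare('INSERT INTO customers (name) VALUES (?)');
        $sth->execute('Grüße');                         # stored as UTF-8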

        Definitely worth a read. It inspired me to write this article with a similar intention, but more focused on Perl programming.
      I've looked at the Encode::Guess module; is this an option for deciding the format of the incoming data?

      No. Guessing the encoding is not reliable, and you should avoid it whenever possible. Make sure that all your interfaces have a way to specify the encoding.

      I wouldn't be so harsh on Encode::Guess. It definitely can be useful when applied correctly to the right problems, and I think its man page does an okay job of saying what its strengths and weaknesses are.

      I agree that using it as a "do-all" for every multi-encoding task would be wrong; ideally, all your inputs will provide some sort of declarative or unambiguous evidence about the encoding being used. But for inputs that don't, you may need all the help you can get (including "offline" research and investigation to understand the data) in order to figure out what encoding the data is using, and Encode::Guess can help in such cases.

      Once you understand your data well enough, and you understand how Encode::Guess handles it, you may actually find it worthwhile to use the module in a production pipeline, to route data according to what the module can tell you about it (in the absence of any other information) -- but doing so without thorough testing would be a mistake.
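
      For instance, a minimal sketch of that kind of use, with the suspect list constrained the way the module's documentation recommends and the failure case handled explicitly:

          use strict;
          use warnings;
          use Encode::Guess qw(euc-jp shiftjis 7bit-jis);  # suspects, as in the module's docs

          my $octets  = do { local $/; <STDIN> };          # raw bytes of unknown encoding
          my $decoder = Encode::Guess->guess($octets);

          if (ref $decoder) {                              # success: an encoding object
              my $chars = $decoder->decode($octets);
              print 'Guessed encoding: ', $decoder->name, "\n";
          }
          else {                                           # failure: a diagnostic string
              warn "Cannot guess the encoding: $decoder\n";
          }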

        A typical place where Encode::Guess falls down (through no fault of its own) is in differentiating one variant of iso-8859 from another.

        Who's to say whether chr(200) is "Č" (ISO-8859-2) or "Θ" (ISO-8859-7)?

        Without prior knowledge, you're up the creek without a paddle. So I agree wholeheartedly with moritz's suggestion of converting everything to UTF-8 while you still know what encoding it is in.
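
        A two-line demonstration makes that concrete; the same byte is perfectly valid in both charsets:

            use strict;
            use warnings;
            use Encode qw(decode);
            binmode STDOUT, ':encoding(UTF-8)';

            my $byte = chr(200);                       # the single byte 0xC8
            print decode('iso-8859-2', $byte), "\n";   # prints Č
            print decode('iso-8859-7', $byte), "\n";   # prints Θ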

        (graff - I know you're too wise a monk to have been suggesting otherwise, but I wanted to provide a simple example of just how limited Encode::Guess can be.)

        Clint

Re: How to handle encodings?
by webshop (Acolyte) on Mar 09, 2009 at 10:39 UTC
    Another important issue when working with different character sets is to avoid double encoding, i.e. encoding a string to UTF-8 that is already UTF-8 encoded, and to avoid mixed encodings within the same string or document; in those cases bad characters will be displayed and guessing the right encoding becomes impossible.
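
    A short sketch of how that typically creeps in, assuming Encode and a UTF-8-encoded source file; the cure is to decode exactly once at input and encode exactly once at output:

        use strict;
        use warnings;
        use utf8;                            # literals below are character data
        use Encode qw(encode decode);

        my $chars = 'Grüße';                 # internal character string
        my $bytes = encode('UTF-8', $chars); # correct: encode once, at output

        # The classic mistake: treating already-encoded bytes as Latin-1
        # characters and encoding them a second time.
        my $double = encode('UTF-8', decode('ISO-8859-1', $bytes));
        # $double now displays as "GrÃ¼ÃŸe" in a UTF-8 viewer: telltale mojibake.
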
      Ok, thank you for all your answers! I will look into using utf-8 as much as possible.
