Well, let me take this opportunity to make a more philosophical point: in my experience, it's better to always specify a character encoding when converting from bytes to characters and back (I usually only make an exception in short scripts and/or when I know everything is ASCII). Even though it may seem a bit tedious and verbose, consider the alternative: if there is a default everywhere, then users never get used to having to choose an encoding. Much confusion has been caused by programmers not even being aware of where en- and decoding take place. For example, AFAIK early versions of the Java standard libraries made this mistake: in many places where Strings were converted to and from arrays of bytes (especially on I/O), a default encoding was used, which as far as I can remember was just the platform default, for example Latin1 (causing issues when the files were actually encoded in e.g. CP1252 or UTF-8). If you look at it this way, then maybe you can see how specifying an encoding explicitly is a form of defensive coding.
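To illustrate (a minimal Java sketch, since that's the example above; `StandardCharsets` exists since Java 7): passing the charset explicitly makes the byte round-trip behave identically on every platform, whereas the no-argument `getBytes()` silently uses the platform default.

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String s = "café";

        // Defensive: name the encoding explicitly on both conversions.
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        String back = new String(utf8, StandardCharsets.UTF_8);

        System.out.println(back.equals(s)); // round-trip is lossless
        System.out.println(utf8.length);    // 5: "é" takes two bytes in UTF-8

        // Risky: s.getBytes() with no argument uses the platform default
        // encoding, so the same code can produce different bytes on
        // different machines.
    }
}
```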
Thank you.
More philosophically, and in a few words: I no longer want to convert anything between encodings. That is a problem of the past millennium (see my other post). No more Latin1, cp999... Either Perl evolves, or it will die. And I shall die with it, as I have so many scripts in Perl...