in reply to Re^6: any use of 'use locale'? (source encoding)
in thread any use of 'use locale'?

Yes, Unicode support is not as good as it could be. We are in a transition phase from ASCII, the various ISO-8859-n encodings, and several multibyte encodings to Unicode. ASCII is about 35 years older than Unicode, and the ISO-8859 family still predates it by a few years. The biggest problem with Unicode is that a char is no longer the same as a byte, which breaks at least 35 years of code. (At my current job, nobody knows Unicode. They still talk about ASCII, and will continue to do so for at least the next decade. So introducing Unicode breaks 40 to 50 years of code.) And to make things worse, all Unicode encodings except UTF-8 typically contain lots of NUL bytes, breaking even more code that expects NUL bytes only at the end of strings.
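
You can make the NUL problem visible with a few lines of Perl and the core Encode module; encode a plain ASCII string as UTF-16LE and dump the bytes:

    use strict;
    use warnings;
    use Encode;

    # "Hi" in UTF-16LE: every ASCII character gains a NUL byte,
    # so any C-style string routine thinks the string ends after "H".
    my $bytes = encode('UTF-16LE', 'Hi');
    printf '%02X ', ord($_) for split //, $bytes;
    print "\n";    # prints: 48 00 69 00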

A CPU (and all the rest of the hardware) has no problem with Unicode; it is not a hardware problem at all. So the problems start at the operating system:

Nearly all of our current and legacy file systems assume that a char and a byte are the same thing, and often they also assume that a NUL byte marks the end of a filename. So we need to change the filesystems. Very often, UTF-8 can be used instead of ASCII, leaving only the problems of byte length vs. character length and of all those old single-byte characters above 0x7F. In fact, we need to know what encoding is used for each filename, or at least for each instance of each filesystem. The operating system needs to take care of the different encodings and offer a Unicode-based API for the filesystems. Windows has ANSI ("A") and Wide ("W") APIs for this purpose, but as far as I understand, Wide means UCS-2, which is only a subset of UTF-16 and does not cover the entire Unicode set, and the ANSI API has no support for Unicode at all. I'm not quite sure whether Linux has an 8-bit-transparent API that is able to pass UTF-8, or a real UTF-8-based API.
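
The byte-length vs. character-length problem fits into a minimal sketch, again using only the core Encode module:

    use strict;
    use warnings;
    use Encode;

    my $name  = "Bj\x{F6}rn";              # "Björn": 5 characters
    my $bytes = encode('UTF-8', $name);    # 6 bytes: the "ö" takes two
    printf "chars: %d, bytes: %d\n", length($name), length($bytes);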

So, now that we can have filenames and especially directory names in Unicode, $ENV{'PATH'} must be able to contain Unicode characters, and so must some other environment variables. We need a Unicode environment, preferably with support for Unicode keys. As far as I understand, Windows offers a UCS-2 environment to "Unicode" programs and an ANSI environment to non-Unicode programs. Linux provides an 8-bit-clean environment and lets each program decide about the encoding of the environment.
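
On such an 8-bit-clean system, that decision looks roughly like the following sketch, which blindly assumes the environment happens to be UTF-8 encoded; real code would have to check the locale first, and may get no reliable answer at all:

    use strict;
    use warnings;
    use Encode;

    binmode STDOUT, ':encoding(UTF-8)';    # so decoded text prints cleanly

    # %ENV arrives as raw bytes; decoding is the program's job.
    my $path = decode('UTF-8', $ENV{'PATH'});
    print "$_\n" for split /:/, $path;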

As with the environment, the command line arguments must be able to contain Unicode. It is the same game here: Linux passes an array of NUL-terminated byte strings and lets the program decide about their encoding, while Windows offers two APIs, depending on how the program was compiled and linked.
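
The same sketch for the command line, again under the assumption that the arguments arrive as UTF-8 bytes:

    use strict;
    use warnings;
    use Encode;

    binmode STDOUT, ':encoding(UTF-8)';

    # Decode the raw argument bytes; without this, length() counts bytes.
    my @args = map { decode('UTF-8', $_) } @ARGV;
    printf "%d chars: %s\n", length($_), $_ for @args;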

None of these really basic parts of running a program is fully Unicode-capable yet. I simply do not know of any operating system that treats each and every string passed through its APIs as Unicode.

A completely different problem is text files of all kinds, starting with what we call "plain text": scripts, source code, logs, and so on. For each text file we read or write, we need to know its encoding, and current operating systems cannot give us the slightest hint about it. HTML and XML have a default encoding and may contain hints about a different encoding. So, I/O in text mode is a huge and unsolved problem.
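
In Perl, the best we can do is state the encoding explicitly with a PerlIO layer on every open. A small sketch; both filenames and the ISO-8859-1 guess are made up, because that knowledge has to come from outside:

    use strict;
    use warnings;

    # The filesystem will not tell us the encoding; we have to know it.
    open my $in,  '<:encoding(ISO-8859-1)', 'legacy.txt' or die "read: $!";
    open my $out, '>:encoding(UTF-8)',      'utf8.txt'   or die "write: $!";
    print {$out} $_ while <$in>;
    close $in;
    close $out or die "close: $!";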

Networking: IP, TCP and UDP are all about stuffing bytes into tubes and collecting those that fall out of other tubes. ;-) No problem so far. The problems arise at the higher levels, where the protocols start working with text strings. Think about the unfortunate Punycode used in DNS. Think about e-mail accounts. E-mail and HTTP at least have a charset header, which solves the problem for the content, but the headers themselves are still ASCII, and e-mail addresses are passed in the headers. Think about FTP: I don't know how FTP would or should handle Unicode filenames.
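
The core Encode module ships a MIME-Header codec that shows the workaround for the ASCII-only headers in action: non-ASCII header content gets wrapped in RFC 2047 "encoded words". A small sketch:

    use strict;
    use warnings;
    use Encode;

    # The header itself must stay pure ASCII, so the umlauts are smuggled
    # through as a base64-encoded word.
    my $subject = "Gr\x{FC}\x{DF}e aus M\x{FC}nchen";   # "Grüße aus München"
    print encode('MIME-Header', $subject), "\n";
    # prints something like: =?UTF-8?B?R3LDvMOfZSBhdXMgTcO8bmNoZW4=?=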

If we could throw away all the old and existing systems and simply start over with a new set of operating systems, file systems and network protocols, everything would be easy and simple: store a charset (and a content type) with each and every file, and use some Unicode encoding instead of ASCII.

Some newer languages took advantage of not having legacy code to support. Perl is older than Unicode and carries a big legacy of old code that has to be supported. Perl 5 is about as old as Unicode, but Unicode was simply not relevant when Perl 5 was released.

Sure, it would have been nice to have Perl 5.000 with full Unicode support, but what operating system would have been able to run it?

What operating system can currently provide perl with a complete Unicode environment (%ENV, @ARGV, STDIN, STDOUT, STDERR, open, opendir, mkdir, rmdir, unlink, ...)?
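
Perl itself already delivers a part of the answer with the -C command line switch (or the PERL_UNICODE environment variable), documented in perlrun. A sketch of how far that gets us today:

    # Invoked as:  perl -CSDA script.pl Björn
    #
    #   S - treat STDIN, STDOUT and STDERR as UTF-8
    #   D - make UTF-8 the default layer for open()
    #   A - decode @ARGV as UTF-8
    #
    # %ENV and the results of readdir() are still raw bytes, though.
    use strict;
    use warnings;
    print "argument: $_\n" for @ARGV;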

All Unicode problems are still transition problems. Your hypothetical "everything is Unicode" flag could be implemented some day, when all Perl module authors (or at least those of the major modules) have changed their code to fully support Unicode, and when Perl can use a Unicode API on all major operating systems.

Look at DBI and the various DBDs. The first DBI version with even a little Unicode support was 1.38, dated 2003-Aug-21. DBD::Oracle got some Unicode support in 1.13, dated 2003-Mar-14, but to get real Unicode support you needed at least Oracle 9, released in 2001. DBD::Pg got Unicode support with version 1.22, dated 2003-Mar-26. DBD::ODBC had no Unicode support at all until I started messing with its code and the Windows API and published a patch on 2006-Mar-03. After some discussions on dbi-users, Martin J. Evans cleaned up after me and released DBD::ODBC 1.14, dated 2007-Jul-17, with minimal Unicode support. DBD::mysql got the first parts of its Unicode support in 3.0004_1, dated 2006-May-17.
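
For completeness, a sketch of how such driver-specific Unicode switches look from the application side. The attribute names are the historical ones and differ per driver (newer releases may have renamed or retired them), and the connection data is made up:

    use strict;
    use warnings;
    use DBI;

    # mysql_enable_utf8 tells DBD::mysql to mark fetched strings as UTF-8;
    # DBD::Pg had a similar pg_enable_utf8 attribute.
    my $dbh = DBI->connect(
        'dbi:mysql:database=test;host=localhost',
        'user', 'secret',
        { RaiseError => 1, mysql_enable_utf8 => 1 },
    );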

And now, the file APIs. Perl on Windows uses the ANSI APIs for file I/O, probably because using the Unicode APIs would break lots of code, especially when it comes to command line arguments and the environment. And perhaps because, until recently, Perl still supported Windows 9x, which lacks several parts of the Unicode APIs. On other systems, there aren't even APIs through which a program could talk to the operating system in Unicode.
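
Until that changes, the program has to do the squeezing itself. A sketch, with the big assumption that the ANSI codepage is cp1252; real code would have to query the active codepage, and the name may not even be representable in it:

    use strict;
    use warnings;
    use Encode;

    # Squeeze a Unicode filename into the assumed ANSI codepage before
    # the byte-based file API ever sees it.
    my $name  = "Bj\x{F6}rn.txt";
    my $bytes = encode('cp1252', $name);
    open my $fh, '>', $bytes or die "open '$name': $!";
    print {$fh} "hello\n";
    close $fh;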

So, what can be done?

We won't be able to make a big jump forward, flip a switch and have all Unicode problems solved. But we can make small steps. Every journey begins with a single step.

Expect a few more years until Unicode has truly become universal, and a few more for all code writers to catch up. I think that the major problems at the OS and network level need to be solved first, before we can change Perl. Windows could be a good test environment, because it already has Unicode APIs.

Alexander

--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)