http://qs321.pair.com?node_id=1095805


in reply to Re^7: Speeds vs functionality (utf8 csv)
in thread Speeds vs functionality

Heck, if I were implementing a CSV parsing module, I'd probably have separate code for the case of single-character separators, quotes, and escapes, because the reasonable way to implement CSV parsing efficiently is rather different when "quote" is a single character than when it is more than one character.
So I see no problem having a whole separate module for dealing with multi-character quotes. Use the standard module if you don't have to deal with such. Use the other module when you do. Each module is simpler because the multi-character one doesn't have to also try to include code to maximize efficiency for when a quote is a single character.

Do you mean character or byte?

I think you're using "multi-character" when what you actually mean is a single character (i.e., a single Unicode code point) that is encoded using multiple bytes in any one of the Unicode character encoding schemes:  UTF-8, UTF-16, UTF-16BE, UTF-16LE, UTF-32, UTF-32BE, and UTF-32LE. I don't think you truly mean a user-perceived character that consists of two or more Unicode code points (e.g., g̈ — U+0067 LATIN SMALL LETTER G + U+0308 COMBINING DIAERESIS).

In my Academy Award Best Picture winners example, every CSV metacharacter is a single character. The field separator character is 🎬 (U+1F3AC CLAPPER BOARD), and both the string delimiter character and the string delimiter escape character are 🎥 (U+1F3A5 MOVIE CAMERA). These two characters are, of course, encoded using multiple bytes in every one of the Unicode character encoding schemes. In UTF-8, they're encoded using four bytes. In UTF-16, they're also encoded using four bytes (two surrogate code points). And in UTF-32, they're encoded using four bytes, naturally.
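Those byte counts are easy to double-check with the core Encode module (a quick sketch):

use strict;
use warnings;
use Encode qw(encode);

my $clapper = "\x{1F3AC}";    # one code point: CLAPPER BOARD
printf "%-8s %d bytes\n", $_, length encode($_, $clapper)
    for qw(UTF-8 UTF-16BE UTF-32BE);
# UTF-8    4 bytes
# UTF-16BE 4 bytes (a surrogate pair)
# UTF-32BE 4 bytes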

I'd like to see a truly Unicode-conformant CSV parser/generator module in Perl 5. It would leverage Perl's existing Unicode and character encoding capabilities; it wouldn't roll its own encoding handling. It would parse already-decoded CSV records. The input to the finite-state machine would be Unicode code points, not bytes. (More ambitiously, the input to the FSM might be any arbitrary user-perceived character, or extended grapheme cluster.)
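To sketch the idea (an illustration built on the winners example above, not a finished module):

use strict;
use warnings;

my $sep   = "\x{1F3AC}";    # U+1F3AC CLAPPER BOARD: field separator
my $quote = "\x{1F3A5}";    # U+1F3A5 MOVIE CAMERA: delimiter and its own escape

# Parse one already-decoded record.  The regex engine walks code points,
# never bytes, so the four-bytes-in-UTF-8 metacharacters need no special
# handling.  (Real code would quotemeta() user-supplied metacharacters.)
sub parse_record {
    my ($line) = @_;
    my @fields;
    while ($line =~ /\G(?:$quote((?:$quote$quote|[^$quote])*)$quote|([^$sep]*))($sep|\z)/g) {
        my ($quoted, $bare, $end) = ($1, $2, $3);
        my $field = defined $quoted ? $quoted : $bare;
        $field =~ s/$quote$quote/$quote/g if defined $quoted;   # unescape doubled delimiters
        push @fields, $field;
        last if $end eq '';                                     # consumed the end of the record
    }
    return @fields;
}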

Why not?

Re^9: Speeds vs functionality (utf8 csv)
by tye (Sage) on Jul 31, 2014 at 21:12 UTC

    I was never considering single-byte anything. Writing code in Perl means that I don't have to (unlike writing code in XS). Yes, I actually meant what I said. Yes, I realized that your example was using multi-byte single-character tokens.

    The reason that single-character vs. multi-character (usually) leads to different approaches is that [^"\\]+ as part of a regex works fine for those single-character quote and escape values (respectively) but isn't even close to what you have to do if either of those is multi-character.
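    To make the contrast concrete (sketched, untested): a single-character quote and escape let one negated character class gobble a whole run of ordinary characters, while a multi-character quote forces a lookahead test at every position:

    # Single-character quote (") and escape (\): one negated character
    # class consumes an entire run of ordinary characters in one step.
    my $plain_single = qr/[^"\\]+/;

    # Multi-character quote, say the string <<Q>>: no character class can
    # express "not the start of <<Q>>", so every position pays for a lookahead.
    my $q = quotemeta '<<Q>>';
    my $plain_multi = qr/(?:(?!$q).)+/s;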

    And you are quite wrong about:

    One glance at the source code and it's obvious the author doesn't mean single character; he means single byte.

    For one, the author of Text::xSV didn't have to think about multi-byte characters. Their module is written in Perl, so unless they do something moderately strange or stupid, multi-byte characters "just work" (provided the user of the module does the little bit of extra work to ensure that Perl has/will properly decode the strings/streams being given to the module).

    Looking at the code for Text::xSV in some detail, I see that 90% of the uses of the separator character would work completely fine even with a separator composed of more than one multi-byte character. There is one important place where the code would break for a multi-character separator (but that, indeed, continues to work for a separator that is a single multi-byte character):

    my $start_field_ms = qr/\G([^"$q_sep]*)/;

    Now, fixing the unfortunate hard-coding of the quote character is probably quite a simple task. And that would probably be sufficient to make the module work fine on multi-byte quote characters. Certainly much easier than trying to get multi-byte character support into a much more complex XS module.
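    Something along these lines (sketched, untested) would do it; note that it still assumes each metacharacter is a single, possibly multi-byte, character:

    my $q_quote = quotemeta $quote;                    # parameterize instead of hard-coding "
    my $start_field_ms = qr/\G([^$q_quote$q_sep]*)/;   # still one character class, still fast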

    Why not?

    Because you haven't done the tiny bit of work to fix Text::xSV? Or the small amount of work to write a simple CSV parser in Perl?

    No matter. I'm almost done writing my new CSV module.

    - tye        

      For one, the author of Text::xSV didn't have to think about multi-byte characters.

      Technically true, but he did have to think about providing a means of supplying decoded input. I don't see any.

      As a result, the separator can only be in U+0000..U+007F for UTF-8 files (assuming the claim that it only supports a one-character separator is correct), and it can't handle UTF-16LE files containing characters in the U+0Axx range (whose encoding includes a 0x0A byte that line-based reading would mistake for a line feed), etc.
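      For instance (a quick sketch using the core Encode module):

      use Encode qw(encode);

      my $bytes = encode('UTF-16LE', "\x{0A3F}");   # a character in the U+0Axx block
      # $bytes is "\x3F\x0A"; code reading the raw file line by line would
      # take that 0x0A byte for a line ending and split the character.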

        Yeah, fixing the module to allow a file handle to be given instead of just a file name is quite in line with the trivial work that I noted might be required.

        Though, I suspect that Perl provides a way to declare a default encoding for all file handles, perhaps related to "locale" settings. So I'm not convinced that your objection is even technically correct. (Though, if Perl does not provide such a feature, perhaps you should look into providing one, IMHO. :)
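        (For the record, Perl offers two such knobs: the open pragma, which is lexically scoped and so would not reach into Text::xSV's own open calls, and the -C switch / PERL_UNICODE environment variable, which set a process-wide default layer and would reach them:)

        use open ':encoding(UTF-8)';   # default layers, but only in this lexical scope

        # Process-wide instead: make UTF-8 the default PerlIO layer for
        # every open(), including those inside modules:
        #   perl -CD script.pl
        #   PERL_UNICODE=D perl script.pl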

        I'm actually a bit surprised that open does not already support (according to my recent scanning of the documentation):

        open my $fh, '<:encoding(UTF-8) foo.csv'

        which would also have been a route that worked with the unchanged Text::xSV.
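        (The three-argument form does take layers, for what it's worth, though it hands you a handle rather than a name you could pass through unchanged:)

        open my $fh, '<:encoding(UTF-8)', 'foo.csv' or die "foo.csv: $!";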

        but he did have to think about providing a means of supplying decoded input

        No, the author didn't have to think about that. The author just needed to allow a file handle to be given, even if the reason for allowing such had nothing to do with thinking about decoded input. I very often support taking a filehandle, not just a filename, and very rarely is that because I had thought about encodings.

        - tye