Re: DBD::CSV limitation versus aging/unmaintained modules

by tilly (Archbishop)
on Jan 15, 2004 at 16:13 UTC ( [id://321593] )


in reply to DBD::CSV limitation versus aging/unmaintained modules

Don't rule this a bug in DBD::CSV until you've investigated further. It is quite possible that there is a single mismatched " in the file, that DBD::CSV is interpreting it oddly, and that you lose lots of lines as a result. Try cutting the CSV file in half and seeing whether each half is handled normally, then binary-search to narrow down the data that is confusing DBD::CSV. Once you find it, decide whether DBD::CSV is actually doing the wrong thing; if it is, you can then send a useful bug report.
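To make the idea concrete, here is a rough sketch (file.csv, half.csv, and the line counts are placeholders, not anything from your setup) that reports how many records DBD::CSV sees in the first N lines of the file:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Copy the first $n physical lines of file.csv to a scratch file,
    # then ask DBD::CSV how many records it finds there.
    sub count_records {
        my ($n) = @_;
        open my $in,  '<', 'file.csv' or die "Can't read file.csv: $!";
        open my $out, '>', 'half.csv' or die "Can't write half.csv: $!";
        while (<$in>) {
            print $out $_;
            last if $. >= $n;
        }
        close $out;

        my $dbh = DBI->connect("DBI:CSV:") or die $DBI::errstr;
        $dbh->{csv_tables}{half} = { file => 'half.csv' };
        my $sth = $dbh->prepare("SELECT * FROM half");
        $sth->execute();
        my $count = 0;
        $count++ while $sth->fetch();
        return $count;   # DBD::CSV treats the first line as the header
    }

    # If the first 2000 lines give the expected count but the first 4000
    # come up short, the confusing data is in lines 2001..4000; keep halving.
    print count_records(2000), " records in the first 2000 lines\n";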

If there is a data problem, running the file through Text::xSV might spot it very quickly, since it doesn't try to DWIM around errors and gives fairly detailed error reporting.
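Something along these lines should do it (a minimal sketch, assuming the data is in file.csv with a header row); since Text::xSV refuses to guess around malformed input, simply walking the file should produce a complaint at or near the broken record:

    use Text::xSV;

    my $csv = Text::xSV->new();
    $csv->open_file("file.csv");
    $csv->read_header();

    # Walk every row; malformed input should draw a detailed complaint.
    my $rows = 0;
    $rows++ while $csv->get_row();
    print "Read $rows rows cleanly.\n";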

As for your code, you do not handle commas, returns, or double-quotes properly inside a quoted field. Doing it right is surprisingly tricky, so I would suggest using a module. Here are some options. Text::CSV is simple to use, but doesn't handle embedded returns properly. Text::CSV_XS is much faster and does handle embedded returns, but it has to be compiled. Text::xSV also handles embedded returns, but is much slower. Unfortunately no two of these have compatible APIs.
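For instance, the Text::CSV_XS route looks roughly like this (a sketch, with file.csv assumed). Its getline() reads one logical record at a time, so a quoted field containing an embedded return comes back as a single field rather than splitting the record:

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1 });
    open my $fh, '<', 'file.csv' or die "Can't read file.csv: $!";
    while (my $row = $csv->getline($fh)) {
        # each $row is one logical record, even across embedded newlines
        print scalar(@$row), " fields\n";
    }
    close $fh;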


Replies are listed 'Best First'.
Re: DBD::CSV limitation versus aging/unmaintained modules (lazy)
by tye (Sage) on Jan 15, 2004 at 18:11 UTC
    Try taking the CSV file and cutting it in halves

    Gah! One could probably find the problem with much, much less effort if one weren't pathologically opposed to the use of debuggers. q-:

    But seriously, in this case I'd build a hash of the record IDs that come back through DBD::CSV, then scan the raw file (the method that "works") and report the records where DBD::CSV starts/stops seeing records:

    #!/usr/bin/perl
    use DBI;

    my $tdbh = DBI->connect("DBI:CSV:")
        or die "Cannot connect: " . $DBI::errstr;
    my $sth = $tdbh->prepare("select * from ofphl");
    $sth->execute();

    # Build a hash of every record ID that DBI/DBD::CSV can see.
    my %dbi;
    while( my $rec = $sth->fetchrow_hashref() ) {
        $dbi{$rec->{id_field_name}}++;
    }

    # Now scan the raw file and note where DBI's view of the data
    # starts/stops matching it.
    open FILE, "< file.csv" or die "Can't read file.csv: $!\n";
    my $has = 1;
    $| = 1;
    while( <FILE> ) {
        my $id = ( split /,/ )[0];   # Assuming ID is first field
        if( !$has != !$dbi{$id} ) {
            print "DBI ", ( $has ? "stopped" : "started" ),
                " at record $id.\n";
            $has = !$has;
        }
    }

    Note that you might need to concatenate more than one field if there isn't a unique ID field.

                    - tye
      For those who don't know what tye is joking about, see Are debuggers good?.

      On the debugging suggestion, you're right that it is a faster approach. It wasn't the one that immediately came to mind for me, but that's life.

      Of course I still suspect that running through the file with Text::xSV, once, will find your error pretty fast if there is an error in the file.

      Hmm, I thought that I had already found where the problem is: record 3964, the one after which DBD::CSV stops noticing any more records.

      I just can't find what exactly DBD::CSV finds wrong about that record/line, and more importantly, why it doesn't emit any kind of warning when it finds those 'corrupted' lines.

        It is possible that Text::xSV can point it out for you. The biggest likelihood is an unmatched " causing it to read the entire rest of the file as one really, really long line. (It keeps switching between quoted and unquoted as it hits each ", always hits the end of line inside quotes, and so includes the return in a field.)
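        You can see the effect in miniature (illustration only, with made-up data, using Text::CSV_XS as the parser):

            use Text::CSV_XS;

            # One stray " and the parser sees an unterminated quoted
            # field, so the record never closes.
            my $csv = Text::CSV_XS->new({ binary => 1 });
            my $bad = qq{a,"unclosed quote,b\n};
            print $csv->parse($bad) ? "parsed ok\n" : "parse failed\n";   # parse failed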

        If you post 3 lines from the file (before that line, on that line, and the next line) I should be able to visually spot it. But before you post, verify that DBD::CSV thinks that those 3 lines are only 2 rows.

        Ok, thanks tilly. After reading your reply I finally started to see what is wrong with that line; it contains something like this:

        ,"Description description "hi world" rest of description",
        And overly-smart modules fail to parse that (not surprisingly).
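        For the record, the standard CSV escape is to double a literal " inside a quoted field; with that fix the same line parses fine (a sketch against the example above, again using Text::CSV_XS):

            use Text::CSV_XS;

            # Doubling the inner quotes is how CSV encodes a literal "
            # inside a quoted field.
            my $csv = Text::CSV_XS->new({ binary => 1 });
            my $fixed =
                qq{,"Description description ""hi world"" rest of description",};
            print $csv->parse($fixed) ? "parsed ok\n" : "parse failed\n";   # parsed ok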

        While it's easy to state that such a file is badly formatted, it was emitted from a large Oracle-based system and there's nothing I can do about it (not that I would pursue such a noble cause now that I've solved the problem on my side).

Re: Re: DBD::CSV limitation versus aging/unmaintained modules
by demerphq (Chancellor) on Jan 15, 2004 at 22:10 UTC

    Unfortunately no two of these have compatible APIs.

    Been a while since I used any of these (although for speed I used CSV_XS and for flexibility I use your xSV), but I seem to recall that CSV_XS and CSV are interface compatible. What am I forgetting?


    ---
    demerphq

      First they ignore you, then they laugh at you, then they fight you, then you win.
      -- Gandhi


      Hmm... my memory was bad. With Text::CSV you pass the line to parse(), while with Text::CSV_XS you pass the filehandle to getline(). However, looking at it I notice that Text::CSV_XS also offers the parse() interface. So I was wrong, they are compatible after all.

      However, naive code using the parse() interface will break on embedded newlines. If you want to use Text::CSV, then I'd strongly recommend writing your own getline() function and using that; a sketch follows.
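      Such a getline() could look roughly like this (a sketch assuming Text::CSV's classic parse()/fields() interface): when parse() fails, treat the failure as an embedded return and keep pulling in physical lines until the record parses:

          use Text::CSV;

          my $csv = Text::CSV->new();

          # Read one *logical* CSV record from a filehandle, allowing
          # embedded returns inside quoted fields.
          sub my_getline {
              my ($fh) = @_;
              my $line = <$fh>;
              return undef unless defined $line;
              until ($csv->parse($line)) {
                  my $more = <$fh>;
                  defined $more or die "Unparsable CSV record: $line";
                  $line .= $more;   # keep the embedded newline in the field
              }
              return [ $csv->fields() ];
          }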
