PerlMonks  

Re^2: DBD::CSV and really bad legacy flat file

by harleypig (Monk)
on Jul 19, 2005 at 02:25 UTC ( [id://475950] )


in reply to Re: DBD::CSV and really bad legacy flat file
in thread DBD::CSV and really bad legacy flat file

I can easily convert the file ... my boss doesn't want to convert it. Until the new code is up and working, the old code has to keep working. I don't feel like setting up a script to copy and convert the db every time it's updated; I've got too much else on my plate.

Harley J Pig

Replies are listed 'Best First'.
Re^3: DBD::CSV and really bad legacy flat file
by Tanktalus (Canon) on Jul 19, 2005 at 02:36 UTC

    To be honest, I've done these conversions a few times, including converting a human-typed table (which was auto-converted into HTML via Lotus Domino) to an RDBMS. I did my development, against direct orders, while the table was still being updated. All I did was write the conversion tool and develop everything around that "sample" data; then, once the switch was made and the "original" was considered frozen, I redid the conversion (it took 10 or 15 minutes) and put my database live.

    So, the question is: will this legacy flat file continue to live, or is it eventually going to be replaced by something new?

    If it is going to live, and you're going to need to continue to read directly from it, you may be able to subclass DBD::File somehow to fake this - it may not be as fast as working on converted data, but it may still be faster than converting all the data, only to work with a subset of it. Or, at the least, it means you'll only have a single source for data, rather than working from an "unofficial" data source.

      If I could make an on-the-fly conversion quickly enough I would, but I don't know enough about DBD::File to be willing to take the time.
      Harley J Pig
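
      A rough sketch of the direction being discussed here, assuming the legacy file is "merely" badly delimited rather than completely free-form: DBD::CSV can often be pointed at the existing file in place, with per-table separator, quoting, and column settings, which avoids writing a full DBD::File subclass. The directory, file name, column names, and '|' separator below are invented placeholders, and the attribute names should be checked against the installed DBD::CSV.

      #!/usr/bin/perl
      use strict;
      use warnings;
      use DBI;

      # Connect to the directory holding the legacy flat file
      # (the path is a placeholder).
      my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
          f_dir      => '/data/legacy',
          RaiseError => 1,
      }) or die $DBI::errstr;

      # Map a SQL table name onto the existing file, with its own
      # delimiter, quoting rules, and explicit column names (the file
      # is assumed to have no header row).
      $dbh->{csv_tables}{orders} = {
          file        => 'orders.dat',
          sep_char    => '|',
          quote_char  => undef,
          escape_char => undef,
          eol         => "\n",
          col_names   => [qw( id customer sku qty price )],
      };

      # Query the flat file in place -- no copy, no conversion step.
      my $sth = $dbh->prepare('SELECT id, customer, qty FROM orders');
      $sth->execute;
      while (my ($id, $customer, $qty) = $sth->fetchrow_array) {
          print "$id: $customer ($qty)\n";
      }
      $dbh->disconnect;

      If the format breaks the "one record per line, one delimiter" assumption, this falls apart and a custom parser (or a real DBD::File subclass) is needed after all.
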
Re^3: DBD::CSV and really bad legacy flat file
by jhourcle (Prior) on Jul 19, 2005 at 03:45 UTC

    If you're not converting it, then what are you doing with it?

    I just went over parsing it (which you said you wanted) -- it's what you do inside the loop that determines what you do with it after that.

    If you need to write the format back out, that's easy too.
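
    A minimal sketch of that kind of loop with Text::CSV_XS, reading each record, changing something, and writing it back out in the same delimited format; the '|' separator, file names, and field index are invented for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    # sep_char and eol are placeholders for whatever the legacy file uses.
    my $csv = Text::CSV_XS->new({ sep_char => '|', binary => 1, eol => "\n" })
        or die "Cannot create Text::CSV_XS object";

    open my $in,  '<', 'legacy.dat'     or die "legacy.dat: $!";
    open my $out, '>', 'legacy.dat.new' or die "legacy.dat.new: $!";

    while (my $row = $csv->getline($in)) {
        # "What you do inside the loop": tweak a field (here the third
        # column, chosen arbitrarily), then write the record back out.
        $row->[2] = uc $row->[2] if defined $row->[2];
        $csv->print($out, $row);
    }

    close $in;
    close $out or die "legacy.dat.new: $!";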

    So... if you could please explain a little more about what it is that you're trying to do, I could probably give an answer that you might find more useful. (And not just the short view -- I know you're trying to use Text::CSV_XS or DBD::CSV, but why are you trying to use them? What's the main objective?) I know Text::CSV_XS is for manipulating CSV, but what sort of manipulations are you trying to do?

    When the boss tells me to do something, and tells me I have to do it a certain way, I always come back to one quote:

    "We're the technical experts. We were hired so that management could ignore our recommendations and tell us how to do our jobs." -- Mike Andrews in alt.sysadmin.recovery 10 October 2000 <eUJE5.880$ln6.119642@news.flash.net>

      I need to duplicate the functionality of an existing script that is entirely self-contained, uses no modules of any kind, is written with a perl4 mindset, and is a PITA to maintain, update, and modify.

      As far as my boss is concerned, how the data is handled doesn't matter as long as it stays in the same format. I don't have any problem with duplicating the functionality; it's just that I'd rather parse the data with existing modules than write my own parser, which almost always takes longer -- and that extra time is exactly what my boss doesn't want.

      Harley J Pig
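
      For what it's worth, the usual argument for the existing modules is correctness rather than speed: a hand-rolled split breaks as soon as the delimiter shows up inside a quoted field. A contrived example (the data and '|' delimiter are invented, and may not match the real file):

      #!/usr/bin/perl
      use strict;
      use warnings;
      use Text::CSV_XS;

      # A delimiter hiding inside a quoted field.
      my $line = qq{1001|"Smith, Harley"|"WIDGET|LARGE"|42};

      my $csv = Text::CSV_XS->new({ sep_char => '|', binary => 1 })
          or die "Cannot create Text::CSV_XS object";

      my @naive = split /\|/, $line;   # 5 "fields" -- the quoted '|' is split
      $csv->parse($line) or die "parse failed";
      my @fields = $csv->fields;       # 4 fields -- quoting is respected

      printf "split: %d fields, Text::CSV_XS: %d fields\n",
          scalar @naive, scalar @fields;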

        If speed is a significant factor, then Perl probably isn't your best language.

        In fact, if any of the modules give you a speed-up, it's probably because they use compiled parts. Otherwise they would actually have more overhead, because they have to be generic enough to handle arbitrary uses and to deal with problem input that might not be a factor in your situation.

        It may be possible to get a speed up by streamlining the logic of your program, or by trading off memory or disk for speed.
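
        One way to check whether the module overhead actually matters for this data is to measure it; a rough sketch using the standard Benchmark module, with an invented sample record and a made-up '|' delimiter:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Benchmark qw(cmpthese);
        use Text::CSV_XS;

        # Substitute a real line from the legacy file for this made-up one.
        my $line = join '|', 1001, 'Smith', 'WIDGET', 42, '19.95';

        my $csv = Text::CSV_XS->new({ sep_char => '|', binary => 1 })
            or die "Cannot create Text::CSV_XS object";

        # Run each parser for about 2 CPU seconds and compare the rates.
        cmpthese(-2, {
            naive_split => sub { my @f = split /\|/, $line },
            text_csv_xs => sub {
                $csv->parse($line) or die "parse failed";
                my @f = $csv->fields;
            },
        });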

        From what you're describing, in dealing with legacy code that's difficult to maintain, I'd suggest the Perl Medic book, even if the front page of its site is rather horrible.
