http://qs321.pair.com?node_id=384503


in reply to Re: I sense there is a simpler way...
in thread I sense there is a simpler way...

Random Walk and calin, thank you both for your replies. Creating a deep data structure (hash of arrays) was the step that eluded me.
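
For anyone who lands on this node later, here is a minimal sketch of the hash-of-arrays idea as I understand it; the field names and sample data are made up for illustration:

    use strict;
    use warnings;

    # Hash of arrays: each key maps to a reference to an array holding
    # every record that shares that key.
    my %records_by_name;

    # Made-up sample records: (name, id) pairs.
    my @rows = ( [ 'alice', 101 ], [ 'bob', 102 ], [ 'alice', 103 ] );

    for my $row (@rows) {
        my ($name, $id) = @$row;
        push @{ $records_by_name{$name} }, $id;   # autovivifies the inner array
    }

    # %records_by_name now holds ( alice => [ 101, 103 ], bob => [ 102 ] )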

Is slurping an entire file into an array considered bad form? My datafile is roughly half a megabyte, so I figured memory was not an issue. I can see, however, that reading the file line by line makes for more scalable code.
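
For comparison, the line-by-line idiom is short enough; the filename here is just a placeholder:

    use strict;
    use warnings;

    # Reading line by line keeps memory use constant no matter how
    # large the datafile grows.
    open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        # ... process one record at a time ...
    }
    close $fh;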

What was bugging me about my own code was that it was in fact an N+1-pass approach, where N was the number of duplicated keys: I read the file once, then cycled over the resulting array several times. (A single-pass version is sketched below.)

calin, you are right that there can be more than a single duplicate for any textual field, so the code needs to account for this.
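
Putting the pieces together, a single-pass version that copes with any number of duplicates per key might look roughly like this; the tab-separated layout and the filename are assumptions on my part:

    use strict;
    use warnings;

    my %lines_for;    # textual key => ref to array of line numbers where it occurs

    open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($key) = split /\t/, $line;    # assumes the key is the first tab-separated field
        push @{ $lines_for{$key} }, $.;   # $. is the current input line number
    }
    close $fh;

    # One pass over the hash reports every key that occurred more than
    # once, however many duplicates there were.
    for my $key (sort keys %lines_for) {
        my @hits = @{ $lines_for{$key} };
        print "$key appears on lines @hits\n" if @hits > 1;
    }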

Again, thanks for the time you both spent looking at my code; it is much appreciated!