I can do this with two passes over the file, but since the files are very large I was looking for a way to do it in one pass.
Here is what I intend to do.
Imagine a text file with the following data:
Some garbage
More garbage
data -start <some string> \
-intermediate <some string> \
-intermediate <some string> \
.
.
-end <some string>
Some garbage
More garbage
data -start <some string> \
-intermediate <some string> \
-intermediate <some string> \
.
.
-end <some string>
Some garbage
More garbage
data -start <some string> \
-intermediate <some string> \
-intermediate <some string> \
.
.
-end <some string>
.
.
.
I want the output file to contain:
data -start <string> -end <string>
data -start <string> -end <string>
data -start <string> -end <string>
.
.
.
The catch? After removing the intermediates there will be lots of duplicates, which I also want to remove.
In my current flow I read the file, write the records out to an array, and then unique the array.
A two-pass process seems like a waste of time.
If I can get a one-pass algorithm, that would be great!
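One way to do this in a single pass is to join the backslash-continued lines into one logical record as you stream the file, strip the intermediates, and deduplicate on the fly with a set of records already written. Here is a minimal sketch in Python; the function names are mine, and it assumes the `-start`/`-end` values are single whitespace-delimited tokens and that only record lines end in a backslash:

```python
import re

def logical_lines(fin):
    """Join physical lines ending in a backslash into one logical line."""
    buf = ""
    for line in fin:
        line = line.rstrip("\n")
        if line.endswith("\\"):
            buf += line[:-1].strip() + " "   # continuation: keep accumulating
        else:
            yield buf + line.strip()         # last physical line of the record
            buf = ""

def extract_unique(infile, outfile):
    """Single pass: parse records, drop intermediates, skip duplicates."""
    seen = set()  # records already emitted; dedup happens in the same pass
    with open(infile) as fin, open(outfile, "w") as fout:
        for record in logical_lines(fin):
            if not record.startswith("data -start"):
                continue  # garbage line, ignore
            # Keep only the -start and -end values (assumed to be single tokens).
            m = re.match(r"data\s+-start\s+(\S+).*?-end\s+(\S+)", record)
            if m:
                out = f"data -start {m.group(1)} -end {m.group(2)}"
                if out not in seen:
                    seen.add(out)
                    fout.write(out + "\n")
```

Note the trade-off: this is one pass over the file, but the `seen` set still holds one copy of every unique record in memory. If the number of *unique* records is small relative to the file size (which your description suggests), that is a big win over buffering everything and uniquing afterwards.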