Well, what you had would read the next 16K of data from the file and append it to $buffer each time through the loop. Was your intent to process the file 16K at a time? If so, you would still have to remove the fourth (OFFSET) argument to sysread, since it was causing each new chunk to be written at the end of $buffer instead of replacing its contents.
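A minimal sketch of what that fixed loop might look like (the filename and the process_chunk sub are hypothetical, just stand-ins for your own code):

```perl
use strict;
use warnings;

my $file = 'data.txt';    # hypothetical input file
open my $fh, '<', $file or die "Can't open $file: $!";

my $buffer;
while ( sysread( $fh, $buffer, 16 * 1024 ) ) {
    # No fourth (OFFSET) argument: each call overwrites $buffer
    # rather than appending after the previous chunk.
    process_chunk($buffer);
}
close $fh;

sub process_chunk {
    my ($chunk) = @_;
    # ... work on up to 16K of data here ...
}
```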
Perl will allow you to read line-at-a-time, or to slurp the whole file into an array with each element containing one line. If you use sysread, breaking the data into lines is up to you: you would first have to split the data into lines on "\n", then split each record into fields on '|'. It is highly likely that your 16K read won't end on a record boundary, so you would also have to add code to merge the leftover data from one read with the beginning of the next. I'm not sure whatever you came up with would be more efficient than Perl's own line-mode buffering.
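One way that boundary-merging could be sketched (again, the filename is hypothetical; the key idea is carrying the trailing partial line over to the next read):

```perl
use strict;
use warnings;

my $file = 'records.txt';    # hypothetical pipe-delimited file
open my $fh, '<', $file or die "Can't open $file: $!";

my $leftover = '';
my $buffer;
while ( sysread( $fh, $buffer, 16 * 1024 ) ) {
    # Prepend whatever partial line the previous read left behind.
    my $data  = $leftover . $buffer;
    my @lines = split /\n/, $data, -1;
    # The last element is an incomplete line (or '' if the chunk
    # ended exactly on a newline); save it for the next iteration.
    $leftover = pop @lines;
    for my $line (@lines) {
        my @fields = split /\|/, $line;
        # ... process @fields ...
    }
}
# A final record with no trailing newline still needs handling.
if ( length $leftover ) {
    my @fields = split /\|/, $leftover;
    # ... process @fields ...
}
close $fh;
```

Compare that with the three lines of line-mode reading it replaces, and the buffering Perl does for you starts to look pretty attractive.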
|90% of every Perl application is already written. ⇒|