Slurping the files into hashes will consume (assuming two million entries) about 22 megabytes just for the data. That doesn't seem unreasonable by today's standards, but it won't scale; someday, as the dataset grows, someone will be sorry they used that approach. Those 22 megs of phone numbers will take around 44 megs of total space once Perl's internal overhead is included (very approximately, assuming 11 bytes per number, plus hash and SV overhead).
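For concreteness, here's the back-of-envelope arithmetic behind that figure (the two-million-entry count and 11-bytes-per-number size are assumptions stated above, not measured values):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Rough estimate: raw data size for the phone-number list,
# before any of Perl's per-hash / per-SV overhead is added.
my $entries = 2_000_000;    # assumed number of entries
my $bytes   = 11;           # assumed bytes per phone number
my $data_mb = $entries * $bytes / 1_048_576;
printf "raw data: ~%.0f MB\n", $data_mb;    # ~21 MB, i.e. "about 22 megs"
```

The doubling to ~44 megs is just a rule of thumb for Perl's hash and scalar bookkeeping; a module like Devel::Size can measure the real total for a given build of Perl.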
Plus, if the files are slurped, every startup of the program has to redo all that work. If it's slurping into an array, the array has to be sorted before it can be searched efficiently (the sort is O(n log n), and searches are then O(log n) with a binary search). If it's slurping into a hash, building the hash is O(n) work every time the program starts up... though searches are O(1) after that. Either way, every time the script starts, it has to re-slurp and re-organize.
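The sorted-array variant above might look like this; a minimal sketch, with `bsearch` and the sample numbers being hypothetical names, assuming the phone numbers are plain strings compared with `cmp`:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Classic binary search over a sorted array: O(log n) per lookup,
# after the one-time O(n log n) sort at startup.
sub bsearch {
    my ($aref, $target) = @_;
    my ($lo, $hi) = (0, $#$aref);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        my $cmp = $aref->[$mid] cmp $target;
        if    ($cmp < 0) { $lo = $mid + 1 }
        elsif ($cmp > 0) { $hi = $mid - 1 }
        else             { return $mid }    # found: index into the array
    }
    return -1;                              # not found
}

my @numbers = sort ('555-0199', '555-0100', '555-0142');
print bsearch(\@numbers, '555-0142'), "\n";    # prints 1
```

This is the trade-off the paragraph describes: the hash wins on lookup time, the sorted array wins on memory, and both pay a startup cost unless the organized data is persisted between runs (e.g. with a DBM file).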