I wrote one of these in C years ago, before I knew Perl. We got the job because the mailing house didn't fancy paying hundreds of thousands of dollars for a commercial US address deduplicator that didn't work particularly well on Australian addresses. The job took months, was a fixed-price contract, and I think we lost money on that one. I remember that squeezing out high performance when de-duplicating millions of addresses was a challenge.
The obvious general approach is to parse the addresses into a canonical internal form -- then use that to compare addresses. This sort of software is necessarily riddled with heuristics and ambiguities and can never be perfect -- for example, does "Mr John and Bill Camel" mean "Mr John" and "Mr Bill Camel" or "Mr John Camel" and "Mr Bill Camel"? For performance reasons you can't afford to compare every address with every other one, so you need to break them into "buckets" and compare all addresses in each bucket. How do you choose the buckets? Not sure, but I remember bucketing on post code worked out quite well for us.
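To make the bucketing idea concrete, here is a minimal sketch (in Python rather than Perl, purely for brevity) of normalising addresses to a crude canonical form, grouping them by post code, and only comparing addresses within each bucket. The sample records, the handful of abbreviation expansions, and the 0.9 similarity threshold are all illustrative assumptions, not the original implementation.

```python
from collections import defaultdict
from difflib import SequenceMatcher
import re

def normalise(address):
    """Crude canonicalisation: lowercase, expand a few common
    street-type abbreviations, collapse whitespace."""
    addr = address.lower()
    for abbrev, full in (("st", "street"), ("rd", "road"), ("ave", "avenue")):
        addr = re.sub(r"\b%s\b" % abbrev, full, addr)
    return " ".join(addr.split())

def dedupe(records):
    """records: list of (address, postcode) pairs.
    Returns pairs judged to be likely duplicates. Comparing only
    within each postcode bucket avoids an O(n^2) pass over the
    whole file."""
    buckets = defaultdict(list)
    for addr, postcode in records:
        buckets[postcode].append(normalise(addr))
    dupes = []
    for postcode, addrs in buckets.items():
        for i in range(len(addrs)):
            for j in range(i + 1, len(addrs)):
                if SequenceMatcher(None, addrs[i], addrs[j]).ratio() > 0.9:
                    dupes.append((addrs[i], addrs[j]))
    return dupes

records = [
    ("12 Wallaby St", "2000"),
    ("12 Wallaby Street", "2000"),
    ("12 Wallaby St", "3000"),  # same text, different postcode: never compared
]
print(dedupe(records))
```

A real system would of course need a much richer parser (street number, unit, locality, the "Mr John and Bill Camel" ambiguities above), but the shape is the same: canonicalise, bucket, then do the expensive fuzzy comparison only inside each bucket.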
This thread may be of interest: "Fuzzy matching of postal addresses" on comp.lang.python, 17 Jan 2005.
Update: Kim Ryan has years of commercial experience in this field, so I suggest you check out his CPAN modules.
Update: See also: Re^3: Split first and last names (References on Parsing Names and Addresses)
In reply to Re: De Duping Street Addresses Fuzzily