Passive FTP should be the default, but for some reason it is not with Net::FTP. ncftp used to default to active FTP too, but as of 3.0 it came to its senses and defaulted to passive, which is why it is working for you.
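You don't have to live with Net::FTP's default, though: its constructor takes a Passive option. A minimal sketch (ftp.example.com is a placeholder host):

```perl
use strict;
use warnings;
use Net::FTP;

# Passive => 1 makes Net::FTP issue PASV and open the data
# connection itself, so no inbound connection to you is needed.
my $ftp = Net::FTP->new('ftp.example.com', Passive => 1)
    or die "Cannot connect: $@";
$ftp->login('anonymous', 'me@example.com') or die $ftp->message;
$ftp->get('README')                        or die $ftp->message;
$ftp->quit;
```

If I recall correctly, libnet also honors the FTP_PASSIVE environment variable, so you can flip existing scripts to passive mode without touching their code.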
The reason why wget works is that your OS ships with a patch (a la Red Hat) or a default configuration file (a la FreeBSD) which overrides wget's active FTP default. (Note that if passive mode is set in a system-wide configuration file such as /etc/wgetrc or /usr/local/etc/wgetrc, there is (currently, as far as I know) no way to turn it off. As this post demonstrates, there is no --active-ftp flag.)
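For reference, the relevant knob looks something like this (a config sketch; check your wget version's manual for the exact spelling it supports):

```
# In /etc/wgetrc, /usr/local/etc/wgetrc, or ~/.wgetrc:
passive_ftp = on

# Or per invocation, on the command line:
wget --passive-ftp ftp://ftp.example.com/pub/file.tar.gz
```

The system-wide files are read before your ~/.wgetrc, which is how a vendor default ends up applying to everyone on the box.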
It seems like a simple problem with a quick fix you can apply and move on. But it's really a symptom of a larger problem: the growing number of second-class, unequal peers on the Internet--those that cannot accept incoming connections. NAT (or, more accurately, PAT) is the worst offender.
Of course, NAT is a necessity in today's Internet. There simply are not enough IP addresses to go around, and ISPs will charge you more for more addresses. Many people have multiple computers. But by putting multiple computers behind NAT, you sacrifice inbound connections, and with them the ability to connect to peers that also cannot accept inbound connections.
Seems like a trivial problem. After all, a home user probably doesn't ever need anyone connecting to his or her computer. Except it breaks the peer-to-peer nature of the Internet. We're already seeing NAT's drastic effects on P2P software. Firewalled users simply cannot connect to other firewalled users. The consumers are shut off from each other. One small step towards a Secure Internet.
Of course, any reasonably knowledgeable computer user would be able to work around the limitations of NAT. Especially us Perl programmers.
I don't know what firewall you are using, but Linux has the ip_conntrack_ftp and ip_nat_ftp modules for permitting active FTP. OpenBSD/FreeBSD's pf can use an FTP proxy. Some SMC routers actually replace PASV in TCP streams on port 21 with P@SV, forcing active FTP. Stand-alone residential firewalls may have similar capabilities; check your router documentation.
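On Linux that usually amounts to something like the following (run as root; module names vary somewhat by kernel version, and the pf.conf line is only a sketch):

```
# Linux (2.4 iptables era): load the FTP connection-tracking
# helpers so the kernel can follow the PORT command and let the
# server's inbound data connection through.
modprobe ip_conntrack_ftp
modprobe ip_nat_ftp

# OpenBSD/FreeBSD pf: redirect FTP control connections through
# ftp-proxy, which opens the data ports on demand (pf.conf sketch):
# rdr on $int_if proto tcp from any to any port 21 -> 127.0.0.1 port 8021
```

The conntrack helpers work by watching the control channel on port 21 and dynamically allowing the related data connection, so you don't have to open a range of ports by hand.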
Once you've poked a hole through your firewall, active FTP should work, and so will all the programs that default to active mode.
Sure, you could switch to passive mode, and it may work well. In passive mode, the server does the bind/listen/accept dance (see perldoc -f bind and man 2 bind). Some feel that this is suboptimal because bind "is actually easier than selecting the first unused file descriptor, but since it is not so important for web servers, some operating systems decided not to optimize it." The FTP server gets stuck doing all the binds in passive FTP, when those syscalls could instead be distributed across the clients (as in active mode). But of course the server still has to choose a local port in active mode too, so I don't know how much truth there is to the claim that active FTP is superior, performance-wise.
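For the curious, the dance the server performs for each passive data connection looks roughly like this in Perl (a sketch using the core Socket module; in real FTP, the port picked here is what the server reports back in its PASV reply):

```perl
use strict;
use warnings;
use Socket;

# The passive-mode dance: bind an ephemeral port, listen, tell the
# client which port (via the PASV reply), then accept the client's
# inbound data connection.
socket(my $listener, PF_INET, SOCK_STREAM, getprotobyname('tcp'))
    or die "socket: $!";
bind($listener, sockaddr_in(0, INADDR_ANY))   # port 0 = kernel picks one
    or die "bind: $!";
listen($listener, SOMAXCONN)
    or die "listen: $!";

my ($port) = sockaddr_in(getsockname($listener));
print "PASV reply would advertise port $port\n";

# accept() would block here until the client connects:
# my $paddr = accept(my $data, $listener);
```

Note that it's three syscalls per data connection on the server side, which is where the performance complaint above comes from.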
However, when mirroring a large FTP server using wget, I've noticed that passive mode sometimes gives me errors well into the mirroring process, after thousands of files have been downloaded. I can't recall the exact error, but I think it was about running out of ports. The transfers then fail. I've never had this problem with active FTP.
For what it's worth, in my experience active has proven far more reliable. So you might consider setting up your firewall to support active FTP if you encounter any trouble with passive. As a side-effect, Net::FTP will then begin to work out of the box. Though for most of us, switching to passive FTP as described by other replies here is the best option. But I hope this reply was useful for those who want to know a little more about this situation.