And for the first problem you want Net::FTP::Common.
Jenda
Always code as if the guy who ends up maintaining your code
will be a violent psychopath who knows where you live.
-- Rick Osborne
As to bullet #2: there's no way to check the correctness of the file without a remote checksum to verify against. If you have some control of the destination server, however, you can issue a SITE command to calculate that checksum and compare it to your local copy. (Which is a very good idea, and I'm glad you're considering doing it -- TCP doesn't guarantee end-to-end error-free transfer.)
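If you do control the server, a sketch of that check with Net::FTP and Digest::MD5 might look like the following. The host, login, file name, and the XMD5 verb are all assumptions: checksum commands are nonstandard extensions (XMD5, XCRC, SITE CHECKSUM, ...), so check what your server actually implements.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
use Digest::MD5;

# MD5 of the local copy, read in binary mode.
sub local_md5 {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    binmode $fh;
    return Digest::MD5->new->addfile($fh)->hexdigest;
}

# Upload and verify; runs only when a host is given on the command line.
# 'XMD5' is a nonstandard extension -- substitute whatever checksum
# command your server supports.
if (my $host = shift @ARGV) {
    my $ftp = Net::FTP->new($host) or die "connect: $@";
    $ftp->login('user', 'password') or die "login: " . $ftp->message;
    $ftp->binary;
    $ftp->put('report.dat') or die "put: " . $ftp->message;

    $ftp->quot('XMD5', 'report.dat');   # server replies e.g. "250 <hex digest>"
    my ($remote) = $ftp->message =~ /([0-9a-fA-F]{32})/;
    my $local    = local_md5('report.dat');

    die "checksum mismatch or no checksum support\n"
        unless defined $remote and lc($remote) eq lc($local);
    print "transfer verified: $local\n";
    $ftp->quit;
}
```

If the server offers no checksum command at all, you're back to arranging something on the remote side yourself, as discussed below in the thread.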
Ummm... no. Sequence numbers and packet checksumming in TCP *do* guarantee error-free transmission.
--Rhys
No, in fact they do not guarantee error-free transmission. Set aside the possibility of multibit errors that produce the same checksum as the good data (the Internet checksum is deliberately weak, for speed reasons, and corrupted data can still match it). The bigger problem is that the TCP checksum is, in practice, a per-hop checksum: routers may, and some do, recalculate and reset it when sending packets on to their next destination.
Checksums are generally verified as packets come into a router, against the over-the-wire data, and will catch some (but by no means all) errors. Packets then sit in router memory, and if the checksum is regenerated it's computed against that in-memory copy. If the in-memory copy is corrupt, for example because of a bad RAM cell, transient power issues, or just cosmic rays, the checksum will be generated over the now-corrupt data and there will be no way to detect, as part of the transmission, that the data has gone bad. ECC and parity memory, if the router has it, will catch some, but again not all, instances of this.
This isn't theoretical. I know of cases where this has happened, and the only thing that caught the fact that data was being corrupted in transit by a router with bad memory was that DECnet does do end-to-end checksumming of transferred files, and it was complaining about bad transmissions that the TCP streams didn't notice.
If the data is important enough to go to some effort to validate the destination copy, then there's also the non-zero possibility of some sort of man-in-the-middle data alteration.
You can certainly argue that failures or attacks such as this are really, really unlikely. On the other hand, do you want a financial institution trusting that it won't happen when moving transactions against your bank account?
There is no easy way to do remote file authentication.
You can, however, send the file followed by the MD5 checksum of the file in a signature file. (The standard Digest::MD5 module computes MD5 checksums in Perl.) The remote system then has to compute, somehow (via cron?), the MD5 checksum of the received file and write an error/success file, which your FTP client program can pick up.
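On the client side, that first approach might be sketched like this (the host, credentials, and file names are made up). The cron job on the remote end would recompute the digest of the received file, compare it against the uploaded .md5 file, and write the error/success file this client would later poll for.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::FTP;
use Digest::MD5;

# Compute the file's MD5 and write it to "<file>.md5" in the usual
# "md5sum" layout ("<digest>  <file>"), returning the digest.
sub write_md5_sig {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    binmode $fh;
    my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
    close $fh;

    open my $out, '>', "$path.md5" or die "open $path.md5: $!";
    print $out "$digest  $path\n";
    close $out;
    return $digest;
}

# Upload the file and its signature; runs only when a host is supplied.
if (my $host = shift @ARGV) {
    my $ftp = Net::FTP->new($host) or die "connect: $@";
    $ftp->login('user', 'password') or die "login: " . $ftp->message;
    $ftp->binary;

    my $digest = write_md5_sig('report.dat');
    $ftp->put('report.dat')     or die "put: " . $ftp->message;
    $ftp->put('report.dat.md5') or die "put: " . $ftp->message;
    print "uploaded report.dat ($digest); now poll for the remote status file\n";
    $ftp->quit;
}
```

Uploading the data file before its signature means the remote job can treat the appearance of the .md5 file as the signal that the transfer is complete.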
There is another method: create a custom version of the ProFTPD server that adds an implicit MD5 checksum (or CRC checksum, etc.) at both the client and server ends, so each side can verify the checksum of the file it sent or received (after the server receives the file, it automatically computes the MD5 hash and sends it to the client as part of the acknowledgement). That sounds like a very interesting approach. I will most likely follow the second one and have some fun coding. :-)