PerlMonks
Re: Handling huge BLOB fields with DBI and MySQL

by Coruscate (Sexton)
on Jan 23, 2003 at 10:29 UTC ( [id://229273] )


in reply to Handling huge BLOB fields with DBI and MySQL

I know I'm a little late in commenting on this post, but better late than never :)

I must be tired, or I momentarily blanked out, because I was looking at the upload() function and thinking "But that doesn't make sense: he's trying to do an INSERT with the same ID for every $maxlen-sized chunk, which should definitely fail because ID is a primary key." Then I remembered that it was an auto-incrementing column, so I looked at the INSERT statement and (yay!) noticed that the id was not explicitly being passed. So I realized that one file occupies multiple rows within the table, identifiable by the name of the package (sname). I then went on to find that the data is retrieved by relying on the fact that the IDs sort in insertion order. Bingo: I had found the main key, and everything made perfect sense.
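For anyone else who had to squint at it the way I did, here is a minimal sketch of that storage scheme. It is not the original Perl/DBI code: Python's sqlite3 stands in for MySQL, the table and column names (softfiles, sname, chunk) are made up for illustration, and AUTOINCREMENT plays the role of the auto-incrementing primary key that keeps the chunks in order.

```python
import io
import sqlite3

# Hypothetical schema standing in for the MySQL table discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE softfiles (
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        sname TEXT NOT NULL,   -- package name: the real lookup key
        chunk BLOB NOT NULL    -- one $maxlen-sized slice of the file
    )
""")

MAXLEN = 4  # tiny chunk size so the example is easy to follow

def upload(conn, sname, data):
    """Split data into MAXLEN-byte rows; no id is passed, so the
    auto-increment column records the insertion order."""
    fh = io.BytesIO(data)
    while chunk := fh.read(MAXLEN):
        conn.execute("INSERT INTO softfiles (sname, chunk) VALUES (?, ?)",
                     (sname, chunk))

def download(conn, sname):
    """Reassemble the file by fetching the chunks in id order."""
    rows = conn.execute(
        "SELECT chunk FROM softfiles WHERE sname = ? ORDER BY id",
        (sname,))
    return b"".join(row[0] for row in rows)

upload(conn, "myapp", b"hello, blob world")
assert download(conn, "myapp") == b"hello, blob world"
```

Note that download() selects only by sname: that is exactly why a second upload under the same name would get concatenated into the result, as discussed below.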

Then I thought of one more thing, and looked for something in the code to satisfy my curiosity, to no avail. Is there anything in that code that stops a person from uploading the same softname and filename more than once? In that case, there would really be two files in the DB with the same key identifier (the name column). Then, if a person were to attempt to download by that name, they'd get the two files concatenated into one, with no warning. So should I go to bed because I completely missed a line somewhere that dies if the package already exists in the DB, or am I right? And since this code is almost a year old now, I suppose it has been updated since then? :)

Replies are listed 'Best First'.
Re: Re: Handling huge BLOB fields with DBI and MySQL
by gmax (Abbot) on Jan 23, 2003 at 11:02 UTC

    Actually, there is nothing in the database design preventing a double entry in the repository. Only the application enforces data integrity.

    You'll notice that the script deletes the currently stored version of the application before inserting, precisely to avoid such duplicates.

    The real repository has a further control: a UNIQUE key on "name" + "vers" (which is defined NOT NULL) + the file CRC calculated before the update, so that duplicates are harder to slip through. This rule relies on the assumption that all application versions are incremented automatically at each build.
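    That extra control can be sketched in a few lines. Again this is an illustration, not the repository's actual code: sqlite3 stands in for MySQL, the table name softinfo and the register() helper are hypothetical, and CRC-32 via zlib.crc32 is assumed for the file checksum.

```python
import sqlite3
import zlib

conn = sqlite3.connect(":memory:")
# Hypothetical header table mirroring the control described above:
# the UNIQUE key on (name, vers, crc) rejects a re-upload of the same build.
conn.execute("""
    CREATE TABLE softinfo (
        name TEXT NOT NULL,
        vers TEXT NOT NULL,
        crc  INTEGER NOT NULL,  -- CRC-32 of the file, computed before the update
        UNIQUE (name, vers, crc)
    )
""")

def register(conn, name, vers, data):
    """Return True if this (name, vers, crc) triple is new, False otherwise."""
    crc = zlib.crc32(data)
    try:
        conn.execute("INSERT INTO softinfo (name, vers, crc) VALUES (?, ?, ?)",
                     (name, vers, crc))
        return True
    except sqlite3.IntegrityError:
        return False  # same name + version + checksum: duplicate rejected

assert register(conn, "myapp", "1.0", b"payload")
assert not register(conn, "myapp", "1.0", b"payload")  # duplicate refused
assert register(conn, "myapp", "1.1", b"payload")      # new version passes
```

    Since the key includes vers, the guard depends on builds bumping the version number, which is exactly the assumption stated above.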

    Add to this that the only people allowed to update this table are seasoned developers, who all use the same application to do so. They are disciplined, and each one is in charge of a different application, so I haven't had a single case of double entries under this scheme.

    Thanks for your comments.

     _  _ _  _  
    (_|| | |(_|><
     _|   
    
