http://qs321.pair.com?node_id=752115


in reply to Re^2: insert label into file
in thread insert label into file

The repeated values in your hash mean that you cannot use them as primary keys for identifying a data set. Clever data structures in Perl will not help you if you are trying to use them as the keys in your database. There are a couple of possible solutions here, depending on your needs. The most natural solution is to key your hash with whatever you index your database with. Can you provide your db table structure and some sample data from your original hash, and explain how they are related?
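For example (a minimal sketch with made-up numbers taken from the log sample below, assuming the timestamp is the unique value per record), keying a hash of hashes by that timestamp means the repeated metric names can never collide:

```perl
use strict;
use warnings;

# Key the outer hash by the record's timestamp (unique per record);
# the inner hashes map column name => metric value.
my %by_date = (
    '05/12/2006 12:21:10 AM' => { Login => 19.748396, SearchLoad => 13.459355 },
    '05/12/2006 12:54:07 AM' => { Login => 18.015907, SearchLoad => 13.259065 },
);

# Each record can now be pulled out by its timestamp for insertion.
for my $date (sort keys %by_date) {
    print "$date Login=$by_date{$date}{Login}\n";
}
```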

Replies are listed 'Best First'.
Re^4: insert label into file
by grashoper (Monk) on Mar 20, 2009 at 17:45 UTC
    Well, let me clarify a bit here. My hash is backwards; a data file example is below. The database has columns for Date, Login, SearchLoad, etc. The Date column is a datetime; Login, SearchLoad, and the rest are reals. When I read the file into a hash I get timevalue => columnname, but I want the reverse, which would be columnname => timevalue. To fix my data file, I want to add a "Date" label before the first value of each record (a timestamp) throughout the file. Then I want to read the file into a hash so I can insert it into the database, which is currently empty and only contains the columns I want to use for the table. The SQL for the table creation is at the bottom of this comment.
    05/12/2006 12:21:10 AM,Login,19.748396,SearchLoad,13.459355,SearchCount,64:0.660950,SearchResults,12.988677,SearchSave,6.138828,SearchDelete,1.682419,SearchDetails,10.114545,TaxLoad,3.995745,TaxResults,567:5.287602,TaxDetails,3.975717,ClientAdd,2.643801,CMALoad,2.603744,CMASave,14.350635,CMADelete,1.021467,ClientDelete,0.731053,Logout,2.733931,
    05/12/2006 12:54:07 AM,Login,18.015907,SearchLoad,13.259065,SearchCount,64:0.620893,SearchResults,12.187526,SearchSave,5.327662,SearchDelete,1.111600,SearchDetails,9.914256,TaxLoad,3.755399,TaxResults,567:5.047257,TaxDetails,4.055832,ClientAdd,2.583715,CMALoad,2.834077,CMASave,14.350636,CMADelete,0.690993,ClientDelete,0.610879,Logout,2.743947,
    05/12/2006 01:21:15 AM,
    database stuff..
    USE [performancetesting]
    GO
    /****** Object: Table [dbo].[AllResults] Script Date: 03/20/2009 13:35:27 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[AllResults](
        [Date] [datetime] NOT NULL,
        [Login] [real] NULL,
        [SearchLoad] [real] NULL,
        [SearchCount] [real] NULL,
        [SearchResults] [real] NULL,
        [SearchSave] [real] NULL,
        [SearchDelete] [real] NULL,
        [SearchDetails] [real] NULL,
        [ClientAdd] [real] NULL,
        [CMALoad] [real] NULL,
        [CMASave] [real] NULL,
        [CMADelete] [real] NULL,
        [ClientDelete] [real] NULL,
        [Logout] [real] NULL,
        [SiteCode] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
    ) ON [PRIMARY]

      Your language is a bit unclear, so I apologize if the following is off point. I think the key collision results from you wanting to store multiple records at once. Given that you have a fixed number of comma-delimited fields per record, you should be able to just process one record at a time. If you want to deal with them all at once, an array of hashes (perllol) really is the only logical data structure. The thing that strikes me as most odd about the provided log file is that it uses the same delimiter for fields and records. Assuming you have the entire logfile read into $string, the following should be a decent prototype for what you want to do.

      while ($string) {
          # Each record is 33 fields: the timestamp plus 16 name/value pairs.
          # A limit of 34 makes split leave the rest of the string intact
          # in the last element.
          my @array = split /\,/, $string, 34;
          my %hash  = (Date => @array[0..32]);
          store_in_db(%hash);
          if (defined $array[33]) {
              $string = $array[33];    # remainder holds the next records
          }
          else {
              undef $string;
          }
      }
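To see why the limit argument matters: split with a limit caps the number of elements returned, so the last element keeps the unsplit remainder of the string, letting you peel off one record per loop pass. A minimal sketch (using 3 fields per record and a limit of 4 instead of 33 and 34):

```perl
use strict;
use warnings;

# Two "records" of 3 comma-separated fields each, sharing one delimiter.
my $string = "a,1,b,2,c,3,d,4";

# Limit of 4: at most 4 elements, so the 4th holds everything unsplit.
my @fields = split /,/, $string, 4;

print "record: @fields[0..2]\n";   # prints "record: a 1 b"
print "rest:   $fields[3]\n";      # prints "rest:   2,c,3,d,4"
```

The remainder (`$fields[3]`) then becomes the new string for the next iteration, which is exactly what the `$string = $array[33]` line in the prototype above does.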

        Actually, one record at a time is fine. I don't really follow your example, though. OK, $string contains the data, and @array is going to hold the data separated into pairs, but why "$string, 34"? I don't understand that part. And what is the next line doing? I just don't get it. I will look at fixing my data file over the weekend and re-read perllol; hopefully if I read it enough it will start making some kind of sense. Thanks for looking; perhaps after I have read some more this will make more sense to me.