Loading a log file into SQLite took 20 minutes of disk thrashing. Super Search mostly turned up the fact that databases typically include a bulk-loading utility, but it appears that SQLite doesn't have one. RTFM didn't help much, and I was about to post the speed question in SoPW when I found the answer on SQLite's benchmark page. The trick is to do all the inserts in a single transaction. The speedup was over 200x for loading 50k rows in my test case. Here it is in case someone else needs to know:
# first attempt
...
my $start = time;
my $dbh = DBI->connect("DBI:SQLite:$dbfile") or die $DBI::errstr;
my $sth = $dbh->prepare( qq(INSERT INTO logentries
(col1, col2, col3, col4) VALUES (?,?,?,?) ));
$sth->execute($_->[0], $_->[1], $_->[2], $_->[3]) for @rows;
print "et: ", time - $start, " sec\n";
# et: 1082 seconds
# single transaction
...
my $start = time;
my $dbh = DBI->connect("DBI:SQLite:$dbfile") or die $DBI::errstr;
$dbh->do('BEGIN');
my $sth = $dbh->prepare( qq(INSERT INTO logentries
(col1, col2, col3, col4) VALUES (?,?,?,?) ));
$sth->execute($_->[0], $_->[1], $_->[2], $_->[3]) for @rows;
$dbh->do('COMMIT');
print "et: ", time - $start, " sec\n";
# et: 5 seconds
Update:
Setting AutoCommit => 0, as suggested by Christoforo, gives the same speed improvement. As I suspected, there are limits on transaction size, as noted by Jenda.
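For the record, here's a sketch combining both suggestions: AutoCommit => 0 so DBI handles the transaction implicitly, plus a periodic commit so no single transaction grows too large. This uses an in-memory database, a made-up logentries table, and an arbitrary batch size of 10_000 purely for illustration; substitute your real $dbfile and rows.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# In-memory database for illustration; use "DBI:SQLite:$dbfile" for real work.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1, AutoCommit => 0 }) or die $DBI::errstr;

$dbh->do(q(CREATE TABLE logentries (col1, col2, col3, col4)));

# Stand-in for the @rows loaded from the log file.
my @rows = map { [ "a$_", "b$_", "c$_", "d$_" ] } 1 .. 25_000;

my $sth = $dbh->prepare( qq(INSERT INTO logentries
(col1, col2, col3, col4) VALUES (?,?,?,?) ));

# With AutoCommit => 0, DBI starts a transaction for us; committing
# every $batch rows caps the size of any one transaction.
my $batch = 10_000;    # arbitrary chunk size
my $n     = 0;
for my $row (@rows) {
    $sth->execute(@$row);
    $dbh->commit if ++$n % $batch == 0;
}
$dbh->commit;          # flush the final partial batch

my ($count) = $dbh->selectrow_array('SELECT COUNT(*) FROM logentries');
print "inserted $count rows\n";
```

Each commit pays the disk-sync cost once per batch instead of once per row, which is where the 200x came from in the first place.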