Re: fetch row or fetchall
by borisz (Canon) on Nov 09, 2004 at 20:42 UTC
|
Use the database to return only 100 lines. See LIMIT.
select * from t_something limit 100;
| [reply] [d/l] |
|
Note that LIMIT is not standard SQL. It's a MySQL extension. Don't use it if portability is required or if you're not using MySQL.
| [reply] |
|
It works at least with MySQL, PostgreSQL, and SQLite. But thanks for that surprising tip.
| [reply] |
|
SELECT TOP 100 * FROM table
As ikegami said, it isn't portable. But if you use stored procedures for your queries, or something like SQL::Abstract, you should still be able to offload the limiting to the RDBMS.
But, to answer the OP's question...
From a memory-conservation POV, you are best off looping to grab just the first 100 rows. From a performance standpoint, you'll really have to benchmark, because it depends a lot on your RDBMS configuration. Even though the query has returned, all that means is that the DB has located the data -- it hasn't sent it yet. So asking for all 10_000 rows might be a significant performance hit, especially if the results will be sent over a slow network link.
The only issue with the loop approach is that you must make sure your query returned at least 100 records, or else handle the problem of running out of data. Something like:
while (@results < 100 and my $row = $sth->fetchrow_hashref) {
    push @results, $row;
}
$sth->finish;
warn "Wasn't able to get 100 records\n" if @results < 100;
(Checking @results first avoids fetching and throwing away a 101st row.)
| [reply] [d/l] [select] |
Re: fetch row or fetchall
by tachyon (Chancellor) on Nov 09, 2004 at 22:08 UTC
|
See gmax's analysis of many performance-related questions in Speeding up the DBI. fetchall_arrayref is often the fastest option. As noted, with MySQL a LIMIT clause is often a good one to add.
| [reply] |
|
mysql> describe select count(*) from global_urls_http;
+------------------------------+
| Comment |
+------------------------------+
| Select tables optimized away |
+------------------------------+
1 row in set (0.00 sec)
mysql> select count(*) from global_urls_http;
+----------+
| count(*) |
+----------+
| 9908618 |
+----------+
1 row in set (0.00 sec)
I will guarantee you that pulling 10-million-odd rows just to get the count above will take longer than 0.00 sec :-)
Pulling back 10,000 rows just to get the count and save an extra query has some potentially very undesirable side effects.
Assuming 512-byte records, the base data is 5 MB -- even with a disk transfer speed of 50 MB/sec, that is a minimum of 1/10th of a second (probably more like 1/2 a second in the real world) just to pull that data off the disk. Given most DBs' ability to execute hundreds of queries per second, two queries are likely to be significantly faster, because the expense of pulling 100x as much data as you really want is quite real.
Anyway, by the time you get that data into a Perl array it is probably 10 MB or more. Now this may not seem like a problem until you get your head around the fact that Perl essentially never releases memory back to the OS. It does free memory, but typically keeps that memory for its own reuse. So why does that matter? Well, if you have 10-20 long-running parallel processes (mod_perl, for example), the net result is an apparent memory leak over time. As each child makes a 'mega' query, it grabs enough memory for the results. The net result is that each child grows to the size of the largest query it has ever made.
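The two-query approach described above might look like this in DBI. This is a sketch: the in-memory SQLite database, table name, and row counts are all made up here as stand-ins for the real server, but the pattern (count on the server, then LIMIT the fetch) is the same:

```perl
use strict;
use warnings;
use DBI;

# In-memory SQLite stands in for the real server in this sketch.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t_something (id INTEGER, payload TEXT)');
my $ins = $dbh->prepare('INSERT INTO t_something (id, payload) VALUES (?, ?)');
$ins->execute($_, "row $_") for 1 .. 1000;

# Query 1: let the server count -- only a single number crosses the wire.
my ($total) = $dbh->selectrow_array('SELECT COUNT(*) FROM t_something');

# Query 2: fetch only the 100 rows we actually want.
my $rows = $dbh->selectall_arrayref(
    'SELECT id, payload FROM t_something ORDER BY id LIMIT 100');

printf "total=%d fetched=%d\n", $total, scalar @$rows;
# prints "total=1000 fetched=100"
```

Total data transferred: 101 values' worth instead of 1000 rows.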
| [reply] [d/l] |
|
No. Doing a second count query is not "inherently more inefficient" than fetching all the results from one query. Suppose you have 1 million rows but only want 100. Which do you think will be more efficient: a) fetching 100 rows and then fetching the number 1 million (not a million rows, just the number 1 million) in a second query, for a total fetch of 101 rows, or b) fetching 1 million rows, for a total fetch of 1,000,000 rows?
| [reply] |
Re: fetch row or fetchall
by steves (Curate) on Nov 09, 2004 at 20:46 UTC
|
You can still count the rows and only use the first 100: your loop could keep the first 100 but continue counting until fetching is complete. It's about the same as fetching all of them using a DBI fetchall call, but the fetchall would probably (though not necessarily, depending on the underlying database driver) use more memory in your application.
| [reply] |
|
"your loop could keep the first 100 but continue counting until fetching is complete"
I would rather do a seperate query for count(), for performance. Although you are skipping, not using the fetch result, they still get transferred across, and waste band width.
| [reply] |
Re: fetch row or fetchall
by tinita (Parson) on Nov 10, 2004 at 13:18 UTC
|
I've got the same problem, and I can only say that it all depends on the data and how complicated the WHERE clause is.
My select statement could return 200_000 rows at most (maybe more), so in my case I don't have an option; I have to use LIMIT
(or do a count first), otherwise these rows will make memory grow too high (even before $sth->fetch*, the rows take up memory).
In your case, if the where clause is complicated, the query will be slow,
so doing
select count(*) from table where ...;
select bla from table where ... LIMIT 100;
will be slower than
select bla from table where ...;
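The two strategies above can be compared directly with Benchmark.pm. This is only a sketch: the in-memory SQLite table, row count, and WHERE clause are invented, so the relative numbers only tell you about your own setup once you substitute your real query:

```perl
use strict;
use warnings;
use DBI;
use Benchmark qw(cmpthese);

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t (n INTEGER, s TEXT)');
my $ins = $dbh->prepare('INSERT INTO t (n, s) VALUES (?, ?)');
$ins->execute($_, 'x' x 64) for 1 .. 10_000;

cmpthese(50, {
    # count(*) plus a LIMITed fetch: two round trips, 101 values back
    count_plus_limit => sub {
        my ($n)  = $dbh->selectrow_array('SELECT COUNT(*) FROM t WHERE n > 0');
        my $rows = $dbh->selectall_arrayref(
            'SELECT n, s FROM t WHERE n > 0 LIMIT 100');
    },
    # one big fetch: one round trip, but every matching row comes back
    fetch_everything => sub {
        my $rows = $dbh->selectall_arrayref('SELECT n, s FROM t WHERE n > 0');
    },
});
```

With a complicated WHERE clause the count query pays the full query cost a second time, which is exactly the trade-off described above.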
It's probably best to benchmark it on your table, as the results depend heavily on your table size, data, and other things.
| [reply] [d/l] [select] |
|
Note that the DBI
RowCacheSize database handle attribute can often help
you balance your memory/speed tradeoff for large queries. But
it depends on the underlying database -- not all can make use
of it. In practice, I have found some significant differences
in query time/memory when changing this for queries that
return large numbers of rows.
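Setting the attribute is a one-liner; whether it does anything is up to the driver, since DBI documents RowCacheSize as a hint that drivers may ignore (an in-process driver like DBD::SQLite, used below just to make the sketch standalone, has nothing to batch). A sketch:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t (n INTEGER)');
my $ins = $dbh->prepare('INSERT INTO t (n) VALUES (?)');
$ins->execute($_) for 1 .. 1000;

# A hint, not a command: ask the driver to pull rows from the server
# in batches of ~500 rather than one per round trip (or all at once).
$dbh->{RowCacheSize} = 500;

my $sth = $dbh->prepare('SELECT n FROM t');
$sth->execute;
my $seen = 0;
$seen++ while $sth->fetchrow_arrayref;
print "fetched $seen rows\n";
```

Your code still fetches one row at a time; the batching, if any, happens inside the driver.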
For significant database applications I find it nearly
impossible to write SQL that works across all. I always
end up having to make use of database-specific functions or
other constructs. If your application is only going to run
on one database it's probably fine to make use of
LIMIT, TOP, or Oracle's ROWNUM. It
depends on whether your priority is speed and function or
portability across databases.
| [reply] |
Re: fetch row or fetchall
by rupesh (Hermit) on Nov 10, 2004 at 04:12 UTC
|
SELECT TOP 100 * FROM table
Rupesh.
| [reply] [d/l] |
Re: fetch row or fetchall
by herveus (Prior) on Nov 10, 2004 at 15:27 UTC
|
Howdy!
It's not clear from your question whether or not you need the total number
of rows the query might have returned.
If you need the first 100 rows from a query that could return several orders
of magnitude more rows, do the fetchrow 100 times and then ->finish the
statement to tell the server it can pitch the remainder.
If you also need the total number of rows the query would have returned, do
that as a separate step using count().
If your original query would have returned only 101 rows, you don't save much,
but you don't get hammered the way you would if you went ahead and fetched
all 10,000 rows when you only needed 100.
If you are doing two different things, you probably want two separate queries.
If performance becomes an issue, you can optimize each separately, with the
different aims not getting in the way of each other.
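The fetch-then-finish pattern described above, as a sketch (again using an in-memory SQLite table so the snippet runs standalone; in real code you'd already have a $dbh):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE t (n INTEGER)');
my $ins = $dbh->prepare('INSERT INTO t (n) VALUES (?)');
$ins->execute($_) for 1 .. 10_000;

my $sth = $dbh->prepare('SELECT n FROM t');
$sth->execute;

my @rows;
while (@rows < 100 and my $row = $sth->fetchrow_arrayref) {
    push @rows, [@$row];   # copy: fetchrow_arrayref reuses its ref
}
$sth->finish;   # tell the driver the server can pitch the remainder

# The total, if needed, is a separate query optimized on its own:
my ($total) = $dbh->selectrow_array('SELECT COUNT(*) FROM t');
printf "first %d of %d rows\n", scalar @rows, $total;
# prints "first 100 of 10000 rows"
```

Each of the two queries can now be tuned (indexes, LIMIT, etc.) without the other getting in the way.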
| [reply] |