fetch row or fetchall

by hakkr (Chaplain)
on Nov 09, 2004 at 20:21 UTC

hakkr has asked for the wisdom of the Perl Monks concerning the following question:

Hi all,

Given a query that returns 10,000 rows

if I just want the first 100, am I better off calling fetchrow_hashref 100 times,
or
should I just call fetchall_arrayref, get all 10,000, and take the first 100 rows from the array?

My thinking is that the query has already returned all the data, so I am better off just fetching it all rather than repeatedly fetching some of the rows in a loop.

I think fetchall is better, as I can use the array size to get the total row count instead of doing a second query with count(*), assuming I can't rely on $sth->rows() for selects. So it's either one query with fetchall, using the array size for the total rows, or two queries: multiple limited fetchrow calls plus a count(*) for the total rows.

I guess I am asking: is it, in theory, worth fetching only what you need if it means you have to do a second query to find the total number of rows?
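For concreteness, here's a minimal sketch of the two options I'm weighing (the connection details and the widgets table are made up for illustration):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass', { RaiseError => 1 });
    my $sth = $dbh->prepare('SELECT id, name FROM widgets');

    # Option 1: fetch row by row, stopping after 100
    $sth->execute;
    my @first_100;
    while (my $row = $sth->fetchrow_hashref) {
        push @first_100, $row;
        last if @first_100 == 100;
    }

    # Option 2: fetch everything; the array size doubles as the total row count
    $sth->execute;
    my $all   = $sth->fetchall_arrayref({});   # arrayref of hashrefs
    my $total = @$all;
    my @first = @{$all}[0 .. 99];              # assumes at least 100 rows came back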

cheers

Re: fetch row or fetchall
by borisz (Canon) on Nov 09, 2004 at 20:42 UTC
    Use the database to return only 100 lines. See LIMIT.
    select * from t_something limit 100;
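    In DBI terms, a minimal sketch (interpolating a validated literal, since some drivers have historically had trouble binding a placeholder inside LIMIT):

        my $limit = 100;
        $limit =~ /^\d+$/ or die "bad limit";   # never interpolate unchecked input
        my $rows = $dbh->selectall_arrayref(
            "SELECT * FROM t_something LIMIT $limit",
            { Slice => {} },                    # return each row as a hashref
        );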
    Boris
      Note that LIMIT is not standard SQL. It's a MySQL extension. Don't use it if portability is required or if you're not using MySQL.
        It works at least with MySQL, PostgreSQL, and SQLite. But thanks for that surprising tip.
        Boris

        LIMIT is indeed, er... limited. But there is usually some equivalent. For example, MS SQL Server allows

        SELECT TOP 100 * FROM table

        As ikegami said, it isn't portable. But if you use stored procedures for your queries, or something like SQL::Abstract, you should still be able to offload the limiting to the RDBMS.

        But, to answer the OP's question...

        From a memory-conservation POV, you are best looping to grab the first 100 lines. From a performance standpoint, you'll really have to benchmark, because it depends a lot on your RDBMS configuration. Even though the query has returned, all that means is that the DB has located the data -- it hasn't sent it, yet. So, asking for all 10_000 lines might be a significant performance hit, especially if the results will be sent over a slow net link.

        The only issue with the loop approach is that you must make sure that your query returned at least 100 records, else handle the problem of running out of data. Something like:

        my @results;
        while (my $row = $sth->fetchrow_hashref) {
            push @results, $row;
            last if @results == 100;
        }
        warn "Wasn't able to get 100 records" if @results < 100;

        radiantmatrix
        require General::Disclaimer;

Re: fetch row or fetchall
by tachyon (Chancellor) on Nov 09, 2004 at 22:08 UTC
    See gmax's analysis of many performance-related questions in Speeding up the DBI. fetchall_arrayref is often the fastest option. As noted, with MySQL, LIMIT is often a good clause to add.
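    Worth adding: fetchall_arrayref takes an optional $max_rows argument, which gives you the speed of a bulk fetch while still stopping early. A minimal sketch:

        $sth->execute;
        # undef slice = rows as arrayrefs; 100 = stop after 100 rows
        my $first_100 = $sth->fetchall_arrayref(undef, 100);
        $sth->finish;   # tell the driver to discard the rest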

    cheers

    tachyon

      If I put in a MySQL LIMIT or restrict by Oracle ROWNUM, then I have to do a second query to get the real, unlimited total. I reckon doing a second count query is inherently more inefficient than fetching all the results from one query.

      i.e. this is so I can say: results 1 to 100 of 10,000 total rows.

      From what gmax says there, it looks like I am best off (assuming enough memory) losing the count query and any limiting SQL in the initial query, and just going with a fetchall_arrayref on it. Thanks.

        count(*) is an optimised query on MySQL (and probably most other DBs)

        mysql> describe select count(*) from global_urls_http;
        +------------------------------+
        | Comment                      |
        +------------------------------+
        | Select tables optimized away |
        +------------------------------+
        1 row in set (0.00 sec)

        mysql> select count(*) from global_urls_http;
        +----------+
        | count(*) |
        +----------+
        |  9908618 |
        +----------+
        1 row in set (0.00 sec)

        I will guarantee you that pulling 10 million odd rows just to get the count above will take longer than 0.00 sec :-)

        Pulling back 10,000 rows just to get the count and save an extra query has some potentially very undesirable side effects.

        Assuming 512-byte records, the base data is 5 MB. Even with a disk transfer speed of 50 MB/sec, that is a minimum of 1/10th of a second (probably more like half a second in the real world) just to pull that data off the disk. Given most DBs' ability to execute hundreds of queries per second, two queries are likely to be significantly faster, as the expense of pulling 100x as much data as you really want is quite real.

        Anyway, by the time you get that data into a Perl array it is probably 10 MB or more. Now, this may not seem like a problem until you get your head around the fact that Perl essentially never releases memory back to the OS. It does free memory, but typically keeps that memory for its own reuse. So why does that matter? Well, if you have 10-20 long-running parallel processes (mod_perl, for example), the net result is an apparent memory leak over time. As each child makes a 'mega' query, it grabs enough memory for the results. The net result is that each child grows to the size of the largest query it has ever made.

        cheers

        tachyon

        No. Doing a second count query is not "inherently more inefficient" than fetching all the results from one query. Suppose you have 1 million rows but only want 100. Which do you think will be more efficient: a) fetching 100 rows and then fetching the number 1,000,000 (not a million rows, just the number) in a second query, for a total fetch of 101 rows, or b) fetching 1 million rows?
Re: fetch row or fetchall
by steves (Curate) on Nov 09, 2004 at 20:46 UTC

    You can still count the rows and only use the first 100: your loop could keep the first 100 but continue counting until fetching is complete. That's about the same amount of work as fetching all of them with a DBI fetchall call, but fetchall would probably (though not necessarily, depending on the underlying database driver) use more memory in your application.
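    A sketch of that pattern (keep the first 100, but keep counting to the end):

        my @kept;
        my $total = 0;
        while (my $row = $sth->fetchrow_hashref) {
            $total++;
            push @kept, $row if @kept < 100;   # keep only the first 100
        }
        # $total is the full row count; @kept holds at most 100 rows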

      "your loop could keep the first 100 but continue counting until fetching is complete"

      I would rather do a separate query for count(), for performance. Although you are skipping the fetched rows, not using them, they still get transferred across the wire and waste bandwidth.

Re: fetch row or fetchall
by tinita (Parson) on Nov 10, 2004 at 13:18 UTC
    I've got the same problem, and I can just say that it all depends on the data and how complicated the WHERE clause is.
    My select statement could return 200_000 rows at maximum (maybe more), so in my case I don't have an option; I have to use LIMIT (or do a count first), otherwise the rows will cause memory usage to grow too high (even before $sth->fetch*, the rows take memory).

    In your case, if the where clause is complicated, the query will be slow, so doing

    select count(*) from table where ...;
    select bla from table where ... LIMIT 100;

    will be slower than

    select bla from table where ...;

    It's probably best to benchmark it on your table, as results depend highly on your table size, data, and other things.
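    A minimal sketch of such a benchmark with the core Benchmark module (assumes a connected $dbh; the SQL and the 100-row cutoff are placeholders for your real query):

        use Benchmark qw(cmpthese);

        cmpthese(-5, {   # run each variant for at least 5 CPU seconds
            limited_fetchrow => sub {
                my $sth = $dbh->prepare('SELECT bla FROM table WHERE ...');
                $sth->execute;
                my @rows;
                while (my $r = $sth->fetchrow_arrayref) {
                    push @rows, [@$r];   # copy: fetchrow_arrayref reuses its buffer
                    last if @rows == 100;
                }
                $sth->finish;
            },
            fetchall => sub {
                my $all = $dbh->selectall_arrayref('SELECT bla FROM table WHERE ...');
            },
        });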

      Note that the DBI RowCacheSize database handle attribute can often help you balance your memory/speed tradeoff for large queries. But it depends on the underlying database -- not all can make use of it. In practice, I have found some significant differences in query time/memory when changing this for queries that return large numbers of rows.
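      For example, a sketch of setting it at connect time (assuming $dsn, $user, and $pass hold your connection details; drivers that don't support the attribute will silently ignore the hint):

          my $dbh = DBI->connect($dsn, $user, $pass, {
              RaiseError   => 1,
              RowCacheSize => 200,   # hint: cache ~200 rows per fetch round-trip
          });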

      For significant database applications I find it nearly impossible to write SQL that works across all of them. I always end up having to make use of database-specific functions or other constructs. If your application is only going to run on one database, it's probably fine to make use of LIMIT, TOP, or Oracle's ROWNUM. It depends on whether your priority is speed and function or portability across databases.

Re: fetch row or fetchall
by rupesh (Hermit) on Nov 10, 2004 at 04:12 UTC
    You could use TOP.

    SELECT TOP 100 * FROM table

    Rupesh.

      TOP is not ANSI-standard SQL... meaning that its use is limited to those DBMSs that have this extension.

      Team Sybase member

      No one has seen what you have seen, and until that happens, we're all going to think that you're nuts. - Jack O'Neil, Stargate SG-1

Re: fetch row or fetchall
by herveus (Prior) on Nov 10, 2004 at 15:27 UTC
    Howdy!

    It's not clear from your question whether or not you need the total number of rows the query might have returned.

    If you need the first 100 rows from a query that could return several orders of magnitude more rows, do the fetchrow 100 times and then ->finish the statement to tell the server it can pitch the remainder.

    If you also need the total number of rows the query would have returned, do that as a separate step using count().

    If your original query would have returned only 101 rows, you don't save, but you don't get hammered the way you would if you went ahead and fetched all 10,000 rows even if you only needed 100.

    If you are doing two different things, you probably want two separate queries. If performance becomes an issue, you can optimize each separately, with the different aims not getting in the way of each other.
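    Put together, that two-query approach might look like this (a sketch; the statement text and bind value are made up):

        # Query 1: first 100 rows, then tell the server to pitch the rest
        my $sth = $dbh->prepare('SELECT id, name FROM widgets WHERE status = ?');
        $sth->execute('active');
        my @rows;
        while (my $row = $sth->fetchrow_hashref) {
            push @rows, $row;
            last if @rows == 100;
        }
        $sth->finish;   # let the server discard the remaining rows

        # Query 2: the total, as a cheap separate aggregate
        my ($total) = $dbh->selectrow_array(
            'SELECT count(*) FROM widgets WHERE status = ?', undef, 'active',
        );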

    yours,
    Michael
