in reply to Evolving a faster filter?

Hi Curtis,

I suppose you are not interested in technical tips for speeding up remove_unwanted itself, because most of the time is consumed within the ->$filter() calls?

Like avoiding array copies and making faster method calls?

Well, it's hard to tell with the information given, but the filters could dynamically record their performance in previous runs, and you could sort them accordingly.
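To make the idea concrete, here is a minimal sketch (in Python rather than Perl, for brevity) of a wrapper that records each filter's average cost and pass ratio; the names instrument, cost_per_object and pass_ratio are mine, not from the original code:

```python
import time

def instrument(filt):
    """Wrap a filter predicate so it records total runtime, objects
    seen, and objects passed across all previous runs."""
    stats = {"time": 0.0, "seen": 0, "passed": 0}
    def wrapped(objects):
        start = time.perf_counter()
        kept = [o for o in objects if filt(o)]
        stats["time"] += time.perf_counter() - start
        stats["seen"] += len(objects)
        stats["passed"] += len(kept)
        return kept
    wrapped.stats = stats
    return wrapped

def cost_per_object(f):
    """Average cost c[i]: time spent per object checked so far."""
    return f.stats["time"] / f.stats["seen"] if f.stats["seen"] else 0.0

def pass_ratio(f):
    """Average ratio r[i]: remaining/checked objects so far."""
    return f.stats["passed"] / f.stats["seen"] if f.stats["seen"] else 1.0
```

With these two numbers per filter you have everything the ordering heuristics below need.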

Cheers Rolf

Re^2: Evolving a faster filter? (combinatorics)
by LanX (Sage) on Jan 04, 2013 at 17:03 UTC
    > record their performance in previous runs and you sort them accordingly.

    OK, some theoretical thoughts on how to sort them:

    Let c[i] be the average cost of running filter[i] on one object,

    let r[i] be the average ratio of remaining/checked objects after applying filter[i],

    and let o be the number of objects and c the total cost so far.

    Then after applying filter[i]:

    c += o*c[i], and afterwards o *= r[i]

    (the cost is incurred on all o objects present before filtering, so c must be updated before o shrinks).

    So with a chosen ordering 0,1,2,... we get

    c = o*c[0]+ o*r[0]*c[1] + o*r[0]*r[1]*c[2] + ...

    and after factoring out we get the nested form

    c = o * ( c[0] + r[0] * ( c[1] + r[1] * ( ... ) ) )
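The nested form can be evaluated from the inside out, Horner-style. A small sketch (Python for brevity; cost_factor and brute_force_best are my names, not from the thread):

```python
from itertools import permutations

def cost_factor(order, c, r):
    """Expected cost per initial object of applying the filters in
    `order`: evaluates c[0] + r[0]*(c[1] + r[1]*(...)) inside out."""
    total = 0.0
    for i in reversed(order):
        total = c[i] + r[i] * total
    return total

def brute_force_best(c, r):
    """Reference answer: try all len(c)! orderings, keep the cheapest."""
    return min(permutations(range(len(c))),
               key=lambda p: cost_factor(p, c, r))
```

As an aside: if the filters are statistically independent, I believe sorting ascending by c[i]/(1 - r[i]) already gives an optimal order; a branch-and-bound search works even without that assumption.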

    I don't know whether there is a direct closed-form solution to this optimization problem (I could ask some colleagues from uni; I'm pretty sure they'd know), but this equation is a good basis for a "branch and bound" graph-search algorithm:

    the minimal cost-factor of any ordering that starts with filter[0] is at least c[0] + r[0] * c[min], with c[min] being the minimal cost-factor achievable with the remaining filters (i.e. excluding filter[0]).

    This is a pretty good branch-and-bound criterion, which should lead very quickly to an optimal solution without trying all factorial(@filters) permutations like tye did here.
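A sketch of such a search (again Python; best_order is my name, and the bound used here is the crude one from above: any completion costs at least the surviving fraction times the cheapest remaining single-filter cost):

```python
def best_order(c, r):
    """Branch-and-bound over filter orderings.  `cost` is the cost
    accumulated so far, `survive` the fraction of objects still alive."""
    n = len(c)
    best = [float("inf"), None]  # [best cost-factor, best ordering]

    def search(prefix, remaining, cost, survive):
        if not remaining:
            if cost < best[0]:
                best[0], best[1] = cost, prefix
            return
        # Lower bound: every completion must still pay at least
        # survive * (cheapest remaining filter cost).  Prune if that
        # already cannot beat the best ordering found so far.
        if cost + survive * min(c[i] for i in remaining) >= best[0]:
            return
        for i in remaining:
            search(prefix + [i], remaining - {i},
                   cost + survive * c[i], survive * r[i])

    search([], set(range(n)), 0.0, 1.0)
    return best[1], best[0]
```

For example, with c = [1.0, 2.0, 0.5] and r = [0.5, 0.9, 0.8] the search settles on the ordering 0, 2, 1 without enumerating all six permutations.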

    That means that even if the recorded performance of each filter is fuzzy and unstable, you can always adapt and recalculate the ordering on the fly, even with more complicated models that use ranges of values for c and r.

    Cheers Rolf


    For examples of B&B applications see Possible pairings for first knockout round of UEFA champions league or Puzzle Time.