Re^2: Patience Sorting To Find Longest Increasing Subsequence

by Limbic~Region (Chancellor)
on May 05, 2006 at 15:51 UTC


in reply to Re: Patience Sorting To Find Longest Increasing Subsequence
in thread Patience Sorting To Find Longest Increasing Subsequence

bart,
With regard to your follow-up in the CB concerning efficiency: without resorting to complex data structures, the partial patience sort can be done in O(N log N), assuming a binary search is used to place each card. Finding the LIS is at most an additional O(N) in the worst case, so O(N log N + N) overall. The question you posed is whether the O(N^2) merge sort is the most efficient way to finish the sort.
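
For concreteness, here is a minimal sketch of that dealing phase (my own illustration, not code from elsewhere in the thread; the name deal_piles is made up): a binary search over the pile tops keeps the partial sort at O(N log N), and the number of piles left at the end is the length of the LIS.

    use strict;
    use warnings;

    sub deal_piles {
        my @cards = @_;
        my @piles;                              # each pile is an array ref, top card last
        for my $card (@cards) {
            # binary search for the leftmost pile whose top card is >= $card
            my ( $lo, $hi ) = ( 0, scalar @piles );
            while ( $lo < $hi ) {
                my $mid = int( ( $lo + $hi ) / 2 );
                if ( $piles[$mid][-1] < $card ) { $lo = $mid + 1 }
                else                            { $hi = $mid }
            }
            if ( $lo == @piles ) { push @piles, [$card] }    # no such pile: start a new one
            else                 { push @{ $piles[$lo] }, $card }
        }
        return @piles;
    }

    my @piles = deal_piles( 3, 7, 1, 4, 8, 2, 6 );
    print "LIS length: ", scalar @piles, "\n";  # number of piles == length of the LIS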

If you take advantage of the fact that the top cards are already in ascending order, you can select the top card of the leftmost pile and then move that pile so the new top card keeps the tops in ascending order. Finding the new location with a binary search is O(log N). Inserting in the middle is O(N). Since you have to do this for N items, you end up with O(N^2 log N). A merge sort is only O(N^2) worst case so no, I don't think so.
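
A rough sketch of that binary search plus splice approach as I read it (my own paraphrase, not code from the thread; finish_by_splice is a made-up name, and it assumes the piles produced by the dealing phase above):

    sub finish_by_splice {
        my @piles = @_;                         # tops already in ascending order
        my @sorted;
        while (@piles) {
            push @sorted, pop @{ $piles[0] };   # smallest remaining card is on the leftmost pile
            my $pile = shift @piles;
            next unless @$pile;                 # pile exhausted, drop it
            # binary search for where the pile's new top card belongs: O(log N)
            my ( $lo, $hi ) = ( 0, scalar @piles );
            while ( $lo < $hi ) {
                my $mid = int( ( $lo + $hi ) / 2 );
                if ( $piles[$mid][-1] < $pile->[-1] ) { $lo = $mid + 1 }
                else                                  { $hi = $mid }
            }
            splice @piles, $lo, 0, $pile;       # the O(N) insertion in the middle
        }
        return @sorted;
    }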

As I said in the other reply, using a different data structure could make finishing the sort more efficient, but it also adds a great deal more complexity. You are welcome to use a Van_Emde_Boas_tree, which claims to be able to do the whole thing in O(N log N), but that is left as an exercise for the reader.

Cheers - L~R


Re^3: Patience Sorting To Find Longest Increasing Subsequence
by demerphq (Chancellor) on May 06, 2006 at 09:02 UTC

    A merge sort is only O(N^2) worst case so no,

    No, a merge sort is worst case O(n log n).

    You are welcome to use a Van_Emde_Boas_tree which claims to be able to do the whole thing in O(N Log N)

    Actually, the paper by Bespamyatnikh & Segal contains a proof that you can do it in O(N log log N) time. I haven't verified it, though.

    But I doubt that the vEB-based algorithm would in practice beat a simpler algorithm for patience sorting, unless perhaps you were dealing with a deck of tens of thousands, or even millions or billions, of cards. The overhead of maintaining a vEB tree is prohibitive for small datasets. The cost of doing binary operations on the keys, maintaining the vEB tree, and so on would most likely outweigh that of a simpler, less efficient algorithm.

    As bart said, sometimes a binary search is not as fast as a linear scan, even though one is O(log N) and the other is O(N). The reason, of course, is that big-O notation glosses over issues like cost per operation and focuses only on the dominant factor. So if a binary search takes 4 units of work per operation and a linear search takes 1, then binary search only wins when 4 * log N < N, so for lists of roughly 16 or fewer elements there would be no win from a binary search. And I'd bet that in practice the ratio is more like 20:1 than 4:1. Apply this kind of logic to a deck of 52 cards, and IMO it's pretty clear that vEB trees are not the way to go for this, regardless of the proof.
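
    That crossover is easy enough to measure; here is a quick sketch using the core Benchmark module (the array size and target are arbitrary, and the 4:1 / 20:1 ratios above are guesses, so measure on your own hardware):

        use Benchmark qw(cmpthese);

        my @tops = ( 1 .. 52 );                 # a deck's worth of pile tops
        my $want = 37;

        cmpthese( -2, {
            linear => sub {
                for my $i ( 0 .. $#tops ) { last if $tops[$i] >= $want }
            },
            binary => sub {
                my ( $lo, $hi ) = ( 0, scalar @tops );
                while ( $lo < $hi ) {
                    my $mid = int( ( $lo + $hi ) / 2 );
                    if ( $tops[$mid] < $want ) { $lo = $mid + 1 } else { $hi = $mid }
                }
            },
        } );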

    ---
    $world=~s/war/peace/g

      demerphq,
      I had taken bart's word for the merge sort in the CB. I later told him (in the CB) that the math was wrong, but didn't change it because I knew the math was also wrong for his desired method of finishing the sort. The thing neither of us considered is that the problem space decreases with each pass. In any case, regardless of the accuracy of the math, the merge sort is still the most efficient given the data structure.

      Actually the paper by Bespamyatnikh & Segal contains a proof that you can do it in O(N log log N) time.
      What is the "it" that is O(N log log N) time though? The partial sort needed to obtain the LIS, obtaining the LIS itself, or completing the patience sort? What I understood from the paper, which I admittedly only read far enough to know that it was over my head, was that the O(N log log N) was not for a complete sort which Wikipedia agrees with.

      As far as the binary search is concerned, I have provided implementations that get to the partial sort using both methods, so benchmarking shouldn't be hard. Additionally, implementing a binary search and splice approach to bench against the merge sort is also straightforward.
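
      For the other side of that benchmark, a hypothetical sketch of a merge-based finish (my own illustration, not one of the implementations referred to above; the names finish_by_merge and merge_two are made up): reverse each pile so it runs ascending, then merge the piles pairwise until one run remains. Note how the number of runs shrinks with each pass.

          sub finish_by_merge {
              my @runs = map { [ reverse @$_ ] } @_;   # piles are descending, so flip them
              while ( @runs > 1 ) {
                  my @next;
                  push @next, merge_two( shift @runs, shift @runs ) while @runs >= 2;
                  push @next, @runs;                   # an odd run carries over to the next pass
                  @runs = @next;
              }
              return @{ $runs[0] || [] };
          }

          sub merge_two {
              my ( $x, $y ) = @_;
              my @out;
              push @out, ( $x->[0] <= $y->[0] ? shift @$x : shift @$y ) while @$x && @$y;
              return [ @out, @$x, @$y ];
          }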

      Cheers - L~R

        What is the "it" that is O(N log log N) time though?

        From what I understand, it's both finding the LIS and doing the patience sort. Actually, if I understand things correctly, the B&S algorithm finds all increasing sequences in O(N log log N).

        Also, I have an implementation of patience sorting with backrefs for the LIS that I will post when I get a moment.
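
        In the meantime, a minimal sketch of the usual backref idea (my own guess at the technique, not the promised implementation; lis_by_backrefs is a made-up name): each card records the card that was on top of the pile to its left when it was dealt, and following those backrefs from the top card of the last pile yields an LIS in reverse.

            sub lis_by_backrefs {
                my @cards = @_;
                my @top;                    # $top[$i] = [ value, backref ] for the top of pile $i
                for my $card (@cards) {
                    # binary search for the leftmost pile whose top value is >= $card
                    my ( $lo, $hi ) = ( 0, scalar @top );
                    while ( $lo < $hi ) {
                        my $mid = int( ( $lo + $hi ) / 2 );
                        if ( $top[$mid][0] < $card ) { $lo = $mid + 1 } else { $hi = $mid }
                    }
                    # remember the current top of the pile immediately to the left
                    $top[$lo] = [ $card, $lo ? $top[ $lo - 1 ] : undef ];
                }
                my ( @lis, $node );
                for ( $node = $top[-1]; $node; $node = $node->[1] ) {
                    unshift @lis, $node->[0];
                }
                return @lis;
            }

            print join( ' ', lis_by_backrefs( 3, 7, 1, 4, 8, 2, 6 ) ), "\n";   # prints "1 2 6", one LIS of the sample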

        ---
        $world=~s/war/peace/g
