http://qs321.pair.com?node_id=892311


in reply to Re^2: Sharing XS object?
in thread Sharing XS object?

I'm not at all clear about the source of your numbers. Did you benchmark code? Do a big-O analysis? I find it hard to believe that my code, even unoptimized, has a 4-fold increase in total memory consumption over shared variables when only one thread has a copy of $oData and all the other threads request individual bits of data on an as-needed basis.

The OP did not specify the usage pattern of the data in his application, or at least I did not read the post that way. There is nothing there saying that he has a very large number of individual items that have to be simultaneously shared among M threads.

Based on his concern about a tree with an unspecified large number of nodes, and a sample client appearing to search for a particular node named "foo", I assumed he had a quite different scenario in mind. He has a very large data structure, perhaps 1G of data (before Perl overhead). He has threads that need to select bits and pieces from that data structure, e.g. query for a particular node in his tree. At any one time, in any one scope, each thread needs no more than a handful of items out of that huge data structure, let's say 10. Assuming that those 10 items consume 100 bytes each, we are talking about no more than a KB of data required by each client thread. Even without optimizations, I can't possibly see how deep copying 1G of data to each thread (10G total) would be better than 1G held by a server thread and 1K held by each of 10 client threads (1G+10K total). Even if you argued that all that marshalling meant 4x the amount of memory per data item, you still would only have 1G+40K total. That isn't anywhere near 10G, let alone 40G. What am I missing?
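
To make that scenario concrete, here is a minimal sketch of the server/client arrangement I have in mind, using Thread::Queue to do the marshalling. build_huge_tree() and find_node() are hypothetical stand-ins for however the OP actually builds and queries his object; the ten clients and the "foo" lookup just mirror the numbers above.

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    # Hypothetical stand-ins for the OP's real XS object and lookup --
    # replace with whatever actually builds and queries the tree.
    sub build_huge_tree { return { foo => 'node data', bar => 'other data' } }
    sub find_node       { my ($tree, $name) = @_; return $tree->{$name} }

    my $requests = Thread::Queue->new();    # queries from all clients
    my %replies  = map { $_ => Thread::Queue->new() } 1 .. 10;  # one per client

    # Server thread: the only thread that ever holds the big structure.
    # Building it *after* the spawn means no other thread copies it.
    my $server = threads->create(sub {
        my $oData = build_huge_tree();      # the ~1G structure lives here only
        while (defined(my $req = $requests->dequeue())) {
            my ($client_id, $node_name) = @$req;
            # enqueue() clones the reply into shared memory -- that is the
            # marshalling cost, but paid on ~100 bytes per item, not on 1G.
            $replies{$client_id}->enqueue(find_node($oData, $node_name));
        }
    });

    # Client threads: each holds only the handful of items it asked for.
    my @clients = map {
        my $id = $_;
        threads->create(sub {
            $requests->enqueue([$id, 'foo']);    # ask for one node
            my $node = $replies{$id}->dequeue(); # receive ~100 bytes back
            # ... whatever processing needs $node goes here ...
        });
    } 1 .. 10;

    $_->join() for @clients;
    $requests->enqueue(undef);              # tell the server to shut down
    $server->join();

Every reply pays the copy that Thread::Queue makes into shared memory, which is exactly the memory-for-CPU trade under discussion: the per-item overhead, whether 4x or otherwise, applies to the kilobytes in flight, not to the gigabyte parked in the server thread.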

Usually, when you have actually done benchmarking, you post your results in some detail; here you did not. Or did you mean me to read your numbers in a rhetorical light: if the code is 100-fold slower, if the code uses 4x the memory...? It is unclear to me.

If your actual point was "Don't be so cavalier about memory/processing-time trade-offs, because some just aren't worth it", I agree entirely. It is totally silly to take two weeks to do something when the memory constraint could be solved by buying a few more GB of RAM at $10-120 a GB, depending on quality. However, in many applications even a 100-fold increase in per-op time is of minimal concern if that op is only a small part of the larger code: if querying the tree accounts for, say, 0.1% of total runtime, making each query 100x slower adds only about 10% to the overall run. Neither of us knows what percentage of his time the OP spends querying his tree object relative to the other processing he does with whatever data he retrieves.

My point about the marshalling was not that you should tolerate it because that is the price you pay. Rather, I meant just the opposite: be really sure memory is a real problem, because the software solutions to memory constraints are going to cost you.

Update: fixed some typos in numbers.

Update: removed first paragraph - reread my post and BrowserUK's and realized he wasn't complaining that my code failed to optimize itself, but rather that the whole idea of marshalling was not a trade-off of memory consumption at the price of CPU.