note blokhead I agree with the hunches of my esteemed colleagues -- this problem certainly smells NP-complete. Of course, the details depend on how the problem is generalized, the range of values, and how the objective function is defined (how do you measure "closeness" of a set of numbers to the average?). But since it's probably NP-complete, one would imagine looking for some reasonable heuristics / approximations. <p> I propose the following mind-numbingly simple algorithm: <blockquote> 1. If a person wants N items, then just allocate a <i>random</i> N items to that person. </blockquote> It sounds like a joke, but think about it. From each person's point of view, they get a random sample of the available items. That person's "score" is the mean of their sample. But the expected value of a sample mean is the overall mean. So at least in expectation, each person's allocation tends toward the overall mean. <p> Next, the standard way to refine an algorithm that is only good <i>in expectation</i> is: <blockquote> 2. Run many trials of #1 and keep the best one. </blockquote> To analyze the repeated-trials approach formally, you'd have to do some more statistical analysis that incorporates the variances as well (to know how likely each individual trial is to get close to the objective). I can't make that back-of-the-envelope calculation right now, but at least in the sample you gave, the variance among the items' scores seems very small, so the variances of the sample means are also small. I would therefore expect this algorithm to do fairly well. This makes sense, because the difficulty of the problem seems related to how varied the individual items' scores are -- the distribution of the sample mean is greatly influenced by outliers, for example. <div class="pmsig"><div class="pmsig-137386"> <p> blokhead </div></div>
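<p> A minimal sketch of steps #1 and #2 in Python (the function name and the objective are my own assumptions -- here "closeness" is measured as the sum of squared deviations of each person's sample mean from the overall mean, but any other measure would slot in the same way):

```python
import random

def random_allocation(values, wants, trials=1000, seed=None):
    """Repeated random-trial heuristic (a sketch, not a definitive solution).

    values: list of item scores.
    wants:  list of how many items each person wants (must sum to len(values)).
    Objective (one plausible choice): sum of squared deviations of each
    person's sample mean from the overall mean -- smaller is better.
    """
    assert sum(wants) == len(values)
    rng = random.Random(seed)
    overall = sum(values) / len(values)

    def score(groups):
        # How far each person's mean strays from the overall mean.
        return sum((sum(g) / len(g) - overall) ** 2 for g in groups)

    best, best_score = None, float("inf")
    for _ in range(trials):
        # Step #1: a random allocation is just a shuffle cut into chunks.
        shuffled = values[:]
        rng.shuffle(shuffled)
        groups, i = [], 0
        for n in wants:
            groups.append(shuffled[i:i + n])
            i += n
        # Step #2: keep the best trial seen so far.
        s = score(groups)
        if s < best_score:
            best, best_score = groups, s
    return best, best_score
```

With low variance among the item scores, even a modest number of trials should land quite close to the overall mean for every person.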