http://qs321.pair.com?node_id=822856

andreas1234567 has asked for the wisdom of the Perl Monks concerning the following question:

I have two machines that, with one exception (the amount of RAM), I believed were identical when I first wrote this article; in all other respects they appeared to be identical. Some say less is more, but it is a real surprise to see that the machine with 40 GB of RAM seems to be roughly twice as fast as the machine with 64 GB of RAM. The behavior is consistent, as illustrated by this little benchmark (which is CPU intensive but requires little memory):
    use warnings;
    use strict;
    use JSON::XS;
    use Benchmark qw(cmpthese);

    my $coder = JSON::XS->new->ascii->pretty->allow_nonref;

    my $hashref = {
        one   => 1,
        two   => 2,
        three => 3,
        four  => { nested => 'bird' },
    };
    my $arrayref = [ 'one', 'two', 'three', 'four', 'five', 'six' ];

    # cmpthese can be used both ways as well
    cmpthese( -1, {
        'enc+dec hashref '  => sub { $coder->decode( $coder->encode( $hashref ) ) },
        'enc+dec arrayref ' => sub { $coder->decode( $coder->encode( $arrayref ) ) },
    } );
    __END__
    [me@first ~]$ ./perl-json-test.pl
                           Rate  enc+dec hashref  enc+dec arrayref
    enc+dec hashref    344063/s               --              -22%
    enc+dec arrayref   438597/s              27%                --

    [me@second ~]$ ./perl-json-test.pl
                           Rate  enc+dec hashref  enc+dec arrayref
    enc+dec hashref    153121/s               --              -24%
    enc+dec arrayref   200972/s              31%                --
A colleague commented that Red Hat Linux has a terrible track record in High Performance Computing and that its memory management is not well suited to two-digit gigabytes of RAM. He suggests using another Linux distribution (such as SuSE), but client policy prevents that.

Have any of the honorable monks made similar observations, or do you have a reasonable explanation for why the machine with the most memory is the slower one?

Update 1: Actually, I have 4 machines, of which 2 are pairwise identical. Each machine behaves identically to its twin.

Update 2: Thanks for the helpful replies. As I have only remote access to the machines in question, I can't experiment as broadly as I otherwise would, but I'll surely update this thread once I get some results. I have asked the service provider to reduce the physical memory down to 16 GB.

Update 3: It turns out the machines were not identical after all (see top), and reducing the memory down to 16 GB did not significantly change the benchmarks. Whether or not the PCI bus type can account for the huge difference in overall performance is unknown.

--
No matter how great and destructive your problems may seem now, remember, you've probably only seen the tip of them.

Replies are listed 'Best First'.
Re: Less is more? 40G memory twice as fast as 64G
by Herkum (Parson) on Feb 12, 2010 at 14:51 UTC

    It would be a mistake to assume that the two systems are completely identical. The main issue is that parts (in particular chips) can come from different places, even for the same motherboard.

    Also, you would be assuming that all your parts are working correctly. Most computer tools don't have a good way of measuring what is going on inside the hardware (for example, the timing and throughput of data going through the internal bus).

    A suggestion: one of the previous posters mentioned optimal memory, and since RAM is generally easy to add and remove, you can try this simple experiment. Swap the RAM from one system to the other and then run your performance test. If the results are different, it may have something to do with the RAM. If the RAM is a problem, you could swap RAM sets and test different combinations, and see if you get weird results that would point to a problem with the RAM itself rather than the system.

      Good idea! I'm interested in hearing the results of a RAM swap.
Re: Less is more? 40G memory twice as fast as 64G
by zentara (Archbishop) on Feb 12, 2010 at 12:43 UTC
    Sure, makes zen sense to me. You are overburdening the processor with the extra memory, which needs a lot of housekeeping and searching.

    I suspect there is an optimum memory size for each motherboard-cpu combo out there.

    A similar problem used to occur in older Pentium-era computers back in the day. A motherboard might say it could handle 4 GB of RAM, but it ran faster with 1 GB.

    I think of it as a chef at a worktable. The chef can work faster with a small table, where everything is within arms reach.... and is definitely slowed down by a giant worktable.... the table (ram size) needs to be just the right size for the intended job.


    I'm not really a human, but I play one on earth.
    Old Perl Programmer Haiku
      A similar problem used to occur in older Pentium-era computers back in the day. A motherboard might say it could handle 4 GB of RAM, but it ran faster with 1 GB.

      Yeah, those were the times ... ;-)

      The actual reason was that the mainboard + chipset combination was able to cache only the first 2**n bytes of memory; the remaining memory was accessed uncached. On some mainboards one could add a tag RAM chip for an additional 2**n bytes of cached memory, but often that was still a factor of 2 or 4 less than the maximum memory size.

      DOS and ancient Windows ran in the lower parts of memory, whereas modern operating systems use the full memory range, often one half of the addressable space for applications and the other half for the OS. On such a mainboard, either the application or the OS would run uncached, assuming a linear map from virtual to physical memory.

      Another, similar effect could be observed with EMS, where "high" memory was mapped into a small window in "low" memory, first in hardware, later using x86 protected or virtual mode. In those days, "high" memory started at 1 MB. Now we have a similar barrier at 2**32 bytes = 4 GB, and tricks similar to EMS are used to make use of "high" memory beyond the 4 GB barrier. The only way to get back to a "flat" memory model is to switch the CPU and the OS to 64-bit mode.

      So, I would look at the hardware specs to learn whether there is a limit on cacheable memory (I doubt it, because caching is now done inside the CPU package, no longer on the mainboard), and I would check that the OS is running in 64-bit mode.
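
      For example, a quick check from the shell (just a sketch, assuming a Linux box with the usual tools and a stock perl):

          # "x86_64" means a 64-bit kernel; "i686" or similar means 32-bit
          uname -m

          # and check whether perl itself was built with 64-bit support
          perl -V:archname
          perl -V:use64bitall

      On a full 64-bit setup you should see x86_64 from both, and use64bitall='define'.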

      Alexander

      --
      Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
Re: Less is more? 40G memory twice as fast as 64G
by BrowserUk (Patriarch) on Feb 12, 2010 at 15:59 UTC

    The first thing I'd do is take half the memory out of the 64GB machine and benchmark again.

    The second thing would be to swap (all) the memory between the two machines and try again.

    That might tell you whether it's the amount of memory, the actual physical memory, or the particular processor that is the source of the difference.

    Another thing worth a try would be to use whatever bios-based hardware diagnostics are available and run the memory & system board test suites.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Less is more? 40G memory twice as fast as 64G
by davis (Vicar) on Feb 12, 2010 at 18:44 UTC
    I suspect something weird is going on. Try booting the memtest utility (it's available from the RHEL repositories) and pay particular attention to the "memory bandwidth" report.
    If I had to guess, I'd say you've got e.g. 5300-speed RAM in both machines, but when the machine is filled with RAM the memory controller slows down the bus clock (presumably for power/heat reasons), actually slowing down the RAM. In these circumstances you'll get faster results with less RAM.
    Compare the RAM speeds; it's almost certainly your CPU->RAM bandwidth. Oh, and I wouldn't pay much heed to the RHEL vs. SuSE debate on HPC use; I use RHEL-based OSes for HPC all the time.
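
    One way to compare the installed module speeds without opening the cases (a sketch; dmidecode needs root, and the exact field labels vary by BIOS vendor):

        # dump the DMI information for the memory devices and pull out the
        # reported clock; compare these lines between the two machines
        dmidecode --type memory | grep -i speed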

    davis

Re: Less is more? 40G memory twice as fast as 64G
by Anonymous Monk on Feb 12, 2010 at 13:33 UTC
    Is it possible to restrict the RAM available via a boot option, for some empirical testing to find an optimum amount?
    (And to ensure that the RAM quantity in use is actually making the difference, rather than some other effect.)
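
    For example, on the GRUB that ships with RHEL 5, appending mem=16G to the kernel line in /boot/grub/grub.conf caps the usable RAM at 16 GB for the next boot (a sketch; the kernel file name and root device below are only placeholders, keep whatever your grub.conf already lists):

        # /boot/grub/grub.conf -- tell the kernel to ignore physical
        # memory above 16 GB
        kernel /vmlinuz-2.6.18 ro root=/dev/VolGroup00/LogVol00 mem=16G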
Re: Less is more? 40G memory twice as fast as 64G
by MidLifeXis (Monsignor) on Feb 12, 2010 at 14:49 UTC

    Any hardware problems, overheated CPUs (dust in the fan ;-), extra processes (causing swapping, task switching), etc., etc., etc.?
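
    Both are easy to rule out while the benchmark runs (a sketch using the standard procps tools):

        # sample memory and swap activity once a second, five times;
        # non-zero "si"/"so" columns mean the box really is swapping
        vmstat 1 5

        # one batch-mode snapshot of the busiest processes, to spot
        # anything competing with the benchmark for CPU
        top -b -n 1 | head -20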

    It is said that "only perl can parse Perl." I don't even come close until my 3rd cup of coffee. --MidLifeXis

Re: Less is more? 40G memory twice as fast as 64G
by Marshall (Canon) on Feb 14, 2010 at 15:26 UTC
    I don't know much about RedHat, but I came across this paper; it might be of help: Red Hat Tuning rhel4_vm.pdf

    Some OSes will make assumptions about how the machine is going to be used based upon the amount of memory they find upon installation, which of course may or may not be what you want to have happen. The more physical memory you have, the more complex the virtual memory tables become. Anyway, this article sounded relevant to your problem, and you can compare settings between your two different machines. It's possible that the box with more memory will require different settings. It sounds like some experimentation is going to be required, and we are all curious as to what you find out!
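
    A quick way to compare the kernel VM tuning between the two boxes (a sketch; run it on each machine and diff the output):

        # dump every vm.* knob (swappiness, dirty ratios, and friends)
        # so the 40 GB and 64 GB machines can be compared side by side
        sysctl -a 2>/dev/null | grep '^vm\.' | sort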