in reply to STOP Trading Memory for Speed
Have you tried using the latest BerkeleyDB with an in-memory-only database? It may be fast enough and small enough for your needs. If that doesn't work, you could consider using a C hash library and writing a thin Perl front-end to it (maybe with Inline::C), which would give you total control over the speed-vs.-memory tradeoff for your application.
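A minimal sketch of the in-memory route, assuming the BerkeleyDB Perl module is installed (the 256 MB cache size is an arbitrary starting point, not a recommendation):

    use strict;
    use warnings;
    use BerkeleyDB;

    # Omitting -Filename creates an anonymous, purely in-memory
    # database; nothing is written to disk. Tune -Cachesize to
    # your real working set.
    tie my %hash, 'BerkeleyDB::Hash',
        -Flags     => DB_CREATE,
        -Cachesize => 256 * 1024 * 1024
        or die "Cannot create in-memory DB: $BerkeleyDB::Error\n";

    $hash{aardvark} = 42;
    print "$hash{aardvark}\n";    # prints 42

And the Inline::C route needs little more than a binding like the one below to get started; c_add is a toy stand-in, since a real front-end would wrap the insert/lookup functions of whatever C hash library you pick:

    use strict;
    use warnings;
    use Inline C => <<'END_C';
    /* Toy stand-in: replace with calls into your C hash library. */
    int c_add(int a, int b) {
        return a + b;
    }
    END_C

    print c_add(40, 2), "\n";     # prints 42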
Re: Re: STOP Trading Memory for Speed
by perrin (Chancellor) on Sep 25, 2002 at 21:10 UTC
One other thought: have you tried just letting it swap? Depending on your algorithm, the VM system might do a great job of efficiently keeping the part you care about in RAM while swapping the rest to disk. That was my experience with a very large data set on FreeBSD. We didn't even realize it had started swapping because it slowed down so little.
You didn't read his problem.
The memory limit is not physical memory; it is the fact that you can only address 4 GB with a 32-bit pointer.
BerkeleyDB might solve this by taking less memory. It might also be able to use anonymous paging to work even though it cannot directly address most of that memory. The documentation says that data sets that large usually speed up by switching from hashing to a BTREE.
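If the BTREE suggestion pans out, it is a one-line change in the Perl binding; a sketch under the same assumptions as the example above:

    use strict;
    use warnings;
    use BerkeleyDB;

    # Same tied-hash interface, but the BTREE access method
    # instead of HASH; as a side effect, keys come back sorted.
    tie my %hash, 'BerkeleyDB::Btree',
        -Flags => DB_CREATE
        or die "Cannot create in-memory BTREE: $BerkeleyDB::Error\n";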
But letting it swap won't work.