[Openmcl-devel] Undocumented heap size limit?
dlw at itasoftware.com
Wed Jul 28 10:38:51 CDT 2010
In our server (which I presume you know pretty well),
we've been doing just fine running for a long time.
One trick we put in was: whenever we finish processing
a request, if the heap is getting big, we run a GC. The
idea is to run the GC at a point when there is as little
reachable memory as possible. Actually, we have two
parameters: one for the threshold heap size (currently set
to 1100000000) and one that says GC every N requests
(currently 1000). We are running with --heap-reserve 13G.
I don't know whether we have empirical evidence that this
improves throughput; I'll see if I can find out. But it
certainly seems to make sense.
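For concreteness, the trick above can be sketched in Common Lisp. `ccl:gc` is Clozure CL's function for forcing a full collection; the parameter names and the `heap-size-estimate` helper are hypothetical stand-ins for illustration, not CCL API.

```lisp
;; Sketch of the GC-after-request trick described above.
;; CCL:GC forces a full GC in Clozure CL; everything else here
;; (names, the heap-size helper) is a hypothetical illustration.
(defparameter *gc-heap-threshold* 1100000000
  "Force a GC after a request once the heap grows past this many bytes.")
(defparameter *gc-request-interval* 1000
  "Also force a GC every this many requests, regardless of heap size.")

(defvar *requests-since-gc* 0)

(defun heap-size-estimate ()
  "Hypothetical placeholder: return an estimate of the current
heap size in bytes, by whatever means the application uses."
  0)

(defun maybe-gc-after-request ()
  "Call this after each request completes, when little is reachable."
  (incf *requests-since-gc*)
  (when (or (> (heap-size-estimate) *gc-heap-threshold*)
            (>= *requests-since-gc* *gc-request-interval*))
    (ccl:gc)                        ; full GC while the live set is small
    (setf *requests-since-gc* 0)))
```

The point of gating on request boundaries is that a full GC is cheapest when the amount of reachable data is at its minimum, so the collector traces less and frees more per pass.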
Bill St. Clair wrote:
> We've got a live server that spends its time pulling a bunch of XML
> feeds and parsing them into an in-memory "database". It also uses
> Weblocks to provide a web site on the information from the feeds. It
> runs nicely for periods of at least a few days, but still has a few
> memory leaks that require it to be occasionally restarted.
> We have it set up with the default GC thresholds. It's 64-bit, so it
> reserves 512 gigs of virtual memory from the OS. It fairly quickly fills
> up to about 160 megs of lisp heap, at which point it does a full GC
> about every 40 seconds, taking 1/3 to 2/3 of a second for that GC.
> This morning we discovered it with a heap size of about 1.6 gigs. It was
> spending most of its time in the GC, taking 6 to 12 seconds in each full
> GC to recover about 40 megs.
> Are we running up against some undocumented size limit, or is it more
> likely that a full GC in that big a heap just takes that long, so we've
> got to increase our ephemeral generation sizes (we're currently using
> the defaults) and/or eliminate our memory leaks and make the code cons
> less if we're not going to spiral into that black hole?
> Openmcl-devel mailing list
> Openmcl-devel at clozure.com
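The knobs the question mentions can be adjusted at runtime. This is a minimal sketch assuming CCL's `ccl:configure-egc` (which takes the three ephemeral generation sizes in kilobytes) and `ccl:set-lisp-heap-gc-threshold` / `ccl:use-lisp-heap-gc-threshold`; the specific sizes below are illustrative values, not recommendations.

```lisp
;; Sketch: enlarging the ephemeral generations and the full-GC
;; threshold in Clozure CL. Sizes for CONFIGURE-EGC are in
;; kilobytes, youngest generation first; values are illustrative.
(ccl:egc nil)                         ; disable the EGC while reconfiguring
(ccl:configure-egc 8192 16384 32768)  ; generation 0/1/2 sizes, in KB
(ccl:egc t)                           ; re-enable the EGC

;; Let the heap grow by 256 MB beyond the live data before the
;; next full GC, instead of the much smaller default.
(ccl:set-lisp-heap-gc-threshold (* 256 1024 1024))
(ccl:use-lisp-heap-gc-threshold)      ; apply the new threshold now
```

Larger ephemeral generations let more short-lived data die before it is promoted to the oldest generation, which is what full GCs have to trace; that, plus consing less and fixing the leaks, is what keeps full-GC pauses from growing with the heap.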