[Openmcl-devel] process-run-function and mach ports usage
dlw at itasoftware.com
Fri Feb 25 11:01:43 CST 2011
Gary Byers wrote:
> Just to clarify: most OSes distinguish between the ideas of "reserving
> space" and "committing resources - like physical memory - to a set of
> pages". On Windows, they're separate operations; on Unix-like systems,
> the mmap system call (with different sets of options) can reserve,
> commit, or do both at the same time.
And in fact, ObjectStore, the DBMS I was co-designer
of, was entirely built on this mode of operation.
> Actually making those resources (physical pages) available usually
> happens lazily: when a committed page is first touched (sometimes that
> means "when it's first written to", other times it means "read from or
> written to"), a physical page is allocated.
In ObjectStore, we'd mmap a newly-created file, and then explicitly
touch all the pages, in order to make sure that the
process never ran out of pages later. The amount of address space
to allocate was a parameter, adjustable by an environment
variable and probably other ways, which controlled
some tradeoffs, not relevant enough to go into here.
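The page-touching trick described above can be sketched like this; writing one byte per page forces the OS to back every page immediately instead of faulting lazily later (the page size is passed in here for simplicity; a real program would get it from `sysconf(_SC_PAGESIZE)`):

```c
#include <stddef.h>

/* Touch one byte in every page of [base, base+len) so each page gets
 * physical backing now rather than at some unpredictable later fault.
 * Writing the byte's own value back preserves the region's contents. */
static void touch_pages(volatile char *base, size_t len, size_t page_size) {
    for (size_t off = 0; off < len; off += page_size)
        base[off] = base[off];  /* read, then write back unchanged */
}
```

Because the write stores the value just read, the region's data is unchanged; the point is purely to trigger a write fault on every page up front.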
> A system that doesn't overcommit will make sure that N physical pages are
> or can be made available when an application asks for N logical pages to
> be committed; if it can't guarantee this, the commit operation will fail.
> A system that uses an overcommit strategy will allow some "commit N
> pages" operations to succeed even if N physical
(And in case anyone listening isn't clear on this, by
"physical pages" we mean space on disk, rather
than anything to do with RAM.)
> pages aren't
> available; it's gambling that enough physical pages will be available
> by the time the application starts touching the logical pages. (This
> strategy should be familiar to anyone who's written a rent check the
> day before payday: if you get to the bank before the landlord does,
> great, and if not you were probably going to have to sleep in your car
> anyway.)
Or to put it another way, it allows application writers to
be lazy, by just mallocing a huge amount of stuff and
not worrying about running out. I have been told
that that's why turning off overcommit is not
the default.
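For anyone who wants to see which policy their system is using: on Linux the overcommit behavior is exposed as `/proc/sys/vm/overcommit_memory` (per proc(5): 0 = heuristic overcommit, 1 = always overcommit, 2 = strict accounting). A small sketch that reads it, returning -1 on non-Linux systems where the file doesn't exist:

```c
#include <stdio.h>

/* Read Linux's overcommit policy from /proc.
 * Returns 0, 1, or 2 on Linux; -1 if the file can't be read. */
static int overcommit_mode(void) {
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode = -1;
    if (f) {
        if (fscanf(f, "%d", &mode) != 1)
            mode = -1;
        fclose(f);
    }
    return mode;
}
```

Setting it to 2 (strict accounting) is what "turning off overcommit" means in practice on Linux, and requires root.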
> The analogy (to check fraud) isn't perfect, unfortunately. If you say
> ? (make-array 1-gazillion)
> and you get an error that says "either get serious or get 1 gazillion
> bytes more physical memory", that's likely to be more tractable and
> easier to recover from than it would be to be told that the large array
> could be created (and then get a mysterious memory fault when
> setting/accessing the array's contents.)
> Confession/disclaimer: until about a year ago, there was a bug in CCL
> that caused it to treat some (maybe all) memory commit failures as
> successes, and this caused the same sort of symptoms (a fault trying to
> access memory that'd supposedly been committed) that an overcommit
> failure could. I looked at the buggy code dozens of times before I saw
> that it was simply wrong, and I only realized it when I saw that the
> same problems happened on Solaris (which doesn't overcommit) as on
> systems that do.
> On Thu, 24 Feb 2011, Shannon Spires wrote:
>> NT did it. I don't know if it was first, but you did say "popularized."
>> And at least they provided two different allocation functions;
>> VirtualAlloc() which overcommitted and malloc() which didn't, so you
>> knew what you were getting.
>> Jeez. I'm defending NT. Must be losing it.
>> On Feb 24, 2011, at 2:23 PM, Tim Bradshaw wrote:
>>> On 24 Feb 2011, at 19:55, Gary Byers wrote:
>>>> "What OS introduced/popularized the concept of memory overcommit ?"
>>> I think it was AIX, but it might have been some earlier IBM OS
>>> (mainframe OS).