[Openmcl-devel] Detecting "Bitness" of Application
bfulg at pacbell.net
Tue Sep 18 00:59:14 EDT 2007
In working with the 64-bit Intel OpenMCL (on Mac) I am finding lots of
places where API calls through the FFI require explicitly declared
double-float values in 64-bit mode.
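One way to detect the word size at compile time (a sketch, assuming the Lisp pushes :64-bit-target / :32-bit-target onto *features*, as later Clozure CL builds do) is a read-time conditional:

```lisp
;; Sketch, not an official OpenMCL API: relies on the :64-bit-target
;; feature being present in *features* on 64-bit builds.
(defun target-bits ()
  "Return the word size this image was built for."
  #+64-bit-target 64
  #-64-bit-target 32)

;; A zero literal of the float type CGFloat-style arguments expect:
(defconstant +cg-zero+
  #+64-bit-target 0.0d0
  #-64-bit-target 0.0s0)
```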
A good example would be something like:
(rlet ((&rect :<CGR>ect :origin.x 0.0d0 :origin.y 0.0d0
              :size.width (ccl::%double-float Width)
              :size.height (ccl::%double-float Height)))
(let* ((colorSpace (#_CGColorSpaceCreateDeviceRGB))
       (&bitmapCtx (#_CGBitmapContextCreate toBuffer Width Height
                                            8 bytes-per-row colorSpace
                                            #$kCGImageAlphaNoneSkipLast)))
In 32-bit OpenMCL, the CGRect is perfectly happy being initialized
with a basic "1.0" (or even "1.0s0"). However, in 64-bit mode, I am
required to declare using the "1.0d0" notation. This isn't such a big
deal, but what if I want code that compiles in both 32-bit and 64-bit mode?
Is there a bit-width-agnostic way to make declarations such as the one above?
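One bit-width-agnostic option (a hypothetical helper, not part of OpenMCL) is to funnel every coordinate through a small coercion macro; `float` with a prototype argument is standard Common Lisp, and a read-time conditional picks the float type that matches CGFloat on each target:

```lisp
;; Hypothetical helper, assuming the :64-bit-target feature is present
;; on 64-bit builds: coerce X to the float type CGFloat has there
;; (double-float on 64-bit, single-float on 32-bit).
(defmacro cgfloat (x)
  #+64-bit-target `(float ,x 1.0d0)
  #-64-bit-target `(float ,x 1.0s0))

;; Usage in the rlet form above:
;; (rlet ((&rect :<CGR>ect
;;               :origin.x (cgfloat 0) :origin.y (cgfloat 0)
;;               :size.width (cgfloat Width)
;;               :size.height (cgfloat Height)))
;;   ...)
```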
P.S. For people interested in what OpenMCL can do, I have some
examples of OpenMCL doing some simple OpenGL renderings using the Open
Agent Engine logic. Rumor has it that this stuff might even work in
64-bit mode... (see http://lwat.blogspot.com/).