I'm not much into MacOSX frameworks, but for any graphics programming you should remember that GPUs essentially operate on 32-bit floats (any vertex/pixel shader unit works on vectors of four 32-bit float values). Any cast the CPU performs to narrow doubles down to 32-bit floats is just wasted CPU time.
Of course, I'm not saying don't use doubles at all; I'm just not sure where the double format fits in a graphics-framework pipeline. Definitely not anywhere near the low-level code.
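A minimal sketch in plain C of what that wasted work looks like (the vertex count and buffer names here are made up for illustration; assume the buffer is headed for a GPU attribute that expects 32-bit floats):

#include <stddef.h>
#include <stdio.h>

#define VERTEX_COUNT 4  /* hypothetical: x, y, z, w per vertex */

/* If the app keeps positions as doubles, the CPU has to narrow every
   value to a 32-bit float before the buffer can be handed to the GPU. */
static void narrow_to_float(const double *src, float *dst, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = (float)src[i];  /* pure conversion work, no benefit on the GPU side */
}

int main(void)
{
    double positions_d[VERTEX_COUNT * 4] = { 0.0, 1.0, 2.0, 1.0 };
    float  positions_f[VERTEX_COUNT * 4];

    /* This whole pass disappears if the data is stored as float from the start. */
    narrow_to_float(positions_d, positions_f, VERTEX_COUNT * 4);
    printf("first value after narrowing: %f\n", positions_f[0]);
    return 0;
}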
by Boyan — Sep 24