1) No, you should implement the math in the underlying machine instructions, which are guaranteed to be as precise and as fast as the manufacturer can make them. Fortunately, those instructions are accessible through the standard C library, whose functions are wrapped by mzscheme, which we then import in Arc.
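If I remember the arc2 bindings right, you can already see that pass-through with the couple of math functions it does expose; sqrt and expt bottom out in mzscheme and, below that, in the C library:

  arc> (sqrt 2)
  1.4142135623730951
  arc> (expt 2 0.5)
  1.4142135623730951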
2) It should be, and it isn't.
  ; memoize the wrapper; the real work is a
  ; tail-recursive loop with accumulator a
  (defmemo fac (n)
    ((afn (n a)
       (if (> n 1)
           (self (- n 1) (* a n))
           a))
     n 1))
3) Yes, arc-on-mzscheme handles this automagically (fixnums silently promote to bignums). arc2c does not; I think it'll just overflow.
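For instance, with the fac above, arc-on-mzscheme sails right past the fixnum range (hypothetical REPL session; the values are just 20! and 30!):

  arc> (fac 20)
  2432902008176640000
  arc> (fac 30)
  265252859812191058636308480000000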
Implementing numerically stable and accurate transcendental functions is rather difficult. If you're going down that road, please don't just use Taylor series, but look up good algorithms that others have implemented. One source is http://developer.intel.com/technology/itj/q41999/pdf/transen...
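To make that concrete, here's a sketch of the naive approach (taylor-sin is hypothetical, not in any Arc library): a truncated Maclaurin series is fine near zero and useless for large arguments, which is exactly why the good algorithms do argument reduction and use minimax polynomials instead.

  ; naive 10-term Maclaurin series for sin;
  ; each term is the previous one times -x^2/((2k)(2k+1))
  (def taylor-sin (x)
    (with (term x sum x)
      (for k 1 9
        (= term (* term (/ (* -1 x x)
                           (* 2 k (+ (* 2 k) 1)))))
        (= sum (+ sum term)))
      sum))

(taylor-sin 0.5) should agree with the true value to machine precision, while (taylor-sin 20.0) comes out off by orders of magnitude, because the enormous intermediate terms cancel catastrophically in floating point.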
That said, I don't see much value in re-implementing math libraries in Arc, given that Arc is almost certainly going to be running on a platform that already has good native math libraries.
I figured that being close to the machine instructions was a good thing, but I thought we should get there via some other method, not necessarily Scheme, which may or may not remain the base of Arc in the future.
That being said, if you think pulling from Scheme is a good idea, why don't we just pull all the other math functions from there as well?
Actually, I think it might be better if we had a spec which says "A Good Arc Implementation (TM) includes the following functions when you (require "lib/math.arc"): ...." Then the programmer doesn't even have to care about "scheme functions" or "java functions" or "c functions" or "machine-language functions" or "SKI functions": the implementation imports them by whatever means it wants.
Maybe also spec that the implementation can reserve the plain '$ for implementation-specific stuff.
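As a sketch of what lib/math.arc could then look like on arc-on-mzscheme, assuming '$ behaves like Anarki's, i.e. it evaluates an mzscheme expression and hands the value back (so Scheme procedures come back directly callable from Arc):

  ; hypothetical lib/math.arc: satisfy the spec by
  ; pulling the host's libm-backed functions through
  (= sin  ($ sin))
  (= cos  ($ cos))
  (= tan  ($ tan))
  (= asin ($ asin))
  (= acos ($ acos))
  (= atan ($ atan))
  (= exp  ($ exp))
  (= log  ($ log))

An arc2c-style implementation would satisfy the same spec by other means, say by emitting calls into the C library directly, and the programmer never has to care.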