Ah, I see now. How naive of me to presume that lisp actually worked with the AST like it says it does. Oh well.
Is there any way to optimize the interpreter without sacrificing AST interpretation? Or should I write my own language that says "interpreted languages are supposed to be slow; don't worry about it" for the sake of more powerful (in theory) macros? ^^
Or is there actually no difference between the qualities of the two macro systems? Would you care to enumerate the pros and cons of each system? You can do it on a new thread, if you like.
> Or should I write my own language that says "interpreted languages are supposed to be slow; don't worry about it" for the sake of more powerful (in theory) macros? ^^
You might be interested in Common Lisp's 'macrolet form. I've actually implemented this in Arc waaaaaaay back.
Considering that CL is designed to be compiled directly to native machine code (an even lower-level target than bytecode), you might be interested in how CL makes such first-class macros unnecessary.
I'm very interested in both of those. Would you care to explain? If not, do you have any particularly good resources (besides a Google search, which I can do myself)?
So, how does that work, exactly? Does macrolet tell lisp that since the macro is only defined in that scope, it should search more carefully for it, because it doesn't have to worry about slowing down the whole program?
Err, no. It simply means that the symbol is bound as a macro only within the scope of the 'macrolet form. In practice, most of the time, the desire for first-class macros is really just the desire to bind a particular symbol to a macro within a particular scope, and 'macrolet does exactly that.
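To make that concrete, here's a minimal CL sketch (the 'square macro is just an illustrative name, not anything standard):

    (macrolet ((square (x)
                 ;; SQUARE expands at compile time, and is bound as a
                 ;; macro only within the body of this MACROLET form
                 `(* ,x ,x)))
      (square 5))   ; => 25
    ;; outside the form, SQUARE is no longer bound as a macro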
For the other cases, where a macro should be usable beyond a single scope, the usual approach is to put the module (or whatever) in a package and define a package-level macro.
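Roughly like this (the package and macro names here are just hypothetical placeholders):

    (defpackage :my-utils            ; hypothetical package name
      (:use :cl)
      (:export :with-logging))

    (in-package :my-utils)

    ;; an ordinary global macro, exported so any package that
    ;; uses MY-UTILS can expand it
    (defmacro with-logging (tag &body body)
      `(progn
         (format t "~&entering ~a~%" ,tag)
         ,@body))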