The difference is that Arc evaluates forms from a loaded file one expression at a time. 'def in Arc is simply an assignment to a global variable; technically, 'load doesn't load a module; it executes a program (which in most cases just assigns functions to global names).
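Roughly (modulo some bookkeeping in arc.arc, like recording the signature and warning on redefinition), something like

  (def foo (x) (+ x 1))

just expands to

  (assign foo (fn (x) (+ x 1)))

so "defining" a function is nothing more than evaluating an assignment form.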
Erlang source files, on the other hand, are a set of function definitions - they aren't executed when you load them. There are no globals in Erlang (although function names are effectively global variables that can't normally be mutated). Each Erlang source file is compiled as a single unit, so one source file == one Erlang module.
> About macros, hasn't there been some interest for a while in getting macros to be "first-class" and so forth?
I presume you mean something like this:
  (let my-macro (annotate 'mac
                  (fn (x y)
                    (+ "my-macro says " x " and " y)))
    (my-macro "hmm" "haw"))
> How are they implemented, exactly, that makes this so hard?
It isn't how macros per se are implemented that makes this hard; it's how efficient interpreters are implemented.
One of the slowest kinds of interpreter is the so-called "AST traverser". Basically, the interpreter simply walks the list-like tree structure of the code and executes it as it goes. In a Lisp-like language, the AST is just the list structure read in by the s-expression syntax - which is exactly what macros fool around with.
The slowness comes mostly from having to enter each sub-AST (i.e. a sub-expression; e.g. in (foo bar (qux quux)), (qux quux) is a sub-AST) and then return to the parent AST (in the example, returning to (foo bar _)).
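To make that concrete, a bare-bones AST traverser looks something like this (the names here are made up, special forms and macros are omitted, and the environment is just an alist):

  ; minimal sketch of an AST-traversing evaluator
  (def ast-eval (expr env)
    (if (isa expr 'sym)
         (alref env expr)   ; variable reference
        (atom expr)
         expr               ; self-evaluating literal
        ; otherwise a list: recurse into every sub-AST, then call the head
        (apply (ast-eval (car expr) env)
               (map [ast-eval _ env] (cdr expr)))))

Every nested form means another trip down into ast-eval and back up.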
However, a faster way is to pre-traverse the syntax tree and compile it into a sequence of simple instructions. This is usually called a "bytecode" implementation, though note that the instructions don't literally have to be bytes.
For example (foo bar (qux quux)) would become:
  (call qux quux) ; puts the return value in 'it
  (call foo bar it)
The increase in speed per se is not big (you just lose the overhead of the AST-traversal stack while retaining the overhead of the function-call stack), but it opens up opportunities for optimization. For example, since the code is now a straight linear sequence of simple instructions, the interpreter loop can be very tight (and relatively dumb, so there's very little overhead). It's also possible to transform the linear sequence of simple instructions into even simpler instructions... such as assembly language.
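For illustration, a sketch of that pre-traversal might look like this (everything here is made up for the example; it uses gensym'd temporaries instead of the single 'it register above, so each instruction is (call dest f arg...)):

  ; hypothetical sketch: flatten a nested call into simple instructions
  ; returns (instruction-list result-name)
  (def linearize (expr)
    (if (atom expr)
        (list nil expr)
        (withs (parts  (map linearize (cdr expr))
                instrs (apply join (map car parts))
                vals   (map cadr parts)
                dest   (uniq))
          (list (join instrs (list `(call ,dest ,(car expr) ,@vals)))
                dest))))

  ; (linearize '(foo bar (qux quux)))
  ; => (((call gs1 qux quux) (call gs2 foo bar gs1)) gs2)   ; roughly

Note that linearize has to decide, right at compile time, that (qux quux) is a call to be emitted - which is exactly where macros get in the way.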
However, consider the above sequence if 'foo turns out to be a macro. If it is, then it's too late: the program has already executed 'qux. If it were part of, say, a 'w/link macro, it shouldn't have executed yet. Also, recreating the original form from the instruction sequence is at best difficult and in general highly intractable - and remember that the macro expects the original form.
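For a concrete (made-up) example of why eager execution would be wrong:

  ; 'maybe-later should wrap its argument in a fn to be called later,
  ; not execute it now
  (mac maybe-later (expr)
    `(fn () ,expr))

  (maybe-later (prn "hi"))

If the compiler had already emitted (call prn "hi") before discovering that maybe-later is a macro, "hi" would be printed immediately - exactly what the macro was supposed to prevent.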
So, for efficient execution, most Lisplike systems force macros to be expanded before pre-traversing the AST into the bytecoded form. This also means that macros aren't truly first-class, because they must be expanded during compilation.
In short: most lisplikes (mzscheme included) do not execute the AST form (i.e. the list structures). They preprocess it into a bytecode. But macros work on the AST form. So by the time the code is executed, macros should not exist anymore.
> Also, you mention some effort being made into allowing arc to see if the form in head position resolves to a macro. How hard is that to do?
Trivial - just add a few lines in ac.scm. However, rntz didn't push it to Anarki, which suggests the modification hasn't been very well tested yet. See http://arclanguage.com/item?id=7451 , but the patch itself has been lost T.T . I think it'll work, but I haven't written the patch myself either ^^.
> And good luck on SNAP. I would love to help you, as lisp + erlang (feature wise) is something I am very interested in.
Ah, I see now. How naive of me to presume that lisp actually worked with the AST like it says it does. Oh well.
Is there any way to optimize the interpreter without sacrificing AST interpretation? Or should I write my own language that says "interpreted languages are supposed to be slow; don't worry about it" for the sake of more powerful (in theory) macros? ^^
Or is there actually no difference between the qualities of the two macro systems? Would you care to enumerate the pros and cons of each system? You can do it on a new thread, if you like.
> Or should I write my own language that says "interpreted languages are supposed to be slow; don't worry about it" for the sake of more powerful (in theory) macros? ^^
You might be interested in Common Lisp's 'macrolet form. I've actually implemented this in Arc waaaaaaay back.
Considering that CL is targeted for direct compilation to native machine code (whose instructions are even simpler than bytecode), you might be interested in how CL makes such first-class macros unnecessary.
I'm very interested in both of those. Would you care to explain? If not, do you have any particularly good resources (besides a google search, which I can do myself)?
So, how does that work, exactly? Does macrolet tell lisp that since the macro is only defined in that scope, it should search more carefully for it, because it doesn't have to worry about slowing down the whole program?
Err, no. It simply means that the symbol is bound to the macro only within the scope of the 'macrolet form. In practice, most of the time, the desire for first-class macros is really just the desire to bind a particular symbol to a macro within a particular scope, and 'macrolet does exactly that.
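For instance, in Common Lisp (with-logging here is just a made-up example):

  ;; with-logging exists only within the body of the macrolet form
  (macrolet ((with-logging (form)
               `(progn (format t "running ~s~%" ',form)
                       ,form)))
    (with-logging (+ 1 2)))   ; prints "running (+ 1 2)", returns 3

Outside the macrolet body, with-logging is unbound again.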
For cases where a macro should be available beyond a particular scope, the module (or whatever) is usually placed in a package and a package-level macro is used.