The only major disadvantage is that it works based on the name, not the value.
Is this because you're still thinking in terms of creating a new data type that would notify you of changes?
My thought was that at the beginning of read, or at the beginning of eval (whichever worked better), it would check whether ssyntax-rules* had the same contents as the last time it looked.
Now you don't have any special data types or magic going on, just ordinary variables and values.
My idea for hacking eval was that for (assign), it would check if it was assigning to ssyntax-rules* and if so, do something. In other words, implement the hook in eval rather than as a special data type.
That avoids the issue, because it works based on the name rather than the value. If I implemented ssyntax-rules* as a special data type, it would work based on the value rather than the name. But this way, it works no matter what is assigned to ssyntax-rules*.
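For illustration, a hypothetical Arc-level sketch (the actual format of ssyntax-rules* entries in PyArc may well differ); the hook keys purely on the name being assigned to:

  (assign ssyntax-rules* (cons '(#\& andf) ssyntax-rules*))  ; hook fires: the name matches
  (assign my-rules* ssyntax-rules*)                          ; no hook: same value, different name
  (assign ssyntax-rules* my-rules*)                          ; hook fires again, whatever the value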
P.S. Checking ssyntax-rules* at the beginning of each eval sounds dreadfully slow. Eval is called a lot, after all.
I don't have an Arc compiler. Everything is evaled in my interpreter, for better or worse. It can be made fast later, I just want to get it working right now.
It's not just a matter of performance. That's the kind of semantic difference that'll make it really hard for people with existing Arc code to use their code on your interpreter; it won't actually "work" in the sense of being a working Arc implementation. Are you concerned about that?
No need to be; it could very well become a language that's better than Arc (and if modules are in the core, that's definitely a possibility ^_- ).
I'm not terribly concerned about existing code. I would like existing code to work, but it's not a huge priority as in "it has to work no matter what." I've even toyed around with the idea of changing the default ssyntax, which would definitely break code that relies on Arc's current ssyntax.
I'm curious, though, what's the semantic difference? In PyArc, when eval sees a function, it calls apply on it. Apply then calls eval on all of the function's arguments, and then calls the function. How does MzArc differ significantly in that regard?
In MzArc (RacketArc?), 'eval is only called once per top-level command (unless it's called explicitly). If you define a function that uses a macro and then redefine the macro, it's too late; the function's code has already been expanded.
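A tiny example of the difference, with made-up names:

  (mac double (x) `(* 2 ,x))
  (def f (n) (double n))       ; in MzArc, (double n) is expanded right here
  (mac double (x) `(* 10 ,x))  ; too late to affect 'f in MzArc
  (f 3)                        ; => 6 under expand-at-definition semantics,
                               ;    30 if macros are re-expanded at run time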
I rely on this behavior in Lathe, where I reduce the surface area of my libraries by defining only a few global variables to be macros that act as namespaces, and then replacing those global variables with their old values when I'm done. In an Arc that expands macros at run time, like old versions of Jarc, these namespace macros just don't work at all, since the variables' meanings change before they're used.
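Not Lathe's actual code, but a rough sketch of the trick with made-up names:

  (= mylib-x* 42)
  (mac ns (name) (sym (string "mylib-" name)))  ; 'ns acts as a namespace
  (def get-x () ns.x*)                          ; expands to mylib-x* when 'get-x is defined
  (= ns nil)                                    ; afterward, replace 'ns (here just cleared; Lathe restores the old value)
  (get-x)                                       ; => 42 in MzArc; breaks if expansion is deferred to run time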
I also rely on it in my Penknife reimplementation. I mentioned (http://arclanguage.org/item?id=14004) I was using a DSL to help me in my pursuit to define a function to crank out new Penknife core environments from scratch. The DSL is based on a macro 'pkcb, short for "Penknife core binding," such that pkcb.binding-get expands to something like gs1234-core-binding-get and also pushes the name "binding-get" onto a global list of dependencies.
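Not the real Penknife code, but a sketch of a 'pkcb-style macro along those lines (the gs1234- prefix stands in for a gensym):

  (= pkcb-deps* nil)
  (mac pkcb (name)
    (pushnew name pkcb-deps*)            ; record the dependency as a side effect of expansion
    (sym (string "gs1234-core-" name)))  ; expand to the prefixed variable
  ; pkcb.binding-get  =>  gs1234-core-binding-get, and pkcb-deps* now contains (binding-get)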
There's also a global list of Arc expressions to execute as part of the core constructor function. Adding to the list of expressions invalidates the core constructor function.
Whenever you call the core constructor, if it's invalid, it redefines itself (actually, it redefines a helper function) by evaluating code that has a big (with ...) inside to bind all those 'pkcb variables. But at that point I don't actually have a clue which 'pkcb variables to put in the (with ...). Fortunately, thanks to Arc's macro semantics, the act of defining triggers the 'pkcb macro forms, which builds a list telling me exactly what I need to know. Then I define the core constructor again, using that list in the (with ...).
I do all this just to save a little hash-table lookup, which I could have done with (or= pkcb!binding-get (pk-make-binding)). :-p If that's not important to you, I'm not surprised. It's hardly important to me, especially since it's a hash table of constant size; I mainly just like the thought of constructing the bindings eagerly and ditching the hash table altogether when I'm done.
So that example falls flat, and since you're taking care of namespaces, my other example falls flat too. I've at least established the semantic difference, I hope.
Actually, PyArc expands macros at read-time and run-time, as needed. You can even use (apply) on macros.
If that's what aw meant by "Arc compiler", then I misunderstood. I took it to mean "compiling Arc code into a different language, such as assembly/C/whatever".
Given what aw said, though, eval doesn't invoke the Arc compiler... once read-time is over, the data structure is in memory so eval can just use that; it doesn't need to do any more parsing.
So, I'm not sure what aw meant by "eval invoking the Arc compiler."
"Actually, PyArc expands macros at read-time and run-time, as needed."
Ah, okay. The semantic differences may not exist then. ^_^
In ac.scm, the eponymous function 'ac takes care of turning an Arc expression into a Racket expression. Arc's 'eval invokes 'ac and then calls Racket's 'eval on the result, so it is actually a matter of compiling Arc into another language. As for PyArc, in some sense you are compiling the code into your own internal representation format, so it may still make sense to call it a compile phase. The word "compile" is just too general. XD
Then in that case, I think my previous answer is correct: PyArc does not have an Arc compiler, only an interpreter.
What you referred to as compile phase I refer to as read-time, since I think that's more specific. In any case, everything passes through read/eval, which is actually exposed to Arc, unlike MzArc where eval is just a wrapper.
Thus, implementing an additional check in eval could add up, if I'm not careful. So I think I'll add a hook to (assign) instead, that way it can be really fast.
"What you referred to as compile phase I refer to as read-time, since I think that's more specific."
I don't know about calling it read-time. Arc being homoiconic and all, there's a phase where a value is read into memory from a character stream, and there's a phase where a value is converted into an executable format, expanding all its macro forms (and in traditional Arcs, expanding ssyntax).
Read time is kinda taken, unless (read) really does expand macros. If compile time isn't specific enough for the other phase--and I agree it isn't--how about calling it expansion time?
---
"In any case, everything passes through read/eval, which is actually exposed to Arc, unlike MzArc where eval is just a wrapper."
I don't know what consequences to expect from that, but I like it. It seems very appropriate for an interpreter. ^^
---
"Thus, implementing an additional check in eval could add up, if I'm not careful."
Read actually does expand macros. At least for now. In other words, it does the "read characters into memory" and "convert into executable format" phases at the same time.
Calling it expansion time isn't really correct because it doesn't always expand; sometimes it just takes data like 10 or (foo bar) and converts it into the equivalent data structure.
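For example, in standard Arc these just read plain data (PyArc's read would additionally expand any macro forms it finds, as needed):

  (read "10")         ; => 10
  (read "(foo bar)")  ; => (foo bar), an ordinary list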
---
"I don't know what consequences to expect from that, but I like it. It seems very appropriate for an interpreter. ^^"
Well, it means you can overwrite them in Arc, possibly to implement defcall or similar, as opposed to in MzArc where they're hidden.
Maybe we need an official jargon file for Arc, given that we're having a lot of confusion over things like "compile time", what to call pg's original Arc, etc.
I realize that there was an attempt at starting a wiki, but it requires permission to edit and is somewhat different from most wikis I'm familiar with. Maybe a more open data platform would be more accessible, such as wikia? I would quickly set one up there if anyone else is interested (and aw isn't too offended ;)
Well, http://sites.google.com/site/arclanguagewiki/ is up, running, works, is available for editing by anyone who wants to edit it, and I've been planning to continue to contribute to it myself. So... certainly anyone can post material anywhere they want, including a different wiki platform if they like it better, and if it's something useful to Arc we'll include links to it just as we include links to other useful material, whether published in a wiki or a blog post or whatever... but I personally would be looking for a more definite reason to post somewhere else myself.
Is there something specific you don't like about Google Sites?
Some of the things I was looking for:
- no markdown. I despise markdown syntax (when writing about code), with its myriad special syntaxes that I can never figure out how to escape.
- a way to plug in code examples from my upcoming "pastebin for examples" site, which I hope I'll be able to do with a Google gadget.
- a clean layout without a lot of junk gunking up the page