> I would like to note that arcfn is out of date. I believe it references Arc 2, not Arc 3.1. I'm sure that I or others can help you out if you have any questions. One of the changes between Arc 2 and Arc 3 is that Arc 3 uses "assign" whereas Arc 2 uses "set".
Thanks. I did notice ac.scm uses assign whereas arcfn talks about 'set'. Anyway, I didn't implement assign or set, nor scar or scdr. I just implemented '=' as a special form.
Thanks for #3 btw, my '=' only worked with one variable before; now fixed.
The brainstorm file never gets cleaned up; I keep adding new brain dumps there without clearing out the previous ones.
I think closures already work; I just haven't tested them enough.
> How it works is that every function has an environment. This environment contains two bits of information: a mapping between variables and values, and a reference to the outer environment.
> Then, when looking up a variable, you start in the current environment, and if it's not found, you check in the outer environment. You then repeat this process as many times as necessary until you either find the variable, or reach the global scope.
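In Arc terms, that lookup chain might look something like this rough, untested sketch (make-env and env-lookup are made-up names, and for brevity it treats a nil value as "unbound"):

  (def make-env (outer)
    ; an environment: a table of bindings plus a reference to the
    ; enclosing environment (nil means there is no enclosing environment left)
    (obj vars (table) outer outer))

  (def env-lookup (env name)
    (if (no env)
        (err "unbound variable:" name)      ; ran out of enclosing scopes
        (aif ((env 'vars) name)
             it
             (env-lookup (env 'outer) name))))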
An active, vibrant developer community (like the one around Node.js)
There are of course a whole lot of things I wish Arc had, but the community is the most important aspect: if it were there, it could take care of all the details. The community would make the package management system, build decent libraries/packages, make sure the implementation becomes widely available, and so on.
I heard pg & rtm tried first-class macros but the performance was unacceptable. I would really like to have tried them out, seen results from some performance tests... or something!
How difficult of a hack would it be to give Arc first-class macros again?
If you want to try out first-class macros to see what they could do for you, that's easy enough: write an interpreter for Arc. It'd be slow, of course, but usable enough that you could try out some different kinds of expressions and see whether you like what you can do with them.
I was a fan of fexprs not too long ago, and I still kinda am, but they lost their luster for me at about this point: http://arclanguage.org/item?id=11684
Quoting from myself,
Quote syntax (as well as fexprs in general) lets you take code you've written and use it as data, but it does little to assure you that any of that computation will happen at compile time. If you want some intensive calculation to happen at compile time, you have to do it in a way you know the compiler (as well as any compiler-like functionality you've defined) will be nice enough to constant-propagate and inline for you.
I've realized "compiler-like functionality you've defined" is much easier to create in a compiled language where the code-walking framework already exists than in an interpreted language where you have to make your own.
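(To make "computation at compile time" concrete in Arc terms, here's a trivial, made-up example: a macro is guaranteed to do its work at expansion time, whereas code you merely quote and hand to an fexpr carries no such guarantee.)

  (mac precomputed-sum args
    ; the addition happens once, when the call site gets expanded,
    ; not every time the resulting code runs
    (apply + args))

  (precomputed-sum 1 2 3)   ; the call expands to the literal 6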
If part of a language's goal is to be great at syntax, it has a conflict of interest when it comes to fexprs. They're extremely elegant, but user libraries can't get very far beyond them (at least, without making isolated sublanguages). On the other hand, the dilemma can be resolved by seeing that an fexpr call can compile into a call to the fexpr interpreter. The compiler at the core may be less elegant, but the language code can have the best of both worlds.
This is an approach I hope will work for Penknife. In a way, Penknife's a compiled language in order to support fexpr libraries. I don't actually expect to support fexprs in the core, but I may write a library. Kernel-style fexprs really are elegant. ^_^
Speaking of such Kernel-like libraries, I've thrown together a sketch of a Kernel-like interpreter written in Arc. It's totally untested, but if the stars have aligned, it may only have a few crippling typos and omissions. :-p https://gist.github.com/778492
Can't say I'm a fan of PicoLisp yet, though. No local variables at all? Come on! ^_^
Speaking of speaking too soon, I may have said "user libraries can't get very far beyond [an fexpr language's core syntax]," but I want to add the disclaimer that there's no way I actually know that.
In fact, I was noticing that Penknife's parse/compile phase is a lot like fexpr evaluation. The operator's behavior is called with the form body, and that operator takes care of parsing the rest, just like an fexpr takes care of evaluating the rest. So I think a natural fexpr take on compiler techniques is just to eval code in an environment full of fexprs that calculate compiled expressions or static types. That approach sounds really familiar to me, so it probably isn't my idea. :-p
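A rough, untested Arc sketch of that idea (compilers* and compile-form are names I'm inventing, and "compiled code" here is just another s-expression): each operator is handed the raw form body and returns the code it compiles to, much like an fexpr is handed unevaluated operands.

  (= compilers* (table))

  ; an "operator": receives the unparsed body of the form
  (= (compilers* 'reverse-args)
     (fn (body) `(list ,@(rev body))))

  (def compile-form (form)
    (aif (and (acons form) (compilers* (car form)))
         (it (cdr form))
         form))   ; anything else passes through untouched

  (compile-form '(reverse-args 1 2 3))   ; => (list 3 2 1)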
No harm done. :) PicoLisp appears to have lexical scoping but dynamic binding, although my PLT is too weak to understand all the implications of that. From the FAQ:
> This is a form of lexical scoping - though we still have dynamic binding - of symbols, similar to the static keyword in C. [1]
> "But with dynamic binding I cannot implement closures!" This is not true. Closures are a matter of scope, not of binding. [2]
Sounds like transient symbols are essentially in a file-local namespace, which makes them lexically scoped (the lexical context being the file!), and that transient symbols are bound in the dynamic environment just like internal symbols are. So whenever lexical scope is needed, another file is used. Meanwhile, (====) can simulate a file break, making it a little less troublesome.
But the let example I gave a few comments ago didn't use a transient symbol. Why does it work?
I chatted with PicoLisp's author, Alexander Burger, yesterday on IRC. If I catch him again, I can ask for clarification about the scoping/binding quirks.
I think it works because while you're inside the let, you don't call anything that depends on a global function named x. :) That's in the FAQ too:
> What happens when I locally bind a symbol which has a function definition?
> That's not a good idea. The next time that function gets executed within the dynamic context the system may crash. Therefore we have a convention to use an upper case first letter for locally bound symbols:

  (de findCar (Car List)
     (when (member Car (cdr List))
        (list Car (car List)) ) )
You have a good point about the rendering oddities with whitespace. There's a lot of room for improvement here. Rendering hints would definitely be nice. For instance, the <link> and <input> tags don't typically take a closing tag. And tags like <a>, <b>, <u> and <em> should just print on a single line.
> I'm definitely a fan of this 'annotate approach to building HTML. I like the idea that as the framework gets more refined, it might be possible to output the same page as either HTML or XHTML with just a configuration change.
haha, I didn't quite think of this at the beginning.
My 'render-html is completely decoupled from the building of the HTML. It's just one kind of printer, and it likes to pretty-print things for readability.
We could easily imagine adding several more printers, and even a system for registering and choosing one, with 'render-html being merely an alias to the chosen one.
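Something like this rough sketch, maybe (printers*, register-printer and chosen-printer* are names I'm making up on the spot):

  (= printers* (table) chosen-printer* 'pretty)

  (def register-printer (name f)
    (= (printers* name) f))

  ; 'render-html just delegates to whichever printer is currently chosen
  (def render-html (doc)
    ((printers* chosen-printer*) doc))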
...perhaps it's a good thing that rendering hints aren't part of the tag object.
I have tag-by-tag hints in mine just in case I want two otherwise identical HTML tags to have different rendering behavior, for a browser hack or something. Eventually I do hope to have an 'html hint that causes the element's tag name to be looked up in the renderer's HTML specification of choice, but I may still want to leave that hint out occasionally just to have better control.
Actually, it could just be a matter of laziness. The only hint I use right now is 'collapsible, which I put on the <meta> and <link> tags so that they don't have closing tags when their contents are empty. Since it's only one hint, I haven't bothered to make a whole extensible system based on objects that represent HTML specifications.
Oh hell, I'm not writing a complete system for html specs. Just a quick hack for the most commonly used tags and the way I think they work.
For that matter, I didn't know link and meta could have content.
I changed the rendering engine to print some tags inline and self-close some tags.
(render-html (page 'title "Test" 'js "js.js" 'css "css.css"
  (p "Hello world" (e 'em "emphasis!!!") "did you see that?")
  (p "Btw, this is" (e 'a 'href "google.com" "mylink"))))
<!doctype html>
<html>
<head>
<title>Test</title>
<script type="text/javascript" src="js.js"></script>
<link rel="stylesheet" type="text/css" href="css.css" />
</head>
<body>
<p>Hello world <em>emphasis!!!</em> did you see that?</p>
<p>Btw, this is <a href="google.com">mylink</a></p>
</body>
</html>
> I have tag-by-tag hints in mine just in case I want two otherwise identical HTML tags to have different rendering behavior
The tag object is just a hashtable; you can always hack it and insert whatever custom attributes you want, and then use these as rendering hints in the rendering engine/function.
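For example, something along these lines (purely hypothetical, reusing 'e, 'render-html and the 'collapsible hint from above, and assuming the tag really is a plain table; if it's wrapped with 'annotate you'd go through 'rep first):

  (let tag (e 'link 'rel "stylesheet" 'href "css.css")
    ; stash a custom attribute on the tag's hashtable and let the
    ; rendering engine treat it as a hint
    (= (tag 'collapsible) t)
    (render-html tag))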
> Oh hell, I'm not writing a complete system for html specs.
I sort of am, but only in the very long term. It can start as a tiny type that just holds its own specific special-casing behavior, and then it can grow in complexity as complexity is needed. That said, it may be difficult to predict what behavior is specific to a single HTML specification until one's tried to provide a choice between multiple specs.
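A tiny, untested sketch of what I mean (all the names are invented): the spec can start out as nothing more than an annotated table of per-tag quirks that the renderer consults.

  (def make-html-spec ()
    (annotate 'html-spec (table)))

  (def void-tag (spec tagname)
    ; does this spec say the tag is written without a closing tag?
    ((rep spec) tagname))

  (let html4 (make-html-spec)
    (= ((rep html4) 'link) t
       ((rep html4) 'meta) t)
    (void-tag html4 'link))   ; => t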
> I changed the rendering engine to print some tags inline and self-close some tags.
Awesome. :D
> For that matter, I didn't know link and meta could have content.
They're not supposed to. (It would be interesting to look at the DOM to see if they can in practice.) I'm talking about when I might potentially want to give them content for a weird browser-specific hack.
It seems to me that when a symbol is bound locally inside a function, that binding should temporarily override whatever is in the global symbol table; but that doesn't seem to be the case.
> It seems to me that when a symbol is bound locally inside a function, that binding should temporarily override whatever is in the global symbol table; but that doesn't seem to be the case.
Macros are a special case. If it's a globally defined function, then the temporary binding overrides. But if it's a macro, then it doesn't.