Thanks for this! I'm actually writing a Python interpreter in CoffeeScript right now, so the Norvig interpreter and this CoffeeScript version are a great reference to compare against!
1. Brevity is the top priority. Once you learn all the Arc functions, you'll find lots of clever operators that anticipate common programming idioms and let you write very pithy code.
2. It encourages a 50:50 mix of functional and imperative code, using each for what it does best. See the accum command for an example. (If you implement this in Clojure, be sure to use the new transients feature: http://clojure.org/transients)
3. The design of the web server is elegant (though still somewhat alpha)
4. Call/cc is available (this is the one thing you won't be able to implement in Clojure, unless you're some kind of super-guru :-)
5. Intrasymbol syntax is a really promising idea that still needs to be fleshed out a bit.
6. Simplicity. The code behind Arc (i.e., the files ac.scm and arc.arc) is, by design, extremely simple (much simpler than Clojure's).
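The accum command mentioned in point 2 is a good illustration of that functional/imperative blend: an imperative collector used inside an otherwise expression-oriented form. Here's a rough Python sketch of the idea (the name and shape are just an analogy, not Arc's actual implementation):

```python
def accum(body):
    """Rough Python analogy of Arc's accum: run body with a collector
    function, then return everything the collector was called with."""
    collected = []
    body(collected.append)
    return collected

# Gather the squares of 0..3 imperatively, but use the whole thing
# as a single expression:
squares = accum(lambda acc: [acc(x * x) for x in range(4)])
print(squares)  # [0, 1, 4, 9]
```

The caller mutates a list under the hood, but from the outside it reads like a pure expression, which is exactly the mix being praised.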
IMHO, the value of Arc to someone not interested specifically in language design is lower now that Clojure is available. Clojure took many good ideas from Arc and expanded on them in a way that really cut into Arc's value proposition.
I still think pg's rule of "brevity, brevity, brevity" is the right approach in the long run. Hopefully someone will find time to take the best of these two Lisp dialects and create a new language that rethinks "brevity" in terms of the ideas behind Clojure. (I think it'll take more than just a library of macros and functions to do this.)
I really have mixed emotions regarding the two approaches of building on the JVM vs. building from scratch.
On the one hand, building on top of the JVM gives quite a head start in many ways, and I think it's a bit ridiculous how poor the interop situation is for native programs despite the availability of FFIs (although some/much of this may be fundamental to language differences).
On the other hand, the least-common-denominator problem always seems to creep up when building on a VM, plus you have an extra dependency in the chain, and so on.
If the Y axis is features and the X axis is time, it seems clear that building on a VM gives you a higher Y intercept. The question is whether the slope is unduly impeded. I suspect it might be, but that's totally subjective on my part.
>Hopefully someone will find time to take the best of these two Lisp dialects and create a new language in the future that rethinks "brevity" in terms of the ideas behind Clojure.
I'm curious which particular ideas you're talking about? I've used both and I don't know what clojure brings to the table that arc really needs, besides the stuff that arc will certainly get anyway, like libraries, some kind of module system, and facilities for performance tuning. (Although I think clojure could use some better libraries, too :)
As for point 5, Arc uses certain characters as syntax abbreviations iff the special characters occur in what would otherwise be normal symbols (to the Scheme reader). So far, there's
.a  ; is the same as (get a)
a.b ; is the same as (a b)
!a  ; is the same as (get 'a)
a!b ; is the same as (a 'b)
f&g ; is the same as (andf f g)
f:g ; is the same as (compose f g)
~f  ; is the same as (complement f)
where
((get a) b)        ; is the same as (b a) by Arc's indexing
                   ; e.g., ("abc" 0) is #\a
((andf f g) x)     ; is the same as (and (f x) (g x))
((compose f g) x)  ; is the same as (f (g x))
((complement f) x) ; is the same as (no (f x))
See arc.arc for the actual definitions of these operators.
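If it helps to see these combinators outside of Arc, here is a minimal Python rendering of the same semantics (illustrative only; the real definitions live in arc.arc):

```python
def compose(f, g):
    # (compose f g) applied to x is (f (g x))
    return lambda x: f(g(x))

def andf(f, g):
    # (andf f g) applied to x is (and (f x) (g x))
    return lambda x: f(x) and g(x)

def complement(f):
    # (complement f) applied to x is (no (f x))
    return lambda x: not f(x)

even = lambda x: x % 2 == 0
positive = lambda x: x > 0
inc = lambda x: x + 1

assert compose(inc, inc)(1) == 3          # inc(inc(1))
assert andf(even, positive)(4) is True    # both predicates hold
assert complement(even)(3) is True        # 3 is not even
```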
There are precedence and associativity rules, such as
a!b.c ; is the same as ((a 'b) c) because ! and . are
      ; left-associative
f:g:h ; is the same as (compose f g h)
~f:g  ; is the same as (compose (complement f) g)
To explore these more, you can use the ssexpand function in Arc:
arc> (ssexpand 'a:b.c!d)
(compose a b.c!d)
arc> (ssexpand 'b.c!d)
((b c) (quote d))
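To make the precedence concrete, here is a toy Python model of the expansion. This is a hypothetical sketch, not Arc's actual ssexpand: it splits on : first (lowest precedence), treats a leading ~ as complement, and folds . and ! left-associatively. Unlike ssexpand it expands recursively in one pass, and the leading-dot/bang get forms are omitted for brevity.

```python
import re

def expand(sym):
    """Toy model of Arc's intrasymbol expansion (illustrative only)."""
    if ':' in sym:                    # ':' binds loosest: split into compose
        return ['compose'] + [expand(p) for p in sym.split(':')]
    if sym.startswith('~'):           # ~f -> (complement f)
        return ['complement', expand(sym[1:])]
    parts = re.split(r'([.!])', sym)  # '.' and '!' fold left-associatively
    tree = parts[0]
    for op, name in zip(parts[1::2], parts[2::2]):
        arg = ['quote', name] if op == '!' else name
        tree = [tree, arg]
    return tree

assert expand('b.c!d') == [['b', 'c'], ['quote', 'd']]   # ((b c) 'd)
assert expand('a!b.c') == [['a', ['quote', 'b']], 'c']   # ((a 'b) c)
assert expand('f:g:h') == ['compose', 'f', 'g', 'h']
assert expand('~f:g') == ['compose', ['complement', 'f'], 'g']
```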
On top of that, while we're on the topic, I don't know what happens if you jump into another continuation in the following circumstances:
1. you're inside atomic-invoke, call-w/stdout, call-w/stdin, or anything that creates context that needs to be undone on the way out. The problem is that a continuation from in here can be exited multiple times although it was entered only once.
2. Similarly, you're inside protect (equivalent to "finally" in Java): is the "after" fn called, and if so, only once, or once for each re-entered continuation?
3. the continuation you jump into was originally running on another thread.
I should write tests to figure out what happens in all these cases, but if anybody knows the answers I can live with being lazy :)
I would argue that the fact that you can't just say foo!a!x is a borderline bug, as well.
Also, I wish this would work right:
(map `[x _ z] '(1 2 3))
It should give the same result as (map [list 'x _ 'z] '(1 2 3)).
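For reference, here is the intended result sketched in Python, with strings standing in for the quoted symbols 'x and 'z (an analogy only, since Python has no symbol type):

```python
# [list 'x _ 'z] as a Python lambda; 'x and 'z become fixed strings
result = list(map(lambda _: ['x', _, 'z'], [1, 2, 3]))
print(result)  # [['x', 1, 'z'], ['x', 2, 'z'], ['x', 3, 'z']]
```

The point of the quasiquoted bracket form is that only the _ slot varies per element; everything else is inserted literally.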
Some primitive data types leak Scheme internals when printed, such as hash tables; I would call this a bug.
Also, all datatypes should really be readable. For instance,
(read (tostring (pr (obj a 1 b 2))))
should not give an error.
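The property being asked for is a print/read round-trip. Python dicts happen to have it, which makes a handy analogy (this is not Arc code, just an illustration of the invariant the (read (tostring (pr ...))) expression is testing):

```python
import ast

table = {'a': 1, 'b': 2}
printed = repr(table)            # analogous to (tostring (pr ...))
parsed = ast.literal_eval(printed)  # analogous to (read ...)
assert parsed == table           # the round-trip preserves the value
```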
As an aside, if you haven't looked at the map/vector syntax in Clojure, you really should... The fact that you can write [1 2 (+ x 1)] to yield the same result as (list 1 2 (+ x 1)) or `(1 2 ,(+ x 1)) is very compact, clever, and useful for writing succinct code.
(It'll create a vector rather than a list, of course, but that's orthogonal to my point, since all Clojure sequence functions can use the two interchangeably.)
          ()() ()()
           # # # #
         __#_#_#_#__
        {_` ` ` ` `_}
       _{_._._._._._}_
       {_ H A P P Y _}
     _{_._._._._._._._}_
    {_ B I R T H D A Y _}
.---{_._._._._._._._._._}---.
 ( `"""""""""""""""""""""` )
`~~~~~~~~~~~~~~~~~~~~~~~~~~~`
I think the answer is simple: if you allow non-ASCII characters in a Lisp, it would only "feel" right if the characters were fully programmable, as opposed to just adding a handful of extra ones. Lisp users don't like new features that aren't fully programmable; you should be able to declare a new symbol in Lisp code somehow and then be able to use it, IMHO. This, however, is non-trivial to implement (though I've had some ideas about this that I've wanted to code up for years but haven't had time to yet...)
By that I mean that if you define a table library, you should be able to write code that allows the data of a table to appear as a table, with a grid and everything, right within your code editor. That's what I would mean by "fully programmable".
When you say you want "characters to be fully programmable" does this mean that if you had "fully programmable characters" you'd be able to edit tables in your code editor as you describe, or are you making an analogy saying that you'd want characters to be fully programmable like these tables are fully programmable?
The former. I'm saying that if you're going to allow more characters, I'd want to go all the way and have an editor that is part of the "language definition": one that supports arbitrary graphics and UI elements, is controllable through the language in a "Lisp-like" way, and allows code appearance to be 100% customizable right inside the code.
No, I think what people are getting at is... why does program source have to be represented by only ASCII text? After all, if code is data, an IDE is just a data editor, where that data happens to be code.
I think it has been shown many times throughout history that notation matters. In fact, the entire concept of Arc is based on this principle. Otherwise, Common Lisp would be suitable.
Why does it have to be ASCII? No reason. The simplest answer may just be that the language writer feels that adding UTF-8 or UTF-16 support would be a waste of their time, or is beneath them. As for programming in the language, it becomes a matter of what is easy for the programmer to type. In a way, this whole thing is a matter of deciding what lowest common denominator one wants to support.