My understanding is that ssyntax can only be part of symbols, so you wouldn't be able to write :(a b c). At any rate, I think the idea of using it is to ape keyword parameters in Common Lisp.
Personally, I like & and | for andf and orf, but as almkglor mentioned, | is taken for "odd" symbols (e.g. '|this is a symbol|), using just / would interfere with things like w/stdout, and using & and // would be asymmetric. In any case, having some ssyntax for andf and orf is definitely a good idea.
Taking inspiration from discrete math, we could use ∧ and ∨ as andf and orf, respectively. This would be in line with pg's current use of ~ as not. On the other hand, maybe we shouldn't make use of non-ASCII characters... (or else you get APL et al)
I would say use ^ and v, but something tells me that disallowing a v in identifiers would be a bad idea (e.g. eval) ;)
My biggest problem with using Unicode is that it's often a pain to type. The Mac gets this the most right of any platform I know, but even so, it's (a) not a standard, and (b) mostly alphabetic. Which is a pity, really, since those do make the most sense.
> My biggest problem with using Unicode is that it's often a pain to type.
Agreed. We might be able to add a hook to arc-mode in emacs to make it more convenient, but it reminds me even more of APL (which, if I recall correctly, required a custom keyboard to type).
> + might not be bad, but it's probably a little too common
I like +. While + does get used in symbols occasionally (most notably in arithmetic), I can't think of any cases where it gets used in the middle of a symbol.
EDIT: On second thought, when I use plus outside of a programming context, I usually mean and, so it might be confusing to use + as 'andf.
The problem with "adding a hook to arc-mode in emacs" is that then you alienate everyone not using Emacs. And that's a good point about + being used to mean &.
"maybe we shouldn't make use of non-ascii characters" : why not ? Characters like λ have their place in a Lisp, and I think ∧and ∨ have their place too. Anyway, such characters wouldn't be used very frequently (do you often use andf & orf ?) and could be ignored if typing them is too painful. That's better than consuming ASCII characters that sometimes fit well in symbol names, IMO.
I think it would be hilariously ironic to make non-ASCII Unicode characters part of Arc's syntax and functions.
A few proposals for giving functions new names: ☢ for atomic, ✄ for cut, ✔ for check, ⌚ for time, ⌛ for sleep, ☠ for kill-thread, ☇ for zap, ♭ for flat.
But anyway, I still think they're not worth losing an ASCII character over, and using mathematical notation would be very useful. It would make code readable by people a little aware of mathematics. That is, most programmers. It would definitely be better than arbitrary characters.
Why should we restrict ourselves to ASCII anyway? I mean, a lot of symbols I use are not ASCII anymore (they are accented; I'm French, so something like 'year is translated into 'année, not into 'annee). Sure, they're hard to type, but do they take any longer than their ASCII counterparts? If you type them often, just make them a vi macro (or whatever in your favorite text editor) and you're done.
It might end up looking like APL, for sure, but I think Fortress, Sun's new language designed by Guy Steele, is going that way too. And Steele cannot be wrong :)
I don't mind non-ASCII, I mind weird non-ASCII. Even in English, we leave ASCII behind: “As I was going to the café—what fun—my naïve friend said…” It's just that I don't know of any keyboard layout that supports ∧ or ∨. I agree that they would look great, as would Fortress.
I wonder if anyone's given any thought to using (La)TeX in a language? So that something like (number{\wedge}acons var) would be the same as (number∧acons var)? Or just as a nice way of typing things like Fortress? (Which I agree looks very interesting.)
I'd probably use them a lot more often if the syntax was a little easier, which is why I suggested using ssyntax for them. Currently the syntax is ((orf this that the-other) foo), and doubled starting parens feel rather strange to me.
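To make the contrast concrete, here's what the doubled parens look like today with two built-in predicates, next to what an andf/orf ssyntax might let you write (the && and // spellings mentioned further down are hypothetical here):

  (= x '(1 2))
  ((orf number acons) x)   ; => t, since x is a cons
  ((andf number odd) x)    ; => nil, x isn't even a number

  ; with a hypothetical // and && ssyntax, the head position could stay flat:
  ; (number//acons x)      ; would read as ((orf number acons) x)
  ; (number&&odd x)        ; would read as ((andf number odd) x)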
What about what cchooper proposed in http://arclanguage.org/item?id=4712: ($car ...) == (map car ...)? That's straightforward to add if we define a new macro, def-ss-, to add new ssyntax with lower precedence (def-ss+ can be defined by analogy):
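Here's the kind of thing I mean, as a sketch only: ssyntaxes* is a made-up name for the ordered list of (token expander) rules that 'def-all-ss installs, and the real plumbing in ssyntaxes.arc may differ.

  ; def-ss- puts a rule at the front of the list (lower precedence),
  ; def-ss+ appends it at the end (higher precedence)
  (mac def-ss- (token expander)
    `(= ssyntaxes* (cons (list ,token ,expander) ssyntaxes*)))

  (mac def-ss+ (token expander)
    `(= ssyntaxes* (join ssyntaxes* (list (list ,token ,expander)))))

  ; e.g. a low-precedence rule for 'orf might then look like
  ; (def-ss- "//" (fn (l r) `(orf ,l ,r)))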
Actually, that's precisely why it has to be lower precedence: we want $.sin to turn into ($ sin), so as to get at trig from mzscheme; if it were to become (map .sin ...), things would die because of the malformed ssyntax in .sin. (I tried it both ways, which is how I figured this out.)
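To make the mzscheme hop concrete, assuming Anarki's $ works the way I remember, i.e. ($ expr) evaluates expr in the underlying mzscheme:

  ($.sin 0)     ; $.sin reads as ($ sin), so this calls mzscheme's sin  => 0
  (($ cos) 0)   ; same thing without the ssyntax                        => 1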
Ah, I think we have a misunderstanding here: from my point of view, objects earlier in the list have lower precedence than objects later in the list. ^^ So your "lower precedence" is my "higher precedence". Basically, it has to do with stuff like:
foo!bar:qux!quux
which is compiled as:
{{foo ! bar} : {qux ! quux}}
So !, which is listed later, has higher precedence (it gets grouped more tightly).
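Concretely, since a!b means (a 'b) and : means compose, that grouping works out to

  (compose (foo 'bar) (qux 'quux))

so a call like (foo!bar:qux!quux x) ends up as ((foo 'bar) ((qux 'quux) x)).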
I wouldn't want to see any of these added to the base language, but I think it's great that the Anarki git version allows the user to create these, and I hope that feature makes it into the official language. Lisp is for creating domain-specific languages, and Arc is supposed to be a compact Lisp, so domain-specific syntax seems like a good thing to me.
I'm also biased, because I want to create my own syntax for a message-passing Arc to be used as the object scripting language for a multi-user VR system. (I haven't even started working on a server for it yet -- just the client and the protocol.) So something like:
The capital L in the definition means left-associative, so foo/bar/nitz is grouped {{foo / bar} / nitz}.
As an aside, why not <- ? Basically parent<-owner<-id means "send 'id to the result of sending 'owner to parent".
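If a send is ultimately just a call with the quoted message name (as suggested a bit further down for (foo msg)), the left-associative reading would work out to something like

  parent<-owner<-id  ==>  {{parent <- owner} <- id}  ==>  ((parent 'owner) 'id)

though that last step is only an assumption about what the eventual message protocol looks like.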
For that matter, if an object, such as parent above, is of a very specific type, you can even use Anarki's 'defcall syntax. For example, if messageable objects are of type 'messageable, you can do:
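Something along these lines, as a sketch only: I'm recalling defcall's shape as (defcall type (rep . args) ...), where rep is the value inside the annotation, so treat the argument list as an assumption; send-message is just a stand-in name for whatever the dispatcher ends up being.

  (defcall messageable (rep msg)
    (send-message rep msg))   ; send-message is hypothetical

  ; then for an obj created with (annotate 'messageable some-rep),
  ; (obj 'owner) would route through send-message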
I considered <- (along with a lot of other things), but rejected it: its left-associativity seemed counter-intuitive; it doesn't stand out as much as slashes, and I want it to stand out because it's inconsistent with symbol syntax; it's easy to confuse with ->, which conventionally means type conversion (and which doesn't break up symbols); and it's three times as much effort to type as /.
Slashes are fairly common in symbols, but not at the beginning (except for / itself), so it's unambiguously parseable, though maybe it won't work with this system, now that I think about it. I don't see how to keep embedded slashes from meaning anything special unless the symbol starts with a slash. I.e., I want foo/bar/nitz to be one symbol, but /foo/bar/nitz to be a symbol and two msg calls. Maybe that's too hairy, but I'd hate to break things like w/uniq.
I guess the general solution to any syntax that is confusing to people is a good syntax-highlighting editor.
Well, for that matter, having (foo msg) mean "send msg to foo" would be much shorter, and it also echoes the fact that Scheme was originally a synchronous message-passing language anyway.
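That closure-as-object style is trivial to mock up in Arc today; a throwaway sketch (all names made up):

  (def make-counter ()
    (let n 0
      (fn (msg)
        (case msg
          inc (++ n)
          val n))))

  (= c (make-counter))
  (c 'inc)   ; "send 'inc to c"  => 1
  (c 'val)   ; => 1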
Something like ->foo would be nice. Even if it didn't have that exact syntax, I use 'coerce enough that it would make a (noticeable albeit small) difference in my code. And I think it is more readable than looking at lots of nested type checks.
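For reference, these are the calls I keep writing out longhand, next to what the hypothetical ->type spelling from above might shorten them to:

  (coerce "42" 'int)      ; => 42
  (coerce 65 'char)       ; => #\A
  (coerce 'foo 'string)   ; => "foo"

  ; hypothetically:
  ; (->int "42")     instead of (coerce "42" 'int)
  ; (->string 'foo)  instead of (coerce 'foo 'string)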
Are different ssyntaxes combinable? Could you, for example, write

  (if (int?//num? n)
      (prn "it's a number")
      (prn "not a number"))
Do these ssyntaxes currently get optimized away at compile time (i.e. like 'compose does)? Or do they incur a run-time penalty of some sort?
With proper ordering of the ssyntaxes in 'def-all-ss, yes.
> Do these ssyntaxes currently get optimized away at compile time (i.e. like 'compose does)? Or do they incur a run-time penalty of some sort?
Not yet. The run-time penalty should be no bigger than a function-composition overhead. The optimizing away of 'compose is done by the Scheme-side 'ac, so if similar optimizations for 'orf and 'andf are desired, they'll have to be handled on the Scheme side as well.
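To illustrate what that Scheme-side optimization does for 'compose today:

  (odd:car '(3 4))
  ; is compiled as if you had written
  (odd (car '(3 4)))
  ; rather than building and calling ((compose odd car) '(3 4)) at run time.
  ; an 'andf / 'orf ssyntax would need the same treatment in ac to avoid
  ; the extra closure call.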
Edit: As an aside, the above ssyntaxes are not yet implemented in the ssyntaxes.arc file. For the most popular ones (postfix ?, as well as && for 'andf and // for 'orf), you can modify:
I've been playing with this, and the other downside is that it's slow. Things are noticeably slower to load (not to run), as the ssyntax check is (a) more complex, and (b) written in Arc, not mzscheme. Other than those two things, though, it's absolutely great.
Hmm. After looking at that, as far as I can tell it can only match a word that ends in one of the strings in its dictionary. Using it to parse out middle bits would be tricky; you could try saving the string at each accept state, or something like that.