Simplification!? You seem confused. We have two concepts here. Using one symbol for two concepts does not combine two concepts into one. All it does is add confusion. Overloading is fine for some things (e.g. + adds numbers and joins lists), but be careful with it. Remember that post (which I'm too lazy to track down right now) titled "You want brevity? You can't handle brevity!" that gave an example of K code? That language wanted to fit APL into ASCII, and did so by overloading the punctuation characters more than four times each on average. That results in code like this:
(!R)@&{&/x!/:2_!x}'!R
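(For contrast, the benign kind of overloading I mean is something like Arc's own +, where every use is some flavor of "joining":

(+ 1 2)          ; -> 3
(+ "foo" "bar")  ; -> "foobar"
(+ '(a b) '(c))  ; -> (a b c)

Three types, one operator, but no confusion, because the concepts are genuinely related.)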
Again, overloading should be used sparingly, and only when the concepts are related in a certain way that I won't bother trying to articulate right now.
I think that in this case any benefits this might have are outweighed by the confusion it would cause and the kludge required for certain things like varargs.
(fn (() . args) body)
Doesn't that hurt your eyes? Not to mention what it is saying: "Function that accepts the empty list and any number of other arguments". That doesn't make sense to me.
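For reference, here is what Arc lets you write today, which the proposal would have to replace:

(fn args body)        ; pure varargs: args is bound to the whole argument list
(fn (x . rest) body)  ; one required argument, the rest collected into rest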
>
> Simplification!? You seem confused. We have two concepts here. Using one symbol for two concepts does not combine two concepts into one.
i don't think these are distinct concepts. to me they are the exact same thing except in one case we're attaching a name to what we're making, and in the other we aren't. i can see how people familiar with lisp might see essentially 'lambda' and 'set' in the forms, but on the surface 'fn' and 'def' are very similar
i see the need for say, 'and' and the anaphoric 'aand', but to me def is so redundant with fn i actually do use that (= name (fn (x) ...)) form someone mentioned here
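to make that concrete, these two forms do essentially the same thing (modulo 'def's warning when you clobber an existing definition):

(def average (x y) (/ (+ x y) 2))
(= average (fn (x y) (/ (+ x y) 2)))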
>
> Again, overloading should be used sparingly, and only when the concepts are related in a certain way that I won't bother trying to articulate right now.
i agree, but again i don't think this is an inappropriate case. what i would like to see in Arc is something that Lua does in various places. the best example is its tables, which are simultaneously arrays, hash tables, and with its metatable system, any other structure you can think of from multiple-inheritance objects to arbitrarily-linked lists. and they behave accordingly depending on how they're used. e.g. if you use a table as an array it will iterate fast. the result is that the programmer is relieved of the details of choosing and using structures, and can instead focus on actually using them
is Lua's implementation of tables more sophisticated than arrays, hash tables, etc would have been individually? definitely. but it relieves the programmer of a lot of thought, leaving them free to focus on something else. tables are definitely simpler than if their functionality were spread out, yet no functionality is lost compared to such a model. in fact, functionality is gained through the flexibility
by upping the sophistication underneath, a language can become simpler. if you only rely on the sort of simplicity brought about by a pure axiomatic approach, you get a sort of "assembly" simplicity. is MOV DX,8 simple? yes, but for naught
(not to blitzkrieg with text, but i thought i would explain the philosophy in one place)
>
> for certain things like varargs
full varargs would be the only alteration
>
> Doesn't that hurt your eyes?
yes, which is why i proposed another syntax. someone may be able to come up with something better. i use full varargs commonly so it wouldn't be easy for me to let go of the current syntax either
It is true though that whenever you overload an operator you depend on the programmer understanding more about a single operation. Thus by definition overloading increases the complexity of a language.
There is some significant complexity in setting a variable. Note the current existence of four (or more) different forms of 'fn: 'fn (anonymous), 'afn (capturing 'self as a local name), 'rfn (using a user-defined local name), and 'def (using a user-defined global name). All these forms exist for the purpose of creating functions, but each has a different way of setting variables.
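A sketch of all four, using factorial where a name is needed:

(fn (x) (* x x))              ; anonymous
(afn (x)                      ; anonymous, but callable as 'self
  (if (is x 0) 1 (* x (self (- x 1)))))
(rfn fact (x)                 ; locally bound to a name of your choosing
  (if (is x 0) 1 (* x (fact (- x 1)))))
(def fact (x)                 ; globally bound
  (if (is x 0) 1 (* x (fact (- x 1)))))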
How do you capture that complexity if you want to start combining them into a single operation? Do you just ignore everything except 'fn and 'def and say the user can type the extra character for 'afn and 'rfn? That seems rather inconsistent.
>
> It is true though that whenever you overload an operator you depend on the programmer understanding more about a single operation. Thus by definition overloading increases the complexity of a language.
that's if we're looking at it from the perspective of the operator. but if we look at it from the perspective of an expression, i think it is simpler. to illustrate: one of the nice examples on the front page of ruby-lang.org is the following (keeping their comments):
# Ruby knows what you
# mean, even if you
# want to do math on
# an entire Array
cities  = %w[ London
              Oslo
              Paris
              Amsterdam
              Berlin ]
visited = %w[Berlin Oslo]

puts "I still need " +
     "to visit the " +
     "following cities:",
     cities - visited
if we look at it from the perspective of the operators, we see we've overloaded + and - for non-numeric types. however, if we look at it from the perspective of the programmer as they are writing those lines, we see that the programmer just wants to say "remove the cities i've visited from the cities i have to visit," and "cities - visited" is almost a direct translation of that. it makes sense. the fact that that operator happens to be overloaded from another domain is irrelevant. the context secures the role of the operator
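for comparison, the unoverloaded Arc spelling would be something like

(rem [mem _ visited] cities)  ; remove the cities i've visited

which says the same thing, but through the machinery rather than the thought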
There is no syntax for varargs. Destructuring argument lists are very simple and easy to understand once you understand the structure of Lisp code. What you're proposing is taking away the ability to manipulate s-expressions as they are just to free up another name that would rarely be used anyway.
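To make the point concrete, both of these work today precisely because argument lists are plain s-expressions:

(def swap-pair ((a b))  ; destructures a two-element list
  (list b a))
(swap-pair '(1 2))      ; -> (2 1)

(def f args args)       ; varargs: args is the whole argument list
(f 1 2 3)               ; -> (1 2 3)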
Yes, but then it does not work with the core functions (notably each & friends), as they rely on isa. Renaming your function isa does not work either (that would be too easy): atom and some call isa themselves, so you get into an infinite loop. Btw, obj is a macro (at least in Anarki; I don't know if it's present in the official Arc2), so your code has a red flag on it (although it does seem to work).
I tried this; it does work:
(redef isa (x typ)
  (isa-rec type.x typ))

(def isa-rec (types typ)
  (if (no types)               nil
      (is types typ)           t
      (isnt type.types 'cons)  nil
      (is (car types) typ)     t
      (isa-rec (cdr types) typ)))
(isa nil 'sym) -> t
(isa (annotate '(sym list) nil) 'sym) -> t
(isa (annotate '(sym list) nil) 'list) -> t
(isa (annotate '(sym list) nil) 'cons) -> nil
And now the funny part:
(redef car (x)
  (if (isa x 'int)
      (if (> rep.x 0) 'a nil)
      (old x)))

(redef cdr (x)
  (if (isa x 'int)
      (if (> rep.x 0)
          (annotate type.x (- rep.x 1))
          nil)
      (old x)))
Both car and cdr now work on objects of type 'int (and objects of type 'cons, as before). If an object is both an int and a cons, its int nature takes precedence. Every operation requiring 'cons cells or calling car and cdr can now be overridden to use ints instead (an int being a list whose car is the symbol 'a and whose cdr is that num - 1).
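To illustrate (a hypothetical transcript, given the redefinitions above):

(car (annotate 'int 3))        ; -> a
(rep (cdr (annotate 'int 3)))  ; -> 2
(car (annotate 'int 0))        ; -> nil, so recursion terminates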
The problem is that each calls acons and alist, but they are defined in terms of (is (type x) 'cons) instead of (isa x 'cons). Once you redefine them, it works fine.
pg, do you still accept suggestions about the core functions? Shouldn't acons and alist be defined with isa instead of is? That would let us redefine them more easily. Btw, what do you think of all these discussions about types?
I agree with you, although your choice of terminology is confused. You're not arguing for has-a. Has-a means "contains as a part", not necessarily part of the interface. Yes, a car (the kind you drive) has-a steering wheel, but it also has-a engine.
You seem to mean a specific kind of is-a called polymorphism. Looking at OOP, it looks like the only truly useful capability of OOP that Arc doesn't have is polymorphism, which fits well with duck typing. It basically means, "I don't care what this is or how it works, as long as it can foobaz."
Actually, I am arguing for a "contains as a part" semantics. arc.arc does have polymorphism - there are functions that work on sequences like lists, strings, and tables. The problem, however, is that some of them don't work on user-defined types without munging with 'isa.
What I'm proposing is abstracting some very "basic operations" such as 'car and 'cdr, putting them in some "types" (really closer to type classes/abstract base classes), and then having the built-in types "contain as a part" the "basic operation type". Then the arc.arc polymorphic functions will work based on the "basic operation type" instead of the actual is-a type, and user-defined types won't have to munge 'isa.
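A hypothetical sketch of how that might look, reusing the recursive isa from earlier in the thread: tag a value with both its own type and the basic-operation type it implements, and let the polymorphic functions check the latter.

(= q (annotate '(queue cons) (list 1 2 3)))
(isa q 'queue)  ; -> t: its actual type
(isa q 'cons)   ; -> t: the "basic operation type" it contains as a part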
But if you think about it as an abstract base class/mixin/interface, then the standard terminology is "is-a". For instance, in Ruby:
module MyMixin  # Like an interface
  ...
end

class MyClass
  include MyMixin  # Like "implements MyMixin"
  ...
end

foo = MyClass.new
if foo.is_a? MyMixin
  puts "is-a"
else
  puts "has-a"
end
# Output: "is-a"
I'd rather see the "basic types" not as a collection of basic components, but as a collection of basic interfaces one can implement, or basic type classes one can be a member of, or what-have-you; what I'd really rather do is duck most of it, like Ruby does. If I can define car and cdr for my type, map should work seamlessly.
Regardless, it sounds like a lot of the voices here are in agreement over some common set of the features this plan proposes, which is a good thing. Perhaps we should set the naming quibbles aside for now and try to flesh that out. Or perhaps we should settle on a name for what we are about to flesh out. Either way, it looks like something good could well emerge from this thread.
In the case of Python, yes, it is prejudice. Have you ever coded in Python? If anything throws you off, it's going to be thinking in Python idioms, not the whitespace. In earlier languages, "significant indentation" meant something entirely different (columns 0-20 are comments, etc.). Python's use of indentation means that the interpreter uses the same code-block delimiters that the people reading and writing the code use.
Significant indentation is great. Wait, come back! Don't put this in the language yet! I'm not done! Significant indentation is great for languages like Python, in which code blocks are different from expressions and never go smack in the middle of a function call.
I've seen attempts to do significant indentation in Lisp, and instead of getting simple and pretty like Python, it gets all tangled and ugly. Ideas that are great in one context may be horrible in another. Need I remind you what happened when Larry Wall tried to combine all the great ideas from a bunch of different languages into one?
You claim it's great for languages like Python, and that's fine if you want it. I do know Python, but I don't use it actively because I detest its over-engineering and its significant whitespace.
But Lisps and Arc are not Python, or even a language like Python. And making a terminator or separator character that is both invisible and munged differently on many platforms seems like an extremely bad idea to me.
No, it's common sense. Call me old-fashioned, but I prefer being able to tell if code is syntactically correct just by looking at it instead of having to perform a hex dump. Never mind being able to reconstruct indentation from syntax when it eventually gets mangled in email/forum postings/code restructuring.
> Have you ever coded in Python?
Yes I have. Apart from the above caveat it's quite a nice language, although I find Ruby more elegant.
> Need I remind you what happened when Larry Wall tried to combine all the great ideas from a bunch of different languages into one?
Yes, it got hugely popular. You got a problem with that?
> No, it's common sense. Call me old-fashioned, but I prefer being able to tell if code is syntactically correct just by looking at it instead of having to perform a hex dump.
Are you trying to say that source code (either with or without meaningful indentation) requires a hex dump to read? Or are you talking about object code (in which case I am pretty sure indentation is entirely irrelevant)?
>> Need I remind you what happened when Larry Wall tried to combine all the great ideas from a bunch of different languages into one?
> Yes, it got hugely popular. You got a problem with that?
I don't know about you but Perl's conglomerate syntax, combined with all that extra magic, makes Perl pretty unmanageable in my opinion after a couple hundred lines of code.
From pg's essay Arc at 3 Weeks:
"We're also going to try not to make the language look like a cartoon character swearing. [...] (Dan Giffin recently observed that if you measure Perl programs by the number of keys you have to press, they don't seem so short.)"
This is why Arc is better than newLISP. The only reason I can think of for this is that newLISP is trying to protect newbies from the actual structure of lists by telling them that cons adds an element to the beginning of a list, car gets the first element of a list, and cdr gets the rest of the list.
Arc doesn't try to protect people from these kinds of things. It's not like we can't handle the real meaning of cons.
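That real meaning being, for instance:

(car '(1 2 3))   ; -> 1        the first slot of a pair
(cdr '(1 2 3))   ; -> (2 3)    the second slot
(cons 1 '(2 3))  ; -> (1 2 3)  a pair whose cdr is a list
(cons 1 2)       ; -> (1 . 2)  a bare pair; not every cons is a list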
Why is everyone opposed to foop for predicates? I understand the opposition to foo? (it involves an awkward shift-/ and swallows up one of the most potentially-useful ssyntax characters), but foop is easy to type, easy to read once you've used it for a couple hours, and easy to say. I like CL's last-character naming conventions. They make it easier to see what the code does, and they are quicker and easier to use than Scheme's set! and integer?
Note: this is not coming from a nostalgic Common Lisper. I'm somewhat new to Lisp, and the only Lisps I'm comfortable with are Scheme and Arc. I just think CL naming conventions make a lot of sense.
(I also like the idea of having a function called intp embedded in the language (if you don't know what I mean, look it up).)
As another new-ish Lisper, comfortable with only Scheme and Arc, I have to disagree. I find appending a character which might anyway occur at the end of a predicate to be a confusing way of expressing predicates. What about a type called, say, tem (a template, perhaps): what's temp? Is it tem predicate or temporary? I actually prefer the "a..." style to the "...p" style, but I think "...?" is still better.
Who would ever use 'tem for anything? A template type would more likely be called temp, and the predicate would be tempp. Usually when p is the next letter, it's included in the variable name, and when a p at the end of a name is part of a word, it is not usually read as predicate. This sort of collision is very rare, and when it does happen, it usually takes no more than a second to figure out.
On the other hand, asomething in Arc sometimes means anaphoric-something and sometimes means isa-something. This inconsistency is much more common, and a- collisions are no better than -p collisions.
Then there is the foo? of Scheme. Every time I see foo? it interrupts my reading. I don't know about you, but I think of ? followed by a space as a terminator, and putting it between a function name and its arguments throws me off. The ? character also has a lot of potential for ssyntax, and in that position any break it would cause would most likely coincide with a conceptual break. I have no problem with punctuation separating symbols, but when it's punctuation followed by a space, that looks like a pause and throws me off.
I used tem because I couldn't think of another word quickly, and everyone has seen temp used as a variable name. Your point about anaphoric- versus isa-, though, is a good one; I'd forgotten that, and that's just another reason I like ...? as the terminology. Using single characters can lead to collisions.
When I see (even? n), I read the "even?" as "even" with an upwards tone, not as "even" followed by a separator. Thus, the predicate call reads as a question in my head, which is what it's supposed to be. On the other hand, I find that #'foop reads like the word "foop," which doesn't mean anything. It's less severe with something like #'numberp, but nevertheless, I find ...p to be where the semantic collision lies.
And though you're worried about the removal of ...? as ssyntax, as nex3 pointed out (http://arclanguage.org/item?id=4849), one could explicitly allow a ? at the end, while still allowing it as ssyntax.
Actually, we right now have no ssyntax that goes at the end of identifiers. Arc has ten pieces of syntax. () and [] are circumfix. ' ` , ,@ and ~ are prefix. : . and ! are infix. The two syntax requests I recall off the top of my head were (1) being able to write $(...) for some reason, which is prefix; and (2) being able to write ($f ...) for (map f ...), which is also prefix. So from a preliminary study (admittedly, with very few data points, but that's all there are), it appears that prefix syntax is the most common.
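For reference, the infix and prefix ssyntax pieces expand roughly as follows:

f:g   ; -> (compose f g)
~f    ; -> (complement f)
a.b   ; -> (a b)
a!b   ; -> (a 'b)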
And as for the foo?!baz observation? Don't do that then :) Seriously, I don't think that's a problem. Just because we can write something like ~+:/.3.1@-2.5!-1 doesn't mean we should. (If you're curious, that is currently legal and expands to (compose (complement +) (/ 3 1@-2 5 '-1)) [r@q is notation for the complex number with magnitude r and angle q, just in case you haven't seen it before.]).
EDIT: Used to say "that's actually probably a bug, since I was expecting it to expand to (compose (complement +) (/ 3.1@-2.5 '-1)), but how often will we be putting complex numbers inside ssyntax?," but that was wrong (see http://arclanguage.org/item?id=5090).