Ick. I like 'is being an equivalence relation. Is there a way you could achieve what you want with 'testify instead?
I'm thinking something like this:
(def isa (x typ)
  (.typ:testify type.x))
(extend testify (x) alist.x
  [some _ x]) ; uses testify on '_ too!
This doesn't work on (case type.foo ...) though. In 'case, I'd prefer to be able to use predicates as the cases, so what I really want is for the cases to be testify'd. On the other hand, I still believe techniques like 'isa and (case type.foo ...) limit extensibility (even if you're trying to lift those limits a bit here ^_^ ), so I'm not married to any particular semantics for them.
"Ick. I like 'is being an equivalence relation. Is there a way you could achieve what you want with 'testify instead?"
Nope. I don't see a problem with overriding `is`, if that's what you want to do. If you don't want to override `is`, then just don't. Hell, you could override `is` and use testify, if you wanted to.
The point of the `is` attribute is that your object might have a custom notion of equivalence. It's up to you, the designer of the object, to use a sane definition of "equivalence".
Now, if you had suggested to use `iso` instead of `is`, then I could be down with that, but then things like `isa` would need to be changed so they used `iso`... I'm not against the idea, but it would cause incompatibility with Arc/3.1.
I'll also note that the concept of something being eq to multiple things is not exactly new:
(is nil ())
(is nil '())
(is nil 'nil)
So I think you should be allowed to define a custom `is` behavior, when it makes sense for your object.
---
By the way, originally I extended `isa`, which did work fine as long as you used `isa` only, but I really wanted it to be perfectly seamless. And, by doing this, I also get another benefit: objects can have a type of both object and table, so (isa foo 'object) works:
(= foo (object))
(isa foo 'table) -> t
(isa foo 'object) -> t
I would still recommend using `object?` to test for object-ness, though. In addition, if I did this change, then I think I can fix the bug where you can't load the library more than once in the same namespace.
"It's up to you , the designer of the object, to use a sane definition of "equivalence"."
I agree, but your extension to 'is already violates the mathematical definition of an equivalence relation, so it's already insane as far as I'm concerned. For 'is to be an equivalence relation, if (is foo 'cons) and (is foo 'table), then anything equivalent to 'cons must be equivalent to 'table, and that renders the type checks rather useless.
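To make the objection concrete, here is a small Python check (purely illustrative; the relation and domain are made up) showing that a "loose is" matching one value against two distinct type symbols can't be reflexive, symmetric, and transitive at once:

```python
# Check the three equivalence-relation laws over a finite domain.
def is_equivalence(rel, domain):
    reflexive = all(rel(a, a) for a in domain)
    symmetric = all(rel(b, a) for a in domain for b in domain if rel(a, b))
    transitive = all(rel(a, c)
                     for a in domain for b in domain for c in domain
                     if rel(a, b) and rel(b, c))
    return reflexive and symmetric and transitive

# A "loose is" where foo's type matches both 'cons and 'table:
pairs = {("foo", "cons"), ("foo", "table")}
def loose_is(a, b):
    return a == b or (a, b) in pairs or (b, a) in pairs

domain = ["foo", "cons", "table"]
# cons ~ foo and foo ~ table, but not cons ~ table: transitivity fails.
assert not is_equivalence(loose_is, domain)
# Plain equality, by contrast, passes.
assert is_equivalence(lambda a, b: a == b, domain)
```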
---
"Now, if you had suggested to use `iso` instead of `is`, then I could be down with that, but then things like `isa` would need to be changed so they used `iso`... I'm not against the idea, but it would cause incompatibility with Arc/3.1."
Yes, I consider the pervasive use of 'is to be a flaw of Arc. I'm happy Wart doesn't approximate Arc in this way, and I don't design my languages/libraries to approximate Arc in this way either.
---
"(is nil ())"
When I use 'is, I'm asking if there's any possible way for me to distinguish the values. IMO, all ways to distinguish nil from () in Arc are "nonstandard," so (is nil ()) is just fine.
If there were a way to distinguish them, I wouldn't want (is nil ()) to be true. We could still have (iso nil ()), since 'iso can still be an equivalence relation with that constraint. (Everything 'iso to nil would be 'iso to () as well.)
"...then anything equivalent to 'cons must be equivalent to 'table, and that renders the type checks rather useless."
Right, so if your object has a type of both cons and table, it should definitely act like both a cons and a table, meaning it should have a car, cdr, etc.
---
"Yes, I consider the pervasive use of 'is to be a flaw of Arc. I'm happy Wart doesn't approximate Arc in this way, and I don't design my languages/libraries to approximate Arc in this way either.
[...]
If there were a way to distinguish them, I wouldn't want (is nil ()) to be true. We could still have (iso nil ()), since 'iso can still be an equivalence relation with that constraint. (Everything 'iso to nil would be 'iso to () as well.)"
Alrighty then, it looks like we have a bit of a viewpoint clash. You're seeing `is` as being like `eq`: super-strict, checking for exact equivalence. That's totally fine. I'm seeing `is` as being similar to, but a little bit looser than `eq`.
So perhaps what would be nice is if Arc had 3 operators: eq, is, and iso. eq is stricter than is, which is stricter than iso. Or perhaps we could just get rid of `is` completely and just use `eq` and `iso`.
In any case, those are incompatible changes, so given aw's position with ar, I'm not exactly planning to change ar in that way, even on my fork. Now, if we're talking about Arubic, then yes, perhaps Arubic could be designed differently, using iso everywhere rather than is.
But in the context of ar, I think overloading `is` in this way is the most reasonable choice, given how much of Arc uses `is`. Remember: my goal with this library is to allow you to easily and concisely create both new data types, and also data types that resemble existing data types, like tables.
Thus, I want objects to be as seamless as possible with the existing language. Unfortunately, that means that if the existing language made some poor choices, my library might end up doing something that's kludgy or hacky to make it work. Or hell, maybe the language is perfectly fine, and my library is just hacky and kludgy. :P
In any case, I think an object library designed more in line with a perfect ideal Arc would probably be designed differently than mine, and I'm okay with that, since mine is designed to get things done within existing Arc code.
"Right, so if your object has a type of both cons and table, it should definitely act like both a cons and a table, meaning it should have a car, cdr, etc."
We're miscommunicating spectacularly today. XP
If any one thing is equivalent to both 'cons and 'table, then everything equivalent to 'cons must be equivalent to 'table (for us to be talking about an equivalence relation). This means that if anything's type 'is the symbol 'cons, then its type 'is also 'table. All conses are tables. The converse holds too: All tables are conses.
This is absurd (in a non-mathematical way ^_^ ), so we shouldn't let it get that far: Either you must believe that 'is is not an equivalence relation, or you must accept you're not using 'is propahlay. Anyway, I think you are consciously resigning yourself to one or both of those things, in order to "get things done within existing Arc code."
Personally, I'm very worried Arc-compatibility in ar will hold it back in ways like this.... I'm going to do what I can to help establish compatibility anyway though.
---
"So perhaps what would be nice is if Arc had 3 operators: eq, is, and iso."
Aahhhh, no way. XD I think Akkartik says it pretty well at http://arclanguage.org/item?id=13690: "But the problem isn't that it's impossible to find the one true semantic for equality or coercion, it's that the language designers didn't put their foot down and pick one. Don't give me two different names with subtle variations, give me one strong name."
This helps because there's just one utility for people to use and extend, not a variety of utilities which compete for attention. And as I say at http://arclanguage.org/item?id=12803, if we need a different notion of equality, we can define a wrapper type with its own 'iso support which compares its contents using that new notion.
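The wrapper-type idea translates directly into Python, where a type can carry its own comparison (the case-insensitive wrapper here is a hypothetical stand-in for "a wrapper type with its own 'iso support"):

```python
# Instead of a second equality operator, wrap values in a type
# whose comparison implements the alternative notion of equality.
class CaseInsensitive:
    def __init__(self, s):
        self.s = s
    def __eq__(self, other):
        return (isinstance(other, CaseInsensitive)
                and self.s.lower() == other.s.lower())
    def __hash__(self):
        return hash(self.s.lower())

assert CaseInsensitive("Foo") == CaseInsensitive("foo")
assert "Foo" != "foo"   # the plain values stay distinguishable
```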
But in ar, maybe you're right. From http://arclanguage.org/item?id=248, it's clear that your lax interpretation of 'is is consistent with pg's intentions for it. Perhaps ar should provide 'eq and each Arc variant running on ar should choose its own notion of equality.
"All conses are tables. The converse holds too: All tables are conses."
Woah, hey, that gives me an idea... :P
---
"This is absurd (in a non-mathematical way ^_^ ), so we shouldn't let it get that far:"
You sure about that? :P
In all seriousness, I considered representing tables in Arubic as alists (or possibly plists). But that's getting off-topic...
---
"This helps because there's just one utility for people to use and extend, not a variety of utilities which compete for attention."
Absolutely. So I went in and extended `is`. But now you're saying that extending `is` is bad because you want `is` to behave like `eq`. :P My eq/is/iso suggestion was part-serious and part-joking, since although it would solve this particular problem, I can understand your reluctance to add in a zillion equality-testing things.
---
"Perhaps ar should provide 'eq and each Arc variant running on ar should choose its own notion of equality."
Sounds good to me, provided that I can use Arc's `is` when I want to. In other words: ar provides eq, Arc/ar provides is, and Arc variants can choose to use either one, both, or neither.
---
"Personally, I'm very worried Arc-compatibility in ar will hold it back in ways like this...."
Me too. Personally, I want to just go in and rip up things, and have a backcompat.arc file that smooths over things, but alas... Hell, that's what I did on my fork, and backcompat.arc is loaded automatically, so it behaves almost exactly[1] like Arc/3.1.
---
* [1]: The primary difference is when you want to extend something. For instance, let's say you wanted to extend "no". But my fork doesn't use "no", it uses "not", so you'd be extending the wrong thing. Stuff like that. Probably possible to work around, but ugh.
So, if I can get a perfect compatibility layer, then sure, let's go with that, but as long as it's imperfect, then there will always be some niggling corner case which could be used as justification for not changing ar.
As I'm sure you're aware, conses and tables are an arbitrary part of the example. ^_^ If you wanted anything to act as both a 'num and a 'cons, then all conses would be nums too.
---
"Me too. Personally, I want to just go in and rip up things[...]"
Yeah... I wonder if I should be forking your ar instead of working on aw's. >.> It would become a fork of a fork of a fork of an Arc. :-p Maybe since it's hard to pull-request all those changes, you could treat your ar as a separate project?
Eh, I'm not usually one to make an Arc spinoff from existing source anyway. I'm more likely to either give advice from the sidelines or start from Racket (or JavaScript) and implement utility layers on top of it until I have a system I like. So don't worry about my participation either way, heh. ^_^;
"As I'm sure you're aware, conses and tables are an arbitrary part of the example. ^_^ If you wanted anything to act as both a 'num and a 'cons, then all conses would be nums too."
Of course, I was joking. Wait a second... we could Church encode numbers... then they really would be conses...! (I wonder how much longer I could keep this up :P)
---
"Maybe since it's hard to pull-request all those changes, you could treat your ar as a separate project?"
Yeah, I'm kinda leaning toward that, especially since some of my changes break backwards compatibility, and aw has made it pretty clear that Arc/ar is aiming at compatibility.
"I wonder how much longer I could keep this up :P"
Probably until we reach "Everything's an object." Then we can take a look at all the ways we distinguish certain objects from others, and we can start all over again. ^_^
"Then how do you usually invoke a method on an object? I guess what I mean is, what's the shortcut for (w/self bar (bar<-x ...))?"
No shortcut yet, but the idea is that hopefully you won't need to most of the time. The point of defining methods like `call` is that you don't call them directly:
(= foo (object call (fn () ...)))
(foo ...)
A shortcut for it would be nice, though.
One idea would be that get-attribute would return a bound form, so you could just call (foo<-x ...) directly, without needing w/self, but still allow w/self to override it.
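That idea resembles Python's bound methods, where attribute lookup already closes over the object. A rough sketch (this object model is invented for illustration, not the library's actual one):

```python
# get_attribute returns a callable already bound to its object,
# so the equivalent of (foo<-x ...) works without w/self.
class Obj:
    def __init__(self, **attrs):
        self.attrs = attrs
    def get_attribute(self, name):
        val = self.attrs[name]
        if callable(val):
            return lambda *args: val(self, *args)  # bind "self" now
        return val

foo = Obj(x=lambda self, n: ("called on", self, n))
bound = foo.get_attribute("x")
assert bound(1) == ("called on", foo, 1)
```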
What's the advantage 'del gives me? Wouldn't my programs have fewer tokens if I directly used (zap cdr foo) instead of (del car.foo)?
Right now 'del looks like a smorgasbord of special-case utilities which could all have shorter tailor-made alternatives. Ideally what I'd like to see is a utility with an abstract enough description that I can build new tools upon it and straightforwardly extend it to make all my tools handle a new case. (Basically I have the same indifference to 'del as I have to 'coerce. ^_^ )
---
"The point of defining methods like `call` is that you don't call them directly"
Okay, I thought it might be something like that. ^_^ But in that case, it reminds me of Racket's structure type properties again. It's up to you to decide whether that's a flattering comparison or not, but you didn't seem to like them the last time.
---
"One idea would be that get-attribute would return a binded form, so you could just call (foo<-x ...) directly, without needing w/self[...]"
That's what I was intuitively thinking would happen. ^_^ I usually wouldn't say that, though. Perhaps I've been working in JavaScript for too long now. :-p
---
"[...]but still allow w/self to override it."
Er, even my currently weird intuition doesn't like that kind of inconsistency. >.>
What about making 'get-attribute a metafn? That is, ((get-attribute a b) ...) would expand into something like (invoke-attribute a b ...) the same way ((compose a b c) ...) expands into (a (b (c ...))).
Then when you want to call foo's x from bar, you say something like this:
(w/self bar
  ((do foo<-x) ...))
The (do ...) would keep (get-attribute foo x) from being in functional position, so it wouldn't be treated as a metafn.
This would take a bit of language-core-hacking. Metafns are special-cased like special forms in official Arc--and even in your fork of ar, even though you're using gensyms for special forms. I would have tried to "fix" that in Arc a long time ago, but my new design for custom syntax was so different from Arc's that I considered it part of Penknife instead. The most straightforward way to get started in ar would probably be to design metafns as macros which expanded into literal macros, and then to let literal macros expand even when they're not in functional position.
---
At this point these are all just substanceless, brainstorming-like suggestions. I won't mind too much if you throw them all out, just as long as you've at least thought about them. ^_^
"What's the advantage 'del gives me? Wouldn't my programs have fewer tokens if I directly used (zap cdr foo) instead of (del car.foo)?"
Abstraction and readability. Let me answer your question with a question: why do we need (= (car foo) ...)? You can just use scar! Or why do we need (= (foo 'a) ...)? We can just use things like sref!
For the same reasons that overloading = to understand things is nice, overloading del to understand things is nice. So, if you feel that using (scar foo ...) is better than (= (car foo) ...), then by all means use (zap cdr foo), but I personally like using the = forms, and so I like del for the same reasons.
I'll note that there is a defdel, which lets you create new "del" stuff, similar to defset. And you can extend dref as well, just like extending sref. It's basically the exact same idea as =, except it's applying to deletion rather than assignment.
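The =/del symmetry described here can be mimicked with two dispatch tables in Python (defset/defdel and the place forms are modeled very loosely; none of these names are from the actual library):

```python
# Generic assignment and deletion dispatched on the head of a
# "place" form, the way defset/sref and defdel/dref pair up.
setters = {}
deleters = {}

def defset(name, fn): setters[name] = fn
def defdel(name, fn): deleters[name] = fn

def assign(place, value):          # like (= place value)
    head, *args = place
    setters[head](value, *args)

def delete(place):                 # like (del place)
    head, *args = place
    deleters[head](*args)

foo = {"a": 1}
defset("foo", lambda v, k: foo.__setitem__(k, v))
defdel("foo", lambda k: foo.__delitem__(k))

assign(("foo", "b"), 2)            # like (= (foo 'b) 2)
delete(("foo", "a"))               # like (del (foo 'a))
assert foo == {"b": 2}
```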
And let me say one more thing. How would you easily and concisely implement this, which deletes a range of elements:
(= foo '(1 2 3 4))
(del (foo 1 2)) -> (1 4)
Or how about doing it in steps, so you could delete, say, every other element in a list?
Sure, you could write a special-case function for that, in the same way we have special-case functions like scar, scdr, and sref, but it's nice to be able to bundle them up into a single convenient macro, like del.
---
"It's up to you to decide whether that's a flattering comparison or not, but you didn't seem to like them the last time."
I didn't like how they automatically created stuff like exn-message, rather than letting you do (foo 'message). That's different from disliking the overall idea.
---
"Perhaps I've been working in JavaScript for too long now. :-p"
JavaScript does have bind... :P
foo.bind(this)
---
"This would take a bit of language-core-hacking."
Or I could just use dynamic variables, which is what I'm doing. It's quite simple: if self is bound, use that, otherwise use the object. Basically, self defaults to the current object, while letting you override it with w/self.
This is actually very similar to JavaScript:
foo.method(...)
foo.method.call(bar, ...)
Which would be like this, with my library:
(foo<-method ...)
(w/self bar (foo<-method ...))
The difference is that in JS they use the read-only "this" keyword, whereas in my library I'm just using an ordinary dynamic variable.
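A dynamic-variable self of this kind can be sketched with Python's contextvars (w_self and make_method are illustrative names I made up, not the library's API):

```python
from contextvars import ContextVar
from contextlib import contextmanager

# self defaults to the object a method lives on, but w/self can
# rebind it dynamically around a call.
current_self = ContextVar("self", default=None)

@contextmanager
def w_self(obj):
    token = current_self.set(obj)
    try:
        yield
    finally:
        current_self.reset(token)

def make_method(owner, fn):
    def method(*args):
        # If self is dynamically bound, use that; otherwise the owner.
        return fn(current_self.get() or owner, *args)
    return method

class O: pass
foo, bar = O(), O()
m = make_method(foo, lambda self: self)
assert m() is foo                  # like (foo<-method ...)
with w_self(bar):
    assert m() is bar              # like (w/self bar (foo<-method ...))
```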
"Let me answer your question with a question: why do we need (= (car foo) ...)? you can just use scar! Or why do we need (= (foo 'a) ...)? We can just use things like hash-set!"
I was originally going to use that as an example. ^_^
Arc's notion of assignment follows the abstract, informal contract that (= (foo ...) bar) changes whatever (foo ...) returns. This means we can have utilities like 'zap and 'swap that treat something as both a "get"-like expression and a "set"-like expression. I don't see 'del having a contract like that, so I don't see why "it's nice to be able to bundle them up."
I'd implement your range deletion examples like this...
(mac del (place start (o howmany 1) (o step 1))
  ...)
...and say (del foo 1 2) and (del foo 0 nil 2).
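For comparison, Python's slice deletion covers both of those calls directly, which suggests the same start/count/step signature is workable (del_range is a hypothetical helper, not anyone's actual macro):

```python
# Delete `howmany` elements starting at `start`, taking every
# `step`-th one, via Python's extended-slice deletion.
def del_range(lst, start, howmany=1, step=1):
    del lst[start:start + howmany * step:step]

foo = [1, 2, 3, 4]
del_range(foo, 1, 2)          # like (del foo 1 2)
assert foo == [1, 4]

bar = [1, 2, 3, 4, 5, 6]
del bar[0::2]                 # like (del foo 0 nil 2): every other one
assert bar == [2, 4, 6]
```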
---
"I didn't like how they automatically created stuff like exn-message, rather than letting you do (foo 'message). That's different from disliking the overall idea."
Ah, okay. That works for me.
---
"JavaScript does have bind... :P"
I'm already fully aware a JavaScript programmer can control the value of "this" using call, apply, or bind. ^_^
All I'm talking about is the way that foo.bar( baz ) calls the value of foo.bar with foo as "this," but if you instead define idfn() and say idfn( foo.bar )( baz ), then "this" is the root object. That's not the most elegant semantics in the world, but I'm saying my intuition is somehow in line with it right now.
(The "root object" aspect isn't part of my intuition, though. I'm fine with idfn( foo.bar )( baz ) using anything for "this", including treating it as a dynamic variable like you are.)
---
"Or I could just use dynamic variables, which is what I'm doing."
Again, I was already taking that for granted and not arguing with it. Instead, I was talking about a way for (foo<-x ...) to automatically use (w/self foo ...), the way JavaScript foo.x( ... ) is like foo.x.call( foo, ... ).
---
"It's quite simple: if self is bound, use that, otherwise use the object."
That's not reassuring....
I'm particularly concerned about the case where we're using (foo<-method ...) from inside another object's method. I'm afraid the self variable will already be bound then, so instead of calling the method on foo, we'll end up calling it on the current object. As a workaround, we'll have to say (w/self foo (foo<-method ...)).
I guess we could probably say (w/self nil (foo<-method ...)) too, but that's still not ideal.
"Arc's notion of assignment follows the abstract, informal contract that (= (foo ...) bar) changes whatever (foo ...) returns."
del's contract is that it zaps the variable to whatever dref returns. It is more analogous to = than you think. In fact, it's exactly the same idea, only the name is different, and its purpose is different.
So, my point still stands. If you think that (= (car foo) ...) is good, then why is (del (car foo)) bad? You may think, "well, I don't think that's useful". Fine. There's plenty of Arc functions I don't think are useful. Just don't use del, then.
But in the same way that = being general-purpose is obviously useful, del being general purpose is obviously useful. For instance, how do you delete an attribute?
(del foo<-bar)
Compared to:
(del-attribute foo 'bar)
How do you define a custom deletion behavior for your object? Just slap on a `del` attribute, in exactly the way that objects can define a `set` attribute.
Did you define new functions "get-foo", "set-foo" and "del-foo"? Well, nobody has to know about "del-foo", because you can just use defdel, in the same way that nobody needs to know about scar, because you can just use (= (car ...) ...).
And nobody has to know about set-foo, because you can use defset... and nobody has to know about get-foo, because you can use defcall. Notice the pattern, here?
---
"The specific case I'm interested in is the one where we're using (foo<-method ...) from inside another object's method. I'm afraid the self variable will already be bound then, so instead of calling the method on foo, we'll end up calling it on the current object."
Ah yes, good point. I should have a unit test for that.
"It is more analogous to = than you think. In fact, it's exactly the same idea, only the name is different, and it's purpose is different."
If the name and purpose are different, what's left? :p
Slices and steps don't currently work with '=. So they feel bolted on to del.
There's two separate issues here: whether we need del in addition to '=, and whether places should support slices and steps. Let's discuss them separately.
"There's two separate issues here: whether we need del in addition to '=, and whether places should support slices and steps. Let's discuss them separately."
Of course. I don't recall ever conflating the two, and I think we should allow for both assigning to slices and deleting slices. In fact, deletion could then be defined in terms of slice assignment, like (= (foo 0 1) nil).
I don't. We already have (= (foo 0) ...) modifying the first element, so we don't have room for it to modify the slice 0..<(0 + 1). We already have (zap inc (my-table idx 0)) incrementing my-table.idx with a default starting value of 0, so we don't have room for it to modify the slice idx..<(idx + 0).
(FYI: The ..< I'm using is Groovy's notation for half-inclusive ranges, inspired by Ruby's ... notation but less ambiguous with Groovy's other syntax. I also prefer it for its obvious asymmetry.)
It would probably make more sense to use (slice foo ...) to get a slice and (= (slice foo ...) ...) to put it back.
This isn't ideal, as far as I'm concerned, 'cause it means we don't always get what we put in:
arc> (= foo '(a b c d e f g))
(a b c d e f g)
arc> (= (slice foo 1 3) '(x y))
(x y)
arc> foo
(a x y e f g)
arc> (slice foo 1 3)
(x y e)
However, that's not really part of the existing contract:
arc> (= foo (table))
#hash()
arc> (= (foo 'a 1) nil)
nil
arc> foo
#hash()
arc> (foo 'a 1)
1
Furthermore, I don't know a place I'd use that contract even if it did exist. ^_^ So I retract my objections for now, just as long as slice-getting isn't ambiguous.
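Incidentally, Python's list slice assignment behaves exactly like the 'slice transcript above, including the non-round-trip part:

```python
# Assigning a shorter list into a slice shifts later elements,
# so reading the slice back gives something different.
foo = list("abcdefg")
foo[1:4] = ["x", "y"]                   # like (= (slice foo 1 3) '(x y))
assert foo == ["a", "x", "y", "e", "f", "g"]
assert foo[1:4] == ["x", "y", "e"]      # not the '(x y) we put in
```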
The other issue was that I think at some point you were using (foo 0) to mean a one-element slice, and that would definitely be ambiguous with regular old indexing.
---
"Hm... why not use cut rather than slice?"
I was going to say that, but I didn't want to clutter my point too much. Arc's 'cut accepts start and stop positions, and your slicing examples instead use a start position and a length. Translating between those was making my post harder to follow (I think), so I just assumed the definition of 'slice instead. Either should work. ^_^
"In tables, (foo 0 1) already means the same thing as (or foo.0 1). But hmm, maybe you're not interested in slicing tables. That's fine, then. ^_^"
What would a table slice even mean? They're not even ordered. The closest thing would be a table that has a subset of the keys of another table, but that wouldn't use numeric indexing.
---
"The other issue was that I think at some point you were using (foo 0) to mean a one-element slice, and that would definitely be ambiguous with regular old indexing."
I was? (foo 0) should always be an element, not a slice. Ah well, it doesn't matter. If I did use that, I was incorrect.
I'll note that (del (foo 0)) is equivalent to (= (foo 0 1) nil) so perhaps that confused you.
---
"Arc's 'cut accepts start and stop positions, and your slicing examples instead use a start position and a length."
They did? I always thought of it as being a start/end position, like cut. So, for instance:
(= foo '(1 2 3 4 5 6))
(del (foo 2 5)) -> (1 2 6)
The way I'm doing it is, the number of elements is the end minus the start, so for instance, in the example above, 5 - 2 is 3, so it removed 3 elements, starting at position 2. This is exactly how cut behaves:
(= foo '(1 2 3 4 5 6))
(cut foo 2 5) -> (3 4 5)
In fact, I think it'd be a good idea for lists/strings in functional position to just delegate to cut, so (foo 2 5) would be the same as (cut foo 2 5).
Well, sure, but you wouldn't use numeric indices to refer to that. :P It might be something like this:
(subset x 'foo 'bar 'qux)
As opposed to:
(x 0 1)
Which would be slice notation, which would work only on ordered sequences, like lists.
Which isn't to say that table subsets would be a bad idea, just that it's a different idea from list slicing, so there shouldn't be any ambiguities with slice notation (which was one of rocketnia's concerns).
You could implement subsetting at function position, in which case it would be ambiguous. I already use :default in wart, so I don't care too much about that :) But yeah I hadn't thought of how to create ranges of keys. This may not be a good idea. You can't distinguish table lookup from a subset table with just one key/val, for example.
Well then, guess I'll go in and do it, as a library for starters, and then we can potentially get it into base ar later.
By the way, sref has a terrible argument signature:
(sref foo value key)
Intuitively, it really should be[1]:
(sref foo key value)
But that's a backwards-incompatible change... Also, setforms only sends a single argument to sref, so we would need to change that too. For instance:
(= (foo 0 1) 'bar)
Is translated into:
(sref foo 'bar 0)
But it should be:
(sref foo 'bar 0 1)
But if I change that, it breaks other things. Ugh. This really wasn't designed with slices in mind.
---
* [1]: To be fair, having the key as the last argument is great for slices, but bad for pretty much everything else. The argument order would make a lot more sense if sref had been designed with slice support.
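The value-before-keys layout can be imitated in Python with a rest parameter, which shows why it composes nicely with slices (this sref is a toy model, not ar's actual definition):

```python
# Value before the keys: the keys can then be variable-length,
# so a two-key call naturally becomes a slice update.
def sref(obj, value, *keys):
    if len(keys) == 1:
        obj[keys[0]] = value               # plain (sref foo 'bar 0)
    else:
        start, end = keys
        obj[start:end] = value             # slice, e.g. (sref foo val 0 1)

foo = [1, 2, 3, 4]
sref(foo, "bar", 0)
assert foo == ["bar", 2, 3, 4]
sref(foo, ["x"], 1, 3)
assert foo == ["bar", "x", 4]
```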
"Also, setforms only sends a single argument to sref, so we would need to change that too."
Yeah, I think I noticed that at one point a long time ago, and I consider it to be a bug. (I promptly forgot about it. XD;; ) I never rely on that behavior, but then I don't extend things in the Arc core much either, for cross-implementation compatibility's sake.
---
"To be fair, having the key as the last argument is great for slices"
I think that's the reason it is the way it is. I've ended up designing some of my utilities the same way. For instance, I usually give failcall the signature (func fail . args), which is very similar to sref's (func new-value . args). I'd probably call 'sref "setcall" if I were to reinvent it.
I'm not sure I want to reinvent 'sref though: It's kinda bad as a failcallable rulebook, because you can't tell if it will succeed until you've called it, and then you get side effects. I've been thinking about other designs, like a currying-style ((sref func args...) new-value) version or something. But this is kinda specific to a system that uses failcall. In Arc, we can't really recover when an extension doesn't exist, so we don't write code that tries yet.
"Right now 'del looks like a smorgasbord of special-case utilities which could all have shorter tailor-made alternatives."
That was my first reaction as well. I see one general new idea here, though: to extend indexing to return not just places but slices, like in Python and Go.
(= l '(1 2 3 4))
(l 1) ; 2
(l 0 2) ; '(1 2)
(= (l 0 2) '(3)) ; l is now '(3 3 4)
Now del becomes identical to setting a range to nil.
(= l '(1 2 3 4))
(= l.0 nil) ; l is now (nil 2 3 4)
(= (l 0 1) nil) ; l is now (2 3 4)
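That sketch maps one-to-one onto Python's slice semantics, for comparison:

```python
# The same sequence of operations using Python lists.
l = [1, 2, 3, 4]
assert l[1] == 2                # (l 1)
assert l[0:2] == [1, 2]         # (l 0 2)
l[0:2] = [3]                    # (= (l 0 2) '(3))
assert l == [3, 3, 4]

l = [1, 2, 3, 4]
l[0] = None                     # (= l.0 nil)
assert l == [None, 2, 3, 4]
l = [1, 2, 3, 4]
l[0:1] = []                     # (= (l 0 1) nil), i.e. deleting the range
assert l == [2, 3, 4]
```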
"Things get more complicated when an object's method calls a different object's method, or when you introduce things like inheritance... I can tell you how I would expect it to behave, the complicated part is actually making it behave that way."
This is where something like Lathe's secretargs might come in handy. If we made a secretarg for "the object the function is invoked on", then it's like every function in the language has it as a lexical parameter. Traditional function calls just pass a default value, and traditional functions just ignore it; you have to use special-purpose forms to define secretarg-aware functions and pass those arguments in. Is this close to what you have in mind?
A failure secretarg is how I hacked in failcall to Arc--and now JavaScript--and I'd also make a secretarg if I ever felt the need to implement keyword arguments. (Secretargs are essentially first-class keyword args without names, but one of them can hold a more traditional keyword table or alist.) The catch is that things like Arc's 'memo and JavaScript's Function#bind() aren't written with secretargs in mind, so they're not especially seamless with the language. But I think it would be cool to add secretargs to ar on a more core level. ^_^
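A very rough Python analogy for secretargs (the decorator and all names here are invented for illustration): a hidden extra parameter that defaults on ordinary calls, and that only special-purpose call sites ever pass.

```python
# Every wrapped function sees the secret argument; normal callers
# never mention it and the default flows in automatically.
SECRET_DEFAULT = object()

def secretarg_aware(fn):
    def wrapper(*args, _secret=SECRET_DEFAULT, **kwargs):
        return fn(_secret, *args, **kwargs)
    return wrapper

@secretarg_aware
def method(invoked_on, x):
    return (invoked_on, x)

assert method(1) == (SECRET_DEFAULT, 1)         # traditional call
assert method(1, _secret="obj") == ("obj", 1)   # special-purpose call
```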
"If we made a secretarg for "the object the function is invoked on", then it's like every function in the language has it as a lexical parameter."
Hello dynamic variables!
---
"Is this close to what you have in mind?"
Sorta. The idea is close, but your way looks too verbose. I want it to be really short and simple, so having to define methods in a special way does not sound good to me. I want them to be plain-old functions.
I have an idea for what I'm going to do, and I think it'll work, because in ar, lexical variables always take precedence over dynamic variables, from what I can tell.
By the way, right now I only care about ar compatibility, since ar has nifty stuff like dynamic variables, which I've come to love. Not only should this make the library shorter and easier to understand, but I honestly can't bring myself to care much about broader compatibility, because my library kinda needs defcall, which isn't standardized[1] (and isn't even in Arc/3.1 at all).
Secretargs happen to be implemented in terms of dynamic boxes. The benefit of secretargs is that you can always pass the default value just by making a normal function call; you don't risk propagating the existing dynamic value into the call. Whether that's important in this case is up to you. ^_^
Oh, and I don't think we can say anything's standardized in Arc at all. :-p The standards are just implicit assumptions around these parts.
Note: when I said "standardized" I meant in the "de facto standard" way. Arc has plenty of those. :P
I agree that Arc has essentially no de jure standards[1], and I think that's a good thing.
---
* [1] The closest thing would probably be whatever pg does, but that would only concern the question of what official Arc is; other implementations would be free to either follow official Arc, or branch off into a different Arc-like dialect.
Anyway, I think any discussion of modules really depends on the nature of the language. How are people likely to compose their programs? Erlang programs can be updated on the fly. Arc programmers tend not to treat libraries as black boxes. As for me personally, I like to generate my programs by writing other programs. Sometimes a separation of modules is useful so that some can be autocompleted in an IDE or separately compiled. There's a wide range of crazy spins on collaborative (and otherwise modular) programming which might provoke different ideas of the term "module."
In the end, modules are the abstraction (or lack thereof) associated with individual programs in a community. Even very pure languages can have pretty flaky abstractions at that level thanks to the soft, changing factors of just what the community is, what it wants to be, and what tools it uses to program and communicate. I generally know what I like in a module system, but talking about "Do we need modules?" probably isn't as useful as talking about particular module-level characteristics like searchability, hackability, documentation, and centralization (in the sense of everyone contributing to the same project).
I consider it to be a prototype of what Penknife is supposed to be like, but with no module system or custom syntax. It's mostly for fun and so I have something to run when my only runtime is a browser, but I wouldn't be surprised if an s-expression parser over lathe.js turned into a pretty viable hack of a language.
The important thing for Node.js is that JavaScript has a nice Scheme-like evaluation model with a very high demand for fast implementations. It's at the point where they're practically putting it into hardware, which would essentially mean a new age of Lisp Machines. Pretty nifty and all. ^_^
"It would mean that in addition to having descriptions that were more or less detailed, we would have descriptions that were about different aspects of the system, because no one kind of description is going to capture everything that matters. (This is something the other engineers have had.) [...] Now the key point in getting all of this to work is to arrange the descriptions so that we can take advantage of what one description doesn't say, to turn around and say it in a different description."
Reminds me a bit of UML. :-p
"Achieving this kind of separation has been the focus of much open-implementation research. A primary focus has been the concept of computational reflection, which explores issues of how modules can provide interfaces for examining and adjusting themselves - that is, how modules can provide meta-interfaces. Although this may sound somewhat esoteric, it is exactly what open implementations need - an interface for controlling the implementation strategy that sits behind the primary interface." (http://www2.parc.com/csl/groups/sda/projects/oi/ieee-softwar...)
And that reminds me of dependency injection frameworks. ^_-
That's not to say Kiczales has UML or dependency injection in mind, just that those camps (or at least their stereotypes) worry about the same things. ...Which probably isn't surprising, since it's all programming. :-p
Speaking of which, I've actually been worried about both these issues recently.
I've had a massive interactive story idea in mind for a few years now (http://rocketnia.wordpress.com/2009/04/26/dun-dun-dunnnnn/), and the task of planning out such a story currently has me describing the automaton-like system of the fictional world through several subplot descriptions which somehow coexist and flesh each other out (no clue how yet).
Meanwhile, I just looked at my Penknife draft in a somewhat new light today, as a language which needs as obvious an implementation of things as possible in order for the action of extending a core utility to have predictable consequences. I've already been avoiding complicated algorithms for fear that they'd be wrong, have unstable APIs, or be hard to customize, and at some point I probably even thought about this practice as a form of putting the implementation out in the open, but something feels different about saying that today. Dunno if it's good or bad. ^_^
Alternatively, sometimes I think of 'extend-style extensibility in terms of implementation inheritance from the language (and libraries) to the application. That doesn't sound pretty at all. :-p But that probably won't stop me.
"Or, if using Arc libraries from your Racket program is more trouble than it's worth right now (and I imagine it could be), perhaps you'd like to implement a REPL with Racket's web server. Of course the Arc forum probably isn't the best place to ask about that, the Racket email lists would probably be more help, but I imagine it's probably possible since Racket does have eval."
It's actually kind of an example in (the most advanced part of) the Racket getting started pages:
> Given the "many" example, it’s a small step to a web server that accepts arbitrary Racket code to execute on the server. In that case, there are many additional security issues besides limiting processor time and memory consumption. The racket/sandbox library provides support to managing all those other issues. (http://docs.racket-lang.org/more/index.html)
The racket/sandbox module reference has another example:
These examples complement each other. The first doesn't actually 'eval any client input, and the second uses a TCP socket directly rather than bothering with session handling or HTTP, but each of them picks up the other's slack. I'm a bit surprised there aren't many Racket Web REPLs for Racket the way there are for Arc.
"In fact, in Arubic I plan for strings to just be a special subtype of conses, so you can use car and cdr on strings, and all operators that expect conses should work on strings as well."
Can you set the car of a string to an arbitrary value, like you can with a cons? Is "" nil?
I'm not trying to change your mind, but "all strings are conses" is the kind of thing lots of people think would be nifty only to change their own minds later. :)
"Can you set the car of a string to an arbitrary value, like you can with a cons? Is "" nil?"
Yes and yes. I plan for the only differences between strings and conses to be that A) they have a type of 'string, in addition to a type of 'cons, and B) they display differently, surrounded by "" rather than ().
Generally speaking, strings will only contain characters (since that's what the string constructor does), but I don't see why they can't contain arbitrary stuff, if you want to do that.
P.S. I might not actually give them a type of 'string... in that case the only difference would be that they display differently. That might be an interesting approach.
Actually, you probably didn't notice, but in Arubic/Python, strings literally are a subclass of cons, so they really do behave just like conses in almost every situation. I didn't make (is nil "") return t though. That shouldn't be hard to add, but I've been busy on Arubic/ar.
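I haven't seen the Arubic/Python internals, so here's just a minimal sketch of the "strings are a subclass of cons" idea, with hypothetical Cons and Str classes (the names and representation are made up, not Arubic's actual ones):

```python
# Hypothetical sketch: a string is just a cons chain of characters that
# displays with "" instead of (). The empty string is nil (None here),
# and setting the car to an arbitrary value works like any cons.
class Cons:
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr
    def __repr__(self):
        items, node = [], self
        while node is not None:
            items.append(repr(node.car))
            node = node.cdr
        return "(" + " ".join(items) + ")"

class Str(Cons):
    @classmethod
    def from_str(cls, s):
        node = None
        for ch in reversed(s):
            node = cls(ch, node)
        return node  # "" becomes None, i.e. nil

    def __repr__(self):
        chars, node = [], self
        while node is not None:
            chars.append(node.car)
            node = node.cdr
        return '"' + "".join(chars) + '"'

s = Str.from_str("abc")
print(s)                 # displays as "abc", not ('a' 'b' 'c')
print(s.car)             # a
s.car = 42               # the car can be set to an arbitrary value
assert Str.from_str("") is None   # "" is nil, like (is nil "")
```

Since Str inherits from Cons, anything that walks cars and cdrs works on strings for free, which is the whole point of the design.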
Yeah, I see it as a poor design for 'is. Personally, I pretend strings are immutable, if only so I can use 'is without regret.
The fact that Arc's 'testify and 'case rely on 'is might be part of it. akkartik and I believe everything should use 'iso whenever possible instead, and I think akkartik goes as far as to say that 'iso should be named "is" and 'is should be dealt with some other way. There was a recent discussion about replacing today's (iso a b) and (is a b) with (is a b) and (is (id a) (id b)) respectively, and that seems like a pretty nice idea.
Hmm, I hadn't considered renaming iso to is. wart still calls it iso; is no longer exists. (Since it's built atop Common Lisp you can just use eq when you truly need it.)
I like the name iso. is raises philosophical questions about what makes two things equal. iso neatly sidesteps them by evoking the precise image of structural equality.
Oh, oops. Sorry! XD I guess I figured that the name 'is is so good that something oughta use it. And for some reason, I'm not currently concerned that the name "is" should be restricted to the most picky kind of equality operator (as I was in that thread).
---
"iso neatly sidesteps them by evoking the precise image of structural equality."
Right now, I don't believe the purpose of 'iso is very precise either. I think a type's designer should extend 'iso with whatever makes intuitive sense for that type, just as long as it's still an equivalence relation. Guess I'm in an informal mood.
Mutable strings are indeed sort of "strange", especially given the Lisp semantics for literals, which differ from, say, Python's: in Python the syntax "[1,2,3]" is a list literal, but it's equivalent to Lisp's "(list 1 2 3)" rather than "'(1 2 3)", i.e., it returns a fresh list at each evaluation. Mutable literals are somewhat counterintuitive, and in Python a similar problem happens with default values for parameters (i.e. "def foo(x=[1,2,3]):..."), not because the literal is altered, but because the default value is evaluated only once, at function definition time. The idea that what appears in the program text is not, in these cases, a fixed value is a trap into which newbies often fall...
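That default-value trap is easy to demonstrate, along with the usual None-sentinel workaround:

```python
# The Python default-argument trap: the default expression is evaluated
# once, at definition time, so every call shares the same list object.
def bad(item, acc=[]):
    acc.append(item)
    return acc

print(bad(1))  # [1]
print(bad(2))  # [1, 2] -- surprise: it's the same list as before

# Standard workaround: use a sentinel and build a fresh list per call.
def good(item, acc=None):
    if acc is None:
        acc = []
    acc.append(item)
    return acc

print(good(1))  # [1]
print(good(2))  # [2]
```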