Arc Forum | rocketnia's comments
1 point by rocketnia 5504 days ago | link | parent | on: Playing with aif

I think your 'aif will behave strangely when 'then actually is nil, as in (aif nil nil t t).

Meanwhile, I think the (aif 34) behavior is the way it is so that (aif ...) forms without 'it are identical in semantics to the corresponding (if ...) forms. Note that the 34 in (if 34) is the else clause; it's preceded by zero condition-consequence pairs, none of which will succeed (or fail :-p ).
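
Concretely, the correspondence as I read it (hand-traced, not tested):

  (if 34)     ; => 34; zero pairs, so 34 is the else clause
  (aif 34)    ; => 34; same as (if 34), since there's no 'it to bind
  (if t 1 2)  ; => 1
  (aif t 1 2) ; => 1, with it bound to t in the consequence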

I might be giving the Arc version of 'aif too much credit though. It uses the same 'it expression sometimes as a condition and sometimes as an else clause, which is almost too clever not to be a bug. XD Also, (aif) raises an error whereas (if) doesn't.

Here's a take on 'aif I've recently been thinking about:

  (mac delisting (rest lst . body)
    (w/uniq g-rest
      `(withs (,g-rest ,lst ,rest ,g-rest)
         ,(whenlet (first . body) body
            (iflet (nextvar . body) body
              `(if (acons ,g-rest)
                 (let ,nextvar (car ,g-rest)
                   (delisting ,rest (cdr ,g-rest)
                     ,@body))
                 ,first)
              first)))))
  
  (mac aif body
    (delisting rest body
             nil
      first  first
      then   `(let it ,first (if it ,then (aif ,@rest)))))
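
Hand-tracing this 'aif (a sketch; I haven't actually run these):

  (aif)        ; => nil                     zero-element branch
  (aif x)      ; => x                       one-element branch
  (aif x y z)  ; => (let it x               two-or-more branch
               ;      (if it y (aif z)))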

I was worried this 'delisting thing would be ugly and unintuitive, but I sorta like the way it looks now that it's actually there on the screen with all its cases in a row. Does it seem intuitive to you?

-----

2 points by akkartik 5503 days ago | link

Thanks for catching that! I've been thinking about that bug: https://github.com/akkartik/wart/commit/90b6df8616a34635feeb.... I figured nobody would ever use aif without using it :)

Hmm, you lost me with delisting. Can you describe its semantics in english? What's that nil for in the call?

And why do we need g-rest? Why not just make it:

  (mac delisting (rest lst . body)
    `(withs (,rest ,lst)
       ,(whenlet (first . body) body
          (iflet (nextvar . body) body
            `(if (acons ,lst)
               (let ,nextvar (car ,lst)
                 (delisting ,rest (cdr ,lst)
                   ,@body))
               ,first)
            first))))

I don't see any change in behavior. I thought maybe you were trying to use lst in two different senses and wanted to be clear about that, but it's not that either..

In wart all args are optional (like javascript), so I could just:

  (mac aif(expr then . more-branches)
    (if (or then more-branches) ; <= modified
      `(let it ,expr
         (if it
            ,then
            (aif ,@more-branches)))
      expr))

Update: Fixed (https://github.com/akkartik/wart/commit/32401beaf45139ae0340...)

-----

1 point by rocketnia 5503 days ago | link

Wouldn't that 'aif have (aif t nil) return t?

"Hmm, you lost me with delisting."

It's a utility to cut down on nesting (iflet (a . b) ...) forms, a common pattern I use when giving a utility different behavior depending on the number of arguments. The primary goal is to have the branches lined up on the right side while the local variables are distributed so they don't get in the way. Particularly, a variable is positioned just before the first branch it's bound in. The rest variable is an exception; it makes sense at every level, but I don't want it to be perceived as the first element of the list, so I've put it in a different place. (Note that it doesn't actually make sense everywhere unless the thing being destructured could be a dotted list or non-nil atom.)

The three branches in my 'aif's use of 'delisting are the zero-element branch (just nil), the one-element branch, and the two-or-more-element branch. The "two or more" thing might be tricky to perceive; maybe the utility should be limited to proper lists and the last case should be preceded by both its usual variable and the rest variable.

Meanwhile, the point of 'g-rest is to allow for destructuring. Destructuring an atom might be weird, but most of the time the rest isn't an atom, and nil can be destructured just fine.

-----

1 point by akkartik 5503 days ago | link

Ok, I'm starting to see. So the code in each branch binds all variables up to that point?

When I see:

  first    first

I'm inclined to read this as, "if you just see a variable (call it first) and nothing else after, just return first." I imagine the evaluation going to the next case and saying[1], "oops, nothing left, ok, go back and execute code for the previous case." Am I on the right track? Hmm, I've never seen anything like this.

And the nil is a branch with an empty 'guard'. Ahhhh.. this is cool :)

[1] Dijkstra hated anthropomorphizing code.

-----

2 points by rocketnia 5503 days ago | link

You're on the right track. ^_^ Like I said, 'delisting just exists to better organize a pattern I use all the time.

A visual example would probably work wonders. Here's the pattern:

  (iflet (<var1> . <rest>) body
    (iflet (<var2> . <rest>) <rest>
      (iflet (<var3> . <rest>) <rest>
        <three-or-more-arg case>
        <two-arg case>)
      <one-arg case>)
    <zero-arg case>)

And here's the same code using (this version of) 'delisting:

  (delisting <rest> body
            <zero-arg case>
    <var1>  <one-arg case>
    <var2>  <two-arg case>
    <var3>  <three-or-more-arg case>)

Any version of 'delisting should have basically the same elements. I'm just not sure where to put them.

Incidentally, halfway through writing that post with 'delisting and 'aif, the 'aif code read (decomposing body ...). XD

-----

1 point by akkartik 5503 days ago | link

Ok, I see the light. That explanation suits me perfectly :)

-----

1 point by rocketnia 5503 days ago | link

I just hope my code doesn't all devolve into "decomposing body" and "ifdecap." >< Talk about anthropomorphizing code.

-----

1 point by akkartik 5503 days ago | link

Hmm, (aif t nil) works fine for me. aif perhaps doesn't need that gensym even if delisting does in general.

-----

1 point by akkartik 5498 days ago | link

Ack, I misunderstood you. I've retreated to a version with car/cdr manipulation; looks like there's no way around it (it's still simpler than the original, but that just makes me wonder what bugs remain)

https://github.com/akkartik/wart/commit/9425c4829abb88052d29...

-----

1 point by rocketnia 5498 days ago | link

I haven't tested it, but I don't see any bugs from here.

I'm a bit surprised you broke up the (if branches ...) condition into two places like that, though. I think it would have been simpler just to replace "(or then more-branches)" with "branches". ^^

-----

1 point by akkartik 5498 days ago | link

:) Yeah I thought of that in the next CL.

-----

1 point by akkartik 5503 days ago | link

Incidentally, does this seem like a good idea?

  arc> (iflet a nil 34 35 a)
  Error: "reference to undefined identifier: _a"
I've modified wart to have iflet mimic aif closely.

  wart> (iflet a nil 34 35 a)
  35
In fact, aif is now directly implemented using iflet (https://github.com/akkartik/wart/commit/2880402dc5ae2d96be7f...). Does this introduce a bug I haven't considered?

-----

1 point by rocketnia 5503 days ago | link

I prefer having 'aif be a special case of 'iflet too. The change you've made would only come up every so often. It might break some code that uses 'iflet for destructuring and then tests some other condition if that fails (whose result can't be destructured), but that's easy enough to fix.

For Penknife I've been thinking about something a bit different:

  [itif x
    do-something-with.it
   ifdecap first rest lst
    [do-something-with first rest]
   iflet result foo.bar
    baz.result
   do do-something-else.]
In this in-progress draft, every if-like construct parses its elses using the same utility. The point is to let them work together better while making the indentation less controversial and confusing than it is in Arc. The downside so far is that every single unconditional else needs a 'do in front of it to look nice, which feels a bit like (cond ((#t ...))).

-----

1 point by akkartik 5503 days ago | link

"It might break some code that uses 'iflet for destructuring and then tests some other condition if that fails (whose result can't be destructured).."

Ah, destructured args!

https://github.com/akkartik/wart/commit/9cb777987467bedf4446...

Thanks for the bug report! :)

-----

1 point by akkartik 5503 days ago | link

ifdecap?

-----

1 point by rocketnia 5503 days ago | link

Penknife's version of (iflet (a . b) x ...) is [ifdecap a b x ...]. There's no destructuring yet, not to mention that Penknife has no notion of a dotted list.

I called 'ifdecap "ifpair" at one point, but I want it to be extensible, with a clear metaphor of first and rest rather than left and right.

-----

1 point by akkartik 5503 days ago | link

Ok, but why decap? Just a random pronounceable name for something low-level you didn't want to use a good name for?

-----

1 point by rocketnia 5503 days ago | link

Can you think of a better one? XD This isn't supposed to be that low-level. I'm working on the Penknife core from a top-down view right now, and I've considered having all argument lists decompose using ifdecap just so that a function can be applied to a user-defined type of list (which might support a user-defined kind of destructuring, for instance; think keyword args).

-----

1 point by akkartik 5503 days ago | link

I wasn't expressing dislike but curiosity. How did you end up with that name? Does it stand for 'decompose a pair'?

-----

1 point by rocketnia 5503 days ago | link

Lol. XD It's short for "if-decapitate."

Come to think of it, ifdecap is a bit paradoxical.... If it's used to destructure argument lists, then it'll be recursive upon itself over and over and probably never get anywhere. (It needs to return two values somehow, which means either the caller or a CPS-style callback will need to destructure those values. I don't plan to look for a multiple-value-return-esque solution.) I guess the built-in argument list type will need some kind of hardwired treatment just like the built-in function type.
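
For instance, the CPS-style version might look something like this sketch in Arc (hypothetical names, just to make the idea concrete):

  ; Call 'then on the first and rest of 'lst, or call 'else if
  ; there's nothing to decapitate.
  (def decap (lst then else)
    (if (acons lst)
      (then car.lst cdr.lst)
      (else)))
  
  (decap '(1 2 3) (fn (a b) (list a b)) (fn () 'empty))
  ; => (1 (2 3))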

-----

1 point by evanrmurphy 5503 days ago | link

> In wart all args are optional (like javascript)

How's this working out for you in general? Any disadvantages?

Does wart still have the `?` syntax for keyword args as in http://arclanguage.org/item?id=13083?

I guess I should try wart and see these things for myself. :)

-----

2 points by akkartik 5503 days ago | link

:) This is a pretty good exhibition of features: https://github.com/akkartik/wart/blob/master/023conditionals.... You can see that the '? for optional' syntax hasn't changed, that $vars and o$vars are convenient and make several macros shorter. I suppose the fact of all args being optional is a little subtle and needs context to read between the lines.

But now I've let you put off trying wart again :) At some point I want to put a little webapp together to make reading wart code more convenient. So you can click 'next' from boot.lisp to go to 000.lisp (next in load sequence), tests are just a click away, etc. Really my only goal is to make it a pleasure to read, and I'd give an arm to hear experiences of lisp programmers reading it.

I haven't had any trouble with optional args (though this is the first time I've noticed them being useful). Or breaking constraints in general. I suspect the problem is not constraints I've broken but constraints I'm not even aware of.

-----

1 point by rocketnia 5504 days ago | link | parent | on: Why so proper, alist?

I don't see the point. o.o I'd much prefer to write '((a 1) (b 2)) rather than '((a . 1) (b . 2)) and destructure using (let (k v) ...) rather than (let (k . v) ...). Actually, I'd write '((a 1) (b 2)) as (objal a 1 b 2), but the destructuring issue is something I'd just deal with and fume over. :-p

"Is there any downside?"

Those are the downsides. :)

"It's more efficient and more isomorphic to a hash table."

I think you save about 1/3 the conses when creating and adding to alists, so there is that.
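
Counting it out, with the spine cell included:

  '((a . 1) (b . 2))  ; per entry: (a . 1) plus a spine cell      -- 2 conses
  '((a 1) (b 2))      ; per entry: (a . (1 . nil)) plus the spine -- 3 conses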

But isomorphic to a hash table? The most official way we can compare them is with 'tablist and 'listtab, which use the list-of-two-element kind of alist.

Also, IIRC, Rainbow displays tables as #hash((a 1) (b 2)), and I couldn't be happier. There's so much " . nil" cruft when viewing big tables in official Arc.

"could potentially eliminate the need for tables in Arc's core"

Arc doesn't have enough table support. XP Keys are compared via Racket 'equal? (or via weirder methods in Rainbow and Jarc), and I haven't gone to the trouble to make tables that somehow dispatch via an extensible 'iso.

I want efficient lookup in big tables for the sake of Lathe's namespace system and Penknife's environments, and I'll get that by dropping to the underlying platform if I need to--I already do for weak tables--but I'd rather not. If official Arc ever removes table support, I hope it also adds 'defcall so I can put tables back in.

"- solve the optional arg problem"

"- permit apply to be subsumed by the dot notation"

How are those related? The only point of connection I see is that they're other things that could use dotted lists, but even that's not especially true for optional args. Did you mean to say that you suspect some change regarding dotted lists (or just the way we look at them) will help with both alists and these other cases?

-----

2 points by evanrmurphy 5504 days ago | link

> But isomorphic to a hash table?

I'm talking about the core notion of a hash table. It's composed of key-value pairs, not key-value "lists of two". :P This is a restatement of bogomipz's point made elsewhere in this thread.

> Arc doesn't have enough table support.

If alists were better supported, you could use them in place of tables in every case except where the utmost efficiency is required.

But I'm not sure it's even correct to frame this as an axioms vs. efficiency debate. Something I've learned from PicoLisp is that heterogeneous data structures slow down the general case by complicating memory allocation and garbage collection. PicoLisp manages to be a fast interpreter (say what?), in part because it uses the cons cell for everything [1].

> I'd much prefer to write '((a 1) (b 2)) rather than '((a . 1) (b . 2)) and destructure using (let (k v) ...) rather than (let (k . v) ...).

I think this is a cosmetic issue that has to do only with our visual representation of cons pairs and Arc's incumbent ssyntax.

For example, if you changed the ssyntax so that a.b expanded to (a . b) instead of (a b), then these snippets would be more pleasant to write: '(a.1 b.2) and (let k.v ...) . I'm not actually proposing this particular solution, but it should illustrate my point that the issue is only syntactic/cosmetic.

> How are those related? [...] Did you mean to say that you suspect some change regarding dotted lists (or just the way we look at them) will help with both alists and these other cases?

Well I did say I still have some details to work out. ;)

I think your paraphrase is accurate. A "change regarding dotted lists (or just the way we look at them)" is what I was trying to express with "judicious use of conses" in the grandparent.

---

[1] http://software-lab.de/doc/ref.html#cell

-----

1 point by rocketnia 5504 days ago | link

IMO, a pair is a list of two. What would you propose using for a triple or a singleton?

-----

1 point by evanrmurphy 5504 days ago | link

A pair is a list of two in English because English lists aren't nil-terminated. But Arc lists are, so we're talking about the difference between (key . val) and (key . (val . nil)).

I don't have a great answer to your triple/singleton question yet except to ask that you consider the following:

- The fundamental data structure of lisp is the cons pair, so perhaps pairs warrant some special treatment over singletons, triples, etc.

- The demand for associative arrays in general-purpose programming is far greater than that for any kind of triple-based data structure, which is why tables have their own type in Arc to begin with

Update: Cons pairs are so powerful that we've used them as the base for almost our entire language. And yet the associative array structure (which screams "pair"!) that we've made from them (i.e. alists) is so inadequate that we all outsource that functionality to tables instead. Around tables we've then developed the conveniences for syntax, etc.... Doesn't this seem a bit kludgy for The Hundred-Year Language?

-----

2 points by rocketnia 5504 days ago | link

The main advantage of cons pairs, in my mind, is that they're all the same size, so it's easier to reason about them and memory-manage them on a low level. They're also just as powerful as they need to be to support an expressive language. But that doesn't make them ideal abstractions for exploratory programming, especially when an equivalent abstraction in the same language takes fewer characters to type out and is even better supported thanks to 'map, 'any, etc.

-----

1 point by evanrmurphy 5504 days ago | link

Yes, that makes sense. I may have gone somewhat overboard / overly dramatic in this subthread. :) I think I mostly just want alists to be more convenient. Need to think about this more...

-----

1 point by rocketnia 5504 days ago | link

I've been overly dramatic here too. I mostly wanted to help you make sure you were on a path that held water while giving you some hooks to convince me by... but I brought some external pet peeves into the mix and got worked up. XP Please do continue with your train of thought. ^^ Here's hoping the train mixes underwater hooks, or something.

-----

1 point by evanrmurphy 5504 days ago | link

Awesome, thank you. :)

-----

2 points by evanrmurphy 5503 days ago | link

It's something about Arc's built-in types that bothers me. They seem so ad hoc. You have this beautiful axiomatic thing going on in the core with conses, and then suddenly tables enter the mix. From that point forward, odd utilities get defined with an if branch that checks for the table type.

In this thread, I've been worried about tables cluttering the core language and you about them not being well-supported enough. In truth, I think both of our concerns are legitimate (yours is for sure, because tables really are better than alists for some applications). The problem is that the present implementation doesn't do either of them justice.

I'd like to know what you think of this proposal: keep the core language definitions to symbols and conses. Then support each additional type in a dedicated file (e.g. numbers.arc, tables.arc, queues.arc). These types can either reach down into Racket to borrow one of its types (likely for numbers or tables) or be annotated constructs built from existing types (likely for queues, trees or alists), and then use the extend idiom to give them support in the various utilities and the reader.

-----

2 points by akkartik 5503 days ago | link

"support each additional type in a dedicated file.. either reach down into Racket to borrow one of its types or be annotated constructs built from existing types, and then use the extend idiom to give them support in the various utilities.."

or defgeneric? 8-) I was moved by the same concerns you describe: I never want to see an (if (isa x 'table) ..) in arc code.

-----

2 points by rocketnia 5503 days ago | link

Agreed with both of you, but I'd go further: I don't want to see (isa x 'cons) or (isa x 'sym) either, if possible. I'd rather every type be treated as equally non-fundamental. Of course, s-expression syntax special-cases those types sorta intrinsically, but I'm sure 'defgeneric could be used along with one of aw's "access the Arc compiler from Arc" patches. ^_^

It might be difficult and/or impossible though, considering that 'defgeneric needs to be defined in terms of something. So does calling, since the most obvious way to specify a custom calling behavior is to give that behavior as a function to call! XD

Like so many other opinions of mine, this is something that's going into Penknife if at all possible, even if the core currently needs a bunch of rewriting to get it to work.

To fix the "'defgeneric needs to be defined in terms of something" issue, I'm currently considering having most things be built-in rulebooks, with rulebook being a built-in type if necessary.

For the calling issue, I'm going to have the built-in call behavior try certain hardwired things first, and only move on to the customizable calling rulebook if those don't work. I intend for it to be possible to replace the interaction environment with one that uses a different call behavior, so even that hardwired-ness should be kinda seamless with the language.

For now, these things are all hand-wavy, and I'm open to better ideas. ^^

-----

2 points by evanrmurphy 5503 days ago | link

> but I'd go further: I don't want to see (isa x 'cons) or (isa x 'sym) either, if possible. I'd rather every type be treated as equally non-fundamental. Of course, s-expression syntax special-cases those types sorta intrinsically

Wow, I'm really interested in whether there's a way to have s-expressions that don't special-case conses and symbols. shader's suggestion just now [1] makes me think there might be a way to merge conses and symbols into a single type, though. Could it be possible?

---

[1] http://arclanguage.org/item?id=13438

-----

2 points by rocketnia 5503 days ago | link

Essentially, all you need to do is extend 'ac, since compiling is almost all that happens to Arc expressions. In the short term, there's no need to worry about whether a custom type represents a function call, a literal, etc. As long as it compiles, you can start returning it from macros or reader syntaxes.

In the long term, there may be other things that would be useful to extend, like 'ac-macex, 'ac-expand-ssyntax, and 'expand=. Also, it may be easier for a custom syntax type to support 'expand= if there's a separate utility it can extend in order to have all the functionality of a function call. That way it can be 'sref'ed.

-----

2 points by evanrmurphy 5503 days ago | link

Thanks for this guide. It should come in handy for me. :)

If I start messing around with Arc's internals too hard though, I may not be able to resist trying to turn it into an interpreter [1]. I'm too attracted to the notion of first-class environments, eval and fexprs lately. (In this case, I'd be extending eval rather than ac, correct?)

Or maybe I should just stop being such a damn purist. Have to take things one step at a time anyway. ac is a logical place to start.

---

[1] http://arclanguage.org/item?id=13323

-----

1 point by rocketnia 5503 days ago | link

"Thanks for this guide. It should come in handy for me. :)"

Well, I hope it actually works. :-p

"If I start messing around with Arc's internals too hard though, I may not be able to resist trying to turn it into an interpreter."

Yeah. I would have just turned it into Penknife. >.>

"I'm too attracted to the notion of first-class environments, eval and fexprs lately. (In this case, I'd be extending eval rather than ac, correct?)"

Sure, but there's no interpreting 'eval to build on in Arc (unless you repurpose the macroexpander XD ). I'd find it easiest to approach by building it from scratch--hence kernelish.arc.

"Or maybe I should just stop being such a damn purist. Have to take things one step at a time anyway. ac is a logical place to start."

It's all up to whatever you can figure out how to build on, I think.

Also, I'd tell you not to write off purism so quickly, but unfortunately I only like purism in an irrational way. ^_^;

-----

3 points by akkartik 5503 days ago | link

Go for the interpreter :)

BTW, remember eight? http://arclanguage.org/item?id=10719

-----

1 point by evanrmurphy 5503 days ago | link

It's crossed my radar before [1]. I've read some of the thread you linked to and some of what's on his github [2]. I like the general idea of giving ' and , more power to control evaluation, but I'm afraid I don't grok the language very well yet. :-/

Update: To clarify my confusion, the documentation talks a lot about closures (e.g. that ' does some kind of closure-wrapping), but I thought the language was supposed to be fexpr-based. I don't understand yet what fexprs have to do with closure-wrapping, but I really should study the language more closely.

---

[1] rocketnia referenced it in http://arclanguage.org/item?id=11882, alongside kernel

[2] https://github.com/diiq/eight

-----

3 points by diiq 5503 days ago | link

Eight's documentation is in a terrible state (in part because there are still many things about which I've yet to make up my mind), so blame me for any confusion.

Here's the gist: Fexprs, like macros, take expressions as arguments (duh). Those expressions are made up of symbols (duh). Because a fexpr is evaluated at runtime, those symbols may already be bound to values when the fexpr is called. Eight keeps track of which symbol is bound to which value at the place the expression originated (where the programmer wrote it) --- even if you cons expressions together, or chop them into pieces. This eliminates the need for (uniq), but still allows for anaphoric fexprs when symbol-leaking is desired.

When I wrote the docs on github, I called an expression plus any accompanying bindings a 'closure' (even though it wasn't a function). I also didn't know the word 'fexpr'. I've read a few dozen more old lisp papers since then, and hopefully on the next go-round my vocabulary will be much improved.

-----

1 point by evanrmurphy 5503 days ago | link

Some of your documentation is excellent, actually. This page, for example: https://github.com/diiq/eight/wiki/Better-Questions

-----

2 points by shader 5503 days ago | link

"there might be a way to merge conses and symbols into a single type"

Interesting idea. This might help a lot with implementing lisp in strongly typed languages. I suppose atoms could just be cons cells with nil in their cdr slot. The only problem is then how do you get the actual value out of an atom, and what is it?

-----

2 points by rocketnia 5503 days ago | link

"This might help a lot with implementing lisp in strongly typed languages."

Don't most of them have option types or polymorphism of some kind? If you've got a really rigid one, at least you can represent every value in the lisp as a structure with one element being the internal dynamic type (represented as an integer if necessary), at least two child elements of the same structure type, and one element of every built-in type you'll ever need to manipulate from the lisp (like numbers and sockets). Then you just do manual checks on the dynamic type to see what to do with the rest. :-p

"The only problem is then how do you get the actual value out of an atom, and what is it?"

I say the programmer never gets the actual value out of the atom. :-p It's just handled automatically by all the built-in functions. However, this does mean the cons cell representation is completely irrelevant to a high-level programmer.

-----

1 point by evanrmurphy 5503 days ago | link

> I suppose atoms could just be cons cells with nil in their cdr slot.

Could they be annotated conses with symbol in the car and value in the cdr (initialized to nil)? nil itself could then be a cons with the nil symbol in the car and nil in the cdr. This should achieve the cons-symbol duality for nil that's usually desired. (Follow-up question: annotate is an axiom, right?)

Warning: May include sloppy thinking.

-----

1 point by akkartik 5503 days ago | link

"I don't want to see (isa x 'cons) or (isa x 'sym) either"

Totally with you there.

I don't want to get too hung up on 'purity'. It's ok to use tables in the core if you need them for defgeneric or something. It's ok to have a few isas early on. iso is defined as a non-generic bootstrap version in anarki before eventually being overridden, so stuff like that seems fine to me. I just want to move past the bootstrap process as quickly as possible.

-----

1 point by rocketnia 5503 days ago | link

"iso is defined as a non-generic bootstrap version in anarki before eventually being overridden"

Sure, that's an okay way to go about it. ^_^ Since I'm doing the Penknife core library stuff from the top down right now, I'm just writing things the way I want to write them, trying to determine what axioms I need before the core library is loaded. If the high-level axioms are defined in another lower-level library, that's just fine, but I don't know why I'd bother with that when I can just put them in the Arc part of the core. :-p

-----

1 point by akkartik 5503 days ago | link

Yeah, makes sense. defgeneric comes far earlier in wart as well. iso pretty much doesn't get bootstrapped (it's available but never used)

-----

1 point by rocketnia 5503 days ago | link

That was a pretty fast reply. I just edited in a bunch of stuff you might have missed. :)

EDIT: Oh, and you edited yours too. XD

-----

1 point by evanrmurphy 5503 days ago | link

Ahh defgeneric! I hadn't made the connection, thanks for pointing it out. :) Is this the writeup you'd recommend?

http://arclanguage.org/item?id=11779

I think I had trouble digesting it a few months ago because it depended on so many utilities I was unfamiliar with: vtables, defmethod, pickles (and it compared with extend, which I didn't understand back then :-o ). Giving it another try...

-----

2 points by akkartik 5503 days ago | link

It's in anarki so perhaps it'd be easier to just play with what the different defgenerics (iso, len, ..) there expand to.

https://github.com/nex3/arc/blob/master/arc.arc#L1734

vtables and pickles aren't utilities, just implementation details for defgeneric.

Basically, vtables contains a hash table for each generic function, mapping a type to an implementation: "If len gets a string, do this. If it gets a table, do that." The body given to defgeneric sets up vtable entries for a few default types (cons, mainly :), and defmethod lets you add to vtables later.

If the generic function doesn't find an entry in vtables it falls back on searching the pickles table for a procedure to convert that type to cons, before retrying.
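
A rough sketch of that lookup order in Arc (hypothetical names, not wart's actual code):

  (def generic-call (name args)
    (let tp (type car.args)
      (aif ((vtables name) tp)
            ; found a type-specific implementation
            (apply it args)
           (pickles tp)
            ; else convert the first argument to a cons and retry
            (generic-call name (cons (it car.args) cdr.args))
           (err "no method for" name tp))))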

Let me know if this makes sense.

(names: I believe vtables comes from C++, and pickle is the python primitive for serialization)

-----


"But I still don't get the (list ,@$s). Why doesn't just $s work?"

Suppose $os is the list '(o$a o$b o$c). Then $s is the list '($a $b $c), and the expansion will work like this:

  At macro definition time, while expanding the 'mac macro:
  
  `(mac0 ... `(let ,(mapcar #'list (list ,@$s) (list ,@$os)) ...))
  ==>
  (mac0 ...
    `(let ,(mapcar #'list (list $a $b $c) (list o$a o$b o$c)) ...))
  
  At macro usage time, assuming the expressions to evaluate once are 1,
  2, and 3 and the gensyms are 'a99, 'b99, and 'c99:
  
  `(let ,(mapcar #'list (list $a $b $c) (list o$a o$b o$c)) ...)
  ==>
  (let ((a99 1) (b99 2) (c99 3)) ...)

There are certainly lots of ways to get this wrong:

      At macro definition time:
  `(mac0 ... `(let ,(mapcar #'list $s $os) ...))
  (mac0 ... `(let ,(mapcar #'list $s $os) ...))
      At macro usage time:
  `(let ,(mapcar #'list $s $os) ...)
  Error: The variables '$s and '$os are unbound as non-functions.
  
      At macro definition time:
  `(mac0 ... `(let ,(mapcar #'list ,$s ,$os) ...))
  (mac0 ... `(let ,(mapcar #'list ($a $b $c) (o$a o$b o$c)) ...))
      At macro usage time:
  `(let ,(mapcar #'list ($a $b $c) (o$a o$b o$c)) ...)
  Error: The variables '$a and 'o$a are unbound as functions.
  
      At macro definition time:
  `(mac0 ... `(let ,(mapcar #'list ,@$s ,@$os) ...))
  (mac0 ... `(let ,(mapcar #'list $a $b $c o$a o$b o$c) ...))
      At macro usage time:
  `(let ,(mapcar #'list $a $b $c o$a o$b o$c) ...)
  Error: The values 'a99, 'b99, 'c99, 1, 2, and 3 aren't lists.
  
      At macro definition time:
  `(mac0 ... `(let ,(mapcar #'list ',$s ',$os) ...))
  (mac0 ... `(let ,(mapcar #'list '($a $b $c) '(o$a o$b o$c)) ...))
      At macro usage time:
  `(let ,(mapcar #'list '($a $b $c) '(o$a o$b o$c)) ...)
  (let (($a o$a) ($b o$b) ($c o$c)) ...)
      At runtime, in a place where the macro was used:
  Error: The variables 'o$a, 'o$b, and 'o$c are unbound as
    non-functions.
The tactic I suggest is just always thinking about what "gibberish" you want it to expand to. Find an expanded form that stretches far enough to cover all the use cases of the macro, and work backwards.

  (let ((a99 1) (b99 2) (c99 3)) ...)
We get here by taking the gensyms and the once-only expressions and zipping them together. To do that automatically, we can use (mapcar #'list <something> <something>).

  `(let ,(mapcar #'list <gensyms> <expressions>) ...)
We don't actually have lists of gensyms and expressions at that stage, though. We have variables that hold those things individually. We need to build the lists out of the individual pieces.

  `(let ,(mapcar #'list (list <gs> <gs> <gs>) (list <xp> <xp> <xp>)) ...)
Each gensym is held in a variable of the form '$foo, and each once-only expression is held in the corresponding variable of the form 'o$foo. Although we have direct access to those variables at this stage, we don't have the ability to dynamically choose which variables we use. The variables need to be calculated in the next level up:

  `(mac0 ... `(let ,(mapcar #'list (list ,@<gensym variables>)
                                   (list ,@<expression variables>))
                ...))
Thankfully, we can actually calculate <gensym variables> and <expression variables> directly at this stage, so we do that. Knowing we could get to this stage requires a bit of foresight, but hopefully practice makes it easier.

For the sake of example, there's an alternative approach we could take. Suppose I look at this...

  (let ((a99 1) (b99 2) (c99 3)) ...)
...and I initially think about it as a list of bindings rather than a zip of two corresponding lists. I might construct the list this way...

  `(let ,(list (list <gs> <xp>) (list <gs> <xp>) (list <gs> <xp>)) ...)
...with the intent of generating all those (list ...) calls from the next level up.

  `(mac0 ...
     `(let ,(list ,@(mapcar <function>
                            <list of individual binding creation info>))
        ...))
What function can I use to make a binding? The result needs to be an expression of the form (list <gs> <xp>), where <gs> and <xp> are symbols which resolve to the right variables in the next level down. At this point I might realize I need to zip a <gs> list with an <xp> list, and I'll end up going back to the other solution that way. But the symbol for the <gs> variable is determined from the symbol for the <xp> variable, using #'odollar-symbol-to-dollar-symbol, so I actually just need the <xp> symbol as input to the function, and I can get the list of <xp> symbols the same way you do:

  `(mac0 ...
     `(let ,(list ,@(mapcar (lambda (x)
                              `(list ,(odollar-symbol-to-dollar-symbol
                                        x)
                                     ,x))
                            (remove-if-not #'odollar-symbol-p
                              (strip-rest params))))
        ...))
Patterns like (map [do `(list ,foo._ ,_)] stuff) are pretty common in my Arc code, so this scenario is realistic. In fact, I often try to use quasiquotes iff I'm generating code, so there's a good chance I'd say (map [do ``(,,foo._ ,,_)] stuff) to make the intent clearer. That's right, I might make the intent clearer by using four instances of ` in one definition. Brag brag brag. :-p But really, if it's unclear, I don't know how to avoid it....

I think any helper macro that reduces the quasiquote depth needs at least two quasiquote levels itself (to hide the second level from the outside), but a simpler one can at least take away some of the distraction of the transformation itself:

  (mac0 letlists (vars vals . body)
    `((fn ,vars ,@body) ,@vals))
  
  (mac0 mac ...
    (let os ...
      `(mac0 ...
         `(letlists ,',(map o->dollar os) ,',os
            ...))))
Maybe that's significant enough?
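The same helper transliterates directly into Common Lisp, if that makes the shape easier to see (my translation, not wart's code):

  (defmacro letlists (vars vals &body body)
    `((lambda ,vars ,@body) ,@vals))
  
  ; (letlists (a b) (1 2) (+ a b))
  ; ==> ((lambda (a b) (+ a b)) 1 2)
  ; ==> 3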

-----

1 point by akkartik 5504 days ago | link

(haven't yet digested your comment)

think about what "gibberish" you want it to expand to

Just to clarify, the gibberish I was referring to was the (sbcl-specific) backq-comma call. It's not gibberish if I know what I want :)

Update: Oh, you're talking about stepping through the code. Yeah, I should just not be lazy and do that. I'm again reminded of that Dijkstra comment from yesterday. My mental code simulation skills have atrophied with programming experience. Then again, Dijkstra hated how schools taught their students to step through code pretending to be the computer. Hence his more axiomatic approach to program semantics.

Hmm, I wonder if fallintothis's macro stepping utility would help with this..

Update 2: Ok, digested. You make a lot of sense. It's too bad, already the solution's starting to crystallize in my head as self-evident, and I'm forgetting what it was like to not know how to do this. Your clear comment is not helping :p

The idea that any helper macro would also need nested backquotes is interesting and explains vague memories of past experiences. I'm going to think about that.

Part of the problem is that I've needed nested backticks so infrequently. Perhaps I should just create some contrived examples to exercise this muscle.

-----

1 point by rocketnia 5504 days ago | link

Just to clarify, the gibberish I was referring to was the (sbcl-specific) backq-comma call. It's not gibberish if I know what I want :)

Ah, gotcha. ^_^ I thought that was part of it, but I thought maybe you also meant things like the (list) with no arguments and the (progn ...) with one subexpression. I dunno, I sorta consider that to be gibberish even if we want it. XD

Oh, you're talking about stepping through the code.

I guess I am.... Maybe some debug printing would help out. ^^

Actually, maybe something like debug printing is a good way to do unit tests on macros. O_o Put in a meaningless form inside the macro, and have it short-circuit with an escape continuation (or call any other kind of function) if the right test is in progress. You get to verify, collect, or trace intermediate values without managing a spaghetti of one-use functions.

In its simplest form, this could be implemented as a global variable that's usually set to 'idfn.
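Something like this, say (a sketch; *macro-probe* and my-mac are names I just made up):

  (defvar *macro-probe* #'identity)  ; usually a no-op
  
  (defmacro my-mac (x)
    ; Hand the intermediate expansion to the hook before returning it.
    (funcall *macro-probe* `(progn ,x)))
  
  ; A test rebinds the hook to trace or capture the value:
  (let ((*macro-probe* (lambda (form) (print form) form)))
    (macroexpand-1 '(my-mac 42)))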

It's too bad, already the solution's starting to crystallize in my head as self-evident, and I'm forgetting what it was like to not know how to do this.

I was actually really afraid of that. XP Maybe a good reason will dawn on one of us after a while.

Part of the problem is that I've needed nested backticks so infrequently. Perhaps I should just create some contrived examples to exercise this muscle.

The clearest uses of quasiquotation in Arc are 'eval, 'defset, and 'mac. As long as you need to combine two or three of those things, look out. :-p

My trickiest quasiquotation experience was when I wanted (= global!x 2) to effectively do (eval `(= ,'x ,2)) somehow. I forgot that we could actually embed arbitrary values directly into Arc expressions, so I decided to use a temporary global variable and go for (eval `(= ,'x temp)). For encapsulation's sake, I made the global variable a gensym, which meant I had to (eval ...) the whole (defset ...) form. Three layers, and I'm not sure a macro would help.

Once rntz pointed out that I didn't need the temporary variable (http://arclanguage.org/item?id=11612), I changed it to use two layers. But maybe it's still okay as a contrived example. ^_^

My three-layer code, just before I simplified it, is here: https://github.com/rocketnia/lathe/blob/82df9b715f8928cc87cd...

I wonder if quasiquotation layers are something that intrinsically come up infrequently, or if it just has to do with the way 'eval, 'defset, and 'mac come up infrequently. In fact, the reason I defined (= global!x 2) was so that I didn't have to mess with a bunch of quasiquotes each time I wanted to jump up to global scope using 'eval, so my most extreme usage of quasiquotes was a tool to make quasiquotes less frequent.

That example's probably an exception though. I just realized that making a helper macro to remove a layer of quasiquotes is a bit futile because it has to wrap both the root of the quoted code and the missing leaves, essentially becoming another quasiquote. (It could be I'm just not being creative enough.) If the macro abstracts away two layers of quasiquotes, it might be worthwhile, but again, it'll either have all the same "tricks" as two-layered quasiquotes or be restrictive and less general-purpose.

-----

2 points by akkartik 5504 days ago | link

"I thought maybe you also meant things like the (list) with no arguments and the (progn ...) with one subexpression."

Ah, macro generation time vs generated-macro-expansion time :)

I do use debug prints, but they actually seem ineffective for debugging macros. I'm not sure why. Hence the commented-out variants.

"I just realized that making a helper macro to remove a layer of quasiquotes is a bit futile.."

Yeah, I'm not sure either. Perhaps it's not the number of quasiquotes but something else about them. Perhaps there's easy and hard doubled-backticks.

-----


It could technically use another helper: it's a macro that abstracts away an act of backquoting, but a function could abstract away that backquoting at some point (the macro generating a function call that uses its own backquote form). Since that's not actually "nested macro calls," maybe it doesn't count. ^^

Anyway, I think it's a perfectly legitimate use of nested backquotes.

You hear a lot about how nested backquotes and double commas are difficult.

Well, that link is possibly the first place I've heard it claimed in a general way (not counting individual people having trouble with them), and it was pretty surprising to me....

Do you think it's because of the generating-code-to-generate-code premise itself or just because of idiosyncrasies in the way layers of quoting interfere with each other in the forms of ,', and backslash-doubling?

-----

2 points by akkartik 5505 days ago | link

The couple of times I've tried it I find I get tangled up in how I want to interleave macroexpansion vs evaluation. I'm sure I don't know all the tools available to me (', / ,, / ,', / ,,@ / ...)

I should try to come up with a simple example. Perhaps the canonical one is Peter Norvig's once-only that that link refers to (which also says nested backquotes are hard). I believe it's come up multiple times here, e.g. http://www.arclanguage.org/item?id=9918

Hold on, lemme push my latest commit and break wart on github :) Now you can try to make sense of my struggles at https://github.com/akkartik/wart/blob/9bd437782d6c3e862ae388... if you're so moved. (There's a 'Desired' comment halfway down which shows what I'm trying to get, and the testcase is right at the bottom of the file.)

So far nested backquotes are the only thing I've found that remained hard after using unit tests.

-----

2 points by rocketnia 5505 days ago | link

I think your desired behavior is wrong. You'd like the inner body of a once-only definition to be like this:

  `(let* (($x o$x))
     (+ ,$x 1))
But the macro user would write that body as `(+ ,$x 1), with the backquote and everything, and you don't have that backquote in your hypothetical expansion. What you're probably looking for is:

  `(let* (($x o$x))
     ,`(+ ,$x 1))
Or, more accurately:

  `(let* (($x o$x))
     ,(progn `(+ ,$x 1)))  ; listing all the expressions in the body
Sorry, that's all the time I have to look at it right now. XD

Also, this isn't much of a response to the "generally hard" topic (except to prove this example takes more thought than I've given it so far :-p ).

-----

1 point by akkartik 5505 days ago | link

That is extremely helpful, thanks! I think it's awesome that you can read my ugly code so fast.

Update: Now when I look at it again I notice defmacro! also uses the ,(progn ,@body) trick. Thanks again.
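For reference, the canonical single-variable version goes something like this (adapted from the usual Common Lisp once-only; here let's implicit progn plays the role of the ,(progn ,@body) trick):

  (defmacro once-only ((name) &body body)
    (let ((g (gensym)))
      `(let ((,g (gensym)))
         `(let ((,,g ,,name))
            ,(let ((,name ,g))
               ,@body)))))
  
  (defmacro square (x)
    (once-only (x)
      `(* ,x ,x)))
so (square (incf n)) expands to something like (let ((#:g42 (incf n))) (* #:g42 #:g42)), evaluating the argument exactly once.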

-----


Oops!

  -(mac fexport (var . val)
  +(mac fexport (var val)
     `(fn-fexport ',var ,val))
That would be the bug you found, I think. ^^;

Thanks for the compliment. >.> Anyway, that pattern started when I saw http://awwx.ws/parsecomb0:

  (def on-result (f parser)
    (fn (p)
      (iflet (p2 r) (parser p)
        (return p2 (f r)))))
  
  (mac with-result (vars parser . body)
    `(on-result (fn (,vars) ,@body)
                ,parser))
Since then, I've noticed that defining one-use auxiliary functions like 'fn-fx and 'fn-fexport for macros is nifty in a few ways:

- Less need for w/uniq.

- More straightforward expansions and fewer gensyms when debugging.

- More one-liners, ironically at the expense of brevity. (The argument list is usually written three times this way: Once in the macro signature, once in the function signature, and once in the macro body.) At least it's in bite-size pieces. ^_^;

I've also noticed that 'after and 'protect have the same pattern, and that at least one other person here (don't remember who) uses it as extensively as I do. So it's probably as good as advertised. :-p
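In Common Lisp terms the pattern is just this (hypothetical names):

  ; All the real work lives in an ordinary, testable function...
  (defun fn-with-logging (tag thunk)
    (format t "entering ~a~%" tag)
    (unwind-protect (funcall thunk)
      (format t "leaving ~a~%" tag)))
  
  ; ...and the macro only wraps its body in a lambda.
  (defmacro with-logging (tag &body body)
    `(fn-with-logging ,tag (lambda () ,@body)))
No gensyms needed, and the expansion is a single readable call.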

-----

3 points by aw 5504 days ago | link

Yes, I find I often write a macro whose only purpose is to allow a "body" to be written without having to enclose it in a function. For this specialized purpose, it'd be nice to have a macro-defining macro to do that... though I haven't thought about how to do that myself yet.

-----


Wow, what a quote. XD Is this a more accurate version? I can't find any search results for yours, but there aren't many for this one either. :-p

  None of the programs in this monograph, needless to say, has been tested on a machine.

-----

3 points by akkartik 5505 days ago | link

I was running off memory :) I'll check with my copy when I get home tonight. (Discipline is one of ~ten books I still own. http://www.reddit.com/r/programming/comments/6f0fz/do_you_bu...)

---

Update 15 hours later: yep, your version is right. A couple more gems in that Preface:

"..it hurts me to see the semantics of the repetitive construct

  while B do S
defined as that of the call of the recursive procedure

  whiledo(B, S)
Do you think the BS was chosen accidentally? :)

I don't like to crack an egg with a sledgehammer, no matter how effective the sledgehammer..

Contrast the Alan Kay quote that says you should find the most complex construct you possibly need, and build everything in terms of it. (again my google fu is weak)

For the absence of a bibliography I offer neither explanation nor apology.

---

My predominant impression after reading Discipline was that EWD was getting high off his programming. He was trying really hard, but failing to get me high as well.

-----

2 points by rocketnia 5504 days ago | link

I like your book management strategy, BTW. :)

Contrast the Alan Kay quote that says you should find the most complex construct you possibly need, and build everything in terms of it. (again my google fu is weak)

If Alan Kay's the person to credit for OOP, I guess that doesn't surprise me. ^_^ A simple basis of complicated things is just fine. For instance, not all our lambdas actually need to be closures, but that doesn't stop us from reasoning about them as closures for the sake of reducing the number of concepts we're juggling.

The problem comes around when people forget how arbitrary any particular complicated basis is. XD I'm looking at you, "Arc needs OOP" threads.

So maybe the Dijkstra and Kay quotes are compatible, in a sense. Kay can be encouraging people to find appropriate foundations from which to implement concepts, and Dijkstra can be encouraging people to perceive the concept itself rather than taking its implementation for granted.

I guess I can't really say that without knowing more of the context of what Dijkstra and Kay believed. Still, a quote more opposite to Dijkstra's might be this one:

Beyond the primitive operators, which by definition can't be written in the language, the whole of the Arc spec will be written in Arc. As Abelson and Sussman say, programs must be written for people to read, and only incidentally for machines to execute. So if a language is any good, source code ought to be a better way to convey ideas than English prose.

- Paul Graham

I think it's similar to foundational math and philosophy. An implementation (model) of a system in terms of another system can admit implementation-specific proofs of things that are false in other implementations. Just because one model or formalization has been found doesn't make it uninteresting to continue considering the motivating informal system on its own merit.

-----

2 points by evanrmurphy 5504 days ago | link

Found that Alan Kay quote in The Early History Of Smalltalk [1]:

"take the hardest and most profound thing you need to do, make it great, an [sic] then build every easier thing out of it."

---

[1] http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltal...

-----

3 points by rocketnia 5504 days ago | link

Awesome. :)

My next questions was, why on earth call it a functional language? Why not just base everuything [sic] on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said "take the hardest and most profound thing you need to do, make it great, an then build every easier thing out of it". That was the promise of LiSP and the lure of lambda--needed was a better "hardest and most profound" thing. Objects should be it.

This so closely parallels the way I'm thinking about Penknife that I suddenly find it spooky that Smalltalk uses square brackets. XD

-----

4 points by rocketnia 5505 days ago | link | parent | on: Implicit gensyms

Have you considered just making 'g an anaphoric variable of 'mac that holds a memoized function that generates gensyms, so that g!x is (g 'x) as usual? ^_^

Then you can explicitly pass 'g to 'foodef. If that isn't enough, 'g could instead be a global function whose behavior changes during the dynamic scope of each macro expansion. Hmm, then you can't have more than one 'g at once, but when would you have that anyway? XD Something like Lathe's namespaces might use multiple instances of 'g, but those occasional cases can fend for themselves.
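The memoized function might be sketched like so (make-gensym-pool is my name for it):

  (defun make-gensym-pool ()
    (let ((pool (make-hash-table)))
      (lambda (name)
        ; Same name in, same fresh symbol out.
        (or (gethash name pool)
            (setf (gethash name pool) (gensym (string name)))))))
With one pool per expansion, (funcall g 'x) always returns the same gensym, so separate helpers handed the same g agree on their variables.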

-----

1 point by akkartik 5505 days ago | link

Ooh, interesting idea.

-----


I hope it's useful to you. The R-1RK is pretty well-reasoned in most regards, so it's some good inspiration. ^_^

...But I can't leave well enough alone. Here are a few key places I disagree with the philosophical choices:

"G3: Dangerous [things] should be impossible to program by accident" is pretty bogus, and even if I give it the benefit of the doubt, it's poorly phrased. IMO, things should be made purposefully difficult only when that makes the useful things easier overall. I think most uses of G3 in Kernel are actually efficiency concerns

"G2: The language primitives should be capable of implementing all the advanced features of R5RS" is a bit scary. I can't say I don't do the same sort of thing by making Penknife a similar experience to Arc, but I suspect heavy inspiration from a single source is usually more of a bias than a productive motivation. It'd be best to justify each "advanced feature" on its own merit and to be ready to take inspiration from lots of sources. But hey, if someone has enough time on their hands to do a point-by-point restoration of another good system, or if they don't have enough time to weigh the options, it's a reasonable pursuit. (Maybe. ^^; How much of that is me telling myself that? 'Cause somehow I feel I fall into both the "enough time" and "not enough time" cases, and it sounds a bit like I'm looking for excuses. :-p )

But most importantly, I think "G1a: [All] should be first-class objects" is better served by making every primitive utility's domain explicit and extensible. That is, reducing the number of official ways to contrast the first-class-ness of particular objects. To be fair, Kernel puts a strong emphasis on supporting cyclic data structures, and that's one big case which extensible utilities probably need to plan for in advance (so that extensions aren't limited to being naive recursive traversals). But on the other hand, Kernel's 'equal? doesn't allow a programmer-defined type (an encapsulation) to provide custom behavior, so it actually makes those types second-class. @_@

-----

1 point by evanrmurphy 5505 days ago | link

> "G3: Dangerous [things] should be impossible to program by accident" is pretty bogus, and even if I give it the benefit of the doubt, it's poorly phrased.

The paper actually says, "Dangerous things should be difficult to do by accident" [1]. I don't mean to nitpick, but in this context I think the difference between "difficult" and "impossible" is significant.

Update: I just realized that Shutt's dissertation and the R-1RK are distinct, but the quote is consistent across them.

---

[1] From page 88 of jshutt.pdf, downloadable at http://www.wpi.edu/Pubs/ETD/Available/etd-090110-124904/

-----

2 points by rocketnia 5505 days ago | link

That difference is significant too, but I'm specifically talking about the difference between trying to make something difficult and neglecting to make it easy. I pick a tool in the first place because it lets me do things more easily.

...Oh, I misquoted it. XD Yeah, thanks for catching that.

Another thing I missed was finishing the efficiency sentence. I was going to devote another paragraph to criticizing the efficiency-should-only-arise-naturally rule for being hypocritical--that an eye to efficiency influences and probably compromises other aspects of the design from the outset--but I determined I actually agree with that rule for Kernel. I think efficiency of fexprs is one of the most significant things Kernel stands to prove, so I don't mind an explicit and upfront goal of efficiency, but efficiency isn't in itself a goal motivating Kernel (I think), so it's good for the rule to be nuanced a bit. (Had I completed the sentence, it would have covered some part of that, but I'm not sure which. XD )

-----

1 point by rocketnia 5504 days ago | link

Oh, that dissertation is new(-ish)! Last time I looked into Kernel it was just the R-1RK. Gotta read that sometime. ^_^

-----


In Kernel, applicatives (procedures) have underlying operatives (fexprs), and 'wrap is the axiomatic way to make an applicative, kinda like Arc's use of 'annotate to make macros. I don't know whether I'm particularly attached to 'wrap; I kind of included it on a whim in order to support a nifty definition of 'fn.

As far as the bug goes, I still haven't tested the code, but here are a few fixes I just pushed, which may or may not be relevant:

   (def feval (expr (o env fenv*))
     (case type.expr
       cons  (fapply (feval car.expr env) cdr.expr env)
  -    sym   do.env.expr
  +    sym   (fenv-get env expr)
             expr))
  
   ; The 'applicative type is totally Kernel's idea.
   (def fapply (op args (o env fenv*))
     (case type.op
       fexpr        (rep.op env args)
  -    applicative  ((rep rep.op) env (map [feval _ env] args))
  +    applicative  (fapply unwrap.op (map [feval _ env] args) env)
                    (apply op (map [feval _ env] args))))
  
  ...
  
  +
  +; We use a singleton list so that an applicative can wrap an
  +; applicative. Using 'annotate does nothing to a value that's already
  +; of the given type.
  +
   (fdef wrap (fexpr)
  -  (annotate 'applicative fexpr))
  +  (annotate 'applicative list.fexpr))
  
   (fdef unwrap (applicative)
  -  rep.applicative)
  +  rep.applicative.0)
  +

-----

2 points by rocketnia 5507 days ago | link | parent | on: Implicit gensyms

I have here and there, but renaming something is easy enough in a small codebase.

Still, when I'm writing code I don't know how I'll use in practice, I try to plug up abstraction leaks whenever I can, in the "why write programs with small bugs when you can write programs with no bugs" spirit.

I've been discovering that my purer utilities are useful in unexpected places, where what once was future-proofing is now essential to their newfound role. I don't think this is a particular advantage of hygienic/minimalistic stuff over more off-the-cuff or bundle-of-features systems, but I do enjoy being able to justify the "shape" of a utility completely in terms of its original motive.

...Whoops, you weren't asking about hygiene in general. XD Pretend I'm starting this post from scratch! *whoosh, changes costume*

I'm not sure what being a lisp-1 has to do with it, really. Well, actually...

I suppose it's a little less likely for someone to need a local function than a local anything-else, but when they do need that, the local functions can still be captured by conventional gensym-savvy macros, right? So a lisp-2 can develop a culture of "don't use macros inside an 'flet unless you know what they expand to," whereas a lisp-1's corresponding "don't use macros except at global scope unless you know what they expand to" isn't quite as tractable.

When it comes to this kind of variable capture, I haven't encountered the problem or bothered writing code in a way that avoids it, even though I put up some possible techniques in a recent Semi-Arc thread. (This is exactly the issue Semi-Arc's hygiene should eliminate.) At this point, I almost consider my code to be better future-proofed if it's Penknife code that uses hygiene, and I'm just crossing my fingers that the Arc code will never capture in practice.

-----
