Proposal: reified environments of closures
4 points by rincewind 5872 days ago | 10 comments
I just watched the "Clojure for Lisp Programmers" talk and was impressed by the conj function and the generic sequence interface to lists, maps and vectors.

In Arc, the function map accepts both hashtables and lists, but those cases are hardcoded. If we overload car and cdr, we can treat other things (like lazy lists) as lists:

  (map my-function my-lazy-list)
but then the whole of my-lazy-list would be forced (fully evaluated). How can we ensure that map not only accepts lazy lists, but also returns lazy lists?
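
For concreteness, suppose a lazy list is a tagged pair whose tail is a thunk. The overloaded accessors might then look like this (a sketch with hypothetical names, not existing Arc code):

  ; sketch: a lazy list is an object of type 'lazy wrapping a pair
  ; whose cdr is a thunk; forcing the tail means calling that thunk
  (def lazy-car (xs) (car (rep xs)))
  (def lazy-cdr (xs) ((cdr (rep xs))))

  ; an infinite list of ones, built by hand:
  (def ones () (annotate 'lazy (cons 1 (fn () (ones)))))
With car and cdr overloaded this way, mapping the eager map over (ones) would never terminate, which is exactly the forcing problem.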

With dynamic binding we could change the behaviour of map so that it returns a lazy list:

   (dyn-let cons lazy-cons
      (map my-function my-lazy-list))
but dynamic variables are considered harmful and hard to reason about.
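
For illustration, dyn-let could be approximated by temporarily clobbering the global binding (a hedged sketch; the global side effect below is exactly the kind of spooky action at a distance that makes dynamic binding dangerous):

  ; approximate dyn-let: save the global value, overwrite it,
  ; and restore it when the body finishes, even on error;
  ; any concurrent caller sees the modified binding meanwhile
  (mac dyn-let (var val . body)
    (w/uniq old
      `(let ,old ,var
         (= ,var ,val)
         (after (do ,@body)
                (= ,var ,old)))))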

To avoid dynamic binding at all costs, I propose a reification of closures, the with-closure function. It takes a function and some bindings and returns the function with the added bindings.

  ((with-closure (let x nil (fn () x)) 'x t))
  => t

  (with-closure map 'cons lazy-cons)
Because map is a recursive function, the modified version should call the modified version, and not the global map, so it gets a bit more complicated:

  (let lazy-map map
    (= lazy-map (with-closure lazy-map 'cons lazy-cons)
       lazy-map (with-closure lazy-map 'map lazy-map)))
This could be wrapped in a macro:

  (mac rwith-closure (fun . mappings)
     (w/uniq newfun
        `(let ,newfun (with-closure ,fun ,@mappings)
            (= ,newfun (with-closure ,newfun ',fun ,newfun)))))
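Usage would then be (hypothetical, since with-closure itself is only proposed):

  (= lazy-map (rwith-closure map 'cons lazy-cons))
  ; which expands roughly to:
  ;   (let gs1 (with-closure map 'cons lazy-cons)
  ;     (= gs1 (with-closure gs1 'map gs1)))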
This would allow us to write a lazy-list modifier that makes a function operate on lazy lists, or more generally to retrofit abstractions onto previously defined functions.


1 point by almkglor 5871 days ago | link

I'm forking Arc with (among other things) a scanner concept.

Briefly, a "scanner" is any object that has methods defined for 'car and 'cdr (among other things, my Arc fork has/will have CLOS-style generic functions, but won't have inheritance; I'm trying to figure out whether I need inheritance at all in a duck-typed language, especially if I go the generics route for code reuse), and defines identity-function bindings for 'scanner and 'unscan.

'scanner usually throws an error (so that you can't map numbers, for example), but you can overload this for your type so that you return a valid scanner.

'unscan converts a scanner to the desired type.

Additionally, a user-defined type may overload two base functions, '<base>collect-on and '<base>each. Briefly, these functions construct a type iteratively ('<base>collect-on) and iterate over the type for as long as necessary ('<base>each - unlike 'each, this has an early out).

A type doesn't have to define '<base>collect-on and '<base>each, since default versions work in terms of 'unscan (for '<base>collect-on) and in terms of 'scanner, 'car, and 'cdr (for '<base>each); but if your type doesn't look like a chain of 'cons cells, it will probably be more efficient to overload them for your type.

'each is defined in terms of '<base>each, and 'map (or more specifically, 'map1) is defined in terms of <base>collect-on and <base>each.
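
To sketch the fallback path (illustrative only; the names and signatures here are guesses at the shape of the idea, not Arc-F's actual code):

  ; stand-in for the generic 'scanner: conses pass through,
  ; nil is an empty sequence, anything else is an error
  (def scanner (x)
    (if (or (acons x) (no x))
        x
        (err "not a scannable object:" x)))

  ; default '<base>each in terms of 'scanner, 'car and 'cdr;
  ; stops early as soon as f returns nil
  (def base-each (seq f)
    ((afn (s)
       (if (and s (f (car s)))
           (self (cdr s))))
     (scanner seq)))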

So yes: pretty much I'm attacking the same problem from a different angle, i.e. type systems and generic programming.

----- comments on your code:

  (let lazy-map map
    (= lazy-map (with-closure lazy-map 'cons lazy-cons)
       lazy-map (with-closure lazy-map 'map lazy-map)))
This is bugged ~.o . Wanna make a guess why? Hint: (= x (cons x 1) x (cons x 1)) is....

-----

2 points by rincewind 5871 days ago | link

Your generic programming solution is like the one in Clojure. Clojure has a "conj" (pun on cons) function that adjoins a value to a collection, depending on the collection's type. When you conj onto a vector you get a vector, and when you conj onto a lazy list you get a lazy list.

My code may seem a bit buggy, but I thought of this example as a spec, so it cannot be wrong :)

-----

3 points by almkglor 5871 days ago | link

> My code may seem a bit buggy, but I thought of this example as a spec, so it cannot be wrong :)

1. If with-closure is a nondestructive operation (returns a copy of the original function plus modifications), then what gets returned is this:

  (= lazy-map1 map)
  (= lazy-map2 (with-closure lazy-map1 'cons lazy-cons))
  (= lazy-map3 (with-closure lazy-map2 'map lazy-map2))
  (= lazy-map lazy-map3)
2. If 'with-closure is a destructive operation (i.e. modifies the given function) then you've just clobbered 'map.

The proper solution would be:

  (= lazy-map nil)
  (= lazy-map-delay
     (fn rest
       (apply lazy-map rest)))
  (= lazy-map (with-closure (with-closure map 'cons lazy-cons) 'map lazy-map-delay))
This works because lazy-map-delay looks up lazy-map afresh on every call, so recursive calls always reach the final, fully rewired function. In addition, 'lazy-cons will probably end up being a macro, because it fools around with evaluation order; if you can arbitrarily replace a function with a macro (i.e. "true first-class macros"), that will require some serious metacompiling at runtime.
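
For instance, a thunk-based lazy-cons would have to be a macro (a sketch; the tagged-pair representation is an assumption):

  ; must be a macro: a function would receive the tail b already
  ; evaluated, so nothing would be delayed
  (mac lazy-cons (a b)
    `(annotate 'lazy (cons ,a (fn () ,b))))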

-----

1 point by rincewind 5870 days ago | link

You are right.

3. With-closure is a special form with call-by-name semantics for its third argument.

   (let lexical-var value
      (with-closure function 'name lexical-var)
      (= lexical-var anothervalue))
should only work with lexical-var as a lexical variable. When lexical-var is changed, 'name changes too, as in normal closures. This could be implemented by sharing structure of the closure-alist in ac.scm.

-----

1 point by almkglor 5870 days ago | link

> This could be implemeted by sharing structure of the closure-alist in ac.scm.

Um. There's no such thing.

The arc-on-mzscheme implementation uses mzscheme functions directly as Arc functions. What I mean is, something like this:

  (fn (x) x)
is exactly the same as mzscheme's:

  (lambda (x) x)
This is still not trivially implementable in mzscheme except by some serious mangling (i.e. basically each Arc function becomes an object with a "lock" function which assigns its free variables to specific values; by default, when we execute the function, we lock the free variables to the global variables).
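
A toy version of that "lock" idea (a hedged Arc-level sketch of the concept, not implementation code):

  ; the closure's free variable x lives in a mutable table;
  ; the second returned function is the "lock" that rebinds it
  (def reified-thunk ()
    (let env (obj x nil)
      (list (fn () env!x)
            (fn (k v) (= (env k) v)))))

  (let (f lock) (reified-thunk)
    (lock 'x t)
    (f))
  ; => t, mirroring the with-closure example above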

And true first-class macros will still remain very difficult.

You might be interested in my discussions with shader about how true first-class macros make it very hard to write efficient interpreters, much less even lousy compilers: http://arclanguage.com/item?id=8015

It's possible, but very difficult, and in general most implementers will not find it worth their while to implement first-class macros.

-----

1 point by applepie 5865 days ago | link

Should we really be so worried about efficiency?

Of course you are free to design and implement any language you want, and I know you've probably been the main contributor.

But I think the original spirit of Arc was to design a simpler language even if it couldn't be implemented as efficiently:

(Quoting pg, The Hundred-Year Language):

There's good waste, and bad waste. I'm interested in good waste-- the kind where, by spending more, we can get simpler designs. How will we take advantage of the opportunities to waste cycles that we'll get from new, faster hardware?

-----

2 points by almkglor 5865 days ago | link

Of course not. Conceptually, Arc-F is just as clean as Arc. You can implement CLOS-style monomethods in Arc-F from the Arc-F side. You can implement the '+, '-, '* and '/ operators on the Arc-F side (you just need to provide the basic '<base>+ etc. operators on the underlying base-system side). And I'm willing to bet most of you would never have imagined that not all functions in Arc-F are functions (some of them are special objects that are interpreted by Scheme) if I hadn't told you.

For example:

  <User>tl: +
  #<procedure>
  <User>tl: (list +)
  (#4(l-reductor #<procedure> #<procedure> #<procedure>))
Arc-F divides itself into the "IDEAL" and the "REAL". The "IDEAL" is how the language would be implemented in terms of the most basic axiomatic operators. The "REAL" is how it has to be implemented to keep it from being 10x slower than Anarki. (Yes, I actually did the IDEAL part first, and only later ported it to the REAL part, where bits of it are in Scheme. We're talking 12-second load times for arc.arc, and arc.arc alone.)

Even PG himself abandoned the purely axiomatic approach when he found that performance was very, very bad. My aim is to return to a purely axiomatic approach in the IDEAL, and to provide appropriate hooks for the REAL part to optimize everything. As it is, Arc-F is about 2x-3x slower than Anarki. For me, that's acceptable. And so far, most of the speed loss is from function calls.

-----

1 point by almkglor 5865 days ago | link

As a concrete example: in Arc-F, it's possible to overload '+ by simply overloading <base>+. If you have a new type foo and want to define how to add two foos together, you just overload <base>+. The neat thing is that this won't slow down arithmetic very much, if at all; if you were to overload '+ itself in Anarki for the equivalent effect (Anarki doesn't have the <base>+ hook, after all), arithmetic performance would suddenly drop precipitously.

Also, overloading <base>+ is simpler: <base>+ only has to consider the case of adding two objects. '+ determines how to handle calls like (+ foo1 foo2 foo3 foo4); specifically, it converts the call to (<base>+ (<base>+ (<base>+ foo1 foo2) foo3) foo4).
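
The reduction step is just a left fold (an illustrative sketch; base+ below stands in for Arc-F's <base>+ and is not its real name):

  (= base+ +)   ; stand-in so the sketch runs; the real <base>+ is generic

  ; variadic + as a left fold over the two-argument base+
  (def my+ args
    (if (no args)       0
        (no (cdr args)) (car args)
        (reduce base+ args)))

  ; (my+ 1 2 3 4) computes (base+ (base+ (base+ 1 2) 3) 4) => 10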

So yes: my position is, by focusing on efficiency in implementation, it frees us to actually use the axiomatic approach without throwing in the towel and going non-axiomatic.

-----

2 points by stefano 5863 days ago | link

I like the REAL/IDEAL distinction. Always following the IDEAL path to build the "hundred-year language" without regard for performance easily leads to something unacceptably slow. Your experience with 'load clearly shows that.

-----

1 point by rincewind 5870 days ago | link

I see it now: the env parameter in ac is the lexical environment at function definition time, known only to the compiler, so there is no runtime closure-alist to share.

-----