Arc Forum | rocketnia's comments

I think it's that way so that 'complement can be used with macros. I don't think I ever use it that way, though. Either I use (~foo ...), where the metafn meaning takes control, or I use ~foo by itself with foo being a function. I don't give it much thought, 'cause there are good alternatives to ~, like the ones you're talking about.
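For instance (an untested transcript, using arc.arc's 'odd, which is a plain function):

  arc> (map ~odd '(1 2 3))  ; ~odd stands alone, so 'complement really runs
  (nil t nil)
  arc> (~odd 3)             ; functional position: the metafn meaning kicks in
  nil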

-----

1 point by waterhouse 5465 days ago | link

I don't think this can be used with a macro. It applies f:

  (mac complement (f)
    (let g (uniq)
      `(fn ,g
         (no (apply (unquote f) (unquote g))))))
... Yet it actually does work.

  arc> (mac no-mac (x) `(no ,x))
  #(tagged mac #<procedure: no-mac>)
  arc> ((complement no-mac) 2)
  t
It's clearly using some kind of black magic with the Arc compiler, because macex-ing it makes it fail.

  arc> (macex1 '(complement no-mac))
  (fn gs2704 (no (apply no-mac gs2704)))
  arc> ((fn gs2704 (no (apply no-mac gs2704))) 2)
  Error: "Function call on inappropriate object #(tagged mac #<procedure: no-mac>) (2)"
...Ohai, special cases, printed right there at the top of ac.scm.

        ; the next three clauses could be removed without changing semantics
        ; ... except that they work for macros (so prob should do this for
        ; every elt of s, not just the car)
        ((eq? (xcar (xcar s)) 'compose) (ac (decompose (cdar s) (cdr s)) env))
        ((eq? (xcar (xcar s)) 'complement) 
         (ac (list 'no (cons (cadar s) (cdr s))) env))
        ((eq? (xcar (xcar s)) 'andf) (ac-andf s env))
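In other words (my gloss, not a quote from ac.scm), a call whose head is one of these forms gets rewritten before compilation:

  ; ((compose a b c) x)   compiles as if it were  (a (b (c x)))
  ; ((complement f) x y)  compiles as if it were  (no (f x y))
Note that the complement clause only reads (cadar s), so any extra arguments to 'complement itself get silently dropped.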
Well. That explains it, at any rate...

  arc> ((complement no-mac 2 3 4) (is 2 3))
  nil
Heh.

I do like the side effect that (a:b:c x) is the same as (a (b (c x))) even if a, b, and c are macros. That's something I'd hate to give up. It seems this could be implemented in any of several ways:

1. [current] A call to 'compose in functional position is special-cased by the compiler. This seems like a bit of a hack, although it's served pretty well and isn't likely to fail in the near future.

2. The thing that expands ssyntax will expand (a:b c ...) to (a (b c ...)). This would be hard and would suck, because then ssyntax would no longer be just intra-symbol. (An "ssexpand" function that just takes a symbol argument would be insufficient; see the sketch after this list.)

2a. If ssexpansion is done by the reader (which I do believe it should be), like how [f _] becomes (fn (_) (f _)), then this might be tolerable. Being a reader change, this will and should apply everywhere, even inside quotes; for example, "(a '(b:c d.e))" would get read as (a (quote (b (c (d e))))). I think this might be something to aim for.

3. We make macros first class, so they can be applied, and probably make the compiler capable of optimizing such cases. I think this, combined with (2a) as a cheap way to optimize, would be good.
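To make (2) concrete, here's a rough, untested sketch of that pass, written over already-read forms rather than at the reader level, using the stock 'ssyntax and 'ssexpand (the names 'compose-call and 'expand-heads are made up):

  (def compose-call (fns args)
    ; (compose-call '(a b c) '(x)) => (a (b (c x)))
    (if (single fns)
        (cons car.fns args)
        (list car.fns (compose-call cdr.fns args))))
  
  (def expand-heads (expr)
    (if (atom expr)
        expr
        (let (head . args) (map expand-heads expr)
          (let ex (and (isa head 'sym) (ssyntax head) (ssexpand head))
            (if (caris ex 'compose)
                (compose-call cdr.ex args)
                (cons head args))))))
So (expand-heads '(a:b c d)) would give (a (b c d)) no matter what a and b turn out to be. Quote forms and dotted lists would need special care, and a real (2a) version would do the same thing while reading text.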

-----

3 points by rocketnia 5464 days ago | link

"Yet it actually does work."

That's the metafn meaning I was talking about. >.>

---

"It seems this could be implemented in any of several ways"

My preferred way, which you didn't mention, is to have macros return intermediate things that can either be unwrapped into expressions or applied as syntax operators themselves. That way "compose" can be a syntax operator and "(compose a b c)" can also be a syntax operator. I describe this idea in context with several of my other ideas in http://arclanguage.org/item?id=13071, "A rough description of Penknife's textual syntax."

In the past couple of days, I've also been toying with the idea of having : be a read macro. It can start a list and stop once it reaches an ending bracket, without consuming that bracket. This would be similar in effect to http://arclanguage.org/item?id=13450.

  (accum acc: each x args: acc:* 2 x)
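If I have the proposal right, that line would read as

  (accum acc (each x args (acc (* 2 x))))
with the acc:* being ordinary compose ssyntax, since only a : followed by whitespace would trigger the read macro.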
One thing it gives up is the ability to say (map a:b c); you have to say (map [a:b _] c) instead. Also, it doesn't give you a:b.c, and it's not clear to me how to indent it. :-p

I don't see a good read macro equivalent for a.b; a read macro has to come before the things it reads, like "!a b", but chaining with "!!!!a b c d e" wouldn't be nearly as nice. Any reader-level solution for a.b ssyntax would probably need to ditch the Racket reader (or at least ditch it within certain delimiters), and it would probably use something like Penknife's or ar's approach, biting off a block of text somehow--using brackets, line breaks, whitespace, or whatnot--then parsing it all at once. ...Technically we might be able to replace Racket's list syntax with a syntax that keeps a public stack of subexpressions, and then have . pop off of that stack, but that strategy doesn't help at the top level.

Oh, PS: Semi-Arc is another Arc-like which implements ssyntax in the parser. http://arclanguage.org/item?id=12999

-----

1 point by akkartik 5465 days ago | link

Yeah, in wart, call started out as just funcall, then turned into a macro so it could handle macros. Hmm, would I need a transform to get that if I somehow make wart a lisp-1?

Also, it always bugged me that call needed to be aware of compose. But now that I have open def it's cleaner.

https://github.com/akkartik/wart/blob/10c173c39508fc20ed42da...

-----

3 points by rocketnia 5465 days ago | link | parent | on: Textile for Arc 0.1

Here's a clickable link: https://gist.github.com/815476 ^_^ They're not clickable in submission bodies.

The fact that you're not escaping HTML characters troubles me. I'd kinda prefer not to risk encountering malevolent JavaScript on a forum, even on a forum where most regulars wouldn't abuse that power.

You're using this code for replacing spans like _em_ and @code@:

  (def txt-span (text st et tag) ; st = start textile; et = end textile -- todo: support span attributes
    (re-replace (string (txt-re-quote st) "(\\S.*?\\S?)" (txt-re-quote et)) text (string "<" tag ">" #\\ 1 "</" tag ">")))
It strikes me that this wouldn't handle nesting well, which may be fine, 'cause these spans are things people almost never need to nest--some exceptions being when attributes are involved or when nesting multiple layers of <sup> and <sub>. To make it a bit less sloppy, I recommend manually incrementing an index through the text, using 'begins to identify start tags and maintaining a stack if necessary, even if it sounds horribly ugly to do it that way. :-p This should also give you a good place to insert attribute-parsing code.
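For instance--untested, handling just one symmetric delimiter, and leaving out the stack and the attributes--the shape I have in mind is something like:

  (def txt-span-scan (text delim tag)
    (with (i 0 n (len text) open nil)
      (tostring
        (while (< i n)
          (if (begins text delim i)
                (do (pr (if open "</" "<") tag ">")
                    (zap no open)
                    (++ i len.delim))
              (do (writec text.i)
                  (++ i)))))))
So (txt-span-scan "an _em_ word" "_" "em") would give "an <em>em</em> word", and the branch where 'begins matches is where the attribute-parsing code could go. An unmatched delimiter leaves a dangling tag behind, which a real version would need to handle.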

Speaking of spans, I'm not sure why you bother making all the txt-@ global variables. I think all you do is use 'eval to define them and another 'eval to get them back, and you don't even need the second 'eval because you have all the information you need.

   (def txt-spans (text)
     (each (span tag) txt-std-spans*
  -    (= text (eval `(,(sym (string "txt-" span)) ,text)))) text)
  +    (zap txt-span text span span tag))
  +  text)
Also speaking of spans, I'm troubled by the fact that there's a -del- span at all. I hope this syntax doesn't apply to every instance of a hyphen before a whitespace character, 'cause that'll mess up a lot of variable names and links.

I realize you're in the earlier stages of getting to know Arc, and I don't mean to discourage you or anything. I only mean to contribute. ^_^

---

"Is there a better way to do `str-split` than the one I use here?"

The suggestion that comes to my mind is this, but it's kinda laughable....

  (def str-split (blob delim)
    (unless blob (err "Please don't apply 'str-split to nil."))
    (aand blob
          (subst "%1" "%" it)
          (subst "%2" "#" it)
          (subst "#" delim it)
          (map [aand _
                     (subst "#" "%2" it)
                     (subst "%" "%1" it)]
               (tokens it #\#))))
It's basically the same as your current strategy, but instead of letting any one character like #\u0001 slip through the cracks, I do tons more string construction.

The least hackish way is probably to manually iterate using an index.
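Something like this, say--just as untested, assuming a nonempty string delimiter, and keeping empty fields where 'tokens would drop them ('str-split-at is just a name to avoid clobbering yours):

  (def str-split-at (text delim)
    (accum acc
      (with (i 0 start 0 n (len text) dn (len delim))
        (while (<= i (- n dn))
          (if (begins text delim i)
              (do (acc:cut text start i)
                  (++ i dn)
                  (= start i))
              (++ i)))
        (acc:cut text start))))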

---

"Is it possible to use a callback for a regular expression replacement, like PHP's preg_replace_callback?"

Yep, I think so. Anarki's 're-replace depends on Racket's 'regexp-replace* :

  (def re-replace (pat text replacement)
    ($.regexp-replace* ($.pregexp pat) text replacement))
Racket, in turn, has extensive documentation for 'regexp-replace* and the replace-only-the-first-thing variant, 'regexp-replace: (http://docs.racket-lang.org/reference/regexp.html#(def._((qu...)

Apparently you can just keep using 're-replace but pass a function as the "replacement" argument. It appears to work very much like preg_replace_callback(), except that each captured string is passed as a separate argument rather than in an array:

  arc> (load "lib/re.arc")
  nil
  arc> (re-replace "(.).(.)" "abcdefg" (fn (whole x y) (+ y x whole " ")))
  "caabc fddef g"

-----

1 point by rocketnia 5465 days ago | link

"I hope this syntax doesn't apply to every instance of a hyphen before a whitespace character"

Oops, I meant to say "a hyphen before a non-whitespace character."

Speaking of which, lots of links have +pluses+ in them as URL-encoded spaces, so that's a troublesome syntax too. What does Textile do in these cases?

-----

4 points by dpkendal 5465 days ago | link

> The fact that you're not escaping HTML characters troubles me. I'd kinda prefer not to risk encountering malevolent JavaScript on a forum, even on a forum where most regulars wouldn't abuse that power.

The original classTextile.php (http://code.google.com/p/textpattern/source/browse/developme... -- warning: big) provides a 'restricted' mode, designed for forum comments etc., in which all input <, > and & characters are escaped, which is what I intend to do here as I continue to work on it.

> It strikes me that this wouldn't handle nesting well, which may be fine, 'cause these spans are things people almost never need to nest--some exceptions being when attributes are involved or when nesting multiple layers of <sup> and <sub>. To make it a bit less sloppy, I recommend manually incrementing an index through the text, using 'begins to identify start tags and maintaining a stack if necessary, even if it sounds horribly ugly to do it that way. :-p This should also give you a good place to insert attribute-parsing code.

While you're right about the attribute-parsing, I'm not too concerned about nesting issues because the current reference implementation (there's a dingus at http://textile.sitemonks.com/) doesn't handle that either. It uses the regexp method too (see classTextile.php), with a callback for parsing attributes.

> Speaking of spans, I'm not sure why you bother making all the txt-@ global variables. I think all you do is use 'eval to define them and another 'eval to get them back, and you don't even need the second 'eval because you have all the information you need.

Thanks. I've now corrected my copy and I'll use your method in the next version.

> Also speaking of spans, I'm troubled by the fact that there's a -del- span at all. I hope this syntax doesn't apply to every instance of a hyphen before a whitespace character, 'cause that'll mess up a lot of variable names and links.

> Speaking of which, lots of links have +pluses+ in them as URL-encoded spaces, so that's a troublesome syntax too. What does Textile do in these cases?

Good point, that's a special case I missed. The function is now:

    (def txt-span (text st et tag) ; st = start textile; et = end textile -- todo: support span attributes
      (re-replace (string "(?<=\\W|^)" (txt-re-quote st) "(\\S.*?\\S?)" (txt-re-quote et) "(?=\\W|$)") text (string "<" tag ">" #\\ 1 "</" tag ">")))
(notice the lookarounds at each end) which should prevent such issues.

> I realize you're in the earlier stages of getting to know Arc, and I don't mean to discourage you or anything. I only mean to contribute. ^_^

Yes! Thank you for your contribution -- it's for advice like this that I released so early.

> The suggestion that comes to my mind is this, but it's kinda laughable....

Yeah, I'll stick with my `str-split` for now. It should be a library function anyhow, so I'd rather keep my version short and simple until there's a version in `strings.arc` or `arc.arc`.

> Apparently you can just keep using 're-replace but pass a function as the "replacement" argument. It appears to work very much like preg_replace_callback(), except that each captured string is passed as a separate argument rather than in an array

Aha, I don't know why I didn't think to try that. I just mentally assumed I would need another function to do it.

Thanks for all your help! I really appreciate it.

(There's now a textile repo at https://github.com/dpkendal/textile, into which I'll slowly be putting improvements.)

-----


Yeah, I like being able to reference a function before it's defined. (Macros annoy me a little for not allowing that.) For me it's a matter of the concept of "definition" being an ambient thing, where something counts as being defined if it's defined anywhere, even later on. It's like how, in a mathematical proof or prose argument, a broad claim may be reduced into a bunch of littler inferences, some handled one-by-one systematically and some left to the reader to fill in. I've read (or tried to read) a bunch of mathematical papers or books that start out building lemma after lemma and climax in a theorem, and those might even be in the majority, but sometimes I have to approach them backwards, and then I have to backtrack to figure out what their terminology means, and it's pretty frustrating.

In education, lots of the time new topics are built upon the foundations the old topics provided, but sometimes they're built upon established motivations and provide all-new foundations, like an analysis course justifying the calculus courses that came before it, or a mechanics course casting Newton's laws in a new light.

For me, the motivation comes pretty early on relative to the implementation. I could decide to put the main control flow algorithm at the top to set the stage for the rest, or I could decide to arrange things according to the order they'll be applied--or in fact I might like having them in the reverse order, the order in which they're needed to get the result from more and more convenient starting positions. That last strategy is probably closest to dependencies-come-first coding, but I don't want to be limited to it, even if I risk choosing a frustratingly haphazard strategy.

-----

1 point by evanrmurphy 5466 days ago | link

Nice summary of the different approaches. ^_^

One reason to favor the dependencies-first approach in Arc is that we have mutability.

If you're only doing single assignment and side effect-free programming, then your code doesn't have a significant order [1]. But insofar as your program is imperative and performing mutations, the order is significant.

A consequence of this is that if you want to be able to take advantage of imperative features, you're making it harder by ordering your code any other way. I say this because even if your code is purely functional right now, when you try to insert some imperative code later, the order is going to start mattering more. And it's going to start seeming tangled and confused if it doesn't build up in the order of execution (at least it does for me).

So dependencies-first programming plays especially well with imperative code. I'm also particularly interested in it at this moment because I'm working on a refined auto-quote mechanism that could be hard to take advantage of if you're not programming this way. ;)

---

[1] Except for the macros wart you alluded to.

-----

1 point by akkartik 5466 days ago | link

Yeah, I agree: I like to see the 'business end' of code up front. aw's article made some good points I'm still mulling over[1], but upgrading things seems like such a rare event compared to the day-to-day use of code. Especially if I manage to keep up my resolution[2] to never rely on any libraries :)

---

[1] http://github.com/awwx/ar now keeps tests in a separate file. Does that weaken the case for defining things before using them? Perhaps you could define your tests bottom-up but write your code top-down, or something.

I still want to try out a test harness that analyzes dependencies and runs tests bottom-up: http://arclanguage.org/item?id=12721. That way you could write your tests in any order and they'd execute in the most convenient order, with test failures at low levels not triggering noisy failures from higher-level code.

[2] http://arclanguage.org/item?id=13219

-----

3 points by aw 5466 days ago | link

"http://github.com/awwx/ar now keeps tests in a separate file"

Not by design, as it happens. I wrote some new tests for code written in Arc, and stuck them into a separate file because I hadn't gotten around to implementing a mechanism to load Arc code without running the tests.

Though I do view writing dependencies-first as a form of scaffolding. You may need or want scaffolding for safety, or because you're working on a large project, or because you're in the midst of rebuilding.

Does that mean that you always need to use scaffolding when you work on a project? Of course not. If you're getting along fine without scaffolding, then you don't need to worry about it.

Nor, just because you might need scaffolding in the future, does it mean that you have to build it right now. For example, if I had some code that I wanted to rebase to work on top of a different library, and it wasn't in dependency order, and it looked like the rebasing work might be hard, I'd probably put my code into dependency order first to make the task easier. But, if I thought the rebasing was going to be easy, I might not bother. If I ran into trouble, then perhaps I'd backtrack, build my scaffolding, and try again.

-----

1 point by rocketnia 5466 days ago | link

"Especially if I manage to keep up my resolution to never rely on any libraries :)"

I have effectively the same resolution, but only 'cause of Not Invented Here syndrome. :-p Nah, I use plenty of libraries; they just happen to be the "libraries" that implement Arc. I use all kinds of those. :-p

---

"http://github.com/awwx/ar now keeps tests in a separate file. Does that weaken the case for defining things before using them?"

That file is loaded after the things it depends on, right?

---

"...you could write your tests in any order and they'd execute in the most convenient order, with test failures at low levels not triggering noisy failures from higher-level code."

I'm not sure I understand. Do you mean if I define 'foo and then call 'foo in the process of defining 'bar (perhaps because 'foo is a macro), then the error message I get there will be less comprehensible than if I had run a test on 'foo before trying to define 'bar?

---

In any case, aw's post mostly struck me as a summary of something I'd already figured out but hadn't put into words: If a single program has lots of dependencies to manage, it helps to let the more independent parts of the program bubble together toward the top, and--aw didn't say this--things which bubble to the top are good candidates for skimming off into independent libraries. If you're quick enough to skim them off, the bubbling-to-the-top can happen mentally.

Lathe has been built up this way from the beginning, basically. It's just that the modules are automatically managed, and it acts as a dependency tree with more than one leaf at the "top," rather than something like yrc or Wart with a number on every file.

I'm interested in making a proper unit test system for Lathe, so we may be looking for the same kinds of unit test dependency management, but I'm not sure yet about many things, like whether I want the tests to be inline or not.

Well, Lathe has an examples/ directory, which I've ended up using for unit tests. It's kind of interesting. Lathe's unit tests have become just like its modules over time, except that they print things to tell you about their status. Being a module, an example automatically loads all its dependencies, and you can load it up and play around with the things defined in it at the REPL, which is occasionally useful for debugging the example itself. But it's pretty ad-hoc right now, and I don't, for instance, write applications so that they load examples as they start up, like you might do.

-----

3 points by akkartik 5466 days ago | link

"Do you mean if I define 'foo and then call 'foo in the process of defining 'bar (perhaps because 'foo is a macro), then the error message I get there will be less comprehensible than if I had run a test on 'foo before trying to define 'bar?"

If bar depends on foo (foo can be function or macro), and some tests for foo fail, then it's mostly pointless to run the tests for bar.

---

"That file is loaded _after_ the things it depends on, right?"

Yeah well, you gotta load code before you can run the tests for it :)

My understanding of aw's point was this: if you load your code bottom-up, then you can test things incrementally as you define them, and isolate breakage faster. Defining the tests after their code is irrelevant to the argument because it's hard to imagine an alternative.

If you put your tests in a separate file and run them after all the code has been loaded, you can still order them bottom-up. So to answer my own question, no, keeping the tests in a separate file doesn't weaken aw's argument :)

-----

3 points by aw 5466 days ago | link

There is a small difference: if you've loaded only the code up to the point of the definition which is being tested when you run the test (either by writing tests in the same source code file as the definitions, or by using some clever test infrastructure), then you prove that your definitions aren't using anything defined later.

Of course you can probably tell whether code is in prerequisite order just by looking at it, so this may not add much value.

-----

1 point by aw 5466 days ago | link

"whether I want the tests to be inline or not"

Something I've been thinking about, though I haven't implemented anything yet, is that there's code, and then there's things related to that code such as prerequisites, documents, examples, tests, etc. The usual practice is to stick everything into the source code file: i.e., we start off with some require's or import's to list the prerequisites, doc strings inline with the function definition, and, in my case, tests following the definition because I wanted the tests to run immediately after the definition.

But perhaps it would be better to be able to have things in separate files. I could have a file of tests, and the tests for my definition of "foo" would be marked as tests for "foo".

Then, for example, if I happened to want to run my tests in strict dependency order, I could load my code up to and including my definition of foo, and then run my tests for foo.

-----

1 point by akkartik 5466 days ago | link

"the tests for my definition of foo would be marked as tests for foo."

In Java or Rails, each class file gets a corresponding test file in a parallel directory tree. I find it too elaborate, but it does permit this sort of testing of classes in isolation.

-----

2 points by rocketnia 5466 days ago | link | parent | on: Implementing prototypes in Arc

What you think of as prototype inheritance, I usually think of as scope shadowing. :) Here's some untested code to show how I'd go about what you're doing:

  ; This kind of table can distinguish between nil and undefined.
  (def singleton-table args
    (when (odd len.args)
      (err "Can't pair the args to 'singleton-table."))
    (listtab:map [list _.0 (list _.1)] pair.args))
  
  (def proto (parent . binds)
    (annotate 'proto
      (list (apply singleton-table binds) parent)))
  
  (= empty-proto* (annotate 'proto nil))
  
  (def proto-get-maybe (self field)
    (whenlet (binds parent) rep.self
      (or binds.field (proto-get-maybe parent field))))
  
  ; Here I use 'defcall, which is defined in Anarki. Rainbow has a
  ; variant too.
  (defcall proto (self field)
    (car:or (proto-get-maybe self field)
      (err:+ "No field \"" (tostring write.field) "\" on "
             (tostring write.self) ".")))
  
  (extend sref (self val . args) (and (isa self 'proto) single.args)
    (= (rep.self.0 car.args) list.val))
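Here's the kind of usage I have in mind, equally untested:

  (= animal (proto empty-proto* 'legs 4))
  (= dog (proto animal 'sound "woof"))
  dog!legs        ; => 4, found on the parent
  (= dog!legs 3)  ; shadows the parent binding, like a local scope
  dog!legs        ; => 3
  animal!legs     ; => 4, the parent is untouched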
Actually, I'd get by with the longhand functions 'proto-get and 'proto-set, but that's not as nice. ^^ I only do that because I try to support plain Arc 3.1 and Jarc, which don't have 'defcall. (Supporting Jarc is nice 'cause of its debugger, and supporting Arc 3.1 is nice 'cause it's the easiest to talk about. Anarki has the most community involvement, and Rainbow is fast.)

-----

2 points by Pauan 5463 days ago | link

I didn't really think of prototypes in the same way as scope, but you're right that they're basically the same thing. I suppose you could say that prototypical inheritance is scope applied to objects, rather than functions.

With that in mind, I'm curious whether prototypes would even be useful in Arc. I don't use them much in JavaScript, preferring to use plain old objects/variables/closures.

In any case, this was more of a theoretical exercise than a practical idea. It was basically me realizing that I could implement prototypes in a language that doesn't have prototypes, by using a function.

I then realized that since Arc is so malleable, I might even be able to extend the built-in eval to treat my special prototype functions as if they were hash tables. That, unfortunately, didn't work out, but I suspect I can do it with Anarki.

-----


  > (def maximin (x) (check x number (apply max (map minimax x))))
  #<procedure: maximin>
  > (def minimax (x) (check x number (apply min (map maximin x))))
  Warning: Oh no, what are you doing?
  > O_O
"The proposed changes are backwards-compatible with Arc 3.1, since all they attempt to do is provide sensible defaults for things that presently raise errors."

They're not compatible with Arc programmers who want to get those errors. Not all errors signify places where extensions can roam free. For instance, extending the 'err function itself would be particularly silly. Where and how clearly to draw the line is a matter of opinion, but I think waterhouse and I are both in the "we want errors" camp here. ^^;

-----

3 points by akkartik 5466 days ago | link

As I said before (http://arclanguage.org/item?id=13830), I'm in the camp of "this is probably a bad idea, but let's try it anyway and see where it leads." It's my new resolution: to try harder to not be conservative in trying out design choices with wart: http://arclanguage.org/item?id=13694

I still plan to do 'production' work with wart, so I'm going to try not to go off the deep end. One experiment at a time, try to falsify new changes as fast as possible, etc.

Update: This quote about hedging seems apropos: http://akkartik.name/blog/14251481

-----

1 point by rocketnia 5466 days ago | link

And I'm making Penknife so I can explore things like this in their own quarantined namespaces where they can coexist with my other experiments in the same program. ^^ I'm not sure if my strategy's any good though; I fear it won't put enough pressure on certain experiments to see what their real strengths are. It kinda banks on the idea that people other than me will use it and apply their own pressure to their own experiments.

A strategy I'm not comfortable with? Is this also hedging? :-p Nah, in this case I'm not also taking any other strategy I'd prefer to succeed. Or is that necessary? Maybe I'm hedging, but not diversified. Probably I'm just in a bad place. :-p

-----

2 points by akkartik 5453 days ago | link

"I fear it won't put enough pressure on certain experiments to see what their real strengths are. It kinda banks on the idea that people other than me will use it and apply their own pressure to their own experiments."

Even better if you could get others to put pressure on your experiments.

-----

1 point by rocketnia 5452 days ago | link

"Even better if you could get others to put pressure on your experiments."

Well, that's both sides of the issue I'm talking about. I do want people to put pressure on each other's experiments, but I expect to promote that by reducing the difficulty involved in migrating from one experiment to another. Unfortunately, I expect that'll also make it easier for people not to put any more than a certain amount of pressure on any one experiment.

Or are you talking about my experiments in particular? :)

-----

1 point by akkartik 5452 days ago | link

No, you understood me right. If you make it too easy to fragment the language, the eyeballs on any subsurface get stretched mighty thin.

This might actually be one reason Lisp has fragmented: it's too easy to implement and fork, and so it has forked again and again since the day it was implemented. Suddenly Ruby's brittle, hacky, error-prone parser becomes an asset rather than a liability. It's too hard to change (all those cases where you can drop parens or curlies at the end of a function call), and it keeps the language from forking.

-----

2 points by evanrmurphy 5453 days ago | link

Upvoted for the new quoting style. I think double quotes + italics is a winner for news.arc forums. (I usually do a right angle bracket + italics, but I like the way yours looks better.)

-----

1 point by evanrmurphy 5466 days ago | link

  > O_O
Yeah, that particular warning system I was sketching out gives false positives. It would need to be refined.

> They're not compatible with Arc programmers who want to get those errors. Not all errors signify places where extensions can roam free.

Thanks for reminding that errors are sometimes desirable. I just think we're missing out on such valuable real estate here!

-----


"...at the REPL, you'd better be careful not to give "a" a value, or it'll break your code. I think the cognitive overhead of worrying about that far outweighs the cost of using the ' operator."

Thank you so much for saying that. XD All of it. I feel the same way, but I can't help but think I'm just out of my element and resistant to change. Managing a single character here or there isn't a trivial problem, but I personally don't care much either way as long as I can also manage my code well on a large scale. Good to know I'm not the only one.

---

"...warn when assigning to an already-bound variable..."

I'm too much a fan of extensibility (where redefining things is normal) and of namespaces (where neglectable names are already protected from conflict) for 'safeset to seem like a positive thing, but in this case, it could just be me. ^^

-----


"In acknowledgement that the approach of http://awwx.ws/extend is superior :)"

Great to see you convert at last. ^_^

Some places I'd go from here, if it were up to me:

- Individual rule names. If I mess up entering a method implementation, I like to be able to edit that command and re-paste it, overwriting the old one. One of aw's older versions of 'extend used names to allow this, and Inform 7 has named rules too. Your 'defgeneric actually had this already, with the types themselves acting as names.

- Explicit rule precedence management, like I have in Lathe (but not in core Penknife). This powers Lathe's inheritance system, for instance.

- A way to do something if a method call runs out of cases, like I've mentioned having in my Penknife draft (but not in the current Penknife or Lathe). This lets someone extend one utility and automatically gain access to a bunch of other utilities. The way I plan to accomplish this--an extra "fail" parameter for every function--would totally conflict with the "thin layer" aspect of Wart, but maybe there's another way. Hmm, I already have half an idea, so you should see another post from me soon. :)

-----

2 points by rocketnia 5468 days ago | link

"A way to do something if a method call runs out of cases ... Hmm, I already have half an idea, so you should see another post from me soon. :)"

Well, here's the idea, which I've sketched up as a blob of untested and sorta-idealistic Arc. I'm using dynamic variables to simulate a "fail" parameter, but it's not quite as simple as it might sound. Until now, I didn't know it was possible; in fact, the strategy I'm using wouldn't cooperate with similar parameter-simulating frameworks. There are two tricks:

Since the surrounding language is full of fail-unaware "muggle" functions, I have to set up those variables inside the function implementation (for when a failure parameter isn't provided), as well as sometimes outside the function whenever the fail parameter is supposed to be passed.

If we stopped there, then if we failcalled a muggle function, any failure-aware functions it called would think we were failcalling them. To avoid that, we use our second trick: In 'failcall, we test the function to see whether it's fail-aware first, and if it's not, we don't set up the dynamic parameters. The catch is that it may not be easy to "test the function" this way; it depends on whether the underlying language supports 'defcall or weak sets. Here, I'll pretend the language has weak set support. (Using this framework probably invalidates 'defcall anyway, since ideally, you'd be able to 'defcall something so that it was fail-aware. That said, this weak set approach may work quite well with a 'defcall that coerces to 'fn.... Wow, unexpected 'coerce niftiness.)

There's one other thing to note: In my Penknife draft, [failfn fail [a b c] ...] and [failfn* fail [a b c] rest ...] are the most basic kinds of function syntax, but one of these "fail" parameters isn't actually the fail parameter the function was called with; instead, it's an escape continuation that calls the given fail parameter instead of continuing with the function. In my experience with Lathe's rulebooks, this continuation-handling is less efficient than explicit branching in each rule, but I'm glad to sacrifice a little efficiency here. :)

Okay, here's the code. (Remember, it's untested!) Insert your own varieties of dynamic parameters and weak sets:

  ; This branches to a different strategy when a continuation is called.
  ; Note that it's a pretty significant performance bottleneck.
  (def fn-onpoint (alternate body)
    (let (called result)
           (catch:list nil (body:fn args (throw:list t args)))
      (if called
        (apply alternate result)
        result)))
  
  (mac onpoint (point alternate . body)
    `(fn-onpoint ,alternate (fn (,point) ,@body)))
  
  
  (defdynamic fail* nil)
  (defdynamic failcalling* nil)
  (= fail-awares* (make-weak-set))
  
  (def failcall (func fail . args)
    (if (mem func fail-awares*)
      (onpoint throw fail
        (w/dynamic (failcalling* t fail* throw)
          (apply func args)))
      (apply func args)))
  
  (def failable-wrapper (inner-func)
    (ret result (afn args
                  (if failcalling*
                    (w/dynamic (failcalling* nil)
                      (apply inner-func args))
                    ; The default failure is 'err.
                    (apply failcall self err args)))
      (push result fail-awares*)))
  
  
  ; Make partial functions using this.
  ; TODO: Make number-of-argument errors into failures.
  (mac failfn (fail parms . body)
    `(failable-wrapper:fn ,parms
       ; We wrap the failures up in a value that should be easy to
       ; pretty-print (whenever we get around to writing such a
       ; pretty-printer).
       (onpoint ,fail [fail*:annotate 'function-failure
                        (list '(failfn ,fail ,parms ,@body) _)]
         ; The value of 'fail* shouldn't be used after this point,
         ; but we don't enforce that.
         ,@body)))
  
  ; This is an exception. I don't think it can be implemented without
  ; using 'failable-wrapper directly.
  (= fail-aware-apply (failable-wrapper:fn (func . args)
                        (apply apply func args)))
  
  
  (def fn-ifsuccess (func args then else)
    (failcall (failfn fail ()
                (then:apply failcall func fail args))
              else))
  
  ; TODO: See if ssyntax should be supported.
  (mac ifsuccess (success failure (func . args) then . elses)
    `(fn-ifsuccess ,func (list ,@args) (fn (,success) ,then)
                                       (fn (,failure) (if ,@elses))))
  
  (def failcall-cases (cases fail wrap-details collected-details . args)
    (ifdecap (case . rest) cases
      (ifsuccess success failure (fail-aware-apply case args)
        success
        (apply failcall-cases
          rest fail wrap-details (cons failure collected-details) args))
      (fail wrap-details.collected-details)))
  
  
  ; Without further ado, here's one way to explicitly set up a generic
  ; function.
  
  (= fact-cases* nil)
  (= fact (failfn fail args
            (apply failcall-cases fact-cases* fail
              ; We wrap the failures up in a value that should be easy
              ; to pretty-print (whenever we get around to writing such
              ; a pretty-printer).
              [annotate 'rulebook-failure (list 'fact _)]
              nil args)))
  
  (push (failfn fail (n)
          (* n (fact:- n 1)))
        fact-cases*)
  
  (push (failfn fail (n)
          (unless (is n 0)
            (fail "The number wasn't 0."))
          1)
        fact-cases*)
Since Lathe already has support for dynamic parameters on every platform but Rainbow, and since Racket has support for weak tables, I should be able to get this in Lathe and tested soon enough. I think I've been able to use WeakHashTables from Jarc too, so this will hopefully cover three out of Lathe's four target platforms. (WeakHashTables are a bit of a stretch since they compare with equals(), but all my keys would be procedures, so that's probably close enough.)

-----

2 points by rocketnia 5467 days ago | link

It's in Lathe now! It works on Arc 3.1, Anarki, Jarc, and Rainbow, but only as long as you don't reenter continuations in Rainbow. (That's the same scope of support as Lathe's dynamic boxes.)

https://github.com/rocketnia/lathe/blob/master/arc/failcall....

There were an awful lot of embarrassing bugs to work out, like parameters out of order and stuff, and the fact that it was easy to implement 'fail-aware-apply using 'failfn after all, but they're squashed. Woo! I even put the 'fact example in there, but I'll probably move that into a separate unit test.

One thing this means is that I should be able to port a lot of my Penknife draft's utilities into Arc. It won't have Penknife's hygiene support, and the module system will still be horribly limited, but Lathe has the advantage of already having a self-ordering rule library and, you know, actually existing. ^_^

Actually, it also means I might be able to reuse my existing Penknife core and hack this onto it. Nahh, the core in my draft is already far better designed for extensibility, even if it doesn't work....

I don't see any difficulty in translating fail parameters to Wart, only complexity. :-p What do you think?

-----

1 point by rocketnia 5467 days ago | link

For what it's worth, I've abstracted away the parameter simulation and put it in dyn.arc, which is where Lathe's dynamic box API is kept. Now you can define your own "secretargs" with default values and have it automatically seem like they're passed to every function in the language. It's not necessarily pretty; the point is to build things like 'failcall on top of it.

  ; Surprise, this code has been tested!
  
  (= lathe-dir* "your/path/to/lathe/arc/")
  (load:+ lathe-dir* "loadfirst.arc")
  (use-rels-as dy (+ lathe-dir* "dyn.arc"))
  
  (= secret1 (dy.secretarg 4))
  (= secret2 (dy.secretarg 5))
  (= secret3 (dy.secretarg 6))
  (dy:call-w/secrets (dy:secretarg-fn (a b c)
                         d secret1
                         e secret2
                       (list a b c d e (dy.secretargs)))
    (list (list secret1 9)
          (list secret3 200))
    1 2 3)
  
  => (1 2 3 9 5 ((#(tagged ...) 9) (#(tagged ...) 200)))
There's a catch, though, which is that general-purpose function-calling functions like 'apply and 'memo don't propagate secretargs, and there's probably nothing I can do about that aside from hacking on the language core. This is the same kind of compatibility-breaking I brought up when we were talking about keyword arguments.

Fortunately, secretargs are pretty general-purpose--they're pretty much just keyword arguments with non-symbol keys--so things like keyword argument systems can be implemented on top of them. Just make a secretarg that stores a special-purpose keyword map.
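Here's an untested doodle of that, using only the calls demonstrated above ('kw* and 'greet are made-up names):

  (= kw* (dy.secretarg (obj)))  ; a secretarg holding a keyword table
  
  (= greet (dy:secretarg-fn (name)
               opts kw*
             (string (or opts!greeting "Hello") ", " name)))
  
  (greet "Ross")
  => "Hello, Ross"
  
  (dy:call-w/secrets greet (list:list kw* (obj greeting "Yo")) "Ross")
  => "Yo, Ross"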

I'm not sure if that'll come up in Lathe for a long time, though. For now the only benefit is that it's a little easier to explain 'failcall and 'failfn: they use a non-symbol keyword argument (a secretarg) to communicate, and 'failfn makes that argument into a more natural failsafe by chaining it to an escape continuation.

-----

1 point by evanrmurphy 5467 days ago | link

You seem very excited about this feature, but I'm afraid I'm having trouble understanding what it's about. :-/

Is it a sort of base case upon which all functions' extend layers can be built up? What you've written about it so far reminds me of Ruby's method_missing. Is that a related idea?

-----

2 points by rocketnia 5466 days ago | link

"You seem very excited about this feature, but I'm afraid I'm having trouble understanding what it's about. :-/"

I expected that. ^_^;; I kept trying to find ways to emphasize the motive, but I kept getting swept up in minutia.

---

"Is it a sort of base case upon which all functions' extend layers can be built up? What you've written about it so far reminds me of Ruby's method_missing. Is that a related idea?"

I think you're on the right track. The point of my fail parameter is to explicitly describe cases where a function is open to extension. Sometimes a function call has a well-defined return value, and sometimes it has a well-defined error value. If it doesn't end in either of those ways and instead fails--by calling its fail continuation--then it's undefined there, and you're free to replace it with your own function that's well-defined in all the same cases and more.

That was the original idea, anyway. It's a little flawed: If another function ever actually catches a failure and does something with it, then its behavior relies on the callee being undefined in certain cases. Take 'testify for instance; if 'testify checks a 'testify-extensions function first but acts as [fn (x) (iso x _)] on failure, then you can't actually extend 'testify-extensions without changing the behavior of 'testify. In fact, if you replace a function with another function that calls the first one ('extend as we usually know it), then you can no longer extend the first one without impinging upon the new one. But in practice, I think it'll work out well enough.

The alternative ways to deal with extension I see are a) encoding defined-ness in the return value using a wrapper type, and b) throwing an exception. In a language where almost everything's extensible, to have to unwrap every return value would be madness. As for exceptions, it's hard to tell whether the exception came from the function you called or some function it called. In fact, I was actually trying to figure out how to have function calls take ownership of exceptions when I finally realized there needed to be a per-function-call ID, generated on the caller's side; this idea refined a little to become fail parameters.

I know, I digressed a bit again. XD Did it work for you this time?

-----

1 point by rocketnia 5466 days ago | link

Also, in my Penknife draft, I'm finding myself much more comfortable using failures rather than exceptions, letting the failures become exceptions when something's called without 'failcall. Failures form a natural tree of blame; this rulebook failed because all these functions failed, and this function failed because this other function failed....

The need to write natural-language error strings diminishes to just the base cases; the rest is pretty-printing and perhaps even a bit of SHRDLU-style "Why did you decide that?" interactive tree-drilling. ^_^

Arc's known for having rotten errors 'cause it shirks the error-checking code; with fail parameters, awesomely comprehensive error reports can be made in almost a single token.

-----


I've considered > too, but I keep wanting to use < and > in names, like for XML, comparators, and very special-purpose conversions (a->b). Conversions can live without it, but I don't know about comparators and XML.

Another idea is /. Arc uses / in variable names, but I can't help but think that 'w-stdout would be just as iconic as 'w/stdout. There could be some potential. ^_^

-----


I thought maybe a compare view would help visualize which parts of the code grew and shrank:

https://github.com/akkartik/wart/compare/e3cda3b487eb4280215...

Now I'm not so sure. :-p It would take some effort to comprehend all those changes at once.

-----

1 point by akkartik 5468 days ago | link

:) Yeah, there was a lot of reorg. I try really hard to make each commit well-behaved, though, so the changelog should be useful.

-----


Even if Arc were supposed to be like other languages (a kinda dubious goal), a.b probably wouldn't mean field-getting, string concatenation (PHP), and function composition (Haskell) all at once. As it is, it's already closer to field-getting than either of those other things; for lists, x.9 is like getting a field named "9", while cadr.x is like getting a field named "cadr".

I actually think cadr.x is a better way to go for field-getting. If the language has good enough namespacing support, variable names are automatically more malleable than specific symbols. If anything, we could just rephrase it as x.cadr. The trouble is that refactoring x.func into (func x y) is even more work; there'd be a demand for an x.func(y) syntax. Not that that would be a bad idea, just a more complicated one. ^_^

(I originally brought up the similar idea of an a'b => (b a) ssyntax at http://arclanguage.org/item?id=13061 .)

Backing up, let's suppose Arc doesn't need to be like other languages. The current meaning of a.b fits my needs pretty well. I use (a b) significantly more often than (a 'b), partly because I tag my types using 'annotate; even if one of my types is represented by an "a!b"-friendly hash table, I access it using "rep.x!field-name", meaning I use one '.' for every '!'. (BTW, do you think ((rep x) 'field-name) would be clearer?)

Since (a b) is so common for me, I like that it uses one of the conveniently un-shifted punctuation characters (on an American keyboard):

  ;,/.'

-----

2 points by Pauan 5467 days ago | link

While I was reading your post again, I realized that "," would also work, though I'm not sure we want to use such a nice character, or save it for something else. I also like the a'b syntax, so there's definitely plenty of options to consider. It's more a matter of deciding whether things should change, and if so, in what way.

P.S. to directly answer your question, yes I think that would be clearer, though more verbose. It's a bit hard for me to pick out the individual elements, because they're all squished together due to the infix syntax.

My current view of the various styles:

  (foo rep/x.field-name)       ; good
  
  (foo rep|x.field-name)       ; undecided; looks pretty weird
  
  (foo rep'x.field-name)       ; tied for favorite

  (foo rep,x.field-name)       ; tied for favorite
  
  (foo rep.x.field-name)       ; okay, but too hard to see the "."s

  (foo rep.x!field-name)       ; don't like the !

  (foo (rep x).field-name)     ; doesn't look lispy enough

  (foo (rep.x 'field-name))    ; good: very lispy, but not too verbose

  (foo ((rep x) 'field-name))  ; too verbose
I actually really like the foo,bar syntax for (foo bar). Especially when considering that the only difference between . and ! is that ! quotes the symbol, so it makes sense that the two would look very similar, right? Also consider this:

  (something foo,bar,qux)

  (foo,bar,qux.corge)

-----

1 point by Pauan 5468 days ago | link

I think it depends on the semantics of the situation. I don't like seeing things like "car.a" or "car.cdr.car.a" because I know that car and cdr are functions, so it kinda screws with my brain.

I feel like function calls should look like function calls, but when adding syntax, they should look different from non-function calls. According to that rule:

  (= foo (list 1 2 3))
  foo.2         ; good
  car.foo       ; bad

  (= foo (obj x 1 y 2))
  foo.x         ; good
  keys.foo      ; bad

  (= foo "bar")
  foo.0         ; good
  downcase.foo  ; bad
In other words, I'm okay with the . syntax when the first argument is a non-function. But I don't like seeing/using the . syntax when the first argument is a function.

For instance, the : syntax is used for functions only. I think the . syntax should be used for non-functions only. It just helps me to cognitively understand a program with the least amount of effort, but others may disagree.

I agree that Arc shouldn't especially try to be like other languages, but since the distinction between . and ! is mostly arbitrary, choosing a syntax that is more appealing to people coming from other languages is a nice bonus. Especially given that JavaScript is quite popular (for better or worse) and has some nice functional aspects to it (closures, lambdas, etc.) which make it conceptually similar to Scheme, even if the syntax is very different.

Note: if it were possible to combine the two without causing big problems (see evanrmurphy's post), then I might be okay with using the same syntax for both forms.

-----

3 points by evanrmurphy 5467 days ago | link

> I think it depends on the semantics of the situation. I don't like seeing things like "car.a" or "car.cdr.car.a" because I know that car and cdr are functions, so it kinda screws with my brain.

> I feel like function calls should look like function calls, but when adding syntax, they should look different from non-function calls.

One of the features of a language like Arc or Scheme (i.e. lisp-1's) or JavaScript is that functions are not treated especially differently from other data types.

Forget special syntax for a moment. In Arc we can call a function with (f x), a macro with (m x), access a list with (xs 0), a string with (s 0) and a table with (h k). We even call each of these, "putting the [function|macro|list|string|table] in functional position." So, even before ssyntax is introduced, Arc has taken pains to go against your wish of making function calls and non-function calls look distinct from one another.
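Concretely (an untested transcript, but this is stock Arc 3.1 behavior as I remember it):

  arc> (= xs '(a b c))
  (a b c)
  arc> (xs 0)
  a
  arc> ("hello" 0)
  #\h
  arc> ((obj k 'v) 'k)
  v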

Overloading functional position is very powerful once we do get to ssyntax because it allows us to overload that as well. Now all we have to do is introduce one syntactic convenience, a.b => (a b), and it becomes available to all the types.

If you like the syntax when it represents list, string or table access but not when it's a function or macro call, you could simply not use the syntax on functions and macros. So you have xs.0, s.0, and h.k, and you have (f x) and (m x). Otherwise, I'd consider that the root of your grievance may be that Arc overloads functional position for all these different types to begin with.

Do you think that could be the case?

-----

2 points by Pauan 5466 days ago | link

Actually, I like that Arc overloads so many things. I suspect the primary reason I dislike using . for functions is because of my JavaScript background. After years of programming in JavaScript, it's become very ingrained into me that . means "property access".

I agree that for my code it's a simple matter of not using the syntax in the areas that I don't like it. In fact, I don't need to use any syntax at all: I could use pure S-expressions if I wanted.

I guess what it comes down to is, "I want to use . for property access in tables, or at least use something that's not !" and although it looks weird to me to use . for function calls, I'll concede that Arc is not JavaScript, so I think I can tolerate it.

Thus, combining . and ! might be the best way. Alternatively, I think these would be very nice as well:

  x,y -> (x y)
  x.y -> (x 'y)
Or:

  x'y -> (x y)
  x.y -> (x 'y)
Or:

  x,y -> (x y)
  x'y -> (x 'y)

-----
