Arc Forum | rocketnia's comments
2 points by rocketnia 4865 days ago | link | parent | on: Wart update

"Instead, I found a very simple way to create enumerable objects in a way that is completely customizable, and doesn't even need to be built-in"

That's my approach too. Or it would be, if I actually got around to building a language out of Cairntaker. :)

-----

2 points by rocketnia 4875 days ago | link | parent | on: Infix support in wart

"We have a rule that's simple to explain, whose implications can be subtle to work out, but which programmers are expected to exercise taste in using. That could describe all of lisp."

I don't think the syntax for numbers is very easy to explain. That's the weak link, IMO.

If it were me, I'd have no number literals, just a tool for translating number-like symbols into numbers. Of course, that approach would make arithmetic even less readable. :)

I think the next simplest option is to treat digits as a separate partition of characters like the partitions for infix and non-infix. Digits are sufficient to represent unsigned bigints with a simple syntax. Then most of the additional features of C's float syntax could be addressed by other operators:

  -20.002e23
  ==>
  (neg (20.@1002 * 10^23))
This hackish .@ operator, which simulates a decimal point, could be defined in Arc as follows:

  (def dot-at (a b)
    (while (<= 2 b)
      (zap [/ _ 10] b))
    (+ a (- b 1)))
You could avoid the need for this hack by treating . as a number character, but then you lose it as an operator.
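To make the trick concrete, here's the same hypothetical dot-at operator sketched in Python (assuming Python 3 float division). The leading 1 in the second argument is what protects the fractional zeros:

```python
def dot_at(a, b):
    # 20 .@ 1002 ~= 20.002: shift b down until it's below 2,
    # then drop its leading 1 and add the remaining fraction to a
    while b >= 2:
        b = b / 10
    return a + (b - 1)

# -20.002e23 then desugars to -(dot_at(20, 1002) * 10**23);
# equality is approximate because of floating-point rounding
assert abs(dot_at(20, 1002) - 20.002) < 1e-9
```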

-----

2 points by rocketnia 4875 days ago | link | parent | on: Infix support in wart

A good reason to make something right-associative is if it takes a Foo and a Bar and gives you a Bar in return. In this case, left-associative grouping would only lead to type errors.

Taking a look at Haskell's right-associative operators, the cons operator (:) seems to follow that rule, and others seem to take into account other idiosyncrasies. Here are my guesses for each operator:

Exponentiation to the power of a natural number (a ^ b ^ c), an integer (a ^^ b ^^ c), or a float (a ** b ** c) is right-associative, probably to match the parentheses-less math notation: (a ** (b ** c)).

The function type constructor (a -> b -> result) is right-associative (a -> (b -> result)) so that it's easy to use multi-parameter curried functions. For the same reason, function currying/application (func a b) is left-associative ((func a) b).

Low-precedence function application (a $ b $ c) is right-associative. This might be because (a b c) tends to be sufficient for passing the usual curried parameters, so (a $ b $ c) is reserved for cases where the output of each ($) isn't another function to apply. If this is true, ($) usually takes a function and a non-function and returns a non-function, so it's naturally right-associative.

Haskell has a few other right-associative operators out of the box, but I'm not sure why. In some cases, maybe it helps with performance.
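The (:) rule is easy to demonstrate. Here's a minimal sketch in Python rather than Haskell, using a cons over plain lists:

```python
def cons(x, xs):
    # takes an element (a Foo) and a list (a Bar), returns a list (a Bar)
    return [x] + xs

# Right-associative grouping, like Haskell's 1 : (2 : (3 : [])):
assert cons(1, cons(2, cons(3, []))) == [1, 2, 3]

# Left grouping would pass a non-list where a list is expected:
# cons(cons(1, 2), 3)  # fails immediately, since [1] + 2 is ill-typed
```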

http://www.haskell.org/onlinereport/decls.html

-----

1 point by rocketnia 4880 days ago | link | parent | on: Types in Nulan

There are a few more approaches if you're going for "everything's a foo" minimalism:

- Numbers and strings can be types. You already talk about using numbers as keys (aka types) for array access, so it's a bit surprising you haven't mentioned this option yourself.

- Numbers and strings can be objects whose types happen to be out of scope in user code. As far as the programmer is concerned, the core utilities were constructed with those types in scope, so it makes sense for them to be able to reach inside. I like the thought of everything being a weak table (and this philosophy underlies my garbage collector, Cairntaker), so this is the kind of choice I would make.

- Numbers and strings can be functions. Why not? :-p

-----

1 point by Pauan 4880 days ago | link

"Numbers and strings can be types."

Hm... I suppose that might work... yeah you're right that might be a really good idea. Thanks, I didn't think about that. But... hm... I wonder if that'll really work...

---

"Numbers and strings can be objects whose types happen to be out of scope in user code."

That's the problem. If I create the object using an ordinary closure to hide the data, how do I actually get the data when doing things like "lt" and "is" comparisons? I'd have to store it in the object itself, in which case now it's exposed to user code.

---

"Numbers and strings can be functions. Why not? :-p"

Because I'm trying to make a clean separation between data and algorithm. Though I already blur that a little by making $vau return an object. Plus I'm not sure how I'd implement that anyways, could you give an example?

-----

1 point by rocketnia 4875 days ago | link

"I'd have to store it in the object itself, in which case now it's exposed to user code."

If the type is out of scope in user code, then there's nothing user code can do to extract that value, right? The user code can't extract a list of types for an object, can it?

---

"Because I'm trying to make a clean separation between data and algorithm. Though I already blur that a little by making $vau return an object. Plus I'm not sure how I'd implement that anyways, could you give an example?"

I don't know what numbers and strings would do as functions, but they could potentially take a secret key and return some other internal representation. This secret key would not be in scope in user code. ;)

Of course, if user code can inspect lexical closure data anyway, it's impossible to hide things this way. I kinda recommend making the %closure key mean "the lexical closure, or nil if it happens to be unavailable."

-----

1 point by Pauan 4875 days ago | link

"If the type is out of scope in user code, then there's nothing user code can do to extract that value, right? The user code can't extract a list of types for an object, can it?"

I was thinking about letting code loop through an object, similar to the "for ... in" loop in JavaScript. An example of where that would be helpful is map, filter, etc.

If I gave that up, then, sure, I could store it in the object and user code wouldn't be able to reach it.

---

"Of course, if user code can inspect lexical closure data anyway, it's impossible to hide things this way. I kinda recommend making the %closure key mean "the lexical closure, or nil if it happens to be unavailable.""

That's right. I think if I'm going to go the "everything is an object" way, then I want everything to be open. Of course, I might change my mind and try some other route...

I think I've figured out what it is that I want in a programming language, at least at a high level. I want a programming language that's basically like Lego blocks. You have these little atomic pieces that are useful in and of themselves, but because they share a similar interface, you can plug them together to create bigger pieces.

In other words, it's the Unix philosophy of "write programs that do one thing and do it well" combined with the functional philosophy of avoiding state. The reason for this is to be able to treat each program as a black box that can be plugged into other black boxes to get the behavior you want.

I've given up on extensibility and being able to hack other people's code. Why? If their code is insufficient, it's probably easier to just rewrite it from scratch anyways. As long as the new code has the same interface as the old code, it'll work just fine.

So I want a language that has a lot of power and flexibility for defining interfaces, and also a lot of fine-grained power for taking programs and treating them as black boxes. Using objects for everything defeats that, if I allow code to loop through the keys of objects.

-----

1 point by Pauan 4875 days ago | link

I guess I can make vau a primitive, which makes that part easier. I'm still trying to figure out the best way to create interfaces, though...

Haskell's Monads are quite elegant. This is the best tutorial I've found thus far about them: http://mvanier.livejournal.com/3917.html

But I think my objects + gensyms are basically a simple system that lets Nulan have both monads and comonads. Still a lot to think about and figure out!

-----

2 points by rocketnia 4896 days ago | link | parent | on: A new arc challenge

"What I really want is to use juxtaposition to indicate higher precedence"

I think I first heard of that idea from the Gel syntax: http://www.cs.utexas.edu/~wcook/Drafts/2008/gel.pdf

The discussion at http://c2.com/cgi/wiki?SyntacticallySignificantWhitespaceCon... leads to the stillborn Merd language project from 2001-2003, which calls this "horizontal layout": http://merd.sourceforge.net/#features

---

"Whoa, whacky idea alert. What if we use whitespace to indicate operator precedence?"

The Merd site mentions that case, too: "experimentation is needed to know if [horizontal layout] could work for more than one space" http://merd.sourceforge.net/choices_syntax.html#6

I've doodled this idea independently a couple of times, but I see a few issues:

1. Line breaks throw a monkey wrench in the works (though it might be reasonable to prohibit line breaks within { ... }).

2. In many contexts, it can be tough to count spaces. :)

3. Sometimes it's nice to tabulate similar lines of code using extra spaces, and in those cases this feature would be a minor annoyance, since it would force some subexpressions to bail out to parentheses.

  // Horizontal layout interferes:
  var wargle         =   2 * x + 0.03  * y +  200;
  var tricennarious  =  13 * x + 0.4   * y +   50;
  
  // Fix:
  var wargle         =   2 * x + (0.03  * y) +  200;
  var tricennarious  =  13 * x + (0.4   * y) +   50;
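To make issue 2 concrete, here's a toy Python parser for horizontal layout (my own sketch, assuming symmetric spacing around each operator and no line breaks), where wider runs of spaces mean lower precedence:

```python
import re

def parse(expr):
    """Toy horizontal-layout parser: the operator with the widest
    (symmetric) run of spaces around it binds loosest; ties split at
    the rightmost occurrence, giving left associativity."""
    expr = expr.strip()
    best = None
    for m in re.finditer(r'( *)([+\-*/])\1', expr):
        width = len(m.group(1))
        if best is None or width >= best[0]:
            best = (width, m)
    if best is None:
        return expr  # an atom: a number or identifier
    m = best[1]
    return (m.group(2), parse(expr[:m.start()]), parse(expr[m.end():]))

# "2*x + 3*y  -  1" groups as ((2*x) + (3*y)) - 1
assert parse("2*x + 3*y  -  1") == \
    ('-', ('+', ('*', '2', 'x'), ('*', '3', 'y')), '1')
```

Miscounted spaces silently change the parse tree, which is exactly why counting them by eye is risky.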

-----

1 point by akkartik 4895 days ago | link

Alternatively:

  var wargle           =     2 * x   +   0.03 * y   +   200;
  var tricennarious    =    13 * x   +   0.4  * y   +    50;
Adding tabulation in addition to whitespace precedence would end up making expressions even more loose and baggy than otherwise. The writer would have to count spaces, but it's fairly clean when reading.

---

Thanks for the pointers! I'm going to go look at those now.

-----

1 point by akkartik 4893 days ago | link

I've also been reading about J (http://www.jsoftware.com/help/primer/contents.htm; via the recent http://arclanguage.org/item?id=16728), which might provide some interesting ideas.

-----

2 points by rocketnia 4911 days ago | link | parent | on: Simple quicksort in arc

"Is there some way to get haskell's implicit handling of empty lists?"

I'm pretty sure there's no implicit handling of empty lists. I think the blog author just left out the base case.

---

"I actually find keep with the anonymous arg more readable than the list comprehensions."

I tend to find list comprehensions unreadable anyway. Using `filter` instead of list comprehensions yields fewer tokens in Haskell, too:

   qsort [] = []
  -qsort (p:xs) = qsort [x | x<-xs, x<p] ++ [p] ++ qsort [x | x<-xs, x>=p]
  +qsort (p:xs) = qsort (filter (< p) xs) ++ [p] ++ qsort (filter (p <=) xs)
Note Haskell's special shorthands for currying infix operators. The expression (< p) means a function that performs (input < p), while (p <=) means a function that performs (p <= input).

I know, the point wasn't to talk about Haskell....

---

"I spent some time trying to build qsort using anarki's partition . No luck."

Here's what I found. It does have fewer tokens after all. ^_^

   (def qsort(l)
     (iflet (p . xs) l
  -    (join (qsort:keep [< _ p] xs) list.p (qsort:keep [>= _ p] xs))))
  +    (apply + (intersperse list.p (map qsort (partition [< _ p] xs))))))
Note that 'partition returns (true-elements false-elements). It could reasonably go the other way around.
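A rough Python rendering of that partition-based shape (not Anarki's actual code, just the same idea, with partition returning the (true-elements false-elements) pair):

```python
def partition(pred, xs):
    # like Anarki's partition: (elements satisfying pred, the rest)
    yes, no = [], []
    for x in xs:
        (yes if pred(x) else no).append(x)
    return yes, no

def qsort(lst):
    if not lst:
        return []
    p, xs = lst[0], lst[1:]
    lo, hi = partition(lambda x: x < p, xs)
    return qsort(lo) + [p] + qsort(hi)
```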

---

As the first blog commenter notes, "I believe that that commonly cited example of quicksort in Haskell is not really quicksort, in that it does not have the space and time complexity of quicksort." This is probably true of these Arc snippets as well. Could we get a "real" Arc quicksort?

Also, none of these implementations is a stable sort.

-----

3 points by zck 4909 days ago | link

I believe this is a "real" qsort. It's way more verbose than I expected, but I did code it over two nights, hours after I should have gone to bed. I'm sure I could name some things better. Maybe I will later.

  (def qsort (lst)
       (let helper (afn (start end)
                        (if (>= start end)
                            lst
                          (let pivot-pos (partition lst start end)
                               (self start
                                     (- pivot-pos 1))
                               (self (+ pivot-pos 1)
                                     end))))
            (helper 0 (- (len lst) 1)))
       lst)


  (def partition (lst (o start 0) (o end (- (len lst) 1)))
       "Partitions a list in-place. 
        This method returns the position the pivot moved to."
       (withs (pivot lst.start
               higher-num-out-of-place (+ start 1)
               lower-num-out-of-place end)
              (until (is higher-num-out-of-place
                         lower-num-out-of-place)
                     (until (or (> lst.higher-num-out-of-place
                                   pivot)
                                (is higher-num-out-of-place
                                    lower-num-out-of-place))
                            (++ higher-num-out-of-place))
                     (until (or (< lst.lower-num-out-of-place
                                   pivot)
                                (is higher-num-out-of-place
                                    lower-num-out-of-place))
                            (-- lower-num-out-of-place))
                     (unless (is higher-num-out-of-place
                                 lower-num-out-of-place)
                       (swap lst.higher-num-out-of-place
                             lst.lower-num-out-of-place)))
              (let pivot-end (if (> lst.higher-num-out-of-place
                                    pivot)
                                 (- higher-num-out-of-place
                                    1)
                               higher-num-out-of-place)
                   (swap lst.start
                         lst.pivot-end)
                   pivot-end)))
examples:

  arc> (qsort (n-of 10 (- (rand 21) 10))) ;; from -10 to 10 inclusive
  (-10 -9 -5 -4 -2 5 6 6 8 9)
  arc> (qsort (n-of 10 (- (rand 21) 10)))
  (-10 -10 -8 -1 3 5 6 7 8 9)

-----

3 points by rntz 4910 days ago | link

Both the Haskell & Arc quicksorts are functional (they don't mutate their input), which means they can't have in-place quicksort's low additional space overhead (just O(log n) for the recursion stack). Their time complexity is the same, though.

In-place quicksort is most easily done with an array-like datastructure, so vectors for Arc (if they're mutable, which I think they are). It's probably possible to do it by mutating conses as well, but it would be tricky.
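For reference, an in-place version over a mutable array is short in Python (a sketch using Lomuto partitioning, which is simpler than but not identical to zck's scheme above):

```python
def qsort_inplace(a, lo=0, hi=None):
    # in-place quicksort on a mutable list; the only extra space is
    # the recursion stack, unlike the copying functional versions
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]  # move the pivot into its final slot
    qsort_inplace(a, lo, i - 1)
    qsort_inplace(a, i + 1, hi)
    return a
```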

-----

2 points by akkartik 4910 days ago | link

Thanks for the tip on currying infix!

I chose to ignore the pedantry on what is and isn't a quicksort :) Hoare's original algorithm has been mutated several times, and I think it's subjective which of those mutations deserves the name and which doesn't. Especially in high-level languages that encourage programmers to ignore low level concerns of space usage. Wikipedia calls the copying version a 'simple' quicksort (http://en.wikipedia.org/wiki/Quicksort#Simple_version), so I went with that title. I think it's equally valid to call the in-place quicksort a destructive quicksort, like the difference between reverse and nreverse in Common Lisp.

-----

1 point by akkartik 4909 days ago | link

Why do you say they aren't stable? I need to add a comparer to test stability:

  (def qsort(l <)
    (iflet (p . xs) l
      (+ (qsort (keep [< _ p] xs) <)
         list.p
         (qsort (rem [< _ p] xs) <))))
..but that aside I think the sort will be stable, because later equal elements always end up in the last qsort, and rem is stable.
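A quick Python translation backs up that stability argument (keep becomes a filter on the comparer, rem its complement; both preserve input order):

```python
def qsort(lst, lt):
    if not lst:
        return []
    p, xs = lst[0], lst[1:]
    # keep ~ elements where lt(x, p); rem ~ the complement
    return (qsort([x for x in xs if lt(x, p)], lt)
            + [p]
            + qsort([x for x in xs if not lt(x, p)], lt))

# later equal elements always land in the second recursive call,
# so equal keys keep their original relative order:
data = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
out = qsort(data, lambda x, y: x[0] < y[0])
assert out == [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```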

-----

2 points by rocketnia 4908 days ago | link

That makes sense. I spoke too soon. :)

-----

2 points by rocketnia 4922 days ago | link | parent | on: On proof and progress in mathematics

Excellent! This rings true for me from beginning to end, and it's pretty influential to think about.

-----


The tl;dr version: "Software engineering has its own political axis, ranging from conservative to liberal."

[...]

"[...]Or at the very least, the conservative/liberal classification should help the two camps steer clear of each other. I think it is probably better to have a harmonious team of all-liberals or all-conservatives than a mixed team of constantly clashing ideologies. It's a lot like how vehicle-driving philosophies can differ regionally -- it's OK if everyone drives in some crazy way, as long as they ALL drive that way."

This whole article is much too divisive. People want to actually pursue things (liberal), and since we can't reach them in an instant, we have to be careful that the path we're on is actually getting somewhere (conservative). Without some combination of both attributes, we only get to live for the moment.

There's something to be said for systematically encouraging everyone to live for the moment as either a doer or a thinker, even if it means oppressing anyone who isn't satisfied with that life. Encouragement and oppression are two sides of the same coin of bias. However, I consider this to be harmful bias, and I would encourage the reverse, to celebrate anyone who strives for a seamless synthesis of action and foresight.

Ironically, I'd even celebrate Steve Yegge for this attempt to "help." :-p

-----

2 points by rocketnia 4935 days ago | link | parent | on: Some plans for code reuse

Quote battle!

"The programmers did not handle the exception because the assumption was made that the program was correct until proved at fault, apparently a feature of the programming culture for this system, (this observation is worth an article in itself)."

Unit tests and type checkers are both part of this culture. They prove a program is at fault, and if the program passes, they have nothing further to say.

What's missing is mathematical proof and, for the remaining informal properties, unbiased empirical testing (including empirical testing of the proof checker).

The best way I know to integrate mathematical proof throughout a codebase is total functional programming with dependent typing. Unfortunately, precise dependent types are notable for being inhibitive to code reuse. (But don't take my word for it; I'm just passing along rumor here, lol.)

A straightforward approach to bring reusability to mathematics would be to study the possible faithful translations from one mathematical discipline to another. This field of study already exists: It's category theory.

I don't have any recommendations for empirical testing. :)

-----

2 points by rocketnia 4935 days ago | link | parent | on: Ask AF: Language+forum design?

"You're right, a forum shouldn't be flat."

I thought I was making more of an observation than a suggestion. A forum has acyclic relationships that give it structure:

- "___ is a reply to ___" (establishes a forest of stars[1] or shallow trees, depending on whether people can reply to replies)

- "___ was authored after the typically expected period of editing for ___" (establishes a rather dense partial order)

Module systems are sometimes encumbered with the need to resolve cyclic dependencies. On a forum, with its already acyclic structure, it should be possible to avoid that complexity in the common case.

[1] http://en.wikipedia.org/wiki/Star_(graph_theory)

---

"A dimension has the property that you can start at one well-defined end and travel in one direction and be sure you saw everything. This is a useful property, not because we expect users to do this very often (though some will) but because it hooks into our spatial metaphors for information in an intuitive way."

While I see some value in that kind of pursuit, I'd like to even the field of discussion a bit. Humans may live in a low-dimensional world, but they move around, manage a changing environment, and maintain different levels of human-to-human association even within a single population[citation needed]. I think it's plausible that humans evolved to reason about both spaces and networks.

Maybe an interesting scientific-ish approach would be to look at what parts of the brain are involved in forum use. If there isn't enough activity on the right side of the brain, we could work on spatial metaphors. Once there isn't enough activity on the left side, we can go back to language-like metaphors. :-p

---

"Imagine a forum where you can define new dimensions as 2 functions: one to convert a new post to a coordinate, and another to render a neighborhood of coordinates."

I don't think it needs to be formalized that well yet. If the forum software is itself an open source project like Anarki, that should be stable enough for a small, dedicated community to experiment.

I bet once people can browse source code repositories with forum topic annotations on the side, it'll be a while before other features begin to appear necessary. If nothing else, the UI design and module system integration of these annotations would be finicky enough that the developers would have to chew on it for a while before they take on yet another revolutionary change. :-p

In a more secure and seamless model, well, I'd go with David Barbour's vision and say all state (e.g. the state of a project under development or the content of a discussion) would be hooked together using RDP, and individual state models would have continuous, idempotent interfaces that were designed for view composition and concurrent modification in the first place. The issue of designing forums for programming would naturally decompose into designing composition-friendly repository models, composition-friendly discussion models, the discussion of composition-friendly UI state models, and maybe some remaining issues with composing these models over RDP in practice.

-----

2 points by rocketnia 4935 days ago | link

I did a search for [spacial spatial], and they're both correct spellings. I only mention it because this link came up in the search: http://benking.de/Global-Change/spatial-spacial.html

I feel like that thought process is in an alternate universe to our programming-centric thinking, but it's exactly the same topic!

-----
