The name of 'pushnew isn't bad at all, so how about the names "construe" for 'consif and "conscheck" for 'conswhen? They would be cousins to the hypothetical 'pushtrue, 'pushcheck, and 'consnew.
The name of 'maybe could be... um... "combinetrue".
"You could also just disallow ssyntax chars in symbols."
How would ssyntax work, then?
Whatever gets passed to 'ssyntax or 'ssexpand oughta be able to contain symbols with ssyntax characters in them, or else they'll be trivial to implement. :-p
If I were designing a language with a.b and (a:b c) stuff, I'd implement that in the parser, like Semi-Arc does. However, I don't consider that Arc-compatible.
Arcueid used to do it that way, by parsing and expanding ssyntax at read time, rather than at compile time the way reference Arc does. It turns out that the reason why ssyntax expansion is thus deferred has to do with the way the ssyntax compose (:) and complement (~) operators work. They cannot be expanded properly by simple lexical substitution the way all other ssyntax operators can be, as they alter the structure of a sexpr that uses them, so it is impossible for the reader to do it. It has to be done once we already have the full sexprs, and that means that it becomes a step at compile time.
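For instance, here's roughly what Arc 3.1 does (from memory, so the details may be slightly off):

arc> (ssexpand 'a.b)
(a b)
arc> (ssexpand 'a:b)
(compose a b)
arc> (ssexpand '~a)
(complement a)

The dot case is a purely local rewrite of one symbol, but for : and ~ the compiler also rewrites the surrounding call, turning ((compose a b) c) into (a (b c)) (partly so that macros still work in functional position), and that rewrite needs to see the whole sexpr.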
On Google: The 14th hit (2nd page) is arclanguage.org/. The 17th hit is paulgraham.com/arc.html. I don't see any more results in the next few pages. This being a Google search, the results are tailor-made according to my personal search history. (Ironically, this means there is some value in asking someone else to Google it for you. :-p )
On DuckDuckGo: The 26th hit (2nd page) is www.paulgraham.com/arc.html. I don't see any more results in the next few pages. DuckDuckGo gives everyone the same search results.
---
Searching for other things...
Searching for [emacs arc] doesn't give me arclanguagewiki on the first ten pages, regardless of whether I use Google or DuckDuckGo. (In fact, DuckDuckGo runs out of results, or at least stops listing them, after about ten page breaks. Each page seems to have more and more results on it as well, so the scrollbar of Sisyphus makes it hard to count.)
Searching for [arc lisp], [learning arc], or [install arc] on DuckDuckGo doesn't seem to turn up arclanguagewiki at all. These searches work just fine for me on Google, turning up arclanguagewiki on the first page as you say.
Searching for [arc lang] puts the wiki on the first page using either engine.
Thanks for the feedback. After having announced social integration in search, Google seems to have gone a little cagey on it. I'm not sure if it's because of the mixed reaction to it.
When they first launched it, the tabs to use or not use social search results were right there on the top right section of the search page. They've since been moving it around and it shows only if you click "More search tools" on the bottom left, where one option is "All search results" and another is "Social". And it isn't immediately obvious how much weightage the first gives to social results, if any at all.
Anyway, "All search results" is what I used and it does not appear to display what I get if I pick just the "Social" option, so it should be a decent indication of what the average Google search will show. Haven't used DuckDuckGo yet, will mosey over.
I think a verbalization of "predikit" is a better way to read it. "Predikayte" already has a different meaning, one that could make people think of assertions... essentially the same meaning trouble as "testify," right? :-p
Personally, I think this is all moot, but I'd go with "checkify" or "to-check".
testify
Pro: Only 7 characters long.
Pro: Bunches with related words containing "ify."
Con: Bunches with unrelated words containing "test."
Con: Is a neologism if used in English.
Con: Has non-sequitur homonyms in English (one meaning "claim").
Con: The vocal majority here at Arc Forum seems to dislike it. :-p
predicate
Pro: Bunches with related words containing "icate."
Con: Debatable pronunciation.
"predikayte"
Con: Is a neologism if used in English, I think. It
reinterprets the verb as a verbalization of the "predikit"
version, when "predikit" is actually something of a
nominalized form of "predikayte" in the first
place[Wiktionary]. Is it already used this way?
Con: Has non-sequitur homonyms in English (one meaning
"assume").
"predikit"
Pro: Same as a related term used in English discussion.
"predikahtay"
Con: Has a non-sequitur homonym in Italian (meaning
"preach")[Wiktionary].
Con: Since it has the same spelling as a noun, it may conflict with
other noun-based names (e.g. accessors, local variables).
Con: In English discussion, needs special formatting to look like a
variable name.
checkify
Pro: Bunches with related words containing "check."
Pro: "Check" is a related term that can be dropped casually into English discussion.
Con: "Check" has many non-sequitur homonyms in English (one meaning "restrict").
Pro: Bunches with related words containing "ify."
Con: Is a neologism if used in English.
to-check
Pro: Bunches with related words containing "check."
Pro: "Check" is a related term that can be dropped casually into English discussion.
Con: "Check" has many non-sequitur homonyms in English (one meaning "restrict").
Pro: Bunches with related utilities containing "to-".
Con: Is a downright technical term if used in English.
Con: "To check" could be seen as an infinitive form.
I believe Rainbow has unit tests intended for "any Arc implementation." I also think there are extensive unit tests in ar, Nu, and Wart, but I don't know that they're very language-implementation-agnostic (especially with Wart). Nu's benchmarks are meant to be runnable on multiple Arc implementations for comparison purposes, so they could be a good start.
Personally, I consider something to be my kind of Arc once it's easy for me to adapt Lathe (my library of frameworks) to its quirks, even if Lathe doesn't work out of the box--and even if not all features of Lathe are available. That is to say, whatever set of assumptions I implicitly rely on in Lathe forms a common subset of several Arc-like languages, and I might as well call that common subset "Arc." I admit it's vague (what features of Lathe are optional and what aren't?) and that it isn't necessarily the same as what someone else would call Arc. Nevertheless, Lathe has a few unit tests to make this goal more discrete, in the examples/ directory.
I've had "try Lathe with Arcueid" on my mind for a while, and I keep applying my motivation to other things instead. Actually, the way I think of it is more like "try Lathe with Rainbow.js, Arcueid, the Nu family, and the most recent version of Anarki," so when I do work on it, I dedirect my attention toward the sub-task of getting Rainbow.js to load libraries. >.>
It's safe to use these as a specification. If you fire up an arc3 repl within the rainbow src directory you can run the same tests to verify you get the same behaviour.
Hmm, core-evaluation-test.arc seems to hang Anarki, as well as Arc 3.1.
Well, I've tried to run the tests that do work fine with 3.1 and Anarki under Arcueid and find a lot of issues. For starters, I had no idea that Arc treats symbols with |'s specially. Looks like more accidental behavior inherited from MzScheme/Racket. Scheme48 says that '|abc| contains illegal characters. Guile creates a symbol |abc|. I don't see anything in R6RS that mandates any of this behavior. Heh, looks like I've got a lot of work to do!
Apparently the bars in symbols are a sort of convention when it comes to case sensitivity of symbols in Scheme. It seems that Arc, in its current implementation anyway, is unintentionally inheriting a lot of onions from MzScheme...
"I also think there are extensive unit tests in Nu"
Older versions of Nu had lots of unit tests, yeah, but I haven't ported them over to the latest version of Nu yet and probably won't do so anytime soon, as I've unfortunately lost the motivation to work on anything Arc related.
---
"Nu's benchmarks are meant to be runnable on multiple Arc implementations for comparison purposes, so they could be a good start."
Yeah, but for now all the Arc implementations need to be built on top of Racket, so it works for ar, Arc 3.1, Nu, etc. but not, say, Rainbow. Getting it to work with non-Racket processes is probably doable - albeit difficult - and it would come at the cost of accuracy in the tests.
The accuracy problem could be mitigated with some sort of namespace system such as the one Nu could have if I ever actually built the damn thing. But in that case, you might actually be better off building the benchmark tester program in C/D/Go/Racket/whatever and using FFI to talk to the different implementations...
"The combination of quoted parameters and caller-scope seems to do everything vau does, as far as I can see. I'd love comments on this way of decomposing things."
Can you implement vau like this?
; (vau (a b c) env ...body...)
; ==>
; (fn '(a b c) (let env caller-scope ...body...))
;
(def vau '(parms env . body)
(eval (list fn (list quote parms)
(list* let env 'caller-scope
body))
caller-scope))
If so, I think you're pretty well off the way you are. ^_^
What is 'caller-scope bound to at the top level?
---
"It seems all its drawbacks wrt vau involve hygiene, which I want to explore ignoring."
What drawbacks are those?
---
"Phew, that was a two-week-long tangent that involved me ripping wart down to its foundations before building it up again, and revisiting all my assumptions about how things work."
I get the sense that vau/wrap/unwrap are the 'structured' equivalents of caller-scope. They're more well-behaved, and they make it harder to do super ugly things like functions reaching in to modify their caller's scopes ^_^. This well-behavedness also makes it tractable to specify their semantics and prove theorems about the calculus: regularity, smoothness, hygiene, and whatnot.
Actually, I see them as just being stylistic differences... Kernel already lets you mutate the caller's scope via $set! so I assume you're talking about the parent of the caller's scope... Yeah you can't do that, but you could make a language very similar to Kernel, with that one thing changed if you wanted to.
Or perhaps you're talking specifically about functions mutating their environment... well you can do that in Kernel too:
(wrap ($vau ... env ...))
The above creates a function that has access to its dynamic scope. This isn't used most of the time in Kernel, but it is used in a couple places, like the "get-current-environment" function. Most of the time you would use $lambda.
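If I remember the report right, that's literally all get-current-environment is:

($define! get-current-environment (wrap ($vau () e e)))

The wrap turns the operative into an applicative, and the underlying ($vau () e e) just hands back its dynamic environment, i.e. the environment of whoever called it.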
I think the benefit of vau/wrap/unwrap is that it makes it super easy to coerce between functions/fexprs. It's also very clean and easy to reason about. But I don't see them as being necessarily more "well behaved" than wart's approach.
One other benefit: the environment argument is local, rather than a global hard-coded "caller-scope". This not only lets you write it shorter (such as "env") but also avoids collisions in the case of nested $vau's:
($vau ... env1
($vau ... env2
...))
I suppose in that one case, $vau is more well-behaved. You can emulate that behavior with let, though:
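Something along these lines, say (an untested sketch; my-if is just a stand-in fexpr):

; grab the caller's scope under a local name right away, so a nested
; fexpr's caller-scope can't get confused with this one
(def my-if '(test then else)
  (let env caller-scope
    (if (eval test env)
      (eval then env)
      (eval else env))))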
For all my rhetoric about "give the programmer absolute powa!!1!" I'm uncomfortable making caller-scope too easy to use ^_^. It'll just be the next case of, "when I understand why I need this power, I'll mix it in."
If I just saw 'caller-scope in pseudocode somewhere, I'd implement it as an anaphoric variable of 'fn (and I'd implement a non-anaphoric variant of 'fn to base it on, so we'd basically be at vau again :-p ). So at the global scope, my implementation would just treat 'caller-scope as a global variable.
For a third option, maybe 'caller-scope at the top level should be the scope of eval's caller. I'm not sure the point of that, and it might make the language even harder to optimize, but I'm just throwing it out there.
Yeah, but then you can say (eval ... caller-scope) at the top-level, and if your language has a facility for setting variables in a particular scope, you can say this:
(env= caller-scope foo 5)
And you can say this:
(let global caller-scope
...
(env= global foo 5)
...
(eval ... global))
Kernel already does this via the "get-current-environment" function:
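Roughly like this, I mean (a sketch using the report's $define!, $set!, and get-current-environment, with ' for quoting as elsewhere in this thread):

; run at the top level, so this captures the global environment
($define! global (get-current-environment))

; later, from anywhere that can see the binding for global:
($set! global foo 5)       ; bind foo to 5 in the global environment
(eval '(+ foo 1) global)   ; => 6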
Just to be clear: Kernel's "get-current-environment" is not the same as the environment argument passed to fexprs. But since wart is implementing the environment argument as a global hard-coded "caller-scope" variable, I think it makes sense to merge the two in wart, even though I think it's cleaner to have them be two separate things, as in Kernel.
Which reminds me... rocketnia, you mentioned being able to define vau in terms of wart's magical fn. You're mostly right, except... there will be two ways to grab the environment: the env parameter and the implicit global caller-scope. So that's a bit of a leaky abstraction.
On the other hand, it's quite easy to define wart's magical fn in terms of vau... with no leaky abstractions! The only catch is that caller-scope will always refer to the global scope inside vau forms... that could be changed too, by extending/overwriting vau, but I didn't do that:
(= get-current-environment (wrap (vau () e e)))
;; don't define this if you don't want caller-scope at the global level
(= caller-scope (get-current-environment))
;; simple fn, without magical quoted args or caller-scope
(= fn (vau (parms . body) env
(wrap (eval `(,vau ,parms nil ,@body) env))))
;; omitting other stuff like afn, no, caris, list, etc.
;; but they can all be defined in terms of simple fn
...
;; makes these variables hygienic
(with (with with
eval eval
let let)
;; this recreates the part of wart that is currently handled in C++
(= parse-fn (fn (parms env)
(if (caris parms 'quote)
(list (cadr parms) nil)
((afn (x vars parms)
(if (no x)
(list (rev parms)
(rev vars))
(caris (car x) 'quote)
(self (cdr x)
vars
(cons (cadar x) parms))
(let c (car x)
(self (cdr x)
`((,eval ,c ,env) ,c ,@vars)
(cons c parms)))))
parms nil nil))))
;; overwrite with the complex fn which has quoted args and caller-scope
(= fn (vau (parms . body) env
(w/uniq new-env
(let (parms vars) (parse-fn parms new-env)
(eval `(,vau ,parms ,new-env
(,with ,vars
(,let caller-scope ,new-env
,@body)))
env))))))
I have not run the above code, but it should give the general idea... Basically, what it does is it takes this...
(fn (a 'b . c)
...)
...and converts it into this (where "g1" is a gensym and "with", "eval", and "let" are hygienic):
(vau (a b . c) g1
(with (a (eval a g1)
c (eval c g1))
(let caller-scope g1
...)))
I know the above code looks verbose, but keep in mind that quoted args and implicit caller-scope are currently handled in C++, whereas I recreated them from scratch using vau + simple functions (defined with wrap).
One more caveat: I'm not sure how you implemented caller-scope. If it's hardcoded into every function such that functions can't have an argument called "caller-scope" then the above will work the same way. But if it's a global implicit parameter like in Nu and ar, then you'll need to search the arg list for an argument called "caller-scope", which is doable but would require some more code, especially for handling argument destructuring.
---
Anyways, I'm not saying you should necessarily do it this way in wart, but... I think it's cleaner, conceptually, for functions to be defined in terms of vau, rather than vau being defined in terms of functions.
Of course, as I already said, if you care about cleanliness, I'd suggest just using Kernel, which doesn't support the magical fn like wart, which I consider to be a good thing.
"Which reminds me... rocketnia, you mentioned being able to define vau in terms of wart's magical fn. You're mostly right, except... there will be two ways to grab the environment: the env parameter and the implicit global caller-scope. So that's a bit of a leaky abstraction."
Hmm... yeah, it's a pretty leaky abstraction. It's one of these features with ambient authority:
(if designed like a global variable assigned at the beginning of each call)
Right now, get the most recently started call's caller's environment.
(if designed like a continuation mark)
Right now, get the deepest-on-the-stack call's caller's environment.
(if designed like a local variable bound by each (fn ...))
Given a non-global lexical environment, its local variables must have been initially bound by some call, so get the lexical environment that was used to evaluate that call.
---change of topic---
...Actually, the first two of these designs are broken! Say I define a simple fexpr like this one:
(def idfn '(result)
(eval result caller-scope))
Now suppose the code ((fn () (idfn caller-scope))) runs inside another call, ((fn (a) ...) 1). The call stack at the evaluation of 'caller-scope looks something like this:
((fn (a) ((fn () (idfn caller-scope)))) 1) ; eval'd in global scope
((fn () (idfn caller-scope))) ; eval'd in local scope of (fn (a) ...)
(idfn caller-scope) ; eval'd in local scope of (fn () ...)
(eval result caller-scope) ; eval'd in local scope of idfn
caller-scope ; eval'd in local scope of (fn () ...)
IMO, the value we get should be the local scope of (fn (a) ...), so that it doesn't matter if we replace ((fn () (idfn caller-scope))) with ((fn () caller-scope)). However, the calls to 'idfn and 'eval are deeper on the stack and their start times are more recent, so we get one of those caller scopes instead.
Does Wart have this awkward behavior, akkartik?
---/change of topic---
To get back to the point, the local-variable-style version of the feature (the only version that works?) doesn't strike me as completely undesirable. It lets the programmer evaluate expressions in arbitrary ancestor stack frames, and that isn't a shocking feature to put in a debugger.
Sure, it makes it almost impossible to hide implementation details and keep privilege from leaking to all areas of the program, but it could be useful in a language where all code that runs in a single instance of the language runtime has been personally scrutinized (or even written) by a single programmer. That's the primary use case akkartik has in mind anyway.
I'm impressed you're managing to keep track of that with mutation. Are you setting 'caller-scope on every entry and exit of a function and on every entry and exit of an expression passed to 'eval?
Oh, Pauan had mentioned 'caller-scope being a "global hard-coded" variable, and I didn't see you disagreeing, so I assumed you were just assigning to it a lot behind the scenes. :-p
It's likely that as I reflect on vau, wart will start to look more like Kernel.
But I don't think it will become Kernel. Kernel seems really elegant as long as you give up quote and quasiquote. I suspect if you bolt them onto Kernel it'll have slid from its sweet-spot peak of elegance. A language that encourages quoting and macros will diverge significantly from Kernel.
One simplistic way to put it: Kernel is scheme with fexprs done right, while wart tries to be lisp with fexprs done right. It's far from that local extremum, though. It's even possible the wisdom of caring about hygiene in a lisp-1 is deeper than I realize. Perhaps quasiquote needs to go the way of dynamic scope.
"I'm not sure how you implemented caller-scope. If it's hardcoded into every function such that functions can't have an argument called "caller-scope" then the above will work the same way."
If you want 'caller-scope to be built into the language and you want to skirt the issue of name collisions, you could devote a separate reader syntax like "#caller-scope" to it and make it a type of its own.
"..if your language has a facility for setting variables in a particular scope.."
In wart, scopes are just tables. It's not a good idea to modify them, but nobody will stop ya :)
"Kernel's "get-current-environment" is not the same as the environment argument passed to fexprs."
Can you elaborate? In
($vau foo env
..)
(get-current-environment) would return the callee scope, and env contains the caller scope, right?
Wart doesn't currently have an easy way to get at the callee scope (you could peek inside (globals)), but it's on my radar to change that if necessary.
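The shape in question is something like this sketch (names chosen to match the description below):

($let ((env1 (get-current-environment)))
  ($vau (foo) env2
    ($let ((env3 (get-current-environment)))
      ...)))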
In the above, env1 is the lexical scope where the $vau is defined. It does not include bindings for foo, env1, env2, or env3, but contains all the bindings on the outside of the $let, where the $vau is defined.
env2 is the dynamic scope where the $vau is called; it does not include bindings for foo, env1, env2, or env3.
env3 is the environment of the $vau itself. This environment inherits from env1, and thus is lexical just as env1 is. And so, it contains all the bindings of env1, in addition to bindings for foo, env1, and env2, but not env3.
So, by using this combination of lexical and dynamic scope, you can express a huge variety of different things in Kernel.
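Concretely, here's a sketch of the example being walked through (not the exact code; the two error-producing steps are commented out, with results noted in comments, and ' is used for quoting as elsewhere in this thread):

($let ((global (get-current-environment)))
  ; (eval 'x global)            ; error: x is unbound in the global environment
  ($set! global $foo
    ($vau (x) env
      (eval 'x env)             ; => 10, from the ($let ((x 10)) ...) at the call site
      ; (eval 'env env)         ; error: env is unbound in the dynamic environment
      ($let ((inner (get-current-environment)))
        (eval 'x inner)         ; => 20, the $vau's own binding of x
        (eval 'env inner)       ; => the dynamic environment itself
        (eval 'global inner)))) ; => the global environment
  ($let ((x 10))
    ($foo 20)))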
First, we use $let to bind the variable 'global to the global environment using get-current-environment. Then we try to evaluate the symbol 'x in that environment. The variable 'x of course does not exist in the global environment, so it throws an error.
Then, we create a $vau form and assign it to the variable $foo in the global environment. This is the same as saying ($define! $foo ...) at the top level, but is necessary when inside a $let.
Now, at the bottom, we use $let to bind the variable 'x to 10 and then call $foo with the single argument 20. First, $foo tries to evaluate the variable 'x in the dynamic environment "env". This should display 10, because it's using the binding in the $let where $foo is called.
We then evaluate the variable 'env in the $vau's dynamic environment. But the variable 'env does not exist in the dynamic environment, so it throws an error.
Then, we use a $let inside the $vau to grab a hold of the $vau's inner environment and bind it to the variable 'inner. We then evaluate the variable 'x in the $vau's inner environment. This should display 20, because it's using the binding inside the $vau, which is bound to the $vau's argument 'x which was 20 when calling the $vau.
We then evaluate the variable 'env in the $vau's inner environment, which evaluates to the dynamic environment, because inside the $vau, the variable 'env is bound to the dynamic environment.
Lastly, we evaluate the variable 'global in the $vau's inner environment. This evaluates to the global environment because the $vau's inner environment inherits from the top-level $let where the variable 'global is bound.
---
Here is a picture showing the environments in the above code:
The global environment is pink, the first blue environment is the $let's environment that inherits from the global environment, plus a binding for the variable 'global. The green is the $vau's environment, which inherits from the $let's environment, plus bindings for the variables 'x and 'env. And the yellow is the $let's environment that inherits from the $vau's environment.
Lastly, the second blue environment is the dynamic environment that is bound to the 'env variable inside the $vau when it is called.
Thanks. So what Kernel calls the dynamic environment is identical to caller-scope, right?
I think I find this confusing because when I hear 'dynamic environment' I keep thinking of dynamic scope (special variables in common lisp). If we only create bindings using let and combiner parameters, the dynamic environment is just the caller's static environment, is that right?
In Kernel, the dynamic environment is exactly analogous to dynamic scope. The difference is that lexical scope is the default, and you have to explicitly ask to evaluate things with dynamic scope, rather than earlier Lisps which used dynamic scope by default and didn't even have lexical scope.
Special variables in Common Lisp are similar to dynamic scope, except you're flagging only certain variables to be dynamic, whereas the dynamic environment in Kernel lets you evaluate any variable with dynamic scope.
---
"If we only create bindings using let and combiner parameters, the dynamic environment is just the caller's static environment, is that right?"
Yes. In fact, in Kernel, $let is defined just like it is in Arc, using $lambda. And $lambda uses $vau, and $vau is the only way to create new environments in Kernel, as far as I know. So, environments are all about $vau.
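If I remember the report right, the derivation is roughly:

($define! $let
  ($vau (bindings . body) env
    (eval (cons (list* $lambda (map car bindings) body)
                (map cadr bindings))
          env)))

i.e. ($let ((x 1) (y 2)) body) just becomes (($lambda (x y) body) 1 2), evaluated in the dynamic environment.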
---
By the way, I just edited the post you're replying to, and added a picture.
The above should return a list containing two elements: the number 10 and $bar's environment. The way I think of it is like this:
($define! $bar
($vau (x) env
($foo x env)))
That is, the $bar fexpr implicitly passes its own internal environment to the $foo fexpr. And $let's work this way as well, so that this:
($let ((x 10))
($bar x))
Would be equivalent to this:
((wrap ($vau (x) env
($bar x env)))
10)
So it's really all about $vau's passing their own internal environment whenever they call something else, similar to the idea of all functions implicitly passing their own continuation when calling something else.
Very cool. While struggling with the $vau concept I thought about whether I could build lazy eval directly using $vau, without first building force and delay (SICP 4.2). But it's not quite that powerful, because lazy eval also needs the ability to stop computing the cdr of a value. Your intuition tying $vau to continuations is a similar you-see-it-if-you-squint-a-little correspondence.
I somewhat remember either the Kernel Report or John's dissertation mentioning the similarities between continuations and environments, but I might be mistaken on that...
---
"I thought about whether I could build lazy eval directly using $vau"
Directly? Not sure, but §9.1.3 of the Kernel Report does show how to build promise?, memoize, $lazy, and force as a library using existing Kernel features.
"Special variables in Common Lisp are similar to dynamic scope, except you're flagging only certain variables to be dynamic, whereas the dynamic environment in Kernel lets you evaluate any variable with dynamic scope."
Ah! I considered this approach when I was building dynamic variables in wart, but it seemed less confusing to attach the scope to the binding. Wart is unlike Common Lisp (SBCL, say) in this respect:
wart> = foo 34
wart> def bar() foo.
wart> bar.
34
wart> let foo 33 bar.
34
wart> making foo 33 bar.
33 ; dynamic scope
I'm starting to appreciate that $vau and first-class environments give the programmer a lot more power (aka rope to hang himself with :) than caller-scope.
The reason for this is that Common Lisp explicitly states that if you use "let" on a dynamic variable, it dynamically changes that variable for the scope of the "let". So Common Lisp combines both "let" and Racket's "parameterize" into a single form, rather than keeping them separate.
You don't define variables to have dynamic scope. They automatically have dynamic scope when you evaluate them in the dynamic environment:
($vau () env
(eval 'x env))
The above, when called, will evaluate the variable 'x in whatever scope it was called in. Thus, the above $vau has just "made" the variable 'x dynamic. But of course this ability to evaluate things in the dynamic scope is only available to fexprs.
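For instance (a sketch; $dyn-x is just a name for the example, and ' is used for quoting as above):

($define! $dyn-x ($vau () env (eval 'x env)))

($let ((x 1)) ($dyn-x))   ; => 1
($let ((x 2)) ($dyn-x))   ; => 2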
---
If you're talking about something similar to Racket's parameterize, you should be able to build that using dynamic-wind. Then something like this...
Warning: untested and may not work. However, if it does work then it will work for all variables, global or local, without declaring them as dynamic beforehand. So this will work:
($let ((x 5))
... x is 5 ...
($parameterize ((x 10))
... x is 10 in here ...
)
... x is 5 again ...
)
The catch is that in order to parameterize global variables, the $parameterize form itself needs to be at the top level, because it's mutating the immediate ancestor environment.
If you want to parameterize something other than the immediate environment, use $parameterize-redirect which accepts an environment as the first argument, and then evaluates in that environment instead.
Not for me. If you need it, the IP address I get is 207.97.227.245, the same as for other GitHub Pages sites.
But I see how having just the IP address isn't going to help you, since it doesn't distinguish between GitHub Pages. XD Maybe you could temporarily add julialang.github.com to your hosts file or something.
The site at http://julialang.github.com/ 301-redirects to http://julialang.org/. I did mention julialang.github.com above, but I was reluctant to linkify it since it probably wasn't going to help on its own. :-p
Come to think of it, adding a hosts file entry for julialang.github.com probably won't help. Adding one for julialang.org might work though. (If I keep changing my mind, I'll be right sooner or later!)
If your browser supports gzip (and whose doesn't?) it'll probably be about 80KB of data, nothing to worry about, lol. If it doesn't, it'll be about 295KB. If it doesn't and there isn't any caching going on, it'll be about 375KB (because currently, (load ...) makes two duplicate AJAX requests).
You can still go here to get a REPL with no libraries:
It uses an "e" option in its call to Console.mainAsync(). This corresponds to Java Rainbow's -e command line parameter, and it takes an Array of strings, each string containing a single Arc command to evaluate. The results are printed as part of the initial output, the part before the first prompt shown.
The first Arc command passed into "e" is (assign load-from-web (fn ...)), which results in the big (fn ...) at the beginning of the output.
The second, which is only passed in when auto-loading the libraries, is (load-from-web), and that performs all the libraries' own side effects (visible as the "* redefining..." lines) and finally results in nil.
"A continuation can be invoked from the thread (see Threads) other than the one where it was captured."
Okay, that's not very helpful on its own.... The rest of the Racket docs don't seem to help either.
---
Based on nothing but intuition, what I expect to happen in your example is for the REPL thread to print "alive", sleep for the specified amount of time, print "dead", and terminate.
Now that I mention it, that could be what you're seeing. XD I don't know why "dead" didn't print in the full transcript you posted, but maybe the thread terminated before it got a chance to flush? How long did it take for the Racket REPL to appear?
By the way, you say "Trying to do the same thing in Arc 3.1," but what was the first attempt in? I'm guessing Anarki, but maybe it was some member of the Nu family. :)
If you say (thread (x 1)), does that act the same as the first thread? At the least, I'm guessing it won't wipe out the REPL.
Hmm, considering the behavior we're seeing, I think the following is true: a continuation captured from a different thread will execute in the thread that invokes it, not in the thread it comes from. So ccc doesn't have the ability to resurrect the dead, nor will it interrupt a thread that is already running. This is the reason why invoking the continuation above from the REPL kills the REPL. The REPL thread was "possessed by the spirit of the continuation," so to speak, so when the continuation terminated, so did it, and thus we got dumped into the Racket REPL. I suppose that to emulate this behavior, Arcueid should terminate when the continuation finishes execution, because its REPL thread would then exit when the continuation returned to nowhere.
I suppose endowing ccc with the power of necromancy is a little too much. XD
As I read the list, I noticed they were severely out of order, with unconvincing excuses for why one thing led to the next and poorly placed this-is-where-s###-gets-real moments like "You begin to understand." :-p But that's fine. The list is probably a valid reflection of the author's own experience with programmers, and I doubt it stereotypes programmers in any harmful way.
What follows is an autobiography by counterexample. That is, it's my anecdotal evidence for the original order being out of whack.
For a high-level view (or for the sake of tl;dr), my progression is 1-8-M-4-5-7-2-10-7-9-9-3-9-M-6-5-9, and I consider my progress to be incomplete in levels 2, 8, 10, and M.
---
Level 1, The Read-and-Type
Level 8, Meta Man (In the sense that I approached every problem by designing a new language, not in the sense that I actually implemented easy-to-use languages and tools! I'm still working on that, lol.)
Mystery Level, Computer Scientist (in the sense that I hoped my pseudocode languages were advancing the state of the art, though they really weren't)
Level 4, Object-Dot-Method Guy
Level 5, Multiple Paradigm Man (in the sense that I knew that I could get certain things easily from one language that were more difficult or impossible in another)
Level 7, Architect Astronaut (not because I confused complexity with value, but because I was finally able to shoehorn abstract concepts into existing languages rather than keeping them in pseudocode)
Level 2, The Script Kiddie (in the sense that I started using filesystem calls sometimes but not on a whim)
Level 10, Language Oriented Designer (in the sense that I decided it would always be true that with certain shoehorned pseudocode, it makes life easier to admit it's a language of its own)
Level 7, Architect Astronaut (in the sense that my languages would have had kitchen sinks at this point in my life)
Level 9, Functional Nirvana (but only in the sense that I knew I'd learn a lot about custom syntax from a lisp)
Level 9, Functional Nirvana (in the sense that as I approached lisp--by way of Groovy--I discovered the sweetness of using first-class procedures for list comprehension, custom control flow, event handlers, etc.)
Level 3, The Librarian
Level 9, Functional Nirvana (in the sense that I actually started using a lisp, Arc)
Mystery Level, Computer Scientist (in the sense that I realized the language-oriented technique I wanted to pursue was not available in any language, and I expected to need to advance the state of the art)
Level 6, Architect Apprentice (Thanks to my frustration at the places Arc's modularity, customizability, and extensibility fall short (especially its modularity), and the fact that there was no other language I liked better, I've focused on Arc library design in particular and learned what approaches I liked and what approaches I could expect even if I didn't like 'em.)
Level 5, Multiple Paradigm Man (in the sense that every way to improve on Arc's code reusability seemed to require a compromise)
Level 9, Functional Nirvana (in the sense that I've realized that expressing certain kinds of modularity takes a reactive model and/or a static type system, and that an understanding of pure FP does me wonders for elucidating the semantics of both of these things)
---
For humility's sake, here's a list (a checklist?) of the levels I know I haven't fully explored:
---
Level 2, The Script Kiddie (in the sense of not being shy to write a program to automate a mundane task)
Level 8, Meta Man (in the sense of implementing easy-to-use languages and tools)
Level 10, Language Oriented Designer (in the sense of implementing easy-to-use tools for myself and others to make languages with)
Mystery Level, Computer Scientist (in the sense of framing my work in academic terms)
Hey, that is comprehensive. And I do agree that the levels in the article reflect the author's own experience, which he also admits. Like yours, most people's paths would differ from the progression in the article, though I don't think most would document theirs quite as thoroughly!