If they're immutable, then great. But using shared memory to transfer state is flawed, and can cause race conditions, etc. That's why we're using the actor model to begin with, right?
I suppose they can't really be immutable, or we couldn't do hot code loading. How does Erlang do it?
Heh, you're right. So, in that mindset, why not give a lot of nice macros for controlling share vs copy, and make the default be copy? Then programmers could control nearly everything. Of course, they could always hack on your vm if they really wanted tons of control.
But still, concurrent writing to a global variable sounds dangerous.
I kind of like the idea of them being "registered processes." I'll have to do some more thinking on that.
>It doesn't, actually...
Yes, that answers some of the question, but I was a bit more interested in how they implemented their hot code loading. The old code still exists for a while, as existing processes continue to use it, but eventually the old functions are phased out and everything swaps over to the new ones.
IMHO, hot code loading is a very nifty feature; combined with a remote REPL, it's especially useful. I don't know how well current lisps support hot swapping, but I don't think it can work effectively without a concurrent system.
> Yes, that answers some of the question, but I was a bit more interested in how they implemented their hot code loading
It's in the OTP library actually. For example, they have a standard gen_server module. The gen_server would look approximately like this in snap:
  (def gen-server (fun state)
    (<==
      ('request pid tag param)
        (let (state . response) (fun state param)
          (==> pid (list 'response tag response))
          (gen-server fun state))
      ; hot code swapping!!
      ('upgrade new-fun)
        (gen-server new-fun state)
      ('stop)
        t))
So yes: hot swapping just means sending in a message with the new code ^^
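For instance, upgrading a running server (server-pid and new-fun being whatever the upgrader has at hand) is just:

  (==> server-pid (list 'upgrade new-fun))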
It's actually more complex than that - they generally make messages include a version number, so that nodes running new versions can communicate with nodes running older versions. This versioning is, in fact, part and parcel of the gen_server series of functions: requests to servers are made via functions which send a message (with tags and other metadata such as versions abstracted away) and then wait to receive the reply.
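A rough sketch of such a request wrapper in snap (call-server, mypid, and uniq are names I'm making up here, and I'm assuming the receive pattern can match against an already-bound value via ,tag):

  (def call-server (pid param)
    (let tag (uniq)                  ; fresh tag for this request
      (==> pid (list 'request (mypid) tag param))
      (<==                           ; wait only for the reply carrying our tag
        ('response ,tag response)    ; assume ,tag matches the bound value
          response)))

A protocol version number would just ride along as one more element of the 'request message, matched the same way.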
I think what they say is, the programmers good at concurrency write the gen_server and other parts of the OTP, while the average programmers write application code ^^
Much of Erlang isn't implemented in the VM level ^^
It makes sense that they wouldn't do that at the vm level. Your code even makes sense, though I thought "let" only assigned one variable.
I'm still not quite able to read arc fluently, so any explanations of the subtleties I likely missed will always be appreciated. Come to think of it, any explanations of any code would be nice, as the thoughts and reasons behind code don't always come out in the source itself. And I also like learning new things :)
I'm using pattern matching. Although Arc doesn't actually have pattern-matching built in, someone wrote a pattern matching library a long time ago using macros: http://arclanguage.com/item?id=2556 and http://arclanguage.org/item?id=1825 . The modern evolution uses something like p-m:def to define a pattern-matching function, p-m:fn to create an anonymous pattern-matching function, etc.
('request pid tag param)
The pattern above means "match a 4-element list, whose first element is the symbol 'request, and which has 3 more elements that we'll call pid, tag, and param".
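So a message the client built as, say, (list 'request (mypid) tag 42) would match, with param bound to 42 (mypid being a hypothetical way to get the sending process's pid).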
(let (state . response) (fun state param)
This is a destructuring. It simply means that (fun state param) should return a cons cell, with the 'car of the cell being placed in state and the 'cdr being placed in response. So we expect fun to return something like (cons state response)
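The same destructuring works in any 'let, e.g.:

  (let (a . b) (cons 1 2)
    (list a b))  ; => (1 2)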
(==> pid (list 'response tag response))
Note the use of 'tag here. We expect that 'tag would be a 'gensym'ed symbol, and is used in pattern matching so that the client can receive the message it's looking for.
(gen-server fun state))
Plain recursion.
; hot code swapping!!
('upgrade new-fun)
(gen-server new-fun state)
On an 'upgrade message we just recurse with the new function while keeping the current state, so the very next request is handled by the new code.
Functions (that aren't closures) can be safely cached because they're immutable. If we assume arc.arc is part of the 'spec' (and hence itself immutable) then we can safely link each process to the same functions, but give each one its own global bindings, maybe?
> arc.arc is part of the 'spec' (and hence itself immutable)
But ac.scm itself is not immutable - cf. lib/scanner.arc, which redefines 'car and 'cdr (which are in ac.scm). If ac.scm, which is even more basic than arc.arc, is itself not immutable, then why should arc.arc be immutable?
So no.
In arc2c functions are represented by closures. Pointers to closures are effectively handles to the actual function code.
Now the function code is immutable (that's how arc2c does it - after all, all the code has to be written in C). When a function is redefined, we create a new closure, which contains a pointer to the new function code (which was already compiled and thus immutable), then assign that closure to the global variable.
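In C++-ish pseudocode, the representation might look something like this (a sketch of the idea only, not arc2c's actual layout):

  // the code pointer is immutable - it points into compiled C code;
  // redefinition allocates a fresh Closure and rebinds the global
  struct Closure : public Generic {
      Generic* (*code)(Closure* self, Generic** args, size_t n);
      size_t ncaptured;
      Generic** captured;    // closed-over values, owned per-closure
  };

Callers that already hold the old Closure* keep running the old code, which is exactly what hot loading needs.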
Basically my idea for a cache would also have an incremented version counter:
  #include <map>
  #include <string>
  #include <utility>
  // Generic (with clone()), Atom, and Heap are defined elsewhere in the vm
  class Process;

  class SymbolAtom : public Atom {
  private:
      std::string name;
      Generic* value;    // the current global binding
      size_t version;    // bumped each time the global is rebound
  public:
      friend class Process;
  };

  class Process : public Heap {
      /*blah blah heap stuff...*/
  private:
      // per-process cache: symbol -> (version we saw, our local copy)
      std::map<SymbolAtom*, std::pair<size_t, Generic*> > g_cache;
  public:
      Generic* get_global(SymbolAtom* a){
          std::map<SymbolAtom*, std::pair<size_t, Generic*> >::iterator i
              = g_cache.find(a);
          if(i == g_cache.end()){
              // not in cache: clone into this process's heap
              Generic* mycopy = a->value->clone(*this);
              g_cache[a] = std::pair<size_t, Generic*>(a->version, mycopy);
              return mycopy; // return our copy, not the shared value
          } else if(a->version == i->second.first){
              // no change since we cached: return our local copy
              return i->second.second;
          } else {
              // the global was rebound: recache
              Generic* mycopy = a->value->clone(*this);
              g_cache[a] = std::pair<size_t, Generic*>(a->version, mycopy);
              return mycopy;
          }
      }
  };
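For completeness, the writer side is what makes the versions move (set_global here is hypothetical, but it shows the invariant):

  // rebinding a global: swap the value and bump the version; every
  // process's cached copy then goes stale and is re-cloned lazily
  // on its next get_global
  void set_global(SymbolAtom* a, Generic* v){
      a->value = v;
      ++a->version;
  }

The value/version pair would need to be updated atomically (or under a lock) if processes map to real threads.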