I think in a case like hosting web apps, you can get away with using the OS's scheduling to run lots of instances of your server software and call it architectural concurrency. Witness Rails' standard deployment model with a pack of Mongrels. Each Rails instance is single-threaded, but you run several on the machine and let the OS juggle them.
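As a rough sketch of that shape (not Rails or Mongrel specifically, just the pattern), here's a little Go program that starts a few copies of some hypothetical single-threaded server binary, each on its own port, and leaves all the scheduling to the kernel. The "./myserver" binary and its -port flag are made up for the example:

    // process_pack.go: "pack of Mongrels"-style concurrency, sketched in Go.
    // Start N copies of a single-threaded server, one per port, and let the
    // OS scheduler spread the processes across cores.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        const n = 4 // say, one instance per core
        procs := make([]*exec.Cmd, 0, n)
        for i := 0; i < n; i++ {
            port := 8000 + i
            // "./myserver" and -port are placeholders for whatever
            // single-threaded server you actually deploy.
            cmd := exec.Command("./myserver", fmt.Sprintf("-port=%d", port))
            if err := cmd.Start(); err != nil {
                log.Fatal(err)
            }
            procs = append(procs, cmd)
        }
        // Wait on the children; the kernel juggles them in the meantime.
        for _, cmd := range procs {
            if err := cmd.Wait(); err != nil {
                log.Println(err)
            }
        }
    }

A load balancer in front of those ports is the missing piece, but the concurrency itself lives entirely in the OS, not in your code.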
I think the appeal of Erlang and languages like it is for when you have a single app that you want to speed up with additional cores. (It's hard to run a database on a bunch of commodity PCs as separate instances... it's done that way today, but will multi-master DBs really scale to 10,000 machines? You'd probably have to ditch the central DB for something more parallel.) If we really have hit the single-core speed limit, then this will eventually matter, but not until the single thread becomes too big for one core to handle. (That's assuming software keeps requiring more resources over time... who knows.)
To get away from the webapp example... I imagine, for the foreseeable future, your web browser is really only going to need one core, and having more cores just means you can run more browsers at a time (like a pack of foxes, if you will), but a single browser won't get any speedup. In 10 years, when everything on the web is rendered in OpenGL 5 instead of text markup, maybe your browser will want some more parallelism (or maybe we'll just farm that off to dedicated graphics hardware).
I don't think it's really something critical for Arc right now. It's hard to follow the Erlang model without becoming Erlang (just like it's hard to do everything Lisp does without becoming Lisp), but I think that having parallelism baked into the language (or at least thought about) would be neat from a personal-coding-fun standpoint.
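For flavor, here's roughly what it looks like when a language does bake that in. This is Go, not Erlang and certainly not anything Arc has committed to; it's just the Erlang-ish idea of lightweight processes passing messages instead of sharing memory:

    // messages.go: a toy version of the "spawn cheap processes, send them
    // messages" style of concurrency, sketched in Go.
    package main

    import "fmt"

    // worker reads jobs from one channel and writes results to another,
    // never touching its peers' memory.
    func worker(id int, jobs <-chan int, results chan<- string) {
        for j := range jobs {
            results <- fmt.Sprintf("worker %d handled job %d", id, j)
        }
    }

    func main() {
        jobs := make(chan int)
        results := make(chan string)

        // Spawn a few lightweight processes; the runtime maps them onto cores.
        for w := 1; w <= 4; w++ {
            go worker(w, jobs, results)
        }

        // Feed in some work, then close the channel so the workers finish.
        go func() {
            for j := 0; j < 8; j++ {
                jobs <- j
            }
            close(jobs)
        }()

        for i := 0; i < 8; i++ {
            fmt.Println(<-results)
        }
    }

The nice part is that adding cores doesn't change the code at all, which is the property you'd want a language to give you for free.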
Right now, two cores make JavaScript-intensive apps not suck. For instance, without two cores I can't handle Google's apps; with two cores they function nicely.