Simply add an optional argument to `table' that specifies that all keys will be of a certain type. If it's specified that they'll be integers, then the compiler can initially implement the table as a vector rather than a hash. If the programmer then proceeds to use large or sparsely-distributed integers as keys, silently convert it to a hash-based implementation the next time the garbage collector runs.
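A rough sketch of what that fallback check could look like, written in Lua purely for illustration (the function name and the density threshold are my own assumptions, not any existing runtime's rule):

-- Hypothetical heuristic: keep the vector representation only while the
-- keys are positive integers and reasonably dense. The 50% density
-- threshold is an arbitrary choice for illustration.
local function should_stay_vector(t)
  local count, max = 0, 0
  for k in pairs(t) do
    if type(k) ~= "number" or k ~= math.floor(k) or k < 1 then
      return false                  -- non-integer key: must be a hash
    end
    count = count + 1
    if k > max then max = k end
  end
  return max <= 2 * count           -- sparse integer keys fall back to a hash
end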
vincenz is right: Lua deals with vectors this way. Actually, it doesn't even bother to ask you whether you are using it as a vector or as a hash table. Each table is implemented as both a vector (for numeric keys) and a regular hash table (for other keys).
This way, you've got performance (and Lua is one of the fastest "scripting" languages) with a really minimal language.
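For concreteness, here's a small Lua sketch of what that looks like from the programmer's side (the output comments are just illustrative):

-- One table, two kinds of keys:
local t = {}
t[1] = "foo"             -- goes to the array part
t[2] = "bar"             -- array part
t.name = "quux"          -- goes to the hash part

for i, v in ipairs(t) do print(i, v) end  -- 1 foo / 2 bar, in order
for k, v in pairs(t) do print(k, v) end   -- all three entries; hash keys in no guaranteed order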
I think that would be a good approach for Arc too: don't bother with vectors, hash tables are enough :)
Are you advocating the style commonly seen in PHP programming?
$a = array();
$a[0] = 'foo';
$a[1] = 'bar';
/* Fine, $a is an integer-indexed array. */
$a['another metasyntactic variable'] = 'quux';
/* Is $a an array (vector) or a hash table? I'm confused. */
for ($i = 0; $i < 3; $i++) {
echo $a[$i];
}
/* Oops, we looped one past the end. $a[2] was never set, so this
 * prints just "foobar" (plus an undefined-index notice for $a[2]).
 * So where did 'quux' go?
 */
(PHP keeps 'quux' under the string key; the "maximum numeric index plus one" rule only applies when you append with $a[] = ..., so a loop over integer indexes silently skips the string-keyed entry.)
Hash tables and arrays/vectors are two entirely different concepts. Please do not confuse them with each other.
PHP's brokenness doesn't mean Arc has to imitate it. You'd get a vector up until the $a['another metasyntactic variable'] line, at which point it would turn into a hash table with two integer keys and one string key. Asking for $a[2] would always either return nil or throw an exception regardless of whether it was a hash table or a vector at the time.
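For what it's worth, that is how Lua already behaves: reading a key that was never assigned gives nil, regardless of which representation the entry happens to live in.

local a = { "foo", "bar" }                    -- a[1] = "foo", a[2] = "bar"
a["another metasyntactic variable"] = "quux"  -- string key, separate from the integer keys
print(a[3])       -- nil, not "quux"
print(a[2])       -- bar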
Why such focus/fuss on the implementation data structure?
They're both just maps from a key to a value; let "the compiler" figure out how best to store them. (Mapping on top of a 'foreign' object is a completely different thing, and making it look the same may or may not be confusing.)
It's not just implementation, but also semantics.
A vector is an ordered structure with O(1) access and modification.
A hashtable is (usually) an unordered structure with amortized O(1) access and modification. The implementation details don't matter, but these characteristics do, since you rely on them to reason about your algorithms.
(Edit: You may think you can treat integer-indexed hashtables as vectors. Suppose you have a 'map->list' operation that applies fn to a given aggregate type and returns the list of results. The semantics of (map->list fn a-list a-vector) is obvious; (map->list fn a-list a-hashtable) is not so obvious, and eventually you'll end up with different map functions for different purposes. So it's a trade-off: fewer data types with more specialized functions, or more data types with fewer, more generic functions. Of course, where the optimal point lies is arguable.)
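To make that dilemma concrete, here is the same ambiguity spelled out in Lua (both function names below are made up for illustration); the two plausible readings disagree as soon as a table carries non-array keys:

local function map_array(fn, t)            -- treats t as a vector: array part only, in order
  local out = {}
  for i, v in ipairs(t) do out[i] = fn(v) end
  return out
end

local function map_all(fn, t)              -- treats t as a hash: every entry, order unspecified
  local out = {}
  for _, v in pairs(t) do out[#out + 1] = fn(v) end
  return out
end

local t = { "a", "b", label = "c" }
print(#map_array(string.upper, t))         -- 2
print(#map_all(string.upper, t))           -- 3, but element order is not guaranteed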
Can compilers be smart enough to figure out the data structure that minimizes your algorithm's computational complexity? I don't know. Eventually, maybe, but by then algorithms will be described at a very abstract level, I guess.
Of course you can merge these two characteristics (adding order information to hashtables, which I believe Ruby did recently in 1.9). It's a trade-off, affecting the constant factor.
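A minimal sketch of that merge, again in Lua (all names made up; deletion is ignored for brevity): a plain hash for lookup plus a key list for order, paying a small constant cost on every insert.

local function ordered_map()
  local data, keys = {}, {}
  local m = {}
  function m.set(k, v)
    if data[k] == nil then keys[#keys + 1] = k end   -- remember first-insertion order
    data[k] = v
  end
  function m.get(k) return data[k] end
  function m.each(fn)                                 -- iterate in insertion order
    for _, k in ipairs(keys) do fn(k, data[k]) end
  end
  return m
end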
That's basically my point: there's no reason to clutter the language semantics with the distinction. The reason I'm discussing implementation is to show that it can still be handled efficiently.