No need for bst-trav, since there is an elements function: you can just map over the elements.
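For example, a minimal sketch (assuming the posted elements does an in-order flattening; the real bst.hs may differ):

  data Tree a = Empty | Tree a (Tree a) (Tree a)

  -- assumed shape of the posted elements: in-order flattening
  elements :: Tree a -> [a]
  elements Empty        = []
  elements (Tree x l r) = elements l ++ [x] ++ elements r

  -- bst-trav is then just a map over the flattened tree
  travWith :: (a -> b) -> Tree a -> [b]
  travWith f = map f . elements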
In the data declaration, 'a' is a generic type parameter. The declaration itself places no constraint on it, but Haskell will infer from the comparison functions that the type must be an instance of class Ord (i.e. support <, <=, >=, >, max and min; == comes from Ord's superclass Eq). If the type is not an instance of Ord, the compiler will tell you so at compile time.
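Concretely, a sketch (the actual bst.hs may differ): even if you delete the type signature below, GHC still infers the Ord constraint from the uses of (<) and (>):

  data Tree a = Empty | Tree a (Tree a) (Tree a)

  -- GHC infers "Ord a =>" here from the comparisons in the guards
  insert :: Ord a => a -> Tree a -> Tree a
  insert x Empty = Tree x Empty Empty
  insert x t@(Tree y l r)
    | x < y     = Tree y (insert x l) r
    | x > y     = Tree y l (insert x r)
    | otherwise = t

Applying insert to a type with no Ord instance is then rejected with a "No instance for (Ord ...)" error before the program ever runs.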
Actually yes, in Haskell it is the same. Lazy evaluation means you could take, say, the element at index 4 of a map over an infinite list. For example, "map (*2) [1..] !! 4" evaluates to 10 without ever building the rest of the list.
I would argue that relying on overloaded comparison functions is a bad idea. Not all things have a 'natural ordering,' and even for those that do you might want to override it at some point.
I think reasonable people can differ on this. Passing comparators as arguments is an approach more typical of dynamically-typed languages (e.g. CL and Scheme 'sort'). In a statically-typed language, it is more common to use a comparison operator that specializes on the type (e.g. the Haskell Prelude's 'sort'). If a set of objects doesn't have a 'natural ordering', it might not belong in a binary search tree in the first place.
You have just seen a bunch of code that gets the job done, yet there is still an argument about whether the Lisp code should be passed a comparison function. If you are going to make this argument, you should show some actual code, because from this example alone it looks obvious which way is better.
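To make that concrete, here is a sketch of what the comparator-passing style would look like in Haskell (insertBy is a hypothetical name, modeled on Data.List.sortBy):

  import Data.Ord (comparing)

  data Tree a = Empty | Tree a (Tree a) (Tree a)

  -- the ordering is a parameter instead of an Ord constraint
  insertBy :: (a -> a -> Ordering) -> a -> Tree a -> Tree a
  insertBy _   x Empty = Tree x Empty Empty
  insertBy cmp x t@(Tree y l r) = case cmp x y of
    LT -> Tree y (insertBy cmp x l) r
    GT -> Tree y l (insertBy cmp x r)
    EQ -> t

  -- e.g. a tree of pairs ordered by their second component:
  -- insertBy (comparing snd) (1, 'b') Empty

The cost is threading cmp through every call; the benefit is that the ordering is no longer fixed per type.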
The line deriving (Eq, Show) asks the compiler to generate equality (==) and a printable representation automatically, which means that you can now do this:
Prelude BST> Tree 'a' Empty Empty
Tree 'a' Empty Empty
Prelude BST> let a = Tree 'a' Empty Empty
Prelude BST> a
Tree 'a' Empty Empty
Prelude BST> let b = Tree 'b' Empty Empty
Prelude BST> b
Tree 'b' Empty Empty
Prelude BST> a == b
False
Prelude BST> let c = Tree 'c' a b
Prelude BST> let d = Tree 'c' a b
Prelude BST> c == d
True
All this is guaranteed at compile time. Moreover, the Haskell version is much more readable. The use of modules removes unnecessary function prefixing, and there is no doubt about what the arguments to functions are when they are pattern matched.
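On the compile-time point: mixing element types is rejected before anything runs (a sketch; the exact message varies by GHC version):

Prelude BST> a == Tree True Empty Empty
<interactive>: error:
    Couldn't match expected type 'Char' with actual type 'Bool'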
Curiously, though, the Arc version actually has fewer tokens by my measurement.
But here is the interesting thing: now let's count just the unique tokens:
bst.arc 41
bst.hs 47
This is partly due to Haskell's pattern-matching style, where the same function name is repeated across multiple clauses and the '|' guard character is used again and again. Also, there are more functions in the Arc version, and in both pieces of code variable names are usually one character while function names run to several.
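For example, in a clause-style definition like this sketch (using the Tree type from above), the name size contributes five raw tokens but only one unique token:

  size :: Tree a -> Int
  size Empty        = 0
  size (Tree _ l r) = 1 + size l + size r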
A more balanced tree will speed up 'find' and 'insert' (roughly O(log n) instead of the O(n) worst case of a badly skewed tree). If those actions are performed far more often than 'remove', it would indeed be a good idea. Without knowing the use case, I cannot say which I would prefer.
Using an unstable language may not be a good idea for you. You might be better off learning a different Lisp, like Scheme or newLisp. But you should probably state what you want to do: you said you want to explore, but what are you exploring? Do you want to write scripts in Lisp instead of bash? Are you trying to write a specific kind of program?
I have to recommend avoiding newLisp; it's actually a very old-fashioned Lisp and most likely an evolutionary dead end. Clojure is another new Lisp dialect that seems worth a look. It runs on the JVM, which could be a big win in the short term.
First, I just want to play. What would I play with? Doing what I currently do in bash would be a good start. I'd also like to program things for my website: I have ideas about how some things on the web should work, but I cannot implement them.
I think this is partly just a communication problem; Dale Carnegie would advise taking a different tone. Instead of saying "politically correct", something more apologetic might work better: "I didn't have time yet; of course a better character set will be supported in the future."
I read his announcement and I completely disagree. Strings are pretty basic, and getting them right is part of the work of a language designer. They're more important than macros. Not getting strings right can cripple a language.
And to call not supporting Unicode "offensive" is missing the point. Supporting only ASCII makes the language less powerful: it means you can't use Arc for solving problems involving text manipulation in languages other than English, and that is a big space. Supporting only UTF-8 would make more sense.
And for all the Java bashing nowadays, Java got Strings right, and Perl, Python, PHP and Ruby didn't.
Java's Unicode support has historically been a mess too. They assumed that 16 bits would always and forever be enough for any code point. This was only "fixed" in 2004, with the supplementary-character support in J2SE 5.0, and the warts are still there.
I suppose the lesson to take away is that just about every language has messed up character sets at some point. It can't be a fatal mistake, then, but it certainly isn't one that makes any sense to repeat.
I can't believe he said that. At this stage no one really expected Arc to have any sort of UTF/Unicode/i18n support. He should have kept that to himself, and then users would have built the libraries on top of Arc. Well, I guess time will tell. I'll keep an eye on it.