Arc Forum
Sorting a list by most common elements.
4 points by markkat 4886 days ago | 24 comments
I'm looking for a good way to sort a list by the most common elements, i.e. '(1 3 3 4 4 5 4) -> '(4 4 4 3 3 5 1).

I've been messing around in this neighborhood:

  (= codes (counts '(1 2 2 3 3 5 5 5)))

  (maptable (fn (k v) (sort (compare > v) (keys codes))) codes)

but I'm not there yet. :/



3 points by akkartik 4886 days ago | link

  (def sort-by-commonest(l)
    (sort (compare > counts.l) l))
This is a fun question. The trick is that the list gets passed to sort in two places: once as the thing to sort, and once (via counts.l) to build the table the comparison keys off of.
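
For readers more comfortable with Clojure, here is a rough sketch of the same idea; frequencies plays the role of counts, and the comparator closes over the count table:

  ;; build the count table once, then sort the original list by it
  (defn sort-by-commonest [l]
    (let [h (frequencies l)]                ; e.g. {1 1, 3 2, 4 3, 5 1}
      (sort (fn [a b] (> (h a) (h b))) l)))

  ;; (sort-by-commonest '(1 3 3 4 4 5 4))  ; => (4 4 4 3 3 1 5)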

---

I had a longer solution because I wasn't using compare or counts, so thanks for your hints.

Turns out I've been using a hand-rolled implementation of counts called freq. And, you know, I like my version better:

  (def freq(l)
    (ret ans (table)
      (each x l
        (++ (ans x 0)))))
Compare to the current counts:

  (def counts (seq (o c (table)))
    (if (no seq)
        c
        (do (++ (c (car seq) 0))
            (counts (cdr seq) c))))
Ugh. I'm starting to think do is a code smell. And so, perhaps, is if :)
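
The same contrast, sketched in Clojure for comparison (this freq is just an illustration; frequencies is the built-in equivalent of counts):

  ;; accumulator style, roughly the shape of freq above
  (defn freq [l]
    (reduce (fn [counts x] (assoc counts x (inc (get counts x 0)))) {} l))

  ;; Clojure already ships this as clojure.core/frequencies
  ;; (frequencies '(1 2 2 3 3 5 5 5))  ; => {1 1, 2 2, 3 2, 5 3}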

---

One possible solution involves passing in just the keys of counts and repeating each one the right number of times. But that feels ugly. In the real world you rarely have literal duplicate objects; more often you have objects with equal features but lots of other differing aspects. So let's generalize with a key function:

  (def freq(l (o f idfn))
    (ret ans (table)
      (each x l
        (++ (ans f.x 0)))))

  (def sort-by-commonest(l (o f idfn))
    (let h (freq l f)
      (sort (compare > h:f) l)))
Now you can try this:

  arc> (sort-by-commonest '((fox 1) (cat 1) (dog 1) (cat 2) (dog 3) (dog 2) (fish 1)) car)
  ((dog 1) (dog 3) (dog 2) (cat 1) (cat 2) (fox 1) (fish 1))
And this continues to work:

  arc> (sort-by-commonest '(1 3 3 4 4 5 4))
  (4 4 4 3 3 1 5) ; => not 5 1 because it's a stable sort
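
For comparison, the same generalization sketched in Clojure (just a sketch, not tested against the Arc version; the key function plays the role of f):

  ;; count by (f x), then sort the originals by the count of their key
  (defn sort-by-commonest
    ([l] (sort-by-commonest l identity))
    ([l f]
     (let [h (frequencies (map f l))]
       (sort-by (comp h f) > l))))

  ;; (sort-by-commonest '([:fox 1] [:cat 1] [:dog 1] [:cat 2] [:dog 3] [:dog 2] [:fish 1]) first)
  ;; => ([:dog 1] [:dog 3] [:dog 2] [:cat 1] [:cat 2] [:fox 1] [:fish 1])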
---

Song stuck in my head: REM's "What's the frequency, Kenneth?"

http://www.youtube.com/watch?v=ZIOXQp-YGpQ

-----

3 points by thaddeus 4885 days ago | link

Somehow I doubt this would be any better, and I haven't even considered performance... but just for fun:

  (def sort-by-commonest (xs)
    ((afn (rx rs)
       (if (empty rx) rs
           (let (x n)(commonest rx)
             (self (rem x rx)(join rs (n-of n x))))))
        xs nil))

  arc> (sort-by-commonest '(1 3 3 4 4 5 4))
  (4 4 4 3 3 5 1)
  
Generally speaking, I avoid iterative updates to tables, as I often find it ends up being less efficient than consing a collection together. Also, this is not a stable sort like yours, but maybe I can get some points for creativity?
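
For comparison, a rough Clojure sketch of the same remove-the-commonest loop (max-key over the frequency map stands in for Arc's commonest; the names are made up):

  ;; repeatedly pull out the current commonest element and emit n copies of it
  (defn sort-by-commonest-loop [xs]
    (loop [remaining xs, out []]
      (if (empty? remaining)
        (seq out)
        (let [[x n] (apply max-key val (frequencies remaining))]
          (recur (remove #{x} remaining)
                 (into out (repeat n x)))))))

  ;; (sort-by-commonest-loop '(1 3 3 4 4 5 4))
  ;; => (4 4 4 3 3 5 1)  ; tie order between 5 and 1 is not guaranteed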

-----

1 point by akkartik 4885 days ago | link

Point granted :)

-----

3 points by akkartik 4884 days ago | link

Turns out I can simplify the more general version so we don't need to change counts:

  (def sort-by-commonest(l (o f idfn))
    (let h (counts:map f l)
      (sort (compare > h:f) l)))
On balance this seems like a better idea. I've pushed it out to anarki: http://github.com/nex3/arc/commit/daff609d1a

-----

4 points by thaddeus 4883 days ago | link

I was thinking it would be interesting to investigate the different approaches and see if being idiomatic would also be better-performing (not that I am sure what defines idiomatic in Arc).

For testing I am only using the basic case (i.e., no accounting for applying functions to items as they are sorted). The following examples are only a preliminary run, intended to start comparing approaches. One thing I'm noticing right off the bat is that Methods 1 & 2 already have code that accounts for applying functions, and therefore could potentially be sped up. Are there other things I should think about here? If the goal were to create the fastest version, are there other ideas anybody is willing to pitch in? Or if the goal were the most succinct code, are there other ideas there too? And, in general, is there anything I am not giving consideration to?

Thanks.

  ; random data - ugly code / quickly hacked
  ; --------------------------------------------------------------------
  (def jumble (seq)
    (let tb (table)
      (each i seq
        (withs (k (rand 50) v (tb k))
          (fill-table tb (list k (cons i v)))))
      (flat (vals tb))))

  (= data
    (jumble
      (let xs nil
        (repeat 100
          (push (with (n (rand 100) x (rand 100))
                  (n-of n x))
                xs))
        (flat xs))))
	
	
  ; Method #1 -> succinct, performs very well + stable sort
  ; --------------------------------------------------------------------

  (def freq (l (o f idfn))
    (ret ans (table)
      (each x l
        (++ (ans f.x 0)))))

  (def sort-by-commonest1 (l (o f idfn))
    (let h (freq l f)
      (sort (compare > h:f) l)))

  arc> (time10 (do (sort-by-commonest1 data) nil))
  time: 2399 msec.


  ; Method #2 -> very succinct, performs well + stable sort
  ; --------------------------------------------------------------------

  (def sort-by-commonest2 (l (o f idfn))
    (let h (counts:map f l)
      (sort (compare > h:f) l)))

  arc> (time10 (do (sort-by-commonest2 data) nil))
  time: 2436 msec.


  ; Method #3 -> not so succinct, terrible, terrible performance + unstable sort
  ; --------------------------------------------------------------------
  
  (def sort-by-commonest3 (xs)
    ((afn (rx rs)
       (if (empty rx) rs
           (let (x n)(commonest rx)
             (self (rem x rx)(join rs (n-of n x))))))
        xs nil))

  arc> (time10 (do (sort-by-commonest3 data) nil))
  time: 8052 msec.


  ; Method #4 -> not so succinct, very fast + stable sort 
  ; --------------------------------------------------------------------

  (def sort-by-commonest4 (seq)
    (flat
      (sort (fn (x y) (> (len x) (len y)))
        ((afn (xs ys)
           (if (empty xs) ys
               (withs (x  (car xs)
                       f  (testify [isnt _ x])
                       r  (reclist [if (f:car _) _] xs)
                       c1 (len xs)
                       c2 (len r))
                 (self r (cons (firstn (- c1 c2) xs) ys)))))
         (sort > seq) nil))))

  arc> (time10 (do (sort-by-commonest4 data) nil))
  time: 1257 msec.
[edit: http://blackstag.com/data.seq contains the realized data in a file, if anyone is interested.]
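
For the Clojure runs further down, a hypothetical generator for roughly similar skewed data (not the same distribution as jumble, just similarly lumpy) could be used instead of the file:

  ;; 100 groups, each a random value repeated a random number of times,
  ;; flattened and shuffled
  (def data
    (shuffle
      (mapcat (fn [_] (repeat (rand-int 100) (rand-int 100)))
              (range 100))))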

-----

4 points by thaddeus 4883 days ago | link

  ; And Method #5 -> some juice squeezing
  ; --------------------------------------------------------------------

  (def sort-by-commonest5 (seq)
    (mappend (fn (x) x)
      (sort (fn (x y) (> (len x) (len y)))
        ((afn (xs cxs ys)
           (if (empty xs) ys
               (withs (x  (car xs)
                       f  (testify [isnt _ x])
                       r  (reclist [if (f:car _) _] xs)
                       cr (len r))
                 (self r cr (cons (firstn (- cxs cr) xs) ys)))))
         (sort > seq) (len seq) nil))))

  arc> (time10 (do (sort-by-commonest5 data) nil))
  time: 1035 msec.
Note, I was wrong about #4. Neither 4 nor 5 is actually a stable sort... I should have seen that.

Also, I wrote an equivalent function in Clojure (again - for fun):

  (defn sortby-commonest0 [seq]
    (flatten 
      (sort #(> (count %1)(count %2))
  	     (afn [xs ys]
	       (if (empty? xs) ys
                   (let [x  (first xs)  
                         r  (for [i xs :while (= i x)] i)]
                   (self (drop (count r) xs) (cons r ys))))
	         (sort seq) nil))))

  => (time10 (sortby-commonest0 data))
  time: 340 msec.
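
For reference, the same grouping idea can be spelled with core sequence functions; this is only a sketch and was not part of the timings above:

  ;; sort, split into runs of equal elements, order the runs by length, flatten
  (defn sortby-commonest-grouped [xs]
    (->> (sort xs)
         (partition-by identity)
         (sort-by count >)
         (apply concat)))

  ;; (sortby-commonest-grouped '(1 3 3 4 4 5 4))  ; => (4 4 4 3 3 1 5)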

-----

1 point by akkartik 4883 days ago | link

You don't like the dot ssyntax at all, do you? :)

Out of curiosity, how do the succinct versions do in clojure?

-----

1 point by thaddeus 4883 days ago | link

I love the dot ssyntax, but Clojure doesn't have any of the intra-symbol syntaxes that Arc has and I really struggle getting back into using them :)... even [ _ ] is foreign to me now. But just for you ^_^:

  (def sort-by-commonest5 (seq)
    (mappend [do _]
      (sort (fn (x y) (> len.x len.y))
        ((afn (xs cxs ys)
           (if empty.xs ys
               (withs (x  car.xs
                       f  (testify [isnt _ x])
                       r  (reclist [if (f:car _) _] xs)
                       cr len.r)
                 (self r cr (cons (firstn (- cxs cr) xs) ys)))))
         (sort > seq) len.seq nil))))

  arc> (time10 (do (sort-by-commonest5 data) nil))
  time: 1073 msec.


For sortby-commonest0, freshly loaded, here are the first 10 runs:

  (3081 2102 1803 1585 1564 1529 1596 1813 1571 1580)

As you can see, with the JVM it warms up and running the same code gets faster. For example, the second set of runs (and my 340 must have been further down the road?):

  (581 553 539 540 543 550 541 542 544 536)

Then I tried a comparable succinct approach in Clojure (no counts, so I used frequencies):

  (defn sort-by-commonest01 [xs]
    (let [h (frequencies xs)]
      (sort (comparitor > #(h %)) xs)))

  first 10:  (4568 2750 2495 2125 2399 2131 2123 2133 2056 1087)
  second 10: (1064 1074 1070 1069 1070 1101 1058 1062 1090 1066)
It's really hard to make decent comparisons (pun intended)... for instance, should I only compare the first attempts, freshly loaded, to make the comparison valid?

Not sure what I gain from this... I think it's fair to say the succinct version is slower than the longer version. Also, I haven't even considered what the idiomatic approach would be in Clojure, and I'm guessing Clojure has a built-in function that's written by some code wizard :)
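
If clojure.core's sort-by and frequencies count, the succinct version can probably be written with just built-ins (a sketch, not benchmarked here):

  (defn sort-by-commonest [xs]
    (sort-by (frequencies xs) > xs))   ; the frequency map itself is the key fn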

-----

2 points by rocketnia 4882 days ago | link

I don't know how much this is going to help, but I have several suggestions:

a) Where you say "mappend [do _]", you can just say "apply + nil". Actually, I've been using "apply join", but it turns out that performs much worse:

  arc> (time10:mappend idfn (n-of 10000 nil))
  time: 468 msec.
  nil
  arc> (time10:apply join (n-of 10000 nil))
  time: 50454 msec.
  nil
  arc> (time10:apply + nil (n-of 10000 nil))
  time: 437 msec.
  nil
b) It might help to test with both '< and '>.

c) Not that it'll really save anything when it comes to the overall computational complexity, but this version of list length comparison will only traverse the lists up to the end of the shorter one:

  (def longer-list (a b)
    (if a
      (if b
        (longer-list cdr.a cdr.b)
        t)
      nil))
d) Where you say "(if empty.xs ys <else>)", you can probably squeeze out more performance with "(if xs <else> ys)". You can actually use "(iflet (x) xs <else> ys)" here too, but I don't know what that'll do to the performance.

e) There's no need to call 'testify.

After incorporating a bunch of this feedback (I didn't pit '< against '>), the code might look like this:

  (def sort-by-commonest5r1 (seq)
    (apply + nil
      (sort longer-list
        ( (afn (xs cxs ys)
            (if xs
              (withs (x car.xs
                      f [isnt _ x]
                      r (reclist [if (f:car _) _] xs)
                      cr len.r)
                (self r cr (cons (firstn (- cxs cr) xs) ys)))
              ys))
          (sort > seq) len.seq nil))))
At this point I'd make some more significant changes to get rid of all the length arithmetic:

  (def sort-by-commonest5r2 (seq)
    (apply join
      (sort longer-list
        ( (afn (rest bins)
            (iflet (sample . rest) rest
              ( (rfn self2 (bin rest)
                  (if rest
                    (let first car.rest
                      (if (is sample first)
                        (self2 (cons first bin) cdr.rest)
                        (self rest (cons bin bins))))
                    (cons bin bins)))
                list.sample rest)
              bins))
          (sort > seq) nil))))
However, for some reason those last changes actually perform worse:

  arc> (time:repeat 100 sort-by-commonest5r1.data)
  time: 18390 msec.
  nil
  arc> (time:repeat 100 sort-by-commonest5r2.data)
  time: 20437 msec.
  nil
  arc> (time:repeat 100 sort-by-commonest5.data)
  time: 19594 msec.
  nil
(My Arc setup is apparently slower than yours.)

-----

1 point by akkartik 4882 days ago | link

I don't follow which of those results is for clojure vs arc. sort-by-commonest0 ranges from 3081 to 536ms, while the clojure sort-by-commonest01 ranges from 4568 to 1066ms? I'm surprised that clojure is so consistently slower.

Can you show the first twenty runs for the arc sort-by-commonest2 vs the clojure sort-by-commonest01?

-----

1 point by thaddeus 4882 days ago | link

A few things....

1. I made a big mistake on my time calculator for Clojure... it was using nanoTime with rounding errors. I had to re-write it. This is now correct with no rounding:

  (defmacro millitime*
    ([expr]
       `(let [start# (System/currentTimeMillis) 
              ret# ~expr
              stop# (System/currentTimeMillis)]
          (- stop# start#))) 
    ([n expr]
      `(map (fn[~'x](~'millitime* ~expr))(range 0 ~n))))

  2. Clojure runs:

  (def data (read-string (slurp (str rootDir* "data.seq"))))

  (defn sortby-commonest-Clojure-Long [seq]
    (flatten 
      (sort #(> (count %1)(count %2))
  	(afn [xs ys]
	  (if (empty? xs) ys
              (let [x  (first xs)  
                    r  (for [i xs :while (= i x)] i)]
              (self (drop (count r) xs) (cons r ys))))
       (sort seq) nil))))
	
  => (millitime* 10  (sortby-commonest-Clojure-Long data))
  (27 26 23 17 17 16 16 16 17 18)
  
  run2: (6 17 16 17 16 17 16 17 6 7)
  run3: (9 6 6 7 6 6 6 7 6 6)	

  ; restarted & reloaded data

  (defn sort-by-commonest-Clojure-succinct [xs]
    (let [h (frequencies xs)]
      (sort (comparitor > #(h %)) xs)))	
	
  => (millitime* 10 (sort-by-commonest-Clojure-succinct data))	
  (42 28 26 22 22 24 21 21 21 20)

  run2: (10 10 10 10 11 10 10 10 11 10)	
  run3: (10 11 10 10 10 12 11 10 10 10)

  3. Arc Runs:

 ; new time code for arc 
  (mac millitime (expr)
    (w/uniq (t1 t2)
    `(let ,t1 (msec)
       (do ,expr
            (let ,t2 (msec)
                (- ,t2 ,t1))))))

  (mac millitime10 (expr)
    `(accum xs
       (each i (range 1 10)
         (xs (millitime ,expr)))
      xs))


  (def sortby-commonest-Arc-Long (seq)
    (mappend [do _]	  
      (sort (fn (x y)(> len.x len.y)) 
	((afn (xs cxs ys)
	   (if empty.xs ys
               (withs (x  car.xs
                       f  (testify [isnt _ x]) 
                       r  (reclist [if (f:car _) _] xs)
                       cr len.r)
                (self r cr (cons (firstn (- cxs cr) xs) ys)))))
	   (sort > seq) len.seq nil))))

   arc> (millitime10 (sortby-commonest-Arc-Long data))
   (110 103 104 104 105 105 105 105 104 104)

   run2: (105 103 105 105 105 105 105 104 106 103)
   run3: (106 106 106 105 104 101 107 105 105 104)

   ; restarted & reloaded data
   
   (def sort-by-commonest-Arc-succinct (l (o f idfn))
    (let h (counts:map f l)
      (sort (compare > h:f) l)))

   arc> (millitime10 (sort-by-commonest-Arc-succinct data))
   (241 238 238 240 237 237 234 236 235 234)
  
   run2: (238 238 274 283 239 239 239 239 237 240)
   run3: (240 239 238 238 239 239 242 243 239 237)

  Here's rocketnia's new version:
  ; restarted & reloaded data

  arc>(millitime10 (sort-by-commonest5r1 data))
  (110 102 105 104 104 104 101 106 105 104)

  run2: (105 102 104 104 103 104 102 105 105 103)
  run3: (103 106 105 104 102 104 104 105 102 106)
  
Whew, this whole timing thing sure takes a lot of time :)

-----

1 point by akkartik 4873 days ago | link

Ok, so arc's basically 10x slower than clojure for this example.

-----

2 points by thaddeus 4873 days ago | link

Yes... but Clojure being faster was not really the part I cared about. What I feel this shows is that the straightforward idiomatic approach, while concise, is not always the optimal solution. The investigation was an attempt to consider other approaches, gain a better understanding of the languages, and find some optimal paths. Comparing Arc to Clojure shows both languages have a similar performance ratio between the two cases, which lets me normalize my approach rather than assume that what works for one is equally applicable to the other.

-----

2 points by akkartik 4873 days ago | link

Yeah, I wanted to correct/clarify my comment at http://arclanguage.org/item?id=15076 for people coming here from search engines.

I'm not surprised that the readable way is less performant; I'm reassured that the price for readability is quite low. I was totally unsurprised that both require similar optimizations given that both are fairly mature toolchains. Compilers by and large get their performance the same way.

The one thing that can cause the optimal optimizations to shift is different machines with different capacities and hierarchies of caches and RAM. But this is a compute-bound program so even that's not a concern.

Update: I should add that I enjoyed this summary of yours. The previous comment just gave data and left your conclusions unspoken.

-----

2 points by thaddeus 4872 days ago | link

I agree, and my apologies, I did give only data. At the time of writing I was trying not to draw conclusions and to leave the door open for other options to come forward, but I should have followed up.

I'm not so sure I can agree with the price for readability being quite low. One could call it premature optimization, but a ~50% gain is pretty significant in my mind. Had it been 20% or less I would probably go forward and spend less of my time attempting alternate approaches, but at ~50% I think playing around and learning the boundaries and their benefits can yield positive results for me.

And, besides, stuff like this is just plain fun!

-----

2 points by akkartik 4872 days ago | link

:)

(No criticism intended; no apologies necessary.)

It's 50% if you do just that. In a larger app it's a difference of 1ms.

Now you could argue that everything in an arc program will be slower so I should consider it to be 50%. Lately I've been mulling the tradeoff of creating optimized versions vs just switching languages. When I built keyword extraction for readwarp.com I was shocked how easy it was to add a C library using the FFI. Why bother with a less readable, hard-to-maintain arc version for 2x or 5x if you can get 50-100x using an idiomatic C version?

The whole point of a HLL is to trade off performance for readability; I'd rather back off that tradeoff in a key function or two.

---

Shameless plug section

Wart currently has an utterly simple FFI: create a new C file, write functions that take in Cells and return Cells, run:

  $ wart
and they get automatically included and made available.

But I want to do more than just an FFI. I have this hazy idea of being able to specify optimization hints for my code without actually needing to change it or affect its correctness. They might go into a separate file like the tests currently do.

I dream of being able to go from simple interpreter to optimized JIT using just HLL lisp code (and a minimum of LLVM bindings). So far the prospect of attempting this has been so terrifying that I've done nothing for weeks...

-----

2 points by thaddeus 4872 days ago | link

"In a larger app it's a difference of 1ms." ...

I think that's generalizing too much. Using that one function with our nominal data set may only cost some milliseconds, but what if you're dealing with hundreds of millions of records? It may then, even though it represents only .001% of your code base, account for 90% of your operating time. That's when someone normally kicks in with the "premature optimization" argument, which I can't really argue against, other than to say that optimizing code is a skill generally done well by those who take it into account to begin with.

"Now you could argue that everything in an arc program will be slower so I should consider it to be 50%"

I wouldn't think this to be the case. I'm sure there's a tonne of juice squeezing one can do, but as a general statement, having played around with a lot of Arc's code, I would guess most optimizations would yield less than 10%. The 50%+ ones, while few and far between, are still worth the effort (to me).

The key, in all this, is to understand these languages well enough to make good judgment calls on where best to invest one's time.

I can't say much about wart and the rest (you're much deeper into language design than I am). :)

-----

2 points by akkartik 4872 days ago | link

"what if you're dealing with hundreds of millions records?"

Then just write it in C :)

Let me try to rephrase my argument. Sometimes you care about every last cycle, most of the time you care about making it a little bit faster and then you're done. Sometimes your program needs large scale changes to global data structures that change the work done, sometimes it needs core inner loops to just go faster. Sometimes your program is still evolving, and sometimes you know what you want and don't expect changes.

just a little faster + work smarter => rearchitect within arc.

just a little faster + faster inner loops => rewrite them in C

every last cycle + rigid requirements => gradually rewrite the whole thing in C and then do micro-optimizations on the C sources. You could do arc optimizations here, but this is my bias.

every last cycle + still evolving => ask lkml :)

If you're doing optimizations for fun, more power to you. It's pretty clear I'm more enamored with the readability problem than the performance problem.

I don't want to sound more certain than I am about all this. Lately I spend all my time in the 'just a little faster' quadrant, and I like it that way.

"The key, in all this, is to understand these languages well enough to make good judgment calls on where best to invest ones time."

Ideally they have profiling tools. For arc I use the helpers at the bottom of http://arclanguage.org/item?id=11556.

-----

3 points by thaddeus 4872 days ago | link

I don't disagree with your thoughts, but I don't think they account for the TCQ aspect of everyone's situation.

Let's put it another way. In my situation, if I have to look at using C then it's already game over[1]. However I can, with my limited amount of time (lunches, evenings and weekends), become proficient enough in Clojure (with continual thanks to Arc).

What I am suggesting is that knowing the language well enough to easily identify these 50%+ hitters is a matter of finding low hanging fruit and at the same time becoming a better Arc/Clojure programmer. It does not mean I want to change my direction in the heavily-optimized/low-level to high-level/low-maintenance/readable code continuum.

[1] I'm not a programmer[2]; I am a business analyst who works with software that simulates oil & gas production from reservoirs that sit several hundred kilometers underground. Simulation scenarios are run for 20-year production periods across hundreds of interdependent wells. The current software tools run the simulations across many cores, take 8 days to run to completion, and cost about half a million dollars per seat (+18% per year in maintenance fees). This cost can be attributed to the R&D that occurred 8-10 years ago (i.e., they required a team (or teams) of P.Engs writing software in C to maximize performance). Eight years ago you couldn't do (pmap[3] (sortby-commonest or whatever....)) so easily. Nowadays I have the opportunity to create a 70% solution all by my lonesome, costing only a portion of my time. Hence understanding the language well enough to find the low-hanging fruit, and not having to use C, is probably a bigger deal to me.

[2] Well, maybe I shouldn't say this... rather I should say it's not my day job. I have no formal education in the field. I'm pretty much self taught.

[3] pmap is the same as map only it distributes the load across multiple processors in parallel.
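
A tiny illustration of the shape of that; slow-square here is just a stand-in for an expensive per-record computation:

  (defn slow-square [x] (Thread/sleep 100) (* x x))

  (time (doall (map  slow-square (range 16))))   ; ~1600 ms, one element at a time
  (time (doall (pmap slow-square (range 16))))   ; much less on a multicore box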

:)

-----

2 points by akkartik 4872 days ago | link

Ah you're right. I wasn't taking individual strengths and weaknesses into account in my analysis.

-----

1 point by thaddeus 4872 days ago | link

> several hundred kilometers underground...

Lol, that's a gross overstatement, I meant to say several hundred meters (not that it's relevant to the topic) :)

-----

1 point by thaddeus 4882 days ago | link

Note that sort-by-commonest0 and sort-by-commonest01 are both Clojure.

-----

1 point by markkat 4885 days ago | link

Oh, and by the way, it did make sense of the zoo:

  (sort-by-commonest '(dog dog cat fox fox dog fish))
  (dog dog dog fox fox cat fish)
I went with the first since it serves my purpose, but actually, it might be more future-looking to use the second.

Thanks again!

-----

1 point by markkat 4885 days ago | link

Thanks akkartik! That is concise. :)

This is going to lead me to rethink some of my previous functions.

-----