Miss on LU and LRU caches is linear complexity - performance

Description

Profiling some code with YourKit showed a hotspot in cache statistics on (miss) for LU and LRU caches.

Basically two issues. First, (count (keys {....})) counts a seq of the keys, which is not efficient; it was replaced with a count of the map itself.
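
For illustration, a minimal sketch of the difference (the map m here is hypothetical, not the actual cache internals):

;; Sketch: m stands in for the cache's backing map.
(def m (zipmap (range 10000) (range 10000)))

;; Counts a realized seq of the keys: walks every entry.
(count (keys m))

;; Counts the map directly: Clojure's persistent maps track their
;; size, so this is effectively constant time.
(count m)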

Secondly, (apply f anything) is slow: the profiler showed that the (apply min-key) call was really slow. This is mitigated by using a c.d.priority-map (clojure.data.priority-map) instead. On a priority-map, (ffirst pm) or (first (peek pm)) yields the least entry directly.
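
A minimal sketch of the two lookups, assuming a hypothetical usage-count map (the names counts and pcounts are illustrative, not core.cache internals):

;; Requires the org.clojure/data.priority-map dependency.
(require '[clojure.data.priority-map :refer [priority-map]])

;; Linear approach: min-key scans every key on each miss.
(def counts {:a 3, :b 1, :c 2})
(apply min-key counts (keys counts))   ;=> :b

;; A priority-map keeps entries ordered by value, so the least
;; entry is available directly.
(def pcounts (priority-map :a 3, :b 1, :c 2))
(ffirst pcounts)                       ;=> :b
(first (peek pcounts))                 ;=> :b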

Also inverted the logic for the threshold comparison. Since there is a (build-leastness-queue) that populates statistics, should caches favor the codepath for the state of being full all the time?
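
As a rough, hypothetical sketch of what inverting the comparison means (the names lru, limit, and evict-least are illustrative, not the actual core.cache code):

;; Hypothetical: once a cache reaches its threshold it stays full,
;; so test the full case first.
(defn miss-sketch [lru limit k v evict-least]
  (if (>= (count lru) limit)
    (assoc (evict-least lru) k v)   ;; common case: cache is full, evict then add
    (assoc lru k v)))               ;; rare case: cache is still filling up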

Environment

Mac Oracle JDK, Linux OpenJDK

Activity

Alex Miller
March 22, 2016, 4:38 PM
Ghadi Shayban
September 12, 2012, 3:27 PM

Patch in proper format

Ghadi Shayban
September 7, 2012, 4:49 AM

I take back the part about (apply) being slow. I think it's more the fact that it's doing a linear scan on keys every single time.

(apply) does show up a lot in a profiler, but I haven't figured out why. (apply + (range big)) even seems to be slightly faster than (reduce +) on the same range.
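
A quick, unscientific way to compare the two at the REPL (time is noisy; something like criterium would give better numbers, but this is only a sketch):

;; Rough comparison; results vary with JVM warmup.
(def big 1000000)
(time (apply + (range big)))
(time (reduce + (range big)))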

Completed

Assignee

Fogus

Reporter

import

Labels

Approval

None

Patch

Code

Priority

Major