
High-performance in-memory cache


Motivation

While surveying Go cache libraries, I found that none of them are truly contention-free: most are simply a map guarded by a mutex plus an eviction policy. As a result, they cannot reach the speed of caches in other languages (such as Caffeine). For example, Ristretto, the fastest cache from Dgraph Labs, was at best about 30% faster than its competitors (Otter is many times faster), yet it had a poor hit ratio despite what its README claims. This can be a problem in real-world applications, because no one wants the cache library to become the bottleneck 🙂. So I set out to build the fastest, easiest-to-use cache with an excellent hit ratio.

Features

  • Simple API: Just set the parameters you want in the builder and enjoy (see the sketch after this list)
  • Autoconfiguration: Otter is automatically configured based on the parallelism of your application
  • Generics: You can safely use any comparable types as keys and any types as values
  • TTL: Expired values will be automatically deleted from the cache
  • Cost-based eviction: Otter supports eviction based on the cost of each item
  • Excellent throughput: Otter is currently the fastest Go cache library, with a large lead over the competition
  • Great hit ratio: Otter uses the new S3-FIFO eviction algorithm, which shows excellent results
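
To make the builder-based configuration concrete, here is a minimal usage sketch. It assumes the v1 builder method names (MustBuilder, Cost, WithTTL, Build); check the package documentation for the exact signatures in your version.

```go
package main

import (
	"fmt"
	"time"

	"github.com/maypok86/otter"
)

func main() {
	// Build a cache holding up to 10,000 entries that evicts by a
	// per-item cost function and expires entries one hour after write.
	cache, err := otter.MustBuilder[string, string](10_000).
		Cost(func(key string, value string) uint32 {
			return 1 // uniform cost; return a size estimate for cost-based eviction
		}).
		WithTTL(time.Hour).
		Build()
	if err != nil {
		panic(err)
	}
	defer cache.Close()

	cache.Set("key", "value")

	if value, ok := cache.Get("key"); ok {
		fmt.Println(value)
	}

	cache.Delete("key")
}
```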

Contribute

Contributions are welcome as always. Before submitting a new PR, please open an issue first so community members can discuss it. For more information, please see the contribution guidelines.

Additionally, you can browse the existing open issues, which may point to useful improvements.

This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated.

License

This project is licensed under Apache 2.0, as found in the LICENSE file.