What is Python's lru_cache and when to use it?

January 28, 2026 | ⏱ 13 min read

Have you ever come across scenarios where your Python code repeatedly calls functions whose return values are completely determined by their input parameters?

If such functions are lightweight - i.e. they only do very simple calculations - we don't have to worry too much about them. But what if a frequently called function does some heavy yet deterministic calculation, meaning its output is completely determined by its input parameters?

Well, Python has a nice little (yet powerful) built-in function decorator that you can easily use in your code without having to install any additional dependencies. And it's thread-safe as well!

from functools import lru_cache — to the rescue!

lru_cache is one of Python's most useful decorators for performance optimization. With the correct knowledge of how to use it, lru_cache can greatly speed up your code!

However, there are some considerations you need to check before you start using it. In this article I am going to walk you through when and how to use lru_cache, and also when not to use it, with examples. By the end of this post, you should feel confident using it to optimize your Python code.

The API

@functools.lru_cache(maxsize=128, typed=False)

  • maxsize - The maximum number of recent calls to save in the cache. Default is 128; when maxsize is set to None, the LRU feature is disabled and the cache can grow indefinitely without bound.
  • typed - Default is False. This determines whether to cache function arguments of different types separately. (ex: when typed=True, the input parameters 3.0 and 3 are considered different, resulting in two cache items)

LRU = Least Recently Used

lru_cache is a decorator that caches (also known as memoizes) function results along with their input parameters. Under the hood, it is a dictionary that stores

  • Key: function arguments as a tuple
  • Value: function result

Sounds simple right?

So where does the least recently used part come from? Well, like everything else, this cache can have a size, which we define at declaration using the maxsize parameter (we can also have an unlimited cache size by setting maxsize=None). When the cache is full, it removes the least recently used items, and hence it is called a least recently used (LRU) cache.
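To see the eviction in action, here is a minimal sketch using a tiny cache of size 2 (the function name is made up for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(x):
    return x * x

square(1)  # miss: cached as (1,) -> 1
square(2)  # miss: cached as (2,) -> 4
square(1)  # hit: 1 becomes the most recently used entry
square(3)  # miss: cache is full, so the least recently used entry (2) is evicted
square(2)  # miss again, because 2 was just evicted
print(square.cache_info())  # CacheInfo(hits=1, misses=4, maxsize=2, currsize=2)
```

Note how calling square(1) before square(3) "refreshed" the entry for 1, so it was 2 that got evicted.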

The Problem of Expensive Repeated Calculations

Take a look at the following example of calculating the Fibonacci sequence:

def fibonacci(n):
    """Calculate nth Fibonacci number - SLOW!"""
    print(f"Computing fib({n})")
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

Now if I call result = fibonacci(10), it will be surprisingly wasteful! Because if you look closely, what happens under the hood is:

Computing fib(10)
Computing fib(9)
Computing fib(8)
Computing fib(7)
Computing fib(6)
Computing fib(5)
Computing fib(4)
Computing fib(3)
Computing fib(2)
Computing fib(1)
Computing fib(0)
Computing fib(1) # Computed again!
Computing fib(2) # Computed again!
Computing fib(1) # Computed again!
Computing fib(0) # Computed again!
Computing fib(3) # Computed again!
Computing fib(2) # Computed again!
Computing fib(1) # Computed again!
...

You would see a lot of repeated calculations, which is indeed a waste of computing resources. Imagine calling fibonacci(35) - that would make a staggering 29 million function calls! Energy bills are quite high nowadays; the same function could have done much better by simply being decorated with our hero lru_cache!

Decorating with @lru_cache

In the code below, nothing has changed except that the same function as above is decorated with @lru_cache with a maximum cache size of 128.

from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    """Calculate nth Fibonacci number - FAST!"""
    print(f"Computing fib({n})")
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

If we now call the lru_cache-decorated fibonacci function as result = fibonacci(10), the following happens:

Computing fib(10)
Computing fib(9)
Computing fib(8)
Computing fib(7)
Computing fib(6)
Computing fib(5)
Computing fib(4)
Computing fib(3)
Computing fib(2)
Computing fib(1)
Computing fib(0)

You would notice a significant drop in the number of actual function invocations above, since for all the subsequent duplicate invocations the results are taken directly from the LRU cache, saving an awful lot of time and energy! Still not impressed? Let's do a timing comparison for the above function.
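We can even quantify the savings with the cache's built-in statistics (covered in more detail later in this article):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

fibonacci(10)
# Only 11 real computations (n=0..10); the other 8 recursive calls were cache hits
print(fibonacci.cache_info())  # CacheInfo(hits=8, misses=11, maxsize=128, currsize=11)
```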

import time
from functools import lru_cache

# Without cache
def fib_slow(n):
    if n < 2:
        return n
    return fib_slow(n-1) + fib_slow(n-2)

# With LRU cache
@lru_cache(maxsize=None)
def fib_fast(n):
    if n < 2:
        return n
    return fib_fast(n-1) + fib_fast(n-2)

# Test with n=45
print("Without cache:")
start = time.time()
result = fib_slow(45)
print(f"Result: {result}, Time: {time.time() - start:.10f}s")
# Result: 1134903170, Time: 153.0848743916s

print("\nWith cache:")
start = time.time()
result = fib_fast(45)
print(f"Result: {result}, Time: {time.time() - start:.10f}s")
# Result: 1134903170, Time: 0.0000000000s  <-- that was lightning fast!!

Did you see the results I got on my machine? The function without the cache took around 153 seconds to complete, whereas the same calculation with lru_cache ran instantaneously and returned the same result!! Isn't that fantastic?

I hope you could imagine the amount of performance gain we can reap from this awesome function decorator.

Basic Usage

Given below are some examples of how you can use lru_cache:

from functools import lru_cache

# Default: maxsize=128
@lru_cache
def my_function(x):
    return x * 2

# Custom cache size
@lru_cache(maxsize=256)
def bigger_cache(x):
    return x * 2

# Unlimited cache (no eviction)
@lru_cache(maxsize=None)
def unlimited_cache(x):
    return x * 2

# Here the cache size is 100, and parameter types are considered when caching
# ex: my_typed_cache(3) and my_typed_cache(3.0) would be cached separately
@lru_cache(maxsize=100, typed=True)
def my_typed_cache(x):
    return x * 2

Cache statistics

Like every cache, lru_cache has statistics associated with it:

  • hits: Times results were returned from cache
  • misses: Times the function had to do the computation (real function invocations)
  • maxsize: Maximum size of the cache
  • currsize: Current number of cached items

Let's have a look at the example below:

from functools import lru_cache

@lru_cache(maxsize=20, typed=True)
def my_super_complex_function(a, b):
    print(f"my_super_complex_function was called! a={a}, b={b}")
    return a+b

Now let's call this function a couple of times.

# Let's use above function a couple of times
my_super_complex_function(1, 2) # my_super_complex_function was called! a=1, b=2
my_super_complex_function(1.0, 2.0) # my_super_complex_function was called! a=1.0, b=2.0

If we check the cache statistics now, you may notice the current size of the cache is 2 rather than 1, because this is a typed cache, causing it to treat (1, 2) and (1.0, 2.0) as different parameters. In addition, both of the above calls are cache misses (misses=2), and hence hits=0.

print(my_super_complex_function.cache_info()) # CacheInfo(hits=0, misses=2, maxsize=20, currsize=2) 

Now, if we want to hit the cache, we need to pass parameters equal to something we have provided before.

res = my_super_complex_function(1.0, 2.0) # Here you would not see the print statement from the my_super_complex_function
print(f"The result is {res}") # The result is 3.0

If we check the cache statistics now, you would see it has one hit - that's where the above result was taken from.

print(my_super_complex_function.cache_info()) # CacheInfo(hits=1, misses=2, maxsize=20, currsize=2)

Clearing the Cache

It's really easy to clear the cache using cache_clear(), which also resets the hit/miss counters. Example:

my_super_complex_function.cache_clear()
print(my_super_complex_function.cache_info()) # CacheInfo(hits=0, misses=0, maxsize=20, currsize=0)

When to use lru_cache

lru_cache is a perfect candidate for the following problems:

1- Recursive functions with overlapping subproblems, ex:-

  • In the above examples, the problem of Fibonacci(n) is split into Fibonacci(n-1) and Fibonacci(n-2)
  • Factorial numbers, ex:- factorial(5) = 5 * factorial(4)
    @lru_cache(maxsize=None)
    def factorial(n):
        if n <= 1:
            return 1
        return n * factorial(n-1)
    
  • Pascal's triangle
    @lru_cache(maxsize=None)
    def pascal_triangle(row, col):
        if col == 0 or col == row:
            return 1
        return pascal_triangle(row-1, col-1) + pascal_triangle(row-1, col)
    

2- API calls or database queries requesting the same static data

import requests

@lru_cache(maxsize=100)
def fetch_weather(city):
    """Cache weather data for 100 cities"""
    response = requests.get(f"https://api.weather.com/{city}")
    return response.json()

# First call hits API
weather = fetch_weather("London")  # API call

# Subsequent calls use cache
weather = fetch_weather("London")  # From cache!

3- Mathematical functions doing deterministic calculations

import math

@lru_cache(maxsize=None)
def is_prime(n):
    """Check if number is prime"""
    if n < 2:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

# Filter prime numbers from a list
numbers = list(range(1000))
primes = [n for n in numbers if is_prime(n)]
# Each is_prime call is cached

4- Computations with Python's property members. property() is Python's built-in function that enhances encapsulation and gives better control over access to class attributes.

class DataProcessor:
    def __init__(self, data):
        self.data = data
    
    @property
    @lru_cache(maxsize=1)
    def total(self):
        """Expensive computation cached as property"""
        print("Computing total...")
        return sum(self.data)

processor = DataProcessor([1, 2, 3, 4, 5])
print(processor.total)  # Computing total... 15
print(processor.total)  # 15 (from cache)
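As a side note, since Python 3.8 the standard library also offers functools.cached_property, which achieves the same effect while storing the cached value on the instance itself rather than in a shared cache:

```python
from functools import cached_property

class DataProcessor:
    def __init__(self, data):
        self.data = data

    @cached_property
    def total(self):
        """Computed once per instance, then stored on the instance"""
        print("Computing total...")
        return sum(self.data)

processor = DataProcessor([1, 2, 3, 4, 5])
print(processor.total)  # Computing total... 15
print(processor.total)  # 15 (no recomputation)
```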

When not to use lru_cache?

Knowing when not to use lru_cache is paramount to using this decorator properly in applications without causing nasty bugs. Given below are some examples.

1- When the arguments are unhashable.
lru_cache requires the function's input parameters to be hashable and will raise an error otherwise. Examples of hashable data types are tuple, str, numbers (int, float), bool, frozenset, and NoneType. Examples of unhashable data types are dict, list, set, and bytearray.

@lru_cache
def process_data(data_dict):  # This won't work: TypeError: unhashable type: 'dict'
    return sum(data_dict.values())


2- When functions have side effects.

@lru_cache  # This is a bad idea!
def send_email(to_address, message):
    # Side effect: sends actual email
    email_service.send(to_address, message)

# If called twice, only the first call sends the email!
send_email("user@example.com", "Hello")  # Sends email
send_email("user@example.com", "Hello")  # Does nothing! (cached)

3- Functions using/returning time-sensitive data.
If your function's result includes any time-sensitive information, do not use lru_cache, as it will keep returning stale cached results instead.

import time

@lru_cache  # Returns stale data!
def get_current_time():
    return time.time()

print(get_current_time())  # 1234567890.123
time.sleep(5)
print(get_current_time())  # 1234567890.123 (same! wrong!)
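If you do want caching but with an expiry, lru_cache has no built-in TTL; a common workaround is to pass a "time bucket" as an extra argument so that old entries simply stop matching. A sketch (the names and the one-hour TTL are just for illustration):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=32)
def _load_config(name, ttl_bucket):
    # ttl_bucket is unused in the body; it only varies the cache key over time
    return {"name": name, "loaded_at": time.time()}

def load_config(name, ttl_seconds=3600):
    # The bucket value changes every ttl_seconds, forcing a fresh call afterwards
    return _load_config(name, int(time.time() // ttl_seconds))

first = load_config("db")
second = load_config("db")  # same bucket -> served from cache
print(first is second)  # True (within the TTL window)
```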

4- Functions with large return values.
Since results are cached in memory, do not use lru_cache if the function returns large values that consume a lot of memory.

@lru_cache(maxsize=1000)  # This could consume GBs of RAM!
def load_large_image(filename):
    # Returns a 10MB image (and properly closes the file handle)
    with open(filename, 'rb') as f:
        return f.read()

# With 1000 cached images = 10GB of memory!

Best practices

It is always advised to thoroughly analyse your candidate function before decorating it with @lru_cache, to ensure it has none of the undesirable properties above. In addition, if such a function contains any logging/debug statements, you would only see them on the first use, since consecutive calls do not actually invoke the original function; the result is retrieved from the cache instead. The size of the cache also needs to be fine-tuned for optimum performance, since maxsize=None or very high values can cost you a lot of RAM. It is therefore recommended to start with the default maxsize=128 and then monitor utilization with cache_info().
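For that monitoring, the hit ratio derived from cache_info() is a handy metric. A small sketch (the function and workload are made up):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive(x):
    return x * x

for x in [1, 2, 1, 3, 1]:  # repeated inputs simulate real traffic
    expensive(x)

info = expensive.cache_info()
hit_ratio = info.hits / (info.hits + info.misses)
print(f"hit ratio: {hit_ratio:.0%}, entries: {info.currsize}/{info.maxsize}")
# hit ratio: 40%, entries: 3/128
```

A consistently low hit ratio suggests the cache is not paying for its memory and you may want to drop it or rethink the key.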

Remember - lru_cache trades memory for speed!!

Thank you!

Hope you have learned something new from this blog that you can apply in your daily coding to improve the performance of your projects. I wish you good luck with lru_cache. Happy coding!