
The term memoization (not a misspelling of "memorization") denotes the simplest form of dynamic programming: storing ("memoizing") computed values and recalling them later rather than re-computing them. Some languages, like Maple, Mathematica and Maxima, have syntactic features that allow memoization without explicitly writing out instructions for saving and restoring values; see the examples below. Memoization can drastically reduce computation time, especially for recursively defined functions. It is not efficient in all cases, however: there is a time-memory tradeoff.


Factorial numbers (singly recursive function)

For the following examples, we will use the factorial function defined by the recurrence relation n! = n · (n − 1)!, with 0! = 1, even though most of these computer algebra systems have built-in commands for factorials.

Suppose that, after defining the factorial function, you have the program calculate 42!. It gives you the answer, and then you ask for 47!. With memoization there is no need to start the product 1 · 2 · 3 ⋯ anew; instead, the program can start from the stored value of 42! and compute only the remaining factors 43 · 44 · 45 · 46 · 47.
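In Python (not one of the systems discussed below, but convenient for illustration), a hand-rolled version of this idea might look as follows; the cache is simply a dictionary keyed by the argument:

```python
# A minimal memoized factorial: results already computed are stored in
# a dictionary and looked up instead of being recomputed.
cache = {}

def fac(n):
    if n not in cache:
        cache[n] = 1 if n <= 1 else n * fac(n - 1)
    return cache[n]

fac(42)   # fills the cache up to 42!
fac(47)   # reuses the stored 42!; only five further multiplications
```

After the first call the cache holds the factorials up to 42!, so the second call performs only five more multiplications.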


In Maple functions and procedures, memoization is activated by adding the option remember. Values are then stored in a "remember table", which can be cleared using the function forget(). The table can also be created and filled by explicitly assigning results to the function for given arguments, viz. f(x) := value. As an example, we give the following implementation of the factorial function:

> f := x -> x*f(x-1):   # so far any call to f(...) would yield an infinite recursion
> f(0) := 1:            # now f(n) yields the correct result for any nonnegative integer n 
> f(5);
> op(4,op(f));          # this yields the "remember table"
        table([0 = 1])
> op(3,op(f));          # note that the "remember" option is not set, though
        operator, arrow
> f := proc(n) option remember; n*f(n-1) end: f(0):= 1: f(5);
> op(4,op(f));          # now all results, also from intermediate computations, are automatically stored
        table([0 = 1, 1 = 1, 2 = 2, 3 = 6, 4 = 24, 5 = 120])


Mathematica implements memoization by letting a function definition assign its own value:[1]

fac[0] = 1
fac[n_] := fac[n] = n * fac[n-1]


In Maxima, the implementation is done via "array functions" defined using square brackets around the argument. See the discussion "counting primes" for an example.
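A sketch (using our own name fac for the function, not taken from that discussion): once computed, the values of an array function are remembered automatically.

```maxima
/* A memoizing "array function": note the square brackets around the argument. */
fac[0] : 1$
fac[n] := n * fac[n-1]$
fac[5];   /* computes and stores fac[1], ..., fac[5] */
```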


PARI/GP does not have built-in memoization, but it can be implemented directly; here is a verbose implementation:

fac(n)={  /* uses the global variable fac_table to store results */
  if(n < 2, return(1));
  if(fac_table == 'fac_table, fac_table = [1]); /* create the table if not yet done */
  if(#fac_table < n, fac_table=concat(fac_table,vector(n-#fac_table))); /* extend it to the required size */
  if(fac_table[n], fac_table[n], /* return the result if it has already been computed */
     fac_table[n] = n*fac(n-1))  /* else compute and store it */
}

Since we aim to illustrate a recursive definition, the results are still computed recursively. Thus an initial call fac(100) will produce 100 nested calls. A subsequent call fac(200), however, will need only 100 further nested calls, not 200, since the recursion stops at fac(100), which has already been computed and stored. In this way one can avoid the limitation on the maximal argument imposed by the finite stack size: it suffices to compute some intermediate results first, to cut down the recursion depth.

See for some more examples.


Perl has a Memoize module.
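With it, wrapping a function is a one-liner; a minimal sketch (the function name fac is our own):

```perl
use Memoize;

sub fac {
    my $n = shift;
    return $n < 2 ? 1 : $n * fac($n - 1);
}
memoize('fac');   # subsequent calls reuse stored results

print fac(10), "\n";
```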


In Sage, memoization is activated by applying the decorator CachedFunction to the function.

@CachedFunction
def f(x):
    return 1 if x <= 1 else x*f(x-1)
for i in (0..9): print(f(i))

If only a loop over successive values is required, it is more economical to use an iterator, as this does not need a remember table.

def F(max=1000):
    n = 0
    f = 1
    while n <= max:
        yield f
        n += 1
        f *= n
for f in F(9): print(f)


In Scheme one can define a new syntactic form, say definec, which otherwise works like an ordinary define for functions but, limited to unary functions whose domain is the non-negative integers, adds wrapper code for caching ("memoizing") the computed values. Note that the implementation below does not require that the values themselves be integers, so it is perfectly possible to cache, for example, lambda forms and closures as well.

The implementation below also uses a hard upper limit for the size of the cache (the global variable *MAX-CACHE-SIZE-FOR-DEFINEC*): if the function is called with an argument greater than or equal to that value, the result is computed in the ordinary way, without any caching. By setting *MAX-CACHE-SIZE-FOR-DEFINEC* to zero, caching can be turned off altogether.

(define *MAX-CACHE-SIZE-FOR-DEFINEC* 290512)

(define-syntax definec
 (syntax-rules ()
  ((definec (name arg) e0 ...)
     (define name
        (letrec
           ((_cache_ (vector #f #f #f #f #f #f #f #f #f #f #f #f #f #f #f #f))
            (name
             (lambda (arg)
               (cond ((null? arg) _cache_) ;; For debugging: a nil argument lets the programmer see the current cache
                     ((>= arg *MAX-CACHE-SIZE-FOR-DEFINEC*) ;; No caching if over the limit!
                         e0 ...
                     )
                     (else
                         (if (>= arg (vector-length _cache_))  ;; Past the current cache size?
                             (set! _cache_                     ;; Then increase the size of the cache.
                                   (vector-grow _cache_
                                       (min *MAX-CACHE-SIZE-FOR-DEFINEC*
                                            (max (1+ arg)
                                                 (* 2 (vector-length _cache_))
                                            )
                                       )
                                   )
                             )
                         )
                         (or (vector-ref _cache_ arg) ;; Either the value is already in the cache (non-#f)
                             ((lambda (res)           ;; or we have to compute it the first time,
                                (vector-set! _cache_ arg res) ;;    store the result into the cache,
                                res                           ;;    and return it to the caller.
                              )
                              (begin e0 ...)          ;; The computation itself actually happens here.
                             )
                         )
                     )
               ) ; cond
             )
            )
           ) ; letrec-definitions
          name
        ) ; letrec
     ) ;; (define name ...)
  )
 ) ;; syntax-rules
)
The above code works, for example, with MIT/GNU Scheme (release 7.7.0 onward). For older Scheme (or Lisp) systems that use the more traditional backquote-style "unhygienic" macros (e.g. MIT Scheme before release 7.7.0), refer to the code in [1]

Fibonacci numbers (doubly recursive function)

The Fibonacci numbers are defined by a double recursion: each call spawns two further calls, so the number of function calls roughly doubles at each recursion level, quickly becoming prohibitive. Memoization saves us from the exponential explosion in running time that would otherwise result. (Of course we assume here that we don't know about the Binet formula.)
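A sketch of the effect in Python (the names fib_plain and fib are our own): counting the calls shows the exponential blow-up that memoization removes.

```python
from functools import lru_cache

calls = 0

def fib_plain(n):
    """Naive double recursion: the call count satisfies
    C(n) = C(n-1) + C(n-2) + 1 and grows exponentially."""
    global calls
    calls += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib(n):
    """Memoized: each of fib(0), ..., fib(n) is computed exactly once."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib_plain(25)   # 242785 calls
fib(25)         # at most 26 distinct computations
```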

Ackermann function

The Ackermann function[2] is a total computable function that is not primitive recursive.

Even for quite small arguments, its values cannot possibly be computed by simple recursive application of the Ackermann function (reducing everything to additions of 1) in any tractable amount of time. A practical method of computing functions of this type is to memoize intermediate results.
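A sketch in Python, with a dictionary keyed by the argument pair (m, n) (the name ack is our own):

```python
cache = {}

def ack(m, n):
    """Ackermann function with memoization on the argument pair (m, n)."""
    if (m, n) not in cache:
        if m == 0:
            cache[(m, n)] = n + 1
        elif n == 0:
            cache[(m, n)] = ack(m - 1, 1)
        else:
            cache[(m, n)] = ack(m - 1, ack(m, n - 1))
    return cache[(m, n)]

ack(3, 3)   # 61; many intermediate pairs are stored along the way
```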


External links

  • Paul E. Black, "memoization", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed., U.S. National Institute of Standards and Technology. 14 August 2008. (accessed March 26, 2012) Available from: