Ackermann’s Function and “Really Big Numbers”

Today we return to the topic of mathematics with a look at Ackermann’s Function. Anyone who has studied computer science has probably encountered this function (though I’m sure many have gone on to forget it). It is most commonly defined as follows:
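
A(m, n) = n + 1                       if m = 0
A(m, n) = A(m - 1, 1)                 if m > 0 and n = 0
A(m, n) = A(m - 1, A(m, n - 1))       if m > 0 and n > 0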

This is actually a variant of Wilhelm Ackermann’s original function, often called the Ackermann–Péter function. It is quite a simple function to define and to trace, and it is very easy to implement in just about any programming language, including Python:

def ackermann(m, n):
  if m == 0:
    return n + 1
  elif n == 0:
    return ackermann(m - 1, 1)
  else:
    return ackermann(m - 1, ackermann(m, n - 1))

However, its complexity (in terms of run-time and memory) grows quite quickly. As such, it is often used as an exercise in teaching students more complex forms of recursion, and also as a test case in compiler development for optimizing recursion. It also has some interesting mathematical properties for particular values of m:
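
  • A(0, n) = n + 1
  • A(1, n) = n + 2
  • A(2, n) = 2n + 3
  • A(3, n) = 2^(n + 3) - 3
  • A(4, n) = 2^2^···^2 - 3 (a tower of n + 3 twos)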

The numbers in the case of A(4,n) are quite large. Indeed, one could describe Ackermann’s function as “computing really really large numbers really really slowly.” Although the results grow quickly, the function itself is only ever adding one, decrementing its arguments, and recursing. We can, however, take advantage of the properties described above to take some shortcuts that yield a much more efficient function.

def ackermann(m, n):
  # Reduce m below 4 using the recursive definition, then apply the
  # closed forms for m <= 3 listed above.
  while m >= 4:
    if n == 0:
      n = 1                     # A(m, 0) = A(m - 1, 1)
    else:
      n = ackermann(m, n - 1)   # A(m, n) = A(m - 1, A(m, n - 1))
    m -= 1
  if m == 3:
    return 2 ** (n + 3) - 3
  elif m == 2:
    return 2 * n + 3
  elif m == 1:
    return n + 2
  else: # m == 0
    return n + 1

With this version, computing A(m,n) for m≤3 becomes trivial, and that in turn makes computations for m≥4 possible. Or at least A(4,2), which we can actually run in Python to reveal the 19,729-digit answer.
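
As a rough sketch of what that looks like in practice (assuming the shortcut version above is in scope as ackermann), we can compute the value and count its digits. The call to sys.set_int_max_str_digits is only needed on Python 3.11 and later, which limit large int-to-string conversions by default:

import sys

# Python 3.11+ caps int-to-str conversion at 4300 digits by default;
# raise the limit so we can render this roughly 20,000-digit number.
if hasattr(sys, "set_int_max_str_digits"):
  sys.set_int_max_str_digits(100000)

value = ackermann(4, 2)   # 2 ** 65536 - 3
print(len(str(value)))    # 19729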

You can see the full value on this page. Computing A(4,3) is infeasible. Even with the optimizations, most computers will run out of memory trying to compute this value. But one can still reason about these rather large numbers. Let us move from the more common function we have been using to Ackermann’s original version:
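
φ(m, n, 0) = m + n
φ(m, 0, 1) = 0
φ(m, 0, 2) = 1
φ(m, 0, p) = m                              for p > 2
φ(m, n, p) = φ(m, φ(m, n - 1, p), p - 1)    for n > 0 and p > 0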

This version has three arguments instead of two, and on the surface it may seem a bit more complicated. However, different values of the third argument p yield very familiar operations.
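
In particular, the first few values of p recover the ordinary arithmetic operations:

  • φ(m, n, 0) = m + n (addition)
  • φ(m, n, 1) = m · n (multiplication)
  • φ(m, n, 2) = m^n (exponentiation)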


Again, this function is a rather inefficient way to compute addition, multiplication and exponentiation, but it is an interesting way to reason about them and extrapolate to other more exotic operations. For example, if we take p to be 3, we get the following operation.
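
φ(m, n, 3) = m^(m^(···^m)), that is, m exponentiated by itself n times.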

Just as m × n is adding m to itself n times, and exponentiation m^n is multiplying m by itself n times, this new operation (sometimes called tetration) is the next level: exponentiating m by itself n times.
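
To make the correspondence concrete, here is a minimal, unoptimized sketch of the three-argument function in Python, following the definition given above (the name phi is just for illustration):

def phi(m, n, p):
  if p == 0:
    return m + n
  if n == 0:
    if p == 1:
      return 0
    if p == 2:
      return 1
    return m  # p > 2
  return phi(m, phi(m, n - 1, p), p - 1)

print(phi(2, 3, 0))  # 5: addition, 2 + 3
print(phi(2, 3, 1))  # 6: multiplication, 2 * 3
print(phi(2, 3, 2))  # 8: exponentiation, 2 ** 3
print(phi(2, 2, 3))  # 16: tetration, 2 ** (2 ** 2)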

Such an operation grows so fast as to be incomparable to exponential growth. It grows too fast even to compare to the gamma function, which we have explored on CatSynth in the past. This series of ever higher-order operations is often denoted with arrows, in what is called Knuth’s up-arrow notation after legendary computer scientist Donald Knuth.
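
As a small illustrative sketch (assuming nonnegative integer arguments, and using the usual convention that a tower of height zero is 1), the notation can be written directly in Python; up(m, k, n) is a hypothetical helper computing m followed by k up-arrows and then n:

def up(m, k, n):
  # One arrow is ordinary exponentiation; k arrows iterate (k - 1) arrows.
  if k == 1:
    return m ** n
  if n == 0:
    return 1
  return up(m, k - 1, up(m, k, n - 1))

print(up(2, 1, 10))  # 1024 = 2 ** 10
print(up(2, 2, 4))   # 65536 = 2 ** (2 ** (2 ** 2))
print(up(3, 2, 3))   # 7625597484987 = 3 ** (3 ** 3)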

Using this notation, we can define a sequence of numbers, called Ackermann Numbers, where the nth number combines n with itself using the nth-order operation (n up-arrows):

  • 1↑1 = 1^1 = 1,
  • 2↑↑2 = 2↑2 = 2^2 = 4,
  • 3↑↑↑3 = 3↑↑3↑↑3 = 3↑↑(3↑3↑3) = 3↑↑7,625,597,484,987, a power tower of 3s that is 7,625,597,484,987 levels tall.

Even just the 4th number in this sequence is so large that we can’t easily notate it with exponents, so forget about the 5th number in the sequence. But a question that interests me is what happens if we interpolate with real numbers. What is the 1½th Ackermann number? That question, if it even has an answer, will have to wait for another time.