Fast Modular Exponentiation
Our next results (Fermat's little theorem and its generalization, Euler's totient theorem) deal with modular exponentiation, i.e., with expressions of the type $b^e \bmod m$, where $b$, $e$, and $m$ are integers. But before stating and proving these results, let us discuss a purely algorithmic question: how can one compute $b^e \bmod m$ reasonably fast?
How would you compute $314^{271} \bmod 123$ and $314159265358^{2718281828} \bmod 123456789$?
Computing this in Python directly gives the following result:
print((314 ** 271) % 123)
print((314159265358 ** 2718281828) % 123456789)
# 38
# the second computation never finishes...
If you try this code on your computer, you will see that the first answer appears immediately, while the second computation struggles and never finishes. The reason is the method Python uses here: it first computes the full power and only then takes the result modulo $123456789$, so the intermediate value $314159265358^{2718281828}$ has to be computed and stored in full.
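To get a feeling for the scale of this intermediate value, note that the number of decimal digits of $b^e$ is roughly $e \log_{10} b$; a quick estimate in Python:

import math

# approximate number of decimal digits of 314159265358 ** 2718281828
digits = 2718281828 * math.log10(314159265358)
print(f"about {digits:.1e} decimal digits")  # about 3.1e+10

That is more than thirty billion digits: merely storing this number takes tens of gigabytes of memory.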
The right way to compute modular exponentiation in Python is to use the built-in pow function with a third argument: pow(b, e, m) computes $b^e \bmod m$ directly. The following code produces the result in the blink of an eye. We will learn what is going on under the hood below.
print(pow(314, 271, 123))
print(pow(314159265358, 2718281828, 123456789))
# 38
# 32073907
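As a sanity check, on the small example the naive computation and the three-argument pow agree:

assert (314 ** 271) % 123 == pow(314, 271, 123) == 38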
Problem
Compute $99^{100} \bmod 100$.
Of course, starting with computing $99^{100}$ is a bad idea. Note that $99 \equiv -1 \pmod{100}$, and hence the answer is $(-1)^{100} \bmod 100 = 1$.
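This is easy to double-check with the built-in pow:

assert pow(99, 100, 100) == 1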
Problem
Prove that $10^{100} - 1$ is divisible by $9$.
We know that $10 \equiv 1 \pmod 9$. If we write $1$ instead of $10$, the expression becomes $1^{100} - 1$, and it is equal to zero. So $10^{100} - 1 \equiv 0 \pmod 9$, i.e., it is divisible by $9$.
Problem
Find $2^{100} \bmod 3$.
Since $2 \equiv -1 \pmod 3$, the answer is $(-1)^{100} \bmod 3 = 1$.
Now we can compute $b \bmod m$, $b^2 \bmod m$, $b^3 \bmod m$, ... sequentially, multiplying each time by $b$ and keeping only the remainder modulo $m$. In this way we do not need to store large numbers (only numbers smaller than $m^2$). Does this approach work for our example $314159265358^{2718281828} \bmod 123456789$?
The size of the numbers is no longer a problem, but we need to perform $2718281827$ multiplications, which is far too many. Can we do better? This is a general question about computing powers (it makes sense not only for modular computations).
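A minimal sketch of this sequential method (the function name is ours):

def slow_modular_exponentiation(b, e, m):
    # compute b, b^2, b^3, ..., b^e modulo m, one multiplication at a time;
    # intermediate values never exceed m * m, but we need e - 1 multiplications
    assert m > 0 and e >= 1
    b %= m
    result = b
    for _ in range(e - 1):
        result = result * b % m
    return result

assert slow_modular_exponentiation(314, 271, 123) == 38

For the second example the loop would run $2718281827$ times, which is exactly the problem.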
Let $b$ be some number. Can you compute $b^{64}$ in less than $63$ multiplications (that would be needed if we computed $b^2$, $b^3$, ..., $b^{64}$ sequentially)?
For this problem the answer is especially easy to guess, since $64 = 2^6$ and repeated squaring helps: $$b \to b^2 \to b^4 \to b^8 \to b^{16} \to b^{32} \to b^{64}.$$
In this way the exponents grow faster (they are doubled instead of increased by $1$ at every step), and we use only $6$ multiplications.
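In Python these six squarings look as follows (with $b = 3$ just to have something to check against):

b = 3
x = b
for _ in range(6):   # b -> b^2 -> b^4 -> ... -> b^64
    x = x * x        # one squaring per step
assert x == b ** 64  # 6 multiplications instead of 63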
In general, to compute $b^e$ for $e = 2^k$ we need only $k$ multiplications.
For $b^{65}$ we may multiply $b^{64}$ by $b$. Since along the way we have also computed the previous powers $b^2, b^4, \ldots, b^{32}$, we may use, say, that $b^{96} = b^{64} \cdot b^{32}$ and $b^{100} = b^{64} \cdot b^{32} \cdot b^4$ (note that $100 = 64 + 32 + 4$). A similar trick can be used in general: write the exponent in binary and multiply the powers of $b$ that correspond to the ones in this representation.
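For instance (the base $7$ here is arbitrary):

b = 7
# 100 = 64 + 32 + 4 (binary 1100100), so b^100 is a product of three
# of the powers obtained by repeated squaring
assert b ** 100 == (b ** 64) * (b ** 32) * (b ** 4)

The recursive function below combines this idea with reduction modulo $m$ at every step.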
def fast_modular_exponentiation(b, e, m):
    assert m > 0 and e >= 0
    if e == 0:
        return 1 % m  # "1 % m" covers the corner case m == 1
    if e == 1:
        return b % m  # reduce: the initial b may be larger than m
    if e % 2 == 0:
        # b^e = (b^2)^(e/2): square the base, halve the exponent
        return fast_modular_exponentiation(b * b % m, e // 2, m)
    else:
        # b^e = b^(e-1) * b for odd e
        return fast_modular_exponentiation(b, e - 1, m) * b % m
print(fast_modular_exponentiation(314159265358, 2718281828, 123456789))
# 32073907
The program implementing this idea computes $314159265358^{2718281828} \bmod 123456789$ in a fraction of a second.
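The same square-and-multiply method can also be written iteratively, scanning the binary digits of $e$ starting from the least significant one; here is one possible sketch:

def fast_modular_exponentiation_iter(b, e, m):
    # right-to-left square-and-multiply: b runs over b, b^2, b^4, ...,
    # and the powers corresponding to the ones of e are multiplied in
    assert m > 0 and e >= 0
    result = 1 % m
    b %= m
    while e > 0:
        if e & 1:                # the current bit of e is 1
            result = result * b % m
        b = b * b % m            # the next power b^(2^i)
        e >>= 1
    return result

assert fast_modular_exponentiation_iter(314159265358, 2718281828, 123456789) == 32073907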
Problem
Prove that the number of multiplications when computing $b^e$ (for $e \ge 1$) both in this program and in the method described above (using binary notation) is the same: $(\text{the number of bits in } e) + (\text{the number of ones in } e) - 2$, where counting the bits and ones in $e$ we use the binary representation of $e$.
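Before proving this, one can check it experimentally by counting the multiplications made by the recursive program (this check is, of course, not a proof):

def count_multiplications(e):
    # multiplications performed by fast_modular_exponentiation for exponent e
    if e <= 1:
        return 0
    if e % 2 == 0:
        return 1 + count_multiplications(e // 2)  # the squaring b * b
    return 1 + count_multiplications(e - 1)       # the extra factor b

for e in range(1, 1024):
    assert count_multiplications(e) == e.bit_length() + bin(e).count("1") - 2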
It turns out that this is not always the minimal number of multiplications needed to compute $b^e$. For example, for $b^{15}$ we get $6$ multiplications with both our approaches, while one can compute it in $5$ multiplications: $$b^2 = b \cdot b, \quad b^3 = b^2 \cdot b, \quad b^6 = b^3 \cdot b^3, \quad b^{12} = b^6 \cdot b^6, \quad b^{15} = b^{12} \cdot b^3.$$
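This chain is easy to verify (again with an arbitrary base):

b = 7
b2 = b * b      # 1st multiplication
b3 = b2 * b     # 2nd
b6 = b3 * b3    # 3rd
b12 = b6 * b6   # 4th
b15 = b12 * b3  # 5th
assert b15 == b ** 15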
See the Wikipedia article about addition chains, which minimize the number of multiplications.