Number-theoretic algorithms are widely used, especially in cryptography: cryptographic schemes depend on generating large prime numbers and on the difficulty of factoring large integers into primes.
Number-theoretic algorithms measure input size by the number of bits needed to represent a number. For an integer $n$, the bit length is $\lfloor \log_2 n \rfloor + 1$, so a linear number-theoretic algorithm is one that runs in $O(\log n)$ time.
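As a concrete illustration, Python's built-in `int.bit_length()` returns exactly this quantity (a minimal sketch; 56088 is an arbitrary sample value):

```python
n = 56088
print(n.bit_length())    # 16, since 2**15 <= 56088 < 2**16
print(len(bin(n)) - 2)   # also 16: bin(n) is the binary string with a "0b" prefix
```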
Integer Multiplication
The standard (manual) method for multiplying large numbers uses long multiplication. To multiply two $n$-digit numbers, this method performs $n^2$ single-digit multiplications.
  123
× 456
-----
  738   ← 123 × 6
 6150   ← 123 × 50
49200   ← 123 × 400
-----
56088
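The same procedure translates directly into code. Here is a minimal Python sketch of grade-school multiplication over decimal digit lists (the function name and digit-list representation are illustrative choices):

```python
def long_multiply(u, v):
    """Grade-school multiplication of non-negative integers, digit by digit.

    Performs one single-digit multiplication per (digit of u, digit of v)
    pair: n * n = n^2 of them for two n-digit numbers.
    """
    us = [int(d) for d in str(u)][::-1]   # least-significant digit first
    vs = [int(d) for d in str(v)][::-1]
    result = [0] * (len(us) + len(vs))
    for i, dv in enumerate(vs):           # one partial product per digit of v
        carry = 0
        for j, du in enumerate(us):
            total = result[i + j] + du * dv + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(us)] += carry
    # Strip leading zeros and reassemble the digits into an integer.
    return int("".join(map(str, result[::-1])).lstrip("0") or "0")

print(long_multiply(123, 456))  # 56088, matching the worked example above
```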
Can We Multiply Faster?
For centuries, $O(n^2)$ was the best we had. Key breakthroughs:
- 1963: Karatsuba’s algorithm — $O(n^{\log_2 3}) \approx O(n^{1.585})$ time
- 1971: Schönhage–Strassen algorithm — $O(n \log n \log \log n)$ time
- 2019: Harvey & van der Hoeven — $O(n \log n)$ time, which is conjectured to be optimal
Karatsuba’s Multiplication Algorithm
Any Two Numbers Can Be Split In Half
Let $u$ and $v$ be two $2d$-bit integers, split into high and low $d$-bit parts:

$$u = U_1 \cdot 2^d + U_0, \qquad v = V_1 \cdot 2^d + V_0$$

Therefore, their product can be expanded as:

$$uv = U_1 V_1 \cdot 2^{2d} + \big(U_1 V_1 + U_0 V_0 - (U_1 - U_0)(V_1 - V_0)\big) \cdot 2^d + U_0 V_0$$

This works because expanding $(U_1 - U_0)(V_1 - V_0)$ shows the middle coefficient equals $U_1 V_0 + U_0 V_1$. The expansion reduces the number of multiplications from four to three ($U_1 V_1$, $(U_1 - U_0)(V_1 - V_0)$, and $U_0 V_0$), resulting in $O(d^{\log_2 3})$ time.
Note that the multiplication by $2^d$ (or $2^{2d}$) is a linear shifting operation, as multiplying by a number of the form $2^k$ simply shifts the bits left by $k$ positions, and the additional additions and subtractions are linear as well.
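A quick sanity check of this identity, as a minimal Python sketch (the half-size $d = 8$ and the number of random trials are arbitrary choices):

```python
import random

d = 8
for _ in range(1000):
    U1, U0 = random.getrandbits(d), random.getrandbits(d)
    V1, V0 = random.getrandbits(d), random.getrandbits(d)
    u, v = (U1 << d) | U0, (V1 << d) | V0
    # The middle coefficient uses only the three products named above.
    middle = U1 * V1 + U0 * V0 - (U1 - U0) * (V1 - V0)
    assert u * v == (U1 * V1 << 2 * d) + (middle << d) + U0 * V0
```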
EXAMPLE
Given $u = 10110_2$ (i.e., $22$) and $v = 1101_2$ (i.e., $13$).
To multiply these two numbers, first we want $u$ to have an even number of bits, so pad a leading 0 to $10110_2$, resulting in $010110_2$.
- $2d$ is always an even number.
To make $u$ and $v$ equal size, pad $v$ so that it has the same number of bits as $u$; we then get
- $v = 001101_2$.
Now they are both $2d = 6$ bit numbers, where $d = 3$.
So starting with this even bit length, we can break each number into halves of size $d$:

$$u = \underbrace{010}_{U_1}\underbrace{110}_{U_0}, \qquad v = \underbrace{001}_{V_1}\underbrace{101}_{V_0}$$

See how:
If we shift $U_1 = 010_2$ left by $d = 3$ places, then we get $010000_2$. This is the same as $U_1 \cdot 2^d$ due to the way binary works.
Pad $U_0 = 110_2$ to make it the same length as $u$, i.e. $000110_2$.
If we add $U_0$ to $U_1 \cdot 2^d$, we get $010110_2$, which is $u$.
In general, for any numbers $u$ and $v$, after ensuring that they are even-length ($2d$ bits each), you can break each into two halves of $d$ bits, and the number is nothing but
$$u = U_1 \cdot 2^d + U_0, \qquad v = V_1 \cdot 2^d + V_0$$
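The padding and splitting above map directly onto shifts and masks. A minimal Python sketch, using the numbers from the example:

```python
u, v = 0b10110, 0b1101           # u = 22, v = 13

# Pad (conceptually) to a common even bit length 2d.
bits = max(u.bit_length(), v.bit_length())
bits += bits % 2                 # round up to an even number of bits
d = bits // 2

# Split each number into high and low d-bit halves.
U1, U0 = u >> d, u & ((1 << d) - 1)
V1, V0 = v >> d, v & ((1 << d) - 1)

print(U1, U0, V1, V0)            # 2 6 1 5  (binary: 010 110 001 101)
print((U1 << d) + U0 == u)       # True: u is nothing but U1 * 2**d + U0
print((V1 << d) + V0 == v)       # True
```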
Divide & Conquer Approach
The form above decomposes the multiplication $uv$ (the problem of multiplying two $2d$-bit numbers) into three $d$-bit multiplications:
1. $U_1 V_1$
2. $(U_1 - U_0)(V_1 - V_0)$
3. $U_0 V_0$

plus some simple left shifting and some additions/subtractions. (Multiplication by any $2^k$ requires only shifting the bits leftwards by $k$ positions.)
This can be repeated to divide continually: a $2d$-bit multiplication divides into three $d$-bit multiplications plus $O(d)$ extra work, each of those divides into three more (nine $(d/2)$-bit multiplications), and so on, down to some base case. The base case doesn't have to be 1 bit, because hardware can multiply numbers up to a certain fixed size directly (e.g., a machine with 128-bit registers can multiply two 64-bit numbers, since the product fits). So recurse down to a size the hardware can multiply, compute that product directly, and return it, as sketched below.
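Putting the pieces together, here is a minimal recursive sketch in Python (the 64-bit threshold stands in for "small enough for the hardware"; the function name and threshold are illustrative choices):

```python
def karatsuba(u, v):
    """Multiply non-negative integers u, v with three half-size recursive calls."""
    # Base case: small enough to hand to the hardware multiplier
    # (modelled here by Python's built-in *).
    if u.bit_length() <= 64 or v.bit_length() <= 64:
        return u * v

    bits = max(u.bit_length(), v.bit_length())
    bits += bits % 2                        # pad to an even length 2d
    d = bits // 2

    U1, U0 = u >> d, u & ((1 << d) - 1)     # high and low d-bit halves
    V1, V0 = v >> d, v & ((1 << d) - 1)

    high = karatsuba(U1, V1)                # multiplication 1: U1 * V1
    low = karatsuba(U0, V0)                 # multiplication 2: U0 * V0
    # Multiplication 3: (U1 - U0)(V1 - V0). Recurse on absolute values and
    # restore the sign afterwards, comparing signs without multiplying.
    sign = -1 if (U1 < U0) != (V1 < V0) else 1
    diff = sign * karatsuba(abs(U1 - U0), abs(V1 - V0))

    middle = high + low - diff              # equals U1*V0 + U0*V1
    return (high << 2 * d) + (middle << d) + low


# Spot check against Python's built-in big-integer multiplication.
import random
a, b = random.getrandbits(512), random.getrandbits(512)
assert karatsuba(a, b) == a * b
```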
The time complexity of this divide-and-conquer approach can be expressed using the recurrence relation $T(2d) = 3T(d) + cd$, where $c$ is some constant. Solving this recurrence yields the $O(d^{\log_2 3})$-time algorithm.
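Unrolling the recurrence shows where the exponent comes from (a standard derivation; rewriting the recurrence as $T(d) = 3T(d/2) + c'd$ and assuming $T(1) = O(1)$):

$$T(d) = c'd \sum_{i=0}^{\log_2 d - 1} \left(\frac{3}{2}\right)^{i} + 3^{\log_2 d}\, T(1) = O\!\left(3^{\log_2 d}\right) = O\!\left(d^{\log_2 3}\right)$$

since the geometric sum is dominated by its last term, and $3^{\log_2 d} = d^{\log_2 3} \approx d^{1.585}$.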
