The Euclidean algorithm is a stepwise process that starts from two positive integers \(a\) and \(b\) and produces a series of equations.
In each step, you divide the previous divisor by the previous remainder, like so:
\[a = q_1 b + r_1,\quad 0 \leq r_1 < b\]
\[b = q_2 r_1 + r_2,\quad 0 \leq r_2 < r_1\]
\[r_1 = q_3 r_2 + r_3,\quad 0 \leq r_3 < r_2\]
\[r_2 = q_4 r_3 + r_4,\quad 0 \leq r_4 < r_3\]
\[\vdots\]
\[r_{n-1} = q_{n+1}r_n + 0\]
We are guaranteed to eventually reach a remainder of \(0\), since the remainders \(r_i\) form a strictly decreasing sequence of non-negative integers.
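For example, starting from \(a = 252\) and \(b = 198\), the algorithm terminates after four steps:
\[252 = 1 \cdot 198 + 54\]
\[198 = 3 \cdot 54 + 36\]
\[54 = 1 \cdot 36 + 18\]
\[36 = 2 \cdot 18 + 0\]
and the remainders \(54 > 36 > 18 > 0\) indeed decrease strictly to \(0\).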
The main relation between the Euclidean algorithm and the greatest common divisor is simple:
The \(\gcd(a,b)\) is the last non-zero remainder in the Euclidean algorithm that starts from \(a\) and \(b\).
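To make the procedure concrete, here is a minimal Python sketch (the function name `euclidean_gcd` is our own, chosen for illustration) that runs the chain of divisions above and returns the last non-zero remainder:

```python
def euclidean_gcd(a: int, b: int) -> int:
    """Return gcd(a, b) as the last non-zero remainder of the Euclidean algorithm."""
    a, b = abs(a), abs(b)
    while b != 0:
        # One step of the algorithm: the divisor becomes the new dividend,
        # and the remainder becomes the new divisor.
        a, b = b, a % b
    # When the remainder reaches 0, a holds the last non-zero remainder.
    return a
```

Calling `euclidean_gcd(252, 198)` returns `18`, matching the worked chain above.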
The proof follows directly from the following lemma:
If \(a,b,q,r\) are integers with \(a = qb + r\), then \(\gcd(a,b) = \gcd(b,r)\).
Let us prove this lemma: if \(d\) divides both \(a\) and \(b\), then \(d\) also divides \(r = a - qb\); conversely, if \(d\) divides both \(b\) and \(r\), then \(d\) also divides \(a = qb + r\). The pairs \((a,b)\) and \((b,r)\) therefore have exactly the same common divisors, and in particular the same greatest common divisor. Applying the lemma to each equation in the chain gives
\[\gcd(a,b) = \gcd(b,r_1) = \gcd(r_1,r_2) = \cdots = \gcd(r_{n-1},r_n) = r_n,\]
where the last equality holds because the final equation shows that \(r_n\) divides \(r_{n-1}\).
The Euclidean algorithm is extremely fast. We will not prove this in detail here, but for two numbers \(a,b\) the algorithm takes at most about \(\log_2(ab)\) steps, because each step at least halves the product of the pair of numbers involved. Even for \(100\)-digit numbers, this is no more than about \(665\) steps, something a computer can handle in an instant. Better bounds can be obtained.
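To see the bound in action, here is a small sketch, assuming the same iterative formulation as above (the helper `euclid_steps` is our own, for illustration), that counts division steps and prints the \(\log_2(ab)\) bound for comparison:

```python
import math
import random

def euclid_steps(a: int, b: int) -> int:
    """Count how many division steps the Euclidean algorithm performs."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# Compare the actual step count with the log2(a*b) bound
# for a random pair of 100-digit numbers.
a = random.randrange(10**99, 10**100)
b = random.randrange(10**99, 10**100)
print(euclid_steps(a, b), "steps; bound:", math.log2(a * b))
```

In practice the step count typically lands well below the bound, consistent with the remark that better bounds exist.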