ALGO1 : Introduction à l'algorithmique

Cours Magistral 3

  • Ce cours traite du paradigme "Diviser pour Régner",
  • Un important résultat théorique est donné (et prouvé) :

  • Ce notebook commence par donner quelques algorithmes "Diviser pour Régner" assez classiques, et permettent d'illustrer les deux premiers cas du "master theorem" (recherche dans une liste triée ou un tableau trié, tri fusion,

  • Ce théorème ne suffit pas à couvrir tous les différents algorithmes "Diviser pour Régner", avec comme contre exemple l'algorithme de tri rapide,

  • Puis on implémente les deux algorithmes présentés dans le cours, la multiplication de grands entiers par l'algorithme de Gauss-Karatsuba, et la multiplication de grandes matrices par l'algorithme de Strassen (Straßen).


Quelques algorithmes "diviser pour régner" classiques

Recherche dans un tableau trié en $\mathcal{O}(\log(n))$

Si un tableau T = [a1, ..., an] est trié, et qu'on donne une valeur x, on cherche un indice i tel que T[i] = ai = x, s'il existe (sans spécification s'il n'est pas unique), et une erreur si x n'est pas présent dans le tableau.

En utilisant des indices left et right, qu'on fait augmenter ou diminuer, on évite de faire des recopies du tableau.

La complexité de cet algorithme est en $\mathcal{O}(\log(n))$.

Avec le master theorem, on a $a=1, b=2, k=0$ : on divise les entrées en $a=2$ entrées de tailles $\leq b=2$ plus petites, sur lesquels on applique un traitement constant ($\mathcal{O}(n^{k=0})$ avant l'appel récursif (aucune recopie, juste des changements d'indices !).

In [643]:
def dichotomy_in_array(array, value, left=0, right=None):
    if depth > len(array):    raise KeyError
    if right is None:
        right = len(array) - 1
    n = right - left + 1
    if n == 0:                raise KeyError
    elif n == 1:
        if array[left] == value:
            return left
        else:                 raise KeyError
    index_of_middle = left + (n // 2)
    middle_of_list = array[index_of_middle]
    if value < middle_of_list:  # search on left
        return dichotomy_in_array(array, value, left=left, right=index_of_middle - 1)
    elif value > middle_of_list:  # search on right
        return dichotomy_in_array(array, value, left=index_of_middle + 1, right=right)
    else:
        return index_of_middle

Et avec juste un peu d'affichage pour (un rappel ou pour) comprendre le fonctionnement :

In [206]:
def dichotomy_in_array(array, value, left=0, right=None, debug=False, depth=0):
    if depth > len(array):
        raise KeyError
    if right is None:
        right = len(array) - 1
    n = right - left + 1
    if debug: print(f"    {'  '*(depth+1)}Searching for {value} in sequence of size {n} = {array[left:right+1]}")
    if n == 0:
        raise KeyError
    elif n == 1:
        if array[left] == value:
            return left
        else:
            raise KeyError
    index_of_middle = left + (n // 2)
    middle_of_list = array[index_of_middle]
    #if debug: print(f"    {'  '*(depth+1)}{value} >/</=? {middle_of_list}")
    if value < middle_of_list:  # search on left
        if debug: print(f"    {'  '*(depth+1)}-> going left...")
        return dichotomy_in_array(array, value, left=left, right=index_of_middle - 1, debug=debug, depth=depth+1)
    elif value > middle_of_list:  # search on right
        if debug: print(f"    {'  '*(depth+1)}-> going right...")
        return dichotomy_in_array(array, value, left=index_of_middle + 1, right=right, debug=debug, depth=depth+1)
    elif value == middle_of_list:
        if debug: print(f"    {'  '*(depth+1)}-> found at index {index_of_middle} !")
        return index_of_middle
    else:
        raise KeyError

Faisons quelques essais :

In [85]:
n = 16
example_of_array = list(range(n))
In [86]:
for value in example_of_array:
    print(f"\n    Looking for value {value} in {example_of_array} :")
    dichotomy_in_array(example_of_array, value, debug=True)
    Looking for value 0 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 0 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 0 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left...
          Searching for 0 in sequence of size 4 = [0, 1, 2, 3]
          -> going left...
            Searching for 0 in sequence of size 2 = [0, 1]
            -> going left...
              Searching for 0 in sequence of size 1 = [0]
Out[86]:
0
    Looking for value 1 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 1 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 1 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left...
          Searching for 1 in sequence of size 4 = [0, 1, 2, 3]
          -> going left...
            Searching for 1 in sequence of size 2 = [0, 1]
            -> found at index 1 !
Out[86]:
1
    Looking for value 2 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 2 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 2 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left...
          Searching for 2 in sequence of size 4 = [0, 1, 2, 3]
          -> found at index 2 !
Out[86]:
2
    Looking for value 3 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 3 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 3 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left...
          Searching for 3 in sequence of size 4 = [0, 1, 2, 3]
          -> going right...
            Searching for 3 in sequence of size 1 = [3]
Out[86]:
3
    Looking for value 4 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 4 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 4 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> found at index 4 !
Out[86]:
4
    Looking for value 5 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 5 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 5 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right...
          Searching for 5 in sequence of size 3 = [5, 6, 7]
          -> going left...
            Searching for 5 in sequence of size 1 = [5]
Out[86]:
5
    Looking for value 6 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 6 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 6 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right...
          Searching for 6 in sequence of size 3 = [5, 6, 7]
          -> found at index 6 !
Out[86]:
6
    Looking for value 7 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 7 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left...
        Searching for 7 in sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right...
          Searching for 7 in sequence of size 3 = [5, 6, 7]
          -> going right...
            Searching for 7 in sequence of size 1 = [7]
Out[86]:
7
    Looking for value 8 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 8 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> found at index 8 !
Out[86]:
8
    Looking for value 9 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 9 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 9 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going left...
          Searching for 9 in sequence of size 3 = [9, 10, 11]
          -> going left...
            Searching for 9 in sequence of size 1 = [9]
Out[86]:
9
    Looking for value 10 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 10 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 10 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going left...
          Searching for 10 in sequence of size 3 = [9, 10, 11]
          -> found at index 10 !
Out[86]:
10
    Looking for value 11 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 11 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 11 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going left...
          Searching for 11 in sequence of size 3 = [9, 10, 11]
          -> going right...
            Searching for 11 in sequence of size 1 = [11]
Out[86]:
11
    Looking for value 12 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 12 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 12 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> found at index 12 !
Out[86]:
12
    Looking for value 13 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 13 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 13 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going right...
          Searching for 13 in sequence of size 3 = [13, 14, 15]
          -> going left...
            Searching for 13 in sequence of size 1 = [13]
Out[86]:
13
    Looking for value 14 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 14 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 14 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going right...
          Searching for 14 in sequence of size 3 = [13, 14, 15]
          -> found at index 14 !
Out[86]:
14
    Looking for value 15 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Searching for 15 in sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right...
        Searching for 15 in sequence of size 7 = [9, 10, 11, 12, 13, 14, 15]
        -> going right...
          Searching for 15 in sequence of size 3 = [13, 14, 15]
          -> going right...
            Searching for 15 in sequence of size 1 = [15]
Out[86]:
15

Et maintenant des essais sur des entrées de tailles croissantes :

In [119]:
import random

def random_sorted_sequence(n, minint=0, maxint=1000):
    sequence = [random.randint(minint, maxint) for _ in range(n)]
    return sorted(sequence)
In [151]:
def test_dichotomy_in_array(n, debug=False, array=None):
    if array is None:
        array = random_sorted_sequence(n)
    value = random.choice(array)
    return dichotomy_in_array(array, value, debug=debug)
In [152]:
test_dichotomy_in_array(16, debug=True)
      Searching for 324 in sequence of size 16 = [31, 73, 120, 137, 165, 188, 223, 226, 324, 355, 374, 404, 462, 561, 653, 743]
      -> found at index 8 !
Out[152]:
8
In [153]:
import sys
sys.setrecursionlimit(10000)
In [154]:
T = random_sorted_sequence(100)
%timeit test_dichotomy_in_array(100, array=T)
3.75 µs ± 759 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [155]:
T = random_sorted_sequence(1000)
%timeit test_dichotomy_in_array(1000, array=T)
4.84 µs ± 610 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [156]:
T = random_sorted_sequence(10000)
%timeit test_dichotomy_in_array(10000, array=T)
5.09 µs ± 131 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [157]:
T = random_sorted_sequence(100000)
%timeit test_dichotomy_in_array(100000, array=T)
5.36 µs ± 417 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [158]:
T = random_sorted_sequence(1000000)
%timeit test_dichotomy_in_array(1000000, array=T)
5.55 µs ± 280 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

La complexité semble bien logarithmique en $n$ (ie., $\mathcal{O}(\log(n))$) !

Recherche dans une liste triée en $\mathcal{O}(n)$

C'est comme la recherche dans un tableau trié, sauf qu'on recopie la liste de gauche ou de droite lors des appels récursifs.

La complexité de cet algorithme est en $\mathcal{O}(n)$.

Avec le master theorem, on a $a=1, b=2, k=1$ : on divise les entrées en $a=2$ entrées de tailles $\leq b=2$ plus petites, sur lesquels on applique un traitement linéaire ($\mathcal{O}(n^{k=1})$ avant l'appel récursif (à cause des recopies !).

In [207]:
def dichotomy_in_list(sequence, value):
    n = len(sequence)
    if n == 0:            raise KeyError
    index_of_middle = n // 2
    middle_of_list = sequence[index_of_middle]
    if value < middle_of_list:  # search on left
        # creating this list takes O(n/2) time
        left_list = sequence[:index_of_middle]
        return dichotomy_in_list(left_list, value, debug=debug, depth=depth+1)
    elif value > middle_of_list:  # search on right
        # creating this list takes O(n/2) time
        right_list = sequence[index_of_middle:]
        return index_of_middle + dichotomy_in_list(right_list, value, debug=debug, depth=depth+1)
    else:
        return index_of_middle

Et avec juste un peu d'affichage pour (un rappel ou pour) comprendre le fonctionnement :

In [131]:
def dichotomy_in_list(sequence, value, debug=False, depth=0):
    n = len(sequence)
    if debug:
        print(f"    {'  '*(depth+1)}Sequence of size {n} = {sequence}")
    if n == 0:
        raise KeyError
    index_of_middle = n // 2
    middle_of_list = sequence[index_of_middle]
    if value < middle_of_list:  # search on left
        left_list = sequence[:index_of_middle]
        if debug: print(f"    {'  '*(depth+1)}-> going left, in {left_list} of size {len(left_list)}...")
        return dichotomy_in_list(left_list, value, debug=debug, depth=depth+1)
    elif value > middle_of_list:  # search on right
        right_list = sequence[index_of_middle:]
        if debug: print(f"    {'  '*(depth+1)}-> going right, in {right_list} of size {len(right_list)}...")
        return index_of_middle + dichotomy_in_list(right_list, value, debug=debug, depth=depth+1)
    elif value == middle_of_list:
        if debug: print(f"    {'  '*(depth+1)}-> found at index {index_of_middle} !")
        return index_of_middle
    else:
        raise KeyError

Faisons quelques essais :

In [132]:
n = 16
example_of_list = list(range(n))
In [133]:
for value in example_of_list:
    print(f"\n    Looking for value {value} in {example_of_list} :")
    dichotomy_in_list(example_of_list, value, debug=True)
    Looking for value 0 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left, in [0, 1, 2, 3] of size 4...
          Sequence of size 4 = [0, 1, 2, 3]
          -> going left, in [0, 1] of size 2...
            Sequence of size 2 = [0, 1]
            -> going left, in [0] of size 1...
              Sequence of size 1 = [0]
              -> found at index 0 !
Out[133]:
0
    Looking for value 1 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left, in [0, 1, 2, 3] of size 4...
          Sequence of size 4 = [0, 1, 2, 3]
          -> going left, in [0, 1] of size 2...
            Sequence of size 2 = [0, 1]
            -> found at index 1 !
Out[133]:
1
    Looking for value 2 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left, in [0, 1, 2, 3] of size 4...
          Sequence of size 4 = [0, 1, 2, 3]
          -> found at index 2 !
Out[133]:
2
    Looking for value 3 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going left, in [0, 1, 2, 3] of size 4...
          Sequence of size 4 = [0, 1, 2, 3]
          -> going right, in [2, 3] of size 2...
            Sequence of size 2 = [2, 3]
            -> found at index 1 !
Out[133]:
3
    Looking for value 4 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> found at index 4 !
Out[133]:
4
    Looking for value 5 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right, in [4, 5, 6, 7] of size 4...
          Sequence of size 4 = [4, 5, 6, 7]
          -> going left, in [4, 5] of size 2...
            Sequence of size 2 = [4, 5]
            -> found at index 1 !
Out[133]:
5
    Looking for value 6 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right, in [4, 5, 6, 7] of size 4...
          Sequence of size 4 = [4, 5, 6, 7]
          -> found at index 2 !
Out[133]:
6
    Looking for value 7 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going left, in [0, 1, 2, 3, 4, 5, 6, 7] of size 8...
        Sequence of size 8 = [0, 1, 2, 3, 4, 5, 6, 7]
        -> going right, in [4, 5, 6, 7] of size 4...
          Sequence of size 4 = [4, 5, 6, 7]
          -> going right, in [6, 7] of size 2...
            Sequence of size 2 = [6, 7]
            -> found at index 1 !
Out[133]:
7
    Looking for value 8 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> found at index 8 !
Out[133]:
8
    Looking for value 9 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going left, in [8, 9, 10, 11] of size 4...
          Sequence of size 4 = [8, 9, 10, 11]
          -> going left, in [8, 9] of size 2...
            Sequence of size 2 = [8, 9]
            -> found at index 1 !
Out[133]:
9
    Looking for value 10 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going left, in [8, 9, 10, 11] of size 4...
          Sequence of size 4 = [8, 9, 10, 11]
          -> found at index 2 !
Out[133]:
10
    Looking for value 11 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going left, in [8, 9, 10, 11] of size 4...
          Sequence of size 4 = [8, 9, 10, 11]
          -> going right, in [10, 11] of size 2...
            Sequence of size 2 = [10, 11]
            -> found at index 1 !
Out[133]:
11
    Looking for value 12 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> found at index 4 !
Out[133]:
12
    Looking for value 13 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going right, in [12, 13, 14, 15] of size 4...
          Sequence of size 4 = [12, 13, 14, 15]
          -> going left, in [12, 13] of size 2...
            Sequence of size 2 = [12, 13]
            -> found at index 1 !
Out[133]:
13
    Looking for value 14 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going right, in [12, 13, 14, 15] of size 4...
          Sequence of size 4 = [12, 13, 14, 15]
          -> found at index 2 !
Out[133]:
14
    Looking for value 15 in [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] :
      Sequence of size 16 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
      -> going right, in [8, 9, 10, 11, 12, 13, 14, 15] of size 8...
        Sequence of size 8 = [8, 9, 10, 11, 12, 13, 14, 15]
        -> going right, in [12, 13, 14, 15] of size 4...
          Sequence of size 4 = [12, 13, 14, 15]
          -> going right, in [14, 15] of size 2...
            Sequence of size 2 = [14, 15]
            -> found at index 1 !
Out[133]:
15

Et maintenant des essais sur des entrées de tailles croissantes :

In [134]:
import random

def random_sorted_sequence(n, minint=0, maxint=1000):
    sequence = [random.randint(minint, maxint) for _ in range(n)]
    return sorted(sequence)
In [161]:
def test_dichotomy_in_list(n, debug=False, sequence=None):
    if sequence is None:
        sequence = random_sorted_sequence(n)
    value = random.choice(sequence)
    return dichotomy_in_list(sequence, value, debug=debug)
In [162]:
test_dichotomy_in_array(16, debug=True)
      Searching for 217 in sequence of size 16 = [32, 42, 72, 85, 107, 175, 217, 343, 396, 446, 577, 622, 708, 753, 819, 980]
      -> going left...
        Searching for 217 in sequence of size 8 = [32, 42, 72, 85, 107, 175, 217, 343]
        -> going right...
          Searching for 217 in sequence of size 3 = [175, 217, 343]
          -> found at index 6 !
Out[162]:
6
In [163]:
import sys
sys.setrecursionlimit(10000)
In [164]:
L = random_sorted_sequence(100)
%timeit test_dichotomy_in_list(100, sequence=L)
2.13 µs ± 84 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [165]:
L = random_sorted_sequence(1000)
%timeit test_dichotomy_in_list(1000, sequence=L)
6.73 µs ± 185 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [169]:
L = random_sorted_sequence(10000)
%timeit test_dichotomy_in_list(10000, sequence=L)
40.3 µs ± 1.01 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [168]:
L = random_sorted_sequence(100000)
%timeit test_dichotomy_in_list(100000, sequence=L)
853 µs ± 89.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [170]:
L = random_sorted_sequence(1000_000)
%timeit test_dichotomy_in_list(1000_000, sequence=L)
34.2 ms ± 2.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [171]:
L = random_sorted_sequence(10_000_000)
%timeit test_dichotomy_in_list(10_000_000, sequence=L)
436 ms ± 46.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

La complexité semble bien linéaire en $n$ (ie., $\mathcal{O}(n)$) !

Tri fusion (merge sort)

  • Problème :

    • entrée = tableau T = [a1, ..., an] de n valeurs
    • sortie = tableau trié par ordre croissant
  • Algorithme :

    • on divise en deux le tableau, gauche = [a1, ..., a_n/2] et droite = [a_n/2+1, ..., an],
    • on trie récursivement les deux sous tableaux,
    • on fusionne (merge) les deux sous tableaux en un.

La fusion est simple à réaliser : on commence tout à gauche des deux tableaux, on avance dans le tableau de gauche ou le tableau de droite, séquentiellement, jusqu'à avoir épuiser leurs valeurs, en prenant la valeur de gauche tant qu'elle est plus petite que celle de droite.

La complexité de cet algorithme est en $\mathcal{O}(n \log(n))$.

Avec le master theorem, on a $a=2, b=2, k=1$ : on divise les entrées en $a=2$ entrées de tailles $\leq b=2$ plus petites, sur lesquels on applique un traitement linéaire ($\mathcal{O}(n^{k=1})$ avant et après l'appel récursif.

In [197]:
def merge(left, right):
    result = []
    left_idx, right_idx = 0, 0  # growing pointers
    while left_idx < len(left) and right_idx < len(right):
        # this loop terminates because left_idx + right_idx is strictly increasing
        # and bounded by len(left) + len(right)
        # change the direction of this comparison to change the direction of the sort
        if left[left_idx] <= right[right_idx]:
            result.append(left[left_idx])
            left_idx += 1
        else:
            result.append(right[right_idx])
            right_idx += 1
    # we still have values to take on the left
    if left_idx < len(left):
        result.extend(left[left_idx:])
    # we still have values to take on the right
    if right_idx < len(right):
        result.extend(right[right_idx:])
    return result
In [198]:
def merge_sort(m):
    if len(m) <= 1:
        return m
 
    middle = len(m) // 2
    # separating the array in two pieces is easy with Python
    # but keep in mind, this takes a linear time to copy the arrays!
    left = m[:middle]
    right = m[middle:]
 
    sorted_left = merge_sort(left)
    sorted_right = merge_sort(right)
    return list(merge(sorted_left, sorted_right))

Quelques tests :

In [199]:
L = random_sorted_sequence(100)
%timeit merge_sort(shuffle(L))
305 µs ± 18.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [200]:
L = random_sorted_sequence(1000)
%timeit merge_sort(shuffle(L))
4.7 ms ± 733 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [201]:
L = random_sorted_sequence(10_000)
%timeit merge_sort(shuffle(L))
61.4 ms ± 9.57 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [202]:
L = random_sorted_sequence(100_000)
%timeit merge_sort(shuffle(L))
661 ms ± 41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [203]:
L = random_sorted_sequence(1000_000)
%timeit merge_sort(shuffle(L))
8.27 s ± 213 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

On pourrait vérifier que la complexité est bien en $\mathcal{O}(n \log(n))$.

In [433]:
import timeit
try:
    from tqdm import tqdm_notebook as tqdm
except ImportError:
    def tqdm(iterator, *args, **kwargs):
        return iterator
In [357]:
values_n = np.array(np.floor(np.logspace(2, 6.5, num=50)), dtype=int)
values_L = [ random_sorted_sequence(n) for n in values_n ]
In [358]:
values_times = [
    timeit.timeit(
        stmt=f"merge_sort(shuffle({L}))",
        number= 10000 if n <= 1000 else (1000 if n <= 10000 else (100 if n <= 100000 else 10)),
        globals=globals()
    )
    for (n, L) in tqdm(list(zip(values_n, values_L)))
]
  0%|          | 0/50 [00:00<?, ?it/s]
  2%|▏         | 1/50 [00:03<02:27,  3.01s/it]
  4%|▍         | 2/50 [00:07<02:42,  3.38s/it]
  6%|▌         | 3/50 [00:12<03:03,  3.90s/it]
  8%|▊         | 4/50 [00:18<03:27,  4.51s/it]
 10%|█         | 5/50 [00:26<04:06,  5.47s/it]
 12%|█▏        | 6/50 [00:35<04:59,  6.80s/it]
 14%|█▍        | 7/50 [00:48<06:11,  8.63s/it]
 16%|█▌        | 8/50 [01:04<07:36, 10.87s/it]
 18%|█▊        | 9/50 [01:25<09:22, 13.73s/it]
 20%|██        | 10/50 [01:53<12:05, 18.15s/it]
 22%|██▏       | 11/50 [02:31<15:39, 24.08s/it]
 24%|██▍       | 12/50 [02:35<11:25, 18.03s/it]
 26%|██▌       | 13/50 [02:40<08:41, 14.10s/it]
 28%|██▊       | 14/50 [02:46<07:02, 11.73s/it]
 30%|███       | 15/50 [02:54<06:08, 10.54s/it]
 32%|███▏      | 16/50 [03:05<06:01, 10.63s/it]
 34%|███▍      | 17/50 [03:18<06:15, 11.38s/it]
 36%|███▌      | 18/50 [03:35<06:55, 12.99s/it]
 38%|███▊      | 19/50 [03:56<08:00, 15.49s/it]
 40%|████      | 20/50 [04:22<09:22, 18.76s/it]
 42%|████▏     | 21/50 [04:56<11:13, 23.23s/it]
 44%|████▍     | 22/50 [05:38<13:29, 28.91s/it]
 46%|████▌     | 23/50 [05:44<09:51, 21.89s/it]
 48%|████▊     | 24/50 [05:51<07:35, 17.53s/it]
 50%|█████     | 25/50 [05:59<06:05, 14.62s/it]
 52%|█████▏    | 26/50 [06:09<05:19, 13.32s/it]
 54%|█████▍    | 27/50 [06:22<05:04, 13.26s/it]
 56%|█████▌    | 28/50 [06:43<05:38, 15.39s/it]
 58%|█████▊    | 29/50 [07:06<06:12, 17.72s/it]
 60%|██████    | 30/50 [07:33<06:50, 20.53s/it]
 62%|██████▏   | 31/50 [08:10<08:06, 25.59s/it]
 64%|██████▍   | 32/50 [08:52<09:06, 30.37s/it]
 66%|██████▌   | 33/50 [09:46<10:35, 37.41s/it]
 68%|██████▊   | 34/50 [09:53<07:33, 28.33s/it]
 70%|███████   | 35/50 [10:01<05:35, 22.36s/it]
 72%|███████▏  | 36/50 [10:13<04:27, 19.12s/it]
 74%|███████▍  | 37/50 [10:26<03:43, 17.21s/it]
 76%|███████▌  | 38/50 [10:47<03:40, 18.35s/it]
 78%|███████▊  | 39/50 [11:09<03:33, 19.42s/it]
 80%|████████  | 40/50 [11:37<03:41, 22.17s/it]
 82%|████████▏ | 41/50 [12:13<03:56, 26.31s/it]
 84%|████████▍ | 42/50 [13:02<04:23, 32.95s/it]
 86%|████████▌ | 43/50 [14:09<05:03, 43.42s/it]
 88%|████████▊ | 44/50 [15:23<05:15, 52.60s/it]
 90%|█████████ | 45/50 [17:08<05:41, 68.21s/it]
 92%|█████████▏| 46/50 [19:03<05:28, 82.13s/it]
 94%|█████████▍| 47/50 [21:16<04:52, 97.63s/it]
 96%|█████████▌| 48/50 [24:13<04:02, 121.18s/it]
 98%|█████████▊| 49/50 [27:56<02:31, 151.72s/it]
100%|██████████| 50/50 [32:35<00:00, 190.13s/it]
In [363]:
import numpy as np

import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (8, 5)
mpl.rcParams['figure.dpi'] = 120
    
import matplotlib.pyplot as plt

import seaborn as sns
sns.set(context="notebook", style="whitegrid", palette="hls", font="sans-serif", font_scale=1.1)
In [364]:
plt.figure()
plt.xlabel("Size of the input array $n$")
plt.ylabel("Time in second")
plt.title("Time complexity of the merge sort algorithm (naive code in Python)")
plt.plot(values_n, values_times, "d-")
plt.show()
Out[364]:
<Figure size 960x600 with 0 Axes>
Out[364]:
Text(0.5, 0, 'Size of the input array $n$')
Out[364]:
Text(0, 0.5, 'Time in second')
Out[364]:
Text(0.5, 1.0, 'Time complexity of the merge sort algorithm (naive code in Python)')
Out[364]:
[<matplotlib.lines.Line2D at 0x7f8227b8af28>]
In [368]:
plt.figure()
plt.xlabel("Size of the input array $n$")
plt.ylabel("Time in milli-second, normalized by $n \log(n)$")
plt.title("Time complexity of the merge sort algorithm (naive code in Python)")
normalized_values_times = 1e6 * np.array(values_times) / (values_n * np.log(values_n))
min_time = 1e5
plt.plot(values_n[values_n >= min_time], normalized_values_times[values_n >= min_time], "d-")
plt.show()
Out[368]:
<Figure size 960x600 with 0 Axes>
Out[368]:
Text(0.5, 0, 'Size of the input array $n$')
Out[368]:
Text(0, 0.5, 'Time in milli-second, normalized by $n \\log(n)$')
Out[368]:
Text(0.5, 1.0, 'Time complexity of the merge sort algorithm (naive code in Python)')
Out[368]:
[<matplotlib.lines.Line2D at 0x7f81fe4877f0>]

Enveloppe convexe de points en 2D (quickhull)

  • Problème :

    • entrée = tableau T = [xy1, ..., xyn] de n points dans le plan (xy = (x, y))
    • sortie = tableau hull = [xy_i1, ..., xy_ip] des p points constituant l'enveloppe convexe des n points d'entrée
  • Algorithme :

    • on identifie le point le plus en bas à gauche $P_{bg}$, le point le plus en haut à droite $P_{hd}$,
    • la diagonale $D$ est le segment orienté qui va de $P_{bg}$ à $P_{hd}$,
    • on sépare l'ensemble en deux, les points à gauche de $D$ (coin bas droite), ceux à droite de $D$,
    • on calcule les deux enveloppes convexes $E_g$ et $E_d$ des deux ensembles de points (récursivement),
    • on fusionne les deux enveloppes convexes $E_g$ et $E_d$ en les ordonnant comme il faut (et en enlevant l'arête $P_{hd} \to P_{pg}$ qui est présente dans $E_g$ et $P_{pg} \to P_{hd}$ présente dans $E_d$.

La fusion est naïve simple à réaliser. Le test pour savoir si un point est à gauche, à droite, ou sur $D$, se fait en temps $O(1)$ avec un calcul de produit scalaire.

In [374]:
exemple_points = [(1, 3) for _ in range(21)]
for i in range(1, 21):
    x, y = exemple_points[i-1]
    exemple_points[i] = (x * 17) % 23, (y * 17) % 19
In [407]:
plt.figure()
plt.title("Des points en 2D")
Xs = [p[0] for p in exemple_points]
Ys = [p[1] for p in exemple_points]
ax = plt.scatter(Xs, Ys)
plt.show()
Out[407]:
<Figure size 960x600 with 0 Axes>
Out[407]:
Text(0.5, 1.0, 'Des points en 2D')
In [376]:
def alpha(a, b, c):
    xa, ya = a
    xb, yb = b
    xc, yc = c
    return (xb - xa) * (yc - ya) - (xc - xa) * (yb - ya)
In [380]:
p = exemple_points
for i in range(2, 21):
    for j in range(1, i):
        for k in range(0, j):
            if alpha(p[i], p[j], p[k]) == 0:
                print(f"Les trois points #{i}, #{j} et #{k} sont alignés ({p[i]}, {p[j]} et {p[k]}).")
Les trois points #5, #2 et #0 sont alignés ((21, 18), (13, 12) et (1, 3)).
Les trois points #11, #8 et #6 sont alignés ((22, 12), (18, 8) et (12, 2)).
Les trois points #12, #6 et #4 sont alignés ((6, 14), (12, 2) et (8, 10)).
Les trois points #16, #15 et #10 sont alignés ((2, 15), (15, 2) et (4, 13)).
Les trois points #17, #3 et #2 sont alignés ((11, 8), (14, 14) et (13, 12)).
Les trois points #18, #9 et #0 sont alignés ((3, 3), (7, 3) et (1, 3)).
Les trois points #18, #13 et #3 sont alignés ((3, 3), (10, 10) et (14, 14)).
Les trois points #19, #10 et #1 sont alignés ((5, 13), (4, 13) et (17, 13)).
Les trois points #20, #9 et #1 sont alignés ((16, 12), (7, 3) et (17, 13)).
Les trois points #20, #11 et #2 sont alignés ((16, 12), (22, 12) et (13, 12)).
In [381]:
plt.figure()
plt.title("Des points en 2D, certains sont alignés")
Xs = [p[0] for p in exemple_points]
Ys = [p[1] for p in exemple_points]
plt.scatter(Xs, Ys)
p = exemple_points
for i in range(2, 21):
    for j in range(1, i):
        for k in range(0, j):
            if alpha(p[i], p[j], p[k]) == 0:
                plt.plot([p[i][0], p[j][0], p[k][0]], [p[i][1], p[j][1], p[k][1]], ':')
plt.show()
Out[381]:
<Figure size 960x600 with 0 Axes>
Out[381]:
Text(0.5, 1.0, 'Des points en 2D, certains sont alignés')
Out[381]:
<matplotlib.collections.PathCollection at 0x7f81fe18edd8>
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe304438>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe19c1d0>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe19c550>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe19c8d0>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe19cc50>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe19cfd0>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe24dd68>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe12e6d8>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe12ea58>]
Out[381]:
[<matplotlib.lines.Line2D at 0x7f81fe12edd8>]
In [391]:
def plus_bas(points):
    """ Trouve le point (xa, ya) le plus en bas à gauche, en temps linéaire."""
    n = len(points)
    xa, ya = points[0]
    for j in range(1, n):
        xj, yj = points[j]
        if (ya > yj) or (ya == yj and xa > xj):
            xa, ya = xj, yj
    return xa, ya
In [392]:
plus_bas(exemple_points)  # (12, 2)
Out[392]:
(12, 2)
In [393]:
def plus_haut(points):
    """ Trouve le point (xb, yb) le plus en haut à droite, en temps linéaire."""
    n = len(points)
    xb, yb = points[0]
    for j in range(1, n):
        xj, yj = points[j]
        if (yb < yj) or (yb == yj and xb < xj):
            xb, yb = xj, yj
    return xb, yb
In [395]:
plus_haut(exemple_points)  # (21, 8)
Out[395]:
(21, 18)
In [396]:
def plus_a_droite(b, h, points):
    if not points:
        return b, []
    t = points[0]
    q = points[1:]
    m, d = plus_a_droite(b, h, q)  # recursif !
    angle = alpha(b, t, h)
    if angle <= 0:
        return m, d
    else:
        if not d:
            return t, [t]
        if angle > alpha(b, m, h):
            return t, [t] + d
        else:
            return m, [t] + d
In [400]:
def quickHull_droite(b, h, points):
    m, d = plus_a_droite(b, h, points)
    # si aucun point n'est à droite du segment orienté [b h], on en garde un seul, [b]
    if not d:
        return [b]
    # on fait les deux appels récursifs !
    return quickHull_droite(b, m, d) + quickHull_droite(m, h, d)
In [408]:
def quickHull(points):
    b = plus_haut(points)
    h = plus_bas(points)
    return quickHull_droite(b, h, points) + quickHull_droite(h, b, points)
In [409]:
quickHull(exemple_points)
Out[409]:
[(21, 18), (9, 18), (2, 15), (1, 3), (12, 2), (15, 2), (22, 12)]

La complexité de cet algorithme est en $\mathcal{O}(n \log(n))$.

Avec le master theorem, on a $a=2, b=2, k=1$ : on divise les entrées en $a=2$ entrées de tailles $\leq b=2$ plus petites, sur lesquels on applique un traitement linéaire ($\mathcal{O}(n^{k=1})$ avant et après l'appel récursif. (En supposant les points uniformément répartis dans le plan.)

On pourrait l'implémenter, et vérifier la complexité sur des entrées aléatoires (ie. les points sont tirés uniformément dans une boîte carrée $(x_i,y_i) \sim [x_\min,x_\max] \times [y_\min,y_\max]$).

Attention ici avec Python, les ajout en tête de listes peuvent prendre un coût linéaire (et pas constant comme en OCaml) et les concaténations de listes prennent un coût linéaire ! Donc la vraie implémentation sera plus coûteuse que ce $\mathcal{O}(n \log(n))$ annoncé.

Un exemple

In [415]:
plt.figure()
plt.title("Des points en 2D, et leur enveloppe convexe")
Xs = [p[0] for p in exemple_points]
Ys = [p[1] for p in exemple_points]
plt.scatter(Xs, Ys)
p = exemple_points
enveloppe = quickHull(p)
q = enveloppe
h = len(enveloppe)
for i in range(h - 1):
    plt.plot([q[i][0], q[i+1][0]], [q[i][1], q[i+1][1]])
plt.plot([q[-1][0], q[0][0]], [q[-1][1], q[0][1]])
plt.show()
Out[415]:
<Figure size 960x600 with 0 Axes>
Out[415]:
Text(0.5, 1.0, 'Des points en 2D, et leur enveloppe convexe')
Out[415]:
<matplotlib.collections.PathCollection at 0x7f81fdca0e48>
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdd1f0f0>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcae2b0>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcf9ac8>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcae940>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcaecc0>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcbe080>]
Out[415]:
[<matplotlib.lines.Line2D at 0x7f81fdcffba8>]

Exemples aléatoires de taille contrôlée

In [416]:
import random

def points_aleatoires(n=100, xmin=-10, xmax=10, ymin=-10, ymax=10):
    return [(random.randint(xmin, xmax), random.randint(ymin, ymax)) for _ in range(n)]
In [424]:
xmin, xmax, ymin, ymax = -10, 10, -10, 10

for n in [5, 10, 50, 100, 500, 1000]:
    p = points_aleatoires(n=n, xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax)
    _ = plt.figure()
    _ = plt.title(f"{n} points aléatoires dans [{xmin},{xmax}] x [{ymin},{ymax}] en 2D, et leur enveloppe convexe")
    Xs = [pi[0] for pi in p]
    Ys = [pi[1] for pi in p]
    _ = plt.scatter(Xs, Ys)
    enveloppe = quickHull(p)
    q = enveloppe
    h = len(enveloppe)
    for i in range(h - 1):
        _ = plt.plot([q[i][0], q[i+1][0]], [q[i][1], q[i+1][1]])
    _ = plt.plot([q[-1][0], q[0][0]], [q[-1][1], q[0][1]])
    _ = plt.show()

Complexité temporelle de ce calcul d'enveloppe convexe

In [444]:
values_n = np.array(np.floor(np.logspace(1, 3.5, num=20)), dtype=int)
values_points = [ points_aleatoires(n=n) for n in values_n ]
In [445]:
values_times = [
    timeit.timeit(
        stmt=f"quickHull({points})",
        number= 1000 if n <= 1000 else (100 if n <= 10000 else (10 if n <= 100000 else 1)),
        globals=globals()
    )
    for (n, points) in tqdm(list(zip(values_n, values_points)))
]
In [446]:
plt.figure()
plt.xlabel("Size of the input array $n$")
plt.ylabel("Time in second")
plt.title("Time complexity of the quick hull algorithm (naive code in Python)")
plt.plot(values_n, values_times, "d-")
plt.show()
Out[446]:
<Figure size 960x600 with 0 Axes>
Out[446]:
Text(0.5, 0, 'Size of the input array $n$')
Out[446]:
Text(0, 0.5, 'Time in second')
Out[446]:
Text(0.5, 1.0, 'Time complexity of the quick hull algorithm (naive code in Python)')
Out[446]:
[<matplotlib.lines.Line2D at 0x7f81fe11b2b0>]
In [447]:
plt.figure()
plt.xlabel("Size of the input array $n$")
plt.ylabel("Time in milli-second, normalized by $n \log(n)$")
plt.title("Time complexity of the quick hull algorithm (naive code in Python)")
normalized_values_times = 1e6 * np.array(values_times) / (values_n * np.log(values_n))
min_time = 1
plt.plot(values_n[values_n >= min_time], normalized_values_times[values_n >= min_time], "d-")
plt.show()
Out[447]:
<Figure size 960x600 with 0 Axes>
Out[447]:
Text(0.5, 0, 'Size of the input array $n$')
Out[447]:
Text(0, 0.5, 'Time in milli-second, normalized by $n \\log(n)$')
Out[447]:
Text(0.5, 1.0, 'Time complexity of the quick hull algorithm (naive code in Python)')
Out[447]:
[<matplotlib.lines.Line2D at 0x7f81fd9f66d8>]

Algorithme de Gauss-Karatsuba

  • Problème :

    • entrée = deux nombres x et y, avec n chiffres dans leurs écritures décimales,
    • sortie = un nombre z = x * y produit des deux nombres.
  • Algorithme :

    • on découpe les deux nombres en nombres avec au plus n/2 chiffres :
      • découper x = b + 10^{n/2} a, avec a et b de taille au plus n/2
      • découper y = d + 10^{n/2} c, avec c et d de taille au plus n/2
    • trois appels récursifs au produit de nombres, de tailles deux fois plus petites
    • on calcule a * c et b * d
    • l'astuce vient de ad_plus_bc = ad + bc = (a+b)(c+d) - ac - bd comme (a+b)(c+d) = ac + ad + bc + bd
    • x y = (b + 10^{n/2} a) (d + 10^{n/2} c) = bd + 10^{n/2} (ad_plus_bc) + 10^{n} (a c)

La complexité de cet algorithme est en $\mathcal{O}(n^{\log_2(3)})$, asymptotiquement meilleur que $\mathcal{O}(n^2)$ la méthode naïve.

Avec le master theorem, on a $a=3, b=2, k=1$ : on divise les entrées en $a=7$ entrées de tailles $\leq b=2$ plus petites [1], sur lesquels on applique un traitement linéaire (toutes les additions $\mathcal{O}(n^{k=1})$ avant et après l'appel récursif.

Donc $a = 3 > b^k = 2$ ce qui donne $T(n) = \mathcal{O}(n^{\log_b(a)})$.

En comparaison, la méthode naïve, que voici, sera en avec $a=4$ appels récursifs, soit $a = 4 > b^k = 2$ ce qui donne $T(n) = \mathcal{O}(n^{\log_b(a)}) = \mathcal{O}(n^2)$.

In [305]:
def naivemult(x, y, base=10):
    """ Function to multiply 2 numbers using the grade school algorithm."""
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x*y
    else:
        n = max(len(str(x)),len(str(y)))
        # that's suboptimal, and ugly, but it's quick to write

        nby2 = n // 2
        # split x in b + 10^{n/2} a, with a and b of sizes at most n/2
        a = x // base**(nby2)
        b = x %  base**(nby2)
        # split y in d + 10^{n/2} a, with c and d of sizes at most n/2
        c = y // base**(nby2)
        d = y %  base**(nby2)

        # we make 3 calls to entries which are 2 times smaller
        ac = naivemult(a, c)
        ad = naivemult(a, d)
        bd = naivemult(b, d)
        bc = naivemult(b, c)
        # x y = (b + 10^{n/2} a) (d + 10^{n/2} c)
        # ==> x y = bd + 10^{n/2} (b c + a d) + 10^{n} (a c)

        # this little trick, writing n as 2*nby2 takes care of both even and odd n
        prod = ac * base**(2*nby2) + ((ad + bc) * base**nby2) + bd

        return prod

Et la fonction implémentant l'algorithme de Karatsuba.

In [306]:
def karatsuba(x, y, base=10):
    """ Function to multiply 2 numbers in a more efficient manner than the grade school algorithm."""
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x*y
    else:
        n = max(len(str(x)),len(str(y)))
        # that's suboptimal, and ugly, but it's quick to write

        nby2 = n // 2
        # split x in b + 10^{n/2} a, with a and b of sizes at most n/2
        a = x // base**(nby2)
        b = x %  base**(nby2)
        # split y in d + 10^{n/2} c, with c and d of sizes at most n/2
        c = y // base**(nby2)
        d = y %  base**(nby2)

        # we make 3 calls to entries which are 2 times smaller
        ac = karatsuba(a, c)
        bd = karatsuba(b, d)
        # ad + bc = (a+b)(c+d) - ac - bd as (a+b)(c+d) = ac + ad + bc + bd
        ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
        # x y = (b + 10^{n/2} a) (d + 10^{n/2} c)
        # ==> x y = bd + 10^{n/2} (ad_plus_bc) + 10^{n} (a c)

        # this little trick, writing n as 2*nby2 takes care of both even and odd n
        prod = ac * base**(2*nby2) + (ad_plus_bc * base**nby2) + bd

        return prod

Un exemple :

In [308]:
x = 1234
y = 4567
x * y
naivemult(x, y)
karatsuba(x, y)
Out[308]:
5635678
Out[308]:
5635678
Out[308]:
5635678

Des exemples de grandes tailles :

In [278]:
def rand_largeint(n=1024):
    return int("".join(str(random.randint(0, 9)) for _ in range(n)))
In [279]:
x = rand_largeint(1024)
y = rand_largeint(1024)
%timeit x * y
%timeit naivemult(x, y)
%timeit karatsuba(x, y)
9.94 µs ± 290 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
1.04 s ± 38.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
139 ms ± 4.35 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Quelle est l'influence de la base ?

In [317]:
n = 1024
base = 10
%timeit rand_largeint(n) * rand_largeint(n)
%timeit naivemult(rand_largeint(n), rand_largeint(n), base=base)
%timeit karatsuba(rand_largeint(n), rand_largeint(n), base=base)
2.73 ms ± 186 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.03 s ± 34.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
143 ms ± 4.56 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [318]:
n = 1024
base = 2
%timeit rand_largeint(n) * rand_largeint(n)
%timeit naivemult(rand_largeint(n), rand_largeint(n), base=base)
%timeit karatsuba(rand_largeint(n), rand_largeint(n), base=base)
2.53 ms ± 126 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
921 ms ± 14.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
225 ms ± 9.75 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Et pour des entrées de tailles croissantes :

In [281]:
for n in [2**3, 2**4, 2**5, 2**6, 2**7, 2**8, 2**9, 2**10, 2**11, 2**12]:
    print(f"\nFor n = {n} : Python native, naive then Karatsuba :")
    x = rand_largeint(n)
    y = rand_largeint(n)
    %timeit x * y  # crazy fast!
    %timeit naivemult(x, y)
    %timeit karatsuba(x, y)
For n = 8 : Python native, naive then Karatsuba :
49.4 ns ± 3.68 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
71.8 µs ± 5.67 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
55.8 µs ± 637 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

For n = 16 : Python native, naive then Karatsuba :
60.8 ns ± 5.85 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
212 µs ± 5.89 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
178 µs ± 830 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

For n = 32 : Python native, naive then Karatsuba :
74.2 ns ± 0.627 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
1.07 ms ± 118 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
699 µs ± 28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

For n = 64 : Python native, naive then Karatsuba :
158 ns ± 22.8 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
4.07 ms ± 127 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.65 ms ± 13 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

For n = 128 : Python native, naive then Karatsuba :
275 ns ± 2.78 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
15.8 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
5.14 ms ± 151 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For n = 256 : Python native, naive then Karatsuba :
891 ns ± 26 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
63.6 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
16.2 ms ± 774 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For n = 512 : Python native, naive then Karatsuba :
3.16 µs ± 60.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
264 ms ± 7.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
51.9 ms ± 2.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 1024 : Python native, naive then Karatsuba :
9.98 µs ± 370 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
1 s ± 12.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
139 ms ± 4.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 2048 : Python native, naive then Karatsuba :
31 µs ± 624 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
4.14 s ± 138 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
430 ms ± 30.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

For n = 4096 : Python native, naive then Karatsuba :
101 µs ± 4.65 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
17.6 s ± 712 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.27 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Ce n'est pas facile de vérifier le comportement en $n^{\log_2(3)}$ mais on voit déjà que la méthode de Gauss-Karatsuba est bien plus rapide que la méthode naïve !


Algorithme de Strassen

  • On cherche à multiplier deux matrices $A, B \in\mathbb{K}^{n \times n}$, où $\mathbb{K}$ est un anneau commutatif quelconque (e.g., les entiers, ou les rationnels, ou les flottants), et $n\in\mathbb{N}$ est un entier.
  • On va rappeler un algorithme naïf (de complexité temporelle asymptotque en $\Theta(n^3)$), un algorithme récursif mais qui n'est pas plus efficace que l'algorithme naïf, et un algorithme récursif qui attend une complexité asymptotique plus effaice ($\Theta(n^{\log_2(7)})$).

  • Cf le cours pour l'algorithme, ou https://fr.wikipedia.org/wiki/Algorithme_de_Strassen.

Part of this code is coming from this blog post.

Méthode naïve, "méthode i k j"

On écrit $C = A \times B$, et on rappelle que $\forall i, j \in[n], C_{i,j} = \sum_{k=1}^n A_{i,k} B_{k,j}$. Donc on peut calculer cela avec trois boucles for imbriquées, pour $i, k, j$.

On va utiliser des tableaux numpy et pas des listes de listes, pour plus de simplicités.

In [644]:
import numpy as np

Attention : pour ne pas être trop lent, j'utilise la compilation "just in time" proposée par le projet numba, pour faire que ces fonctions naïves soient aussi efficaces que celles de Numpy (en gros).

In [645]:
try:
    from numba import jit
except ImportError:
    def jit(f, *args, **kwargs):
        return f
In [ ]:
@jit
def ikjMatrixProduct(A, B):
    """ Produit matriciel naïf, en O(n^3) opérations."""
    n = len(A)
    C = np.zeros((n, n), dtype=type(A[0,0]))
    for i in range(n):
        for k in range(n):
            for j in range(n):
                C[i,j] += A[i,k] * B[k,j]
    return C

Méthode naïve récursive

On va d'abord définir nos propres opérations d'additions et de soustractions de matrices dans $\mathbb{K}^{n \times n}$.

In [647]:
@jit
def add(A, B):
    n = len(A)
    C = np.zeros((n, n), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            C[i,j] = A[i,j] + B[i,j]
    return C
In [648]:
@jit
def subtract(A, B):
    n = len(A)
    C = np.zeros((n, n), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            C[i,j] = A[i,j] - B[i,j]
    return C

La méthode récursive mais naïve est la suivante. On découpe les matrices de taille $n \times n$ en quatre matrices de tailles $(n/2) \times (n/2)$. $$ A B = \begin{array}{l} \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} \end{array} \begin{array}{l} \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix}

\end{array}

C = \begin{array}{l} \begin{pmatrix} c_{11} = p_1 + p2 = a_{11}b_{11} + a_{12}b_{21} & c_{12} = p_3 + p4 = a_{11}b_{12} + a_{12}b_{22}\\ c_{21} = p_5 + p6 = a_{21}b_{11} + a_{22}b_{21} & c_{22} = p_7 + p8 = a_{21}b_{12} + a_{22}b_{22} \end{pmatrix} \end{array} $$

In [ ]:
def naive_recursive(A, B, leaf_size=LEAF_SIZE):
    n = len(A)

    if n <= leaf_size:
        return ikjMatrixProduct(A, B)
    else:
        # initializing the new sub-matrices
        newSize = n//2
        a11 = A[:newSize, :newSize]  # top left
        a12 = A[:newSize, newSize:]  # top right
        a21 = A[newSize:, :newSize]  # bottom left
        a22 = A[newSize:, newSize:]  # bottom right
        b11 = B[:newSize, :newSize]  # top left
        b12 = B[:newSize, newSize:]  # top right
        b21 = B[newSize:, :newSize]  # bottom left
        b22 = B[newSize:, newSize:]  # bottom right

        # Calculating p1 to p8:
        p1 = naive_recursive(a11, b11, leaf_size=leaf_size)
        p2 = naive_recursive(a12, b21, leaf_size=leaf_size)
        p3 = naive_recursive(a11, b12, leaf_size=leaf_size)
        p4 = naive_recursive(a12, b22, leaf_size=leaf_size)
        p5 = naive_recursive(a21, b11, leaf_size=leaf_size)
        p6 = naive_recursive(a22, b21, leaf_size=leaf_size)
        p7 = naive_recursive(a21, b12, leaf_size=leaf_size)
        p8 = naive_recursive(a22, b22, leaf_size=leaf_size)

        # calculating c11, c21, c11 and c22:
        c11 = add(p1, p2)
        c12 = add(p3, p4)
        c21 = add(p5, p6)
        c22 = add(p7, p8)

        # Grouping the results obtained in a single matrix:
        C = np.zeros((n, n), dtype=type(A[0,0]))
        for i in range(newSize):
            for j in range(newSize):
                C[i,j] = c11[i,j]
                C[i,j + newSize] = c12[i,j]
                C[i + newSize,j] = c21[i,j]
                C[i + newSize,j + newSize] = c22[i,j]
        return C

La complexité de cet algorithme est en $\mathcal{O}(n^{\log_2(8)}) = \mathcal{O}(n^3)$ équivalent à la méthode naïve non récursive.

Avec le master theorem, on a $a=8, b=2, k=2$ : on divise les entrées en $8$ entrées de tailles plus petites, et on fait $a=8$ calculs sur des entrées de tailles $\leq b=2$ plus petites [1], sur lesquels on applique un traitement quadratique (toutes les additions $\mathcal{O}(n^{k=2})$ avant et après l'appel récursif.

Donc $a = 8 > b^k = 4$ ce qui donne $T(n) = \mathcal{O}(n^{\log_b(a)})$.

[1] : c'est $n$ qui est divisé par deux, bien que $n^2$ soit divisé par $4$ et que l'on puisse considérer que la taille d'une matrice $(n,m)$ est $n \times m$.

In [ ]:
def naive(A, B, leaf_size=LEAF_SIZE):
    assert isinstance(A, np.ndarray) and isinstance(B, np.ndarray)
    assert len(A) == len(A[0]) == len(B) == len(B[0])

    # Make the matrices bigger so that you can apply the strassen
    # algorithm recursively without having to deal with odd matrix sizes
    nextPowerOfTwo = lambda n: 2**int(ceil(log(n,2)))
    n = len(A)
    m = nextPowerOfTwo(n)
    APrep = np.zeros((m, m), dtype=type(A[0,0]))
    BPrep = np.zeros((m, m), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            APrep[i,j] = A[i,j]
            BPrep[i,j] = B[i,j]
    CPrep = naive_recursive(APrep, BPrep, leaf_size=leaf_size)
    C = np.zeros((n, n), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            C[i,j] = CPrep[i,j]
    return C

Méthode récursive de Strassen

La méthode récursive de Strassen est la suivante. On découpe les matrices de taille $n \times n$ en quatre matrices de tailles $(n/2) \times (n/2)$. $$ A B = \begin{array}{l} \begin{pmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{pmatrix} \end{array} \begin{array}{l} \begin{pmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{pmatrix}

\end{array}

C = \begin{array}{l} \begin{pmatrix} c_{11} = p1 + p4 - p5 + p7 & c_{12} = p3 + p5 \\ c_{21} = p2 + p4 & c_{22} = p1 + p3 - p2 + p6 \\ \end{pmatrix} \end{array} $$ Avec les 7 produits intermédiaires suivants : $$ p1 = (a{11}+a{22}) (b{11}+b{22}) \ p2 = (a{21}+a{22}) (b{11}) \ p3 = (a{11}) (b{12} - b{22}) \ p4 = (a_{22}) (b{21} - b{11}) \ p5 = (a{11}+a{12}) (b{22}) \ p6 = (a{21}-a_{11}) (b{11}+b{12}) \ p7 = (a{12}-a{22}) * (b{21}+b{22}) $$ On fait plus d'additions mais moins de produits, et les additions sont en $\Theta(n^2)$, les produits sont plus couteux, donc on va gagner (asymptotiquement) !

La complexité de cet algorithme est en $\mathcal{O}(n^{\log_2(7)})$, asymptotiquement meilleur que $\mathcal{O}(n^3)$ la méthode naïve.

Avec le master theorem, on a $a=7, b=2, k=2$ : on divise les entrées en $8$ entrées de tailles 2 fois plus petites, mais on ne fait que $a=7$ calculs sur entrées de tailles $\leq b=2$ plus petites [1], sur lesquels on applique un traitement quadratique (toutes les additions $\mathcal{O}(n^{k=2})$ avant et après l'appel récursif.

Donc $a = 7 > b^k = 4$ ce qui donne $T(n) = \mathcal{O}(n^{\log_b(a)})$.

[1] : c'est $n$ qui est divisé par deux, bien que $n^2$ soit divisé par $4$ et que l'on puisse considérer que la taille d'une matrice $(n,m)$ est $n \times m$.

On va encore utiliser cette idée de leaf_size : dès que les entrées sont trop petites, on réutilise l'algorithme naïf.

In [649]:
LEAF_SIZE = 64
In [661]:
def strassenR(A, B, leaf_size=LEAF_SIZE):
    n = len(A)

    if n <= leaf_size:
        return ikjMatrixProduct(A, B)
    else:
        # initializing the new sub-matrices
        newSize = n//2
        a11 = A[:newSize, :newSize]  # top left
        a12 = A[:newSize, newSize:]  # top right
        a21 = A[newSize:, :newSize]  # bottom left
        a22 = A[newSize:, newSize:]  # bottom right
        b11 = B[:newSize, :newSize]  # top left
        b12 = B[:newSize, newSize:]  # top right
        b21 = B[newSize:, :newSize]  # bottom left
        b22 = B[newSize:, newSize:]  # bottom right

        # Calculating p1 to p7:
        p1 = strassenR(add(a11, a22), add(b11, b22), leaf_size=leaf_size) # p1 = (a11+a22) * (b11+b22)
        p2 = strassenR(add(a21, a22), b11, leaf_size=leaf_size)  # p2 = (a21+a22) * (b11)
        p3 = strassenR(a11, subtract(b12, b22), leaf_size=leaf_size)  # p3 = (a11) * (b12 - b22)
        p4 = strassenR(a22, subtract(b21, b11), leaf_size=leaf_size)   # p4 = (a22) * (b21 - b11)
        p5 = strassenR(add(a11, a12), b22, leaf_size=leaf_size)  # p5 = (a11+a12) * (b22)   
        p6 = strassenR(subtract(a21, a11), add(b11, b12), leaf_size=leaf_size) # p6 = (a21-a11) * (b11+b12)
        p7 = strassenR(subtract(a12, a22), add(b21, b22), leaf_size=leaf_size) # p7 = (a12-a22) * (b21+b22)

        # calculating c21, c21, c11 e c22:
        c12 = add(p3, p5) # c12 = p3 + p5
        c21 = add(p2, p4)  # c21 = p2 + p4
        c11 = subtract(add(add(p1, p4), p7), p5) # c11 = p1 + p4 - p5 + p7
        c22 = subtract(add(add(p1, p3), p6), p2) # c22 = p1 + p3 - p2 + p6

        # Grouping the results obtained in a single matrix:
        C = np.zeros((n, n), dtype=type(A[0,0]))
        for i in range(newSize):
            for j in range(newSize):
                C[i,j] = c11[i,j]
                C[i,j + newSize] = c12[i,j]
                C[i + newSize,j] = c21[i,j]
                C[i + newSize,j + newSize] = c22[i,j]
        return C
In [662]:
def strassen(A, B, leaf_size=LEAF_SIZE):
    assert isinstance(A, np.ndarray) and isinstance(B, np.ndarray)
    assert len(A) == len(A[0]) == len(B) == len(B[0])

    # Make the matrices bigger so that you can apply the strassen
    # algorithm recursively without having to deal with odd matrix sizes
    nextPowerOfTwo = lambda n: 2**int(ceil(log(n,2)))
    n = len(A)
    m = nextPowerOfTwo(n)
    APrep = np.zeros((m, m), dtype=type(A[0,0]))
    BPrep = np.zeros((m, m), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            APrep[i,j] = A[i,j]
            BPrep[i,j] = B[i,j]
    CPrep = strassenR(APrep, BPrep, leaf_size=leaf_size)
    C = np.zeros((n, n), dtype=type(A[0,0]))
    for i in range(n):
        for j in range(n):
            C[i,j] = CPrep[i,j]
    return C

En fait, on devrait essayer d'utiliser les opérations les plus efficaces pour les additions et soustractions de matrices, afin de vraiment voir où se situe le gain de l'algorithme de Strassen.

In [663]:
def strassenR_with_numpy_for_add_sub(A, B, leaf_size=LEAF_SIZE):
    n = len(A)

    if n <= leaf_size:
        return ikjMatrixProduct(A, B)
    else:
        # initializing the new sub-matrices
        newSize = n//2
        # dividing the matrices in 4 sub-matrices:
        a11 = A[:newSize, :newSize]  # top left
        a12 = A[:newSize, newSize:]  # top right
        a21 = A[newSize:, :newSize]  # bottom left
        a22 = A[newSize:, newSize:]  # bottom right
        b11 = B[:newSize, :newSize]  # top left
        b12 = B[:newSize, newSize:]  # top right
        b21 = B[newSize:, :newSize]  # bottom left
        b22 = B[newSize:, newSize:]  # bottom right

        # Calculating p1 to p7:
        p1 = strassenR_with_numpy_for_add_sub(a11 + a22, b11 + b22, leaf_size=leaf_size) # p1 = (a11+a22) * (b11+b22)
        p2 = strassenR_with_numpy_for_add_sub(a21 + a22, b11, leaf_size=leaf_size)  # p2 = (a21+a22) * (b11)
        p3 = strassenR_with_numpy_for_add_sub(a11, b12 - b22, leaf_size=leaf_size)  # p3 = (a11) * (b12 - b22)
        p4 = strassenR_with_numpy_for_add_sub(a22, b21 - b11, leaf_size=leaf_size)   # p4 = (a22) * (b21 - b11)
        p5 = strassenR_with_numpy_for_add_sub(a11 + a12, b22, leaf_size=leaf_size)  # p5 = (a11+a12) * (b22)   
        p6 = strassenR_with_numpy_for_add_sub(a21 - a11, b11 + b12, leaf_size=leaf_size) # p6 = (a21-a11) * (b11+b12)
        p7 = strassenR_with_numpy_for_add_sub(a12 - a22, b21 + b22, leaf_size=leaf_size) # p7 = (a12-a22) * (b21+b22)

        # calculating c21, c21, c11 e c22:
        c12 = p3 + p5 # c12 = p3 + p5
        c21 = p2 + p4  # c21 = p2 + p4
        c11 = p1 + p4 + p7 - p5 # c11 = p1 + p4 - p5 + p7
        c22 = p1 + p3 + p6 - p2 # c22 = p1 + p3 - p2 + p6

        # Grouping the results obtained in a single matrix:
        C = np.zeros((n, n), dtype=type(A[0,0]))
        C[:newSize, :newSize] = c11
        C[:newSize, newSize:] = c12
        C[newSize:, :newSize] = c21
        C[newSize:, newSize:] = c22
        return C
In [664]:
def strassen_with_numpy_for_add_sub(A, B, leaf_size=LEAF_SIZE):
    assert isinstance(A, np.ndarray) and isinstance(B, np.ndarray)
    assert len(A) == len(A[0]) == len(B) == len(B[0])

    # Make the matrices bigger so that you can apply the strassen
    # algorithm recursively without having to deal with odd matrix sizes
    nextPowerOfTwo = lambda n: 2**int(ceil(log(n,2)))
    n = len(A)
    m = nextPowerOfTwo(n)
    APrep = np.zeros((m, m), dtype=type(A[0,0]))
    BPrep = np.zeros((m, m), dtype=type(A[0,0]))
    APrep[:n, :n] = A
    BPrep[:n, :n] = B
    CPrep = strassenR_with_numpy_for_add_sub(APrep, BPrep, leaf_size=leaf_size)
    return CPrep[:n, :n]

Pour générer des exemples de tailles grandissantes :

In [665]:
import random
In [666]:
def random_matrix(n, minint=0, maxint=1000):
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            A[i, j] = random.randint(minint, maxint)
    return A
In [667]:
A = random_matrix(4)
A
Out[667]:
array([[151, 145, 650, 267],
       [933, 635, 112, 356],
       [991, 878, 586, 674],
       [687, 336, 815, 249]])
In [668]:
B = random_matrix(4)
B
Out[668]:
array([[333, 711, 765, 481],
       [522, 365,  34, 638],
       [627,  65, 916, 371],
       [241,  63, 982, 821]])

On vérifie sur un exemple que nos deux algorithmes calculant le produit $AB$ de matrices sont corrects :

In [669]:
A @ B
Out[669]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])
In [670]:
ikjMatrixProduct(A, B)
Out[670]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])
In [671]:
strassenR(A, B, leaf_size=1)
Out[671]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])
In [672]:
strassenR(A, B, leaf_size=2)
Out[672]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])
In [673]:
strassenR_with_numpy_for_add_sub(A, B, leaf_size=1)
Out[673]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])
In [674]:
strassenR_with_numpy_for_add_sub(A, B, leaf_size=2)
Out[674]:
array([[ 597870,  219357,  978039,  625498],
       [ 798179,  924846, 1187519, 1187731],
       [1318175, 1105623, 1986611, 1807595],
       [ 975177,  679759, 1528037, 1051609]])

La première version n'utilisait pas la compilation "just in time" offerte par Numba, et c'était très lent ! Il y avait aussi beaucoup trop d'allocations mémoire !

J'ai optimisé et (naïvement) simplifié le code, en enlevant les déclarations mémoires inutiles, mais surtout en utilisant numba.jit pour optimiser les opérations add et substract, et on obtient des temps bien plus courts :

In [681]:
for n in tqdm([2**5, 2**6, 2**7, 2**8, 2**9]):
    print(f"\nFor n = {n} : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :")
    A = random_matrix(n)
    B = random_matrix(n)
    C = A @ B
    %timeit A @ B  # crazy fast!
    assert np.all(np.isclose(ikjMatrixProduct(A, B), C))
    %timeit ikjMatrixProduct(A, B)
    assert np.all(np.isclose(strassen(A, B, leaf_size=4), C))
    %timeit strassen(A, B, leaf_size=4)
    assert np.all(np.isclose(strassen_with_numpy_for_add_sub(A, B, leaf_size=4), C))
    %timeit strassen_with_numpy_for_add_sub(A, B, leaf_size=4)
For n = 32 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
35.3 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
16.6 µs ± 1.19 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
4.38 ms ± 668 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.75 ms ± 514 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For n = 64 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
258 µs ± 13.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
118 µs ± 19.5 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
35.4 ms ± 7.66 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
22.8 ms ± 1.44 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 128 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
2.68 ms ± 270 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
799 µs ± 16.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
186 ms ± 12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
169 ms ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 256 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
30.5 ms ± 691 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
5.97 ms ± 152 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.24 s ± 26.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.17 s ± 21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

For n = 512 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
405 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
53.2 ms ± 1.89 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
8.6 s ± 149 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
8.27 s ± 135 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Est-ce que les temps sont vraiment différents si les deux matrices A et B changent à chaque test ? Pas vraiment ! Mais pour les petites valeurs de n, les quatre façons de multiplier sont toutes aussi lentes : le temps est principalement passé à générer A et B !

In [682]:
for n in tqdm([2**5, 2**6, 2**7, 2**8, 2**9, 2**10]):
    print(f"\nFor n = {n} : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :")
    %timeit random_matrix(n) @ random_matrix(n)
    %timeit ikjMatrixProduct(random_matrix(n), random_matrix(n))
    %timeit strassen(random_matrix(n), random_matrix(n), leaf_size=64)
    %timeit strassen_with_numpy_for_add_sub(random_matrix(n), random_matrix(n), leaf_size=64)
For n = 32 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
2.5 ms ± 228 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.93 ms ± 331 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.68 ms ± 271 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.43 ms ± 133 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For n = 64 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
12.3 ms ± 2.64 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
10.1 ms ± 373 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
13.4 ms ± 832 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
10.5 ms ± 354 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

For n = 128 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
41.2 ms ± 2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
40.9 ms ± 2.93 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
60.3 ms ± 3.57 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
43.2 ms ± 3.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 256 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
205 ms ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
218 ms ± 53.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
255 ms ± 12.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
207 ms ± 33.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 512 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
877 ms ± 13.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
666 ms ± 23 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.25 s ± 9.44 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
996 ms ± 30.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

For n = 1024 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
12.4 s ± 389 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.75 s ± 269 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
7.71 s ± 1.62 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
5.66 s ± 174 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [687]:
for n in tqdm([2**7, 2**8, 2**9]):
    print(f"\nFor n = {n} : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :")
    %timeit random_matrix(n) @ random_matrix(n)
    %timeit ikjMatrixProduct(random_matrix(n), random_matrix(n))
    for leaf_size in tqdm([8, 32, 64, 128]):
        print(f"Both Strassen, with a leaf size = {leaf_size}")
        %timeit strassen(random_matrix(n), random_matrix(n), leaf_size=leaf_size)
        %timeit strassen_with_numpy_for_add_sub(random_matrix(n), random_matrix(n), leaf_size=leaf_size)
For n = 128 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
45.4 ms ± 7.27 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
46 ms ± 4.25 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 8
131 ms ± 22.9 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
120 ms ± 17.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 32
69.8 ms ± 9.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
51.1 ms ± 3.84 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 64
82 ms ± 16 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
57.1 ms ± 15.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 128
60.2 ms ± 5.59 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
44 ms ± 3.46 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 256 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
205 ms ± 27.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
162 ms ± 4.25 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 8
681 ms ± 8.27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
604 ms ± 7.97 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 32
311 ms ± 3.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
253 ms ± 14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 64
255 ms ± 3.23 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
237 ms ± 38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Both Strassen, with a leaf size = 128
240 ms ± 27 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
174 ms ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

For n = 512 : numpy, ten ikj naive algorithm, then Strassen, then (slightly) faster Strassen :
1.66 s ± 73.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
858 ms ± 149 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 8
4.65 s ± 252 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.69 s ± 19.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 32
1.61 s ± 22.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
1.33 s ± 27.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 64
1.24 s ± 61.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
950 ms ± 12.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Both Strassen, with a leaf size = 128
1 s ± 3.37 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
744 ms ± 7.07 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Transformée de Fourier Rapide (FFT)

Transformée de Fourier Discrète (DFT), implémentation naïve

Ce premier algorithme est simple à écrire, et a une complexité en $\Theta(n^2)$.

In [538]:
def dft(x):
    n = len(x)
    omega = lambda j, k: np.exp(- (j * k) * 2j * np.pi / n)
    f = np.zeros_like(x)
    for j in range(n):
        for k in range(n):
            f[j] += x[k] * omega(j, k)
    return f
In [533]:
def round_complex(x, decimals=0):
    return np.round(np.real(x), decimals=decimals) + 1j * np.round(np.imag(x), decimals=decimals)
In [634]:
x = np.exp(2j * np.pi * np.arange(8) / 8)
x
f = dft(x)
f
Out[634]:
array([ 1.00000000e+00+0.00000000e+00j,  7.07106781e-01+7.07106781e-01j,
        6.12323400e-17+1.00000000e+00j, -7.07106781e-01+7.07106781e-01j,
       -1.00000000e+00+1.22464680e-16j, -7.07106781e-01-7.07106781e-01j,
       -1.83697020e-16-1.00000000e+00j,  7.07106781e-01-7.07106781e-01j])
Out[634]:
array([-5.55111512e-16-2.22044605e-16j,  8.00000000e+00+0.00000000e+00j,
       -5.55111512e-16+2.22044605e-16j,  1.33226763e-15-4.44089210e-16j,
       -1.11022302e-16-3.33066907e-16j,  0.00000000e+00+3.03841983e-16j,
        3.33066907e-16+1.99840144e-15j,  1.77635684e-15+1.11022302e-15j])
In [581]:
round_complex(f)
Out[581]:
array([-0.+0.j,  8.+0.j,  0.+0.j,  0.+0.j, -0.+0.j,  0.+0.j,  0.+0.j,
        0.+0.j])

On va faire des tests numériques, avec une version "optimisée" automatiquement grâce à numba.jit.

In [541]:
@jit
def dft_jit(x):
    n = len(x)
    pi_2j_by_n = 2j * np.pi / n
    f = np.zeros_like(x)
    for j in range(n):
        for k in range(n):
            f[j] += x[k] * np.exp(- (j * k) * pi_2j_by_n)
    return f
In [542]:
round_complex(dft_jit(x))
Out[542]:
array([-0.+0.j,  8.+0.j,  0.+0.j,  0.+0.j, -0.+0.j,  0.+0.j,  0.+0.j,
        0.+0.j])

Implémentation de FFT dans le module numpy

In [543]:
f2 = np.fft.fft(x)
round_complex(f2)
Out[543]:
array([0.+0.j, 8.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])

In this example, real input has an FFT which is Hermitian, i.e., symmetric in the real part and anti-symmetric in the imaginary part, as described in the numpy.fft documentation:

In [517]:
import matplotlib.pyplot as plt
t = np.arange(256)
sp = np.fft.fft(np.sin(t))
freq = np.fft.fftfreq(t.shape[-1])
plt.plot(freq, sp.real, freq, sp.imag)
Out[517]:
[<matplotlib.lines.Line2D at 0x7f81f83d8630>,
 <matplotlib.lines.Line2D at 0x7f81f84d2358>]

Implémentation de la DFT par multiplication matricielle

On peut calculer la matrice de Vandermonde $W_N \in\mathbb{C}^{N \times N}$, associée à la racine $\omega_N = \exp(-i \frac{2\pi}{N}$ pour un signal de taille $N$, et ensuite calculer la DFT(x) comme $z = W_N x$ (produit matriciel). Cf cette page Wikipédia.

$$ f_j = \sum_{k=0}^{n-1} x_k e^{-{2\pi i \over n} jk } \qquad j = 0,\dots,n-1. $$

ou en notation matricielle :

$$ \begin{array}{l} \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ \vdots \\ f_{n-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1\\ 1 & w & w^2 & \cdots & w^{n-1}\\ 1 & w^2 & w^4 & \cdots & w^{2(n-1)}\\ \vdots & \vdots & \vdots & \ddots & \vdots &\\ 1 & w^{n-1} & w^{2(n-1)} & \cdots & w^{(n-1)^2} \end{pmatrix} \end{array} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_{n-1} \end{pmatrix} , w = e^{-\frac{2 \pi i}{n}} $$
In [555]:
def vandermonde_fourier(n):
    pi_2j_by_n = 2j * np.pi / n
    omega = np.zeros((n, n), dtype=np.complex)
    for j in range(n):
        for k in range(n):
            omega[j, k] = np.exp(- (j * k) * pi_2j_by_n)
    return omega
In [567]:
def dft_naive_matmult(x):
    n = len(x)
    return vandermonde_fourier(len(x)) * np.array(x)
In [582]:
f3 = dft_naive_matmult(x)
round_complex(f3)
Out[582]:
array([[ 1.+0.j,  1.+1.j,  0.+1.j, -1.+1.j, -1.+0.j, -1.-1.j, -0.-1.j,
         1.-1.j],
       [ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,
         1.+0.j],
       [ 1.+0.j,  1.-1.j,  0.-1.j, -1.-1.j, -1.+0.j, -1.+1.j,  0.+1.j,
         1.+1.j],
       [ 1.+0.j,  0.-1.j, -1.+0.j,  0.+1.j,  1.+0.j,  0.-1.j, -1.+0.j,
         0.+1.j],
       [ 1.+0.j, -1.-1.j,  0.+1.j,  1.-1.j, -1.+0.j,  1.+1.j,  0.-1.j,
        -1.+1.j],
       [ 1.+0.j, -1.+0.j,  1.+0.j, -1.+0.j,  1.+0.j, -1.+0.j,  1.+0.j,
        -1.+0.j],
       [ 1.+0.j, -1.+1.j,  0.-1.j,  1.+1.j, -1.+0.j,  1.-1.j,  0.+1.j,
        -1.-1.j],
       [ 1.+0.j,  0.+1.j, -1.+0.j, -0.-1.j,  1.+0.j,  0.+1.j, -1.+0.j,
         0.-1.j]])
In [568]:
%timeit dft_naive_matmult(random_complex_vector(2**8))
102 ms ± 8.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Une approche plus efficace :

In [559]:
def vandermonde_fourier2(n):
    pi_2j_by_n = 2j * np.pi / n
    omega = np.zeros((n, n), dtype=np.complex)
    for j in range(n):
        omega[j, :] = np.exp(- (j * np.arange(n)) * pi_2j_by_n)
    return omega
In [560]:
def dft_naive_matmult2(x):
    n = len(x)
    return vandermonde_fourier2(len(x)) * np.array(x)
In [543]:
f4 = dft_naive_matmult2(x)
round_complex(f4)
Out[543]:
array([0.+0.j, 8.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])
In [561]:
%timeit dft_naive_matmult2(random_complex_vector(2**8))
9 ms ± 380 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Une approche encore un peu plus efficace :

In [569]:
@jit
def vandermonde_fourier3(n):
    pi_2j_by_n = 2j * np.pi / n
    omega = np.zeros((n, n), dtype=np.complex)
    for j in range(n):
        omega[j, :] = np.exp(- (j * np.arange(n)) * pi_2j_by_n)
    return omega
In [572]:
def dft_naive_matmult3(x):
    n = len(x)
    return vandermonde_fourier3(len(x)) * np.array(x)
In [583]:
f5 = dft_naive_matmult3(x)
round_complex(f5)
Out[583]:
array([[ 1.+0.j,  1.+1.j,  0.+1.j, -1.+1.j, -1.+0.j, -1.-1.j, -0.-1.j,
         1.-1.j],
       [ 1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,  1.+0.j,
         1.+0.j],
       [ 1.+0.j,  1.-1.j,  0.-1.j, -1.-1.j, -1.+0.j, -1.+1.j,  0.+1.j,
         1.+1.j],
       [ 1.+0.j,  0.-1.j, -1.+0.j,  0.+1.j,  1.+0.j,  0.-1.j, -1.+0.j,
         0.+1.j],
       [ 1.+0.j, -1.-1.j,  0.+1.j,  1.-1.j, -1.+0.j,  1.+1.j,  0.-1.j,
        -1.+1.j],
       [ 1.+0.j, -1.+0.j,  1.+0.j, -1.+0.j,  1.+0.j, -1.+0.j,  1.+0.j,
        -1.+0.j],
       [ 1.+0.j, -1.+1.j,  0.-1.j,  1.+1.j, -1.+0.j,  1.-1.j,  0.+1.j,
        -1.-1.j],
       [ 1.+0.j,  0.+1.j, -1.+0.j, -0.-1.j,  1.+0.j,  0.+1.j, -1.+0.j,
         0.-1.j]])
In [573]:
%timeit dft_naive_matmult3(random_complex_vector(2**8))
6.34 ms ± 328 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Implémentation manuelle de la FFT (Cooley-Tucker)

Comme avec notre petite implémentation de l'algorithme de Strassen, on va se restreindre aux valeurs $n = 2^p$ pour $p=\mathbb{N}$. On va utiliser la DFT naïve pour les valeurs trop petites, comme on sait que trop d'appels récursifs ajoutent un gros surcoût (avec leaf_size=64, cela devrait donner un bon compromis).

Cf. cette page Wikipédia

In [554]:
LEAF_SIZE = 64
Out[554]:
64

In [628]:
def fft(x, leaf_size=LEAF_SIZE):
    n = len(x)
    if n <= 1:
        return x
    elif n <= leaf_size:
        return dft(x)
    n_by_2 = n//2
    assert n == 2 * n_by_2, "Error : only n = 2^k are accepted."
    # we split the entries in 2 vectors of size n/2
    # we compute the two FFT, recursively, in T(n/2)
    even_fft = fft(x[0::2], leaf_size=leaf_size)
    odd_fft  = fft(x[1::2], leaf_size=leaf_size)
    # combine the two, in O(n)
    full_fft = np.zeros(n, dtype=np.complex)
    omega_n = np.exp(-2j * np.pi / n)
    omega_s = omega_n ** np.arange(n_by_2)  # compute all the omega^j j=0..n/2
    full_fft[:n_by_2] = even_fft[:] + omega_s * odd_fft[:]
    full_fft[n_by_2:] = even_fft[:] - omega_s * odd_fft[:]
    # so T(n) = O(n) + 2 T(n/2)
    # ==> T(n) = O(n \log(n)) by master theorem!
    return full_fft

Un exemple, pour vérifier notre implémentation :

In [635]:
f6 = fft(x, leaf_size=1)
round_complex(f6)
Out[635]:
array([0.+0.j, 8.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])
In [636]:
round_complex(fft(x, leaf_size=2))
Out[636]:
array([0.+0.j, 8.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])
In [637]:
round_complex(fft(x))
Out[637]:
array([-0.+0.j,  8.+0.j,  0.+0.j,  0.+0.j, -0.+0.j,  0.+0.j,  0.+0.j,
        0.+0.j])
In [638]:
round_complex(np.fft.fft(x))
Out[638]:
array([0.+0.j, 8.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j, 0.+0.j])

Cette implémentation semble fonctionner sans problème.

Pour les tests numériques, on peut aussi écrire une variante avec numba.jit.

In [639]:
@jit
def fft_jit(x, leaf_size=LEAF_SIZE):
    n = len(x)
    if n <= 1:
        return x
    elif n <= leaf_size:
        return dft_jit(x)
    n_by_2 = n//2
    assert n == 2 * n_by_2, "Error : only n = 2^k are accepted."
    # we split the entries in 2 vectors of size n/2
    # we compute the two FFT, recursively, in T(n/2)
    even_fft = fft_jit(x[0::2], leaf_size=leaf_size)
    odd_fft  = fft_jit(x[1::2], leaf_size=leaf_size)
    # combine the two, in O(n)
    full_fft = np.zeros(n, dtype=np.complex)
    omega_n = np.exp(-2j * np.pi / n)
    omega_s = omega_n ** np.arange(n_by_2)  # compute all the omega^j j=0..n/2
    full_fft[:n_by_2] = even_fft[:] + omega_s * odd_fft[:]
    full_fft[n_by_2:] = even_fft[:] - omega_s * odd_fft[:]
    # so T(n) = O(n) + 2 T(n/2)
    # ==> T(n) = O(n \log(n)) by master theorem!
    return full_fft

Tests avec des vecteurs aléatoires

On va définir des vecteurs $x\in\mathbb{C}^n$ aléatoires, distribués selon des lois normales centrées.

In [544]:
def random_complex_vector(n=16):
    return np.random.standard_normal(size=n) + 1j * np.random.standard_normal(size=n)
In [640]:
x = random_complex_vector(4)
x
f1 = np.fft.fft(x)
f1
f2 = dft(x)
f2
f3 = dft_jit(x)
f3
f4 = fft(x, leaf_size=2)
f4
f5 = fft_jit(x, leaf_size=2)
f5
Out[640]:
array([-0.98721009-0.99058535j, -1.1876662 -0.60440419j,
       -1.22773845-0.43722865j, -0.55370133+1.51055982j])
Out[640]:
array([-3.95631608-0.52165836j, -1.87443566+0.08060817j,
       -0.47358102-2.33396962j,  2.35549237-1.18732157j])
Out[640]:
array([-3.95631608-0.52165836j, -1.87443566+0.08060817j,
       -0.47358102-2.33396962j,  2.35549237-1.18732157j])
Out[640]:
array([-3.95631608-0.52165836j, -1.87443566+0.08060817j,
       -0.47358102-2.33396962j,  2.35549237-1.18732157j])
Out[640]:
array([-3.95631608-0.52165836j, -1.87443566+0.08060817j,
       -0.47358102-2.33396962j,  2.35549237-1.18732157j])
Out[640]:
array([-3.95631608-0.52165836j, -1.87443566+0.08060817j,
       -0.47358102-2.33396962j,  2.35549237-1.18732157j])

Maintenant quelques comparaisons, montrant que l'implémentation naïve est très mauvaise en comparaison de celle optimisée (en C) de numpy.fft, et que l'implémentation de la FFT est aussi plutôt lente :

In [626]:
for n in tqdm([2**4, 2**5, 2**6, 2**7, 2**8, 2**9, 2**10]):
    print(f"""\nPour des vecteurs aléatoires de tailles {n}
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit """)
    x = random_complex_vector(n)
    %timeit np.fft.fft(x)
    assert np.all(np.isclose(np.fft.fft(x), dft(x)))
    %timeit dft(x)
    assert np.all(np.isclose(np.fft.fft(x), dft_jit(x)))
    %timeit dft_jit(x)
    assert np.all(np.isclose(np.fft.fft(x), fft(x)))
    %timeit fft(x)
    assert np.all(np.isclose(np.fft.fft(x), fft_jit(x)))
    %timeit fft_jit(x)
Pour des vecteurs aléatoires de tailles 16
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
5.12 µs ± 1.2 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
735 µs ± 90.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
52.8 µs ± 1.83 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
638 µs ± 70.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
95.8 µs ± 5.85 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Pour des vecteurs aléatoires de tailles 32
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
4.44 µs ± 190 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
2.66 ms ± 162 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
158 µs ± 3.06 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
2.23 ms ± 66.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
196 µs ± 4.56 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Pour des vecteurs aléatoires de tailles 64
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
4.28 µs ± 156 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
9.08 ms ± 557 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
527 µs ± 43 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
11.3 ms ± 1.34 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
692 µs ± 38.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Pour des vecteurs aléatoires de tailles 128
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
7.49 µs ± 1.22 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
68.9 ms ± 13.2 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.47 ms ± 268 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
23 ms ± 3.62 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.55 ms ± 200 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Pour des vecteurs aléatoires de tailles 256
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
9.79 µs ± 737 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
258 ms ± 81.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
6.86 ms ± 998 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
36.7 ms ± 992 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.64 ms ± 202 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Pour des vecteurs aléatoires de tailles 512
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
10.9 µs ± 283 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
566 ms ± 27.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
21 ms ± 3.1 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
74.5 ms ± 4.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
5.73 ms ± 519 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Pour des vecteurs aléatoires de tailles 1024
    numpy.fft.fft | dft naive | dft numba.jit | fft naive | fft jit.jit 
19.1 µs ± 2.59 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
2.48 s ± 136 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
73.6 ms ± 3.59 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
181 ms ± 32.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
10.8 ms ± 1.2 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [642]:
round(258_000 / 9.79)
Out[642]:
26353

Avec une taille aussi petite que juste n=2**8=256 échantillons, la DFT naïve (en $\Theta(n^2)$) et en pure Python est déjà environ 20_000 fois plus lente que la FFT optimisée. La DFT naïve mais compilée avec numba.jit est quant à elle 1000 plus lente.

La FFT naïve devrait être en $\mathcal{O}(n \log(n))$ et on a l'impression de le vérifier ici !

Quelle est l'influence de leaf_size ici ?

In [641]:
for n in tqdm([2**7, 2**8, 2**9]):
    print(f"""\nPour des vecteurs aléatoires de tailles {n}
    numpy.fft.fft | fft naive | fft jit.jit for different leaf_size""")
    x = random_complex_vector(n)
    %timeit np.fft.fft(x)
    for leaf_size in [1, 8, 32, 64, 2*n]:
        print(f"For leaf_size = {leaf_size}")
        assert np.all(np.isclose(np.fft.fft(x), fft(x, leaf_size=leaf_size)))
        %timeit fft(x)
        assert np.all(np.isclose(np.fft.fft(x), fft_jit(x, leaf_size=leaf_size)))
        %timeit fft_jit(x)
Pour des vecteurs aléatoires de tailles 128
    numpy.fft.fft | fft naive | fft jit.jit for different leaf_size
6.43 µs ± 1.12 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For leaf_size = 1
19.3 ms ± 1.71 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.15 ms ± 56.4 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
For leaf_size = 8
22.8 ms ± 3.26 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.26 ms ± 48.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
For leaf_size = 32
20.8 ms ± 2.87 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.22 ms ± 115 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
For leaf_size = 64
17.2 ms ± 679 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
1.09 ms ± 13.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
For leaf_size = 256
17 ms ± 1.14 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
1.08 ms ± 9.81 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Pour des vecteurs aléatoires de tailles 256
    numpy.fft.fft | fft naive | fft jit.jit for different leaf_size
6.63 µs ± 153 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For leaf_size = 1
34.4 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.15 ms ± 44.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 8
35.5 ms ± 6.14 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.14 ms ± 46 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 32
33.9 ms ± 1.64 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.2 ms ± 75 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 64
33.5 ms ± 638 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.14 ms ± 42.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 512
33 ms ± 537 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
2.19 ms ± 63.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Pour des vecteurs aléatoires de tailles 512
    numpy.fft.fft | fft naive | fft jit.jit for different leaf_size
11.7 µs ± 2.08 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
For leaf_size = 1
79.9 ms ± 10.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
4.68 ms ± 151 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 8
73.1 ms ± 4.12 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
4.26 ms ± 159 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 32
75.2 ms ± 3.89 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
4.82 ms ± 337 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 64
70.2 ms ± 4.65 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
4.69 ms ± 466 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
For leaf_size = 1024
68.4 ms ± 1.61 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
4.48 ms ± 155 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

The influence is hard to see, but this is very counter intuitive.

Conclusion

C'est bon pour aujourd'hui !