ALGO1: Introduction to Algorithms

Lecture 9

  • This lecture covers linear programming.

  • We will walk through two linear programs solved with the scipy.optimize.linprog function.

In [1]:
import numpy as np

import matplotlib.pyplot as plt

import scipy.optimize as opt

Documentation of scipy.optimize.linprog

In [8]:
print("\n".join(opt.linprog.__doc__.split("\n")[:31]))
    Linear programming: minimize a linear objective function subject to linear
    equality and inequality constraints.

    Linear programming solves problems of the following form:

    .. math::

        \min_x \ & c^T x \\
        \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
        & A_{eq} x = b_{eq},\\
        & l \leq x \leq u ,

    where :math:`x` is a vector of decision variables; :math:`c`,
    :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
    :math:`A_{ub}` and :math:`A_{eq}` are matrices.

    Informally, that's:

    minimize::

        c @ x

    such that::

        A_ub @ x <= b_ub
        A_eq @ x == b_eq
        lb <= x <= ub

    Note that by default ``lb = 0`` and ``ub = None`` unless specified with
    ``bounds``.

Linear programming solves problems of the following form:

$$ \begin{align} \min_x \ & c^T x \\ \mbox{such that} \ & A_{ub} x \leq b_{ub},\\ & A_{eq} x = b_{eq},\\ & l \leq x \leq u , \end{align} $$

where $x$ is a vector of decision variables; $c$, $b_{ub}$, $b_{eq}$, $l$, and $u$ are vectors; and $A_{ub}$ and $A_{eq}$ are matrices.
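
As a minimal illustration of this form (a toy problem of our own, not from the lecture): minimize $x + 2y$ subject to $x + y \geq 1$, rewritten as $-x - y \leq -1$, with the default bounds $x, y \geq 0$:

import numpy as np
import scipy.optimize as opt

# x + y >= 1 becomes -x - y <= -1; bounds default to (0, +inf) for each variable
res = opt.linprog(c=[1, 2], A_ub=[[-1, -1]], b_ub=[-1])
print(res.x, res.fun)  # expected optimum: x = 1, y = 0, objective 1.0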

Debugging function

In [16]:
def make_callback():
    """Return two lists and a callback; the callback prints each linprog step and appends x and c @ x to the lists."""
    list_of_x, list_of_fun = [], []
    def debug_callback(opt_res):
        print("\nA new optimization step gave:")
        print(f"Current solution x = {opt_res.x}")
        list_of_x.append(opt_res.x)
        print(f"Current value of c @ x = {opt_res.fun}")
        list_of_fun.append(opt_res.fun)
        print(f"Success? = {opt_res.success}")
        print(f"The (nominally positive) values of the slack, b_ub - A_ub @ x. = {opt_res.slack}")
        print(f"The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = {opt_res.con}")
        print(f"Algorithm in phase {opt_res.phase}")
        print(f"Algorithm in iteration number {opt_res.nit}")
        status = {
            0: "Optimization proceeding nominally.",
            1: "Iteration limit reached.",
            2: "Problem appears to be infeasible.",
            3: "Problem appears to be unbounded.",
            4: "Numerical difficulties encountered.",
        }
        print(f"Algorithm status {status[opt_res.status]}")
        if opt_res.message: print(f"Algorithm message: {opt_res.message}")
    return list_of_x, list_of_fun, debug_callback

First example

We will follow the example worked out in class:

  • Variables:

    • $x$: number of tables made per week,
    • $y$: number of chairs made per week.
  • Objective:

    • maximize the profit $30x + 10y$.
  • Constraints:

    • working hours: $6x+3y \leq 36$,
    • demand: $y \geq 3x$,
    • storage: $x + y/4 \leq 4$,
    • nonnegativity: $x \geq 0$,
    • nonnegativity: $y \geq 0$.

Put into the form required by the linprog function, this gives:

$$ \begin{align} \min_{[x, y]} \ & [-30, -10]^T [x, y] \\ \mbox{such that} \ & [ [6, 3], [3, -1], [1, 1/4] ] [x, y] = [6x + 3y, 3x - y, x + y/4] \leq [36, 0, 4],\\ & [0, 0] \leq [x, y] \leq [+\infty,+\infty] , \end{align} $$

In Python, this becomes:

In [10]:
c = np.array([-30, -10])

A_ub = np.array([[6, 3], [3, -1], [1, 1/4]])
b_ub = np.array([36, 0, 4])

A_eq = None
b_eq = None

# a single (lb, ub) pair applies to all variables: here (0, +inf)
bounds = (0, None)

Objective:

In [11]:
import sympy
x, y = sympy.var('x y')
In [12]:
c.T @ [x, y]
Out[12]:
$\displaystyle - 30 x - 10 y$

Inequality constraints:

In [13]:
A_ub @ [x, y]
Out[13]:
array([6.0*x + 3.0*y, 3.0*x - 1.0*y, 1.0*x + 0.25*y], dtype=object)
In [14]:
b_ub
Out[14]:
array([36,  0,  4])

Let's try different solution methods:

In [17]:
list_of_x, list_of_fun, debug_callback = make_callback()

opt.linprog(c,
            A_ub=A_ub, b_ub=b_ub,
            A_eq=A_eq, b_eq=b_eq,
            bounds=bounds,
            method="simplex",
            callback=debug_callback,
)
A new optimization step gave:
Current solution x = [0. 0.]
Current value of c @ x = 0.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [36.  0.  4.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 0
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [0. 0.]
Current value of c @ x = 0.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [36.  0.  4.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 1
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [2.28571429 6.85714286]
Current value of c @ x = -137.14285714285717
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [1.71428571e+00 8.88178420e-16 0.00000000e+00]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 2
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [2.28571429 6.85714286]
Current value of c @ x = -137.14285714285717
Success? = True
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [1.71428571e+00 8.88178420e-16 0.00000000e+00]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 3
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [2.28571429 6.85714286]
Current value of c @ x = -137.14285714285717
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [1.71428571e+00 8.88178420e-16 0.00000000e+00]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 2
Algorithm in iteration number 3
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [2. 8.]
Current value of c @ x = -140.0
Success? = True
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [7.10542736e-15 2.00000000e+00 0.00000000e+00]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 2
Algorithm in iteration number 4
Algorithm status Optimization proceeding nominally.
Out[17]:
     con: array([], dtype=float64)
     fun: -140.0
 message: 'Optimization terminated successfully.'
     nit: 4
   slack: array([7.10542736e-15, 2.00000000e+00, 0.00000000e+00])
  status: 0
 success: True
       x: array([2., 8.])
In [18]:
x_opt, y_opt = _.x

The solution obtained is therefore $x = 2$ and $y = 8$, which gives a maximal profit of $140 €$ per week while satisfying all the constraints.

To obtain an integer solution, there is nothing more to do here.

If the optimal solution had been, say, $2.23$ and $6.43$, we could try $x \in \{2, 3\}$ and $y \in \{6, 7\}$, i.e. round both down and up, and keep the candidate that satisfies the constraints and optimizes the objective (here we minimize $c^T [x, y]$, which amounts to maximizing the profit). Note that this rounding heuristic is not guaranteed to find the optimal integer solution in general:

In [19]:
x_opt, y_opt
Out[19]:
(2.000000000000001, 7.9999999999999964)
In [20]:
import itertools
In [22]:
sol = None
min_obj = float("+inf")

for (x, y) in itertools.product(
        [int(np.floor(x_opt)), int(np.ceil(x_opt))],
        [int(np.floor(y_opt)), int(np.ceil(y_opt))],
    ):
    obj = c.T @ [x, y]
    ctr = (A_ub @ [x, y]) <= b_ub
    print(f"Pour (x, y) = {x, y}, l'objectif vaut {obj}, la contrainte vaut {ctr}")
    if np.all(ctr) and obj < min_obj:
        min_obj = obj
        sol = [x, y]

print(f"==> Donc on utilise la solution entière optimale = {sol}")
Pour (x, y) = (2, 7), l'objectif vaut -130, la contrainte vaut [ True  True  True]
Pour (x, y) = (2, 8), l'objectif vaut -140, la contrainte vaut [ True  True  True]
Pour (x, y) = (3, 7), l'objectif vaut -160, la contrainte vaut [False False False]
Pour (x, y) = (3, 8), l'objectif vaut -170, la contrainte vaut [False False False]
==> Donc on utilise la solution entière optimale = [2, 8]

The optimal integer solution to this first problem is therefore to build $x=2$ tables and $y=8$ chairs each week.
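
Aside: newer SciPy releases (1.9 and later, newer than the version used in this notebook) also provide scipy.optimize.milp, which handles integrality constraints directly instead of rounding by hand. A minimal sketch of the same problem, assuming SciPy >= 1.9 is available:

import numpy as np
from scipy.optimize import milp, LinearConstraint

# Same objective and constraints as above; integrality=1 forces integer variables.
# (Requires SciPy >= 1.9; variables are non-negative by default, as in linprog.)
res = milp(c=[-30, -10],
           constraints=LinearConstraint(np.array([[6, 3], [3, -1], [1, 0.25]]),
                                        ub=[36, 0, 4]),
           integrality=[1, 1])
print(res.x, -res.fun)  # expected: [2. 8.] and 140.0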

Second example

Put into the form required by the linprog function, this gives:

$$ \begin{align} \min_{[x, y]} \ & [-4, -3]^T [x, y] \\ \mbox{such that} \ & [ [-1, 4], [1, 1], [3, -1] ] [x, y] = [-x + 4y, x + y, 3x - y] \leq [16, 9, 15],\\ & [0, 0] \leq [x, y] \leq [+\infty,+\infty] , \end{align} $$

In Python, this becomes:

In [23]:
c = np.array([-4, -3])

A_ub = np.array([[-1, 4], [1, 1], [3, -1]])
b_ub = np.array([16, 9, 15])

A_eq = None
b_eq = None

# all variables are bound to be in (0, +inf)
bounds = (0, None)

Objective:

In [24]:
import sympy
x, y = sympy.var('x y')
In [25]:
c.T @ [x, y]
Out[25]:
$\displaystyle - 4 x - 3 y$

Inequality constraints:

In [26]:
A_ub @ [x, y]
Out[26]:
array([-x + 4*y, x + y, 3*x - y], dtype=object)
In [27]:
b_ub
Out[27]:
array([16,  9, 15])

Let's try different solution methods:

With the simplex method

In [38]:
list_of_x, list_of_fun, debug_callback = make_callback()

opt.linprog(c,
            A_ub=A_ub, b_ub=b_ub,
            A_eq=A_eq, b_eq=b_eq,
            bounds=bounds,
            method="simplex",
            callback=debug_callback,
)
A new optimization step gave:
Current solution x = [0. 0.]
Current value of c @ x = 0.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [16.  9. 15.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 0
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [0. 4.]
Current value of c @ x = -12.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [ 0.  5. 19.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 1
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [4. 5.]
Current value of c @ x = -31.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [0. 0. 8.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 2
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [4. 5.]
Current value of c @ x = -31.0
Success? = True
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [0. 0. 8.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 3
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [4. 5.]
Current value of c @ x = -31.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [0. 0. 8.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 2
Algorithm in iteration number 3
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [6. 3.]
Current value of c @ x = -33.0
Success? = True
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [10.  0.  0.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 2
Algorithm in iteration number 4
Algorithm status Optimization proceeding nominally.
Out[38]:
     con: array([], dtype=float64)
     fun: -33.0
 message: 'Optimization terminated successfully.'
     nit: 4
   slack: array([10.,  0.,  0.])
  status: 0
 success: True
       x: array([6., 3.])
In [39]:
plt.figure(figsize=(10, 7))
plt.title("Valeur de l'objectif étape par étape (méthode simplexe)")
plt.plot(list_of_fun, "ro-", lw=3, ms=14)
plt.show()
[Figure: objective value at each step (simplex method)]
In [40]:
plt.figure(figsize=(10, 7))
plt.title("Position des points étape par étape (méthode simplexe)")
list_of_X, list_of_Y = [x for (x,y) in list_of_x], [y for (x,y) in list_of_x]
# plt.plot(list_of_X, list_of_Y, 'bo-')
plt.plot(list_of_X, 'bo-', label="Valeur de x", lw=3, ms=14)
plt.plot(list_of_Y, 'gd-', label="Valeur de y", lw=3, ms=14)
plt.legend()
plt.show()
[Figure: values of x and y at each step (simplex method)]

With the interior-point method (another algorithm)

This more recent algorithm is more technical, and it generally performs better: it is faster and more numerically stable.
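
On a problem this small the difference is hard to see, but it can be measured; a rough sketch of our own (timings are machine-dependent, and the advantage of interior-point methods really shows on large sparse problems):

import timeit

# time 100 solves of the current example with each method
t_simplex = timeit.timeit(
    lambda: opt.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="simplex"),
    number=100)
t_interior = timeit.timeit(
    lambda: opt.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="interior-point"),
    number=100)
print(f"simplex: {t_simplex:.2f}s, interior-point: {t_interior:.2f}s")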

In [41]:
list_of_x, list_of_fun, debug_callback = make_callback()

opt.linprog(c,
            A_ub=A_ub, b_ub=b_ub,
            A_eq=A_eq, b_eq=b_eq,
            bounds=bounds,
            method="interior-point",
            callback=debug_callback,
)
A new optimization step gave:
Current solution x = [1. 1.]
Current value of c @ x = -7.0
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [13.  7. 13.]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 0
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [4.01262108 3.38515509]
Current value of c @ x = -26.205949599949
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [6.47200073 1.60222383 6.34729184]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 1
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [5.9137431  2.95777926]
Current value of c @ x = -32.52831014613114
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [10.08262607  0.12847765  0.21654997]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 2
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [5.9999917  2.99999607]
Current value of c @ x = -32.999954993936036
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [1.00000074e+01 1.22338576e-05 2.09841077e-05]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 3
Algorithm status Optimization proceeding nominally.

A new optimization step gave:
Current solution x = [6. 3.]
Current value of c @ x = -32.9999999977497
Success? = False
The (nominally positive) values of the slack, b_ub - A_ub @ x. = [1.00000000e+01 6.11692030e-10 1.04921938e-09]
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x. = []
Algorithm in phase 1
Algorithm in iteration number 4
Algorithm status Optimization proceeding nominally.
Out[41]:
     con: array([], dtype=float64)
     fun: -32.9999999977497
 message: 'Optimization terminated successfully.'
     nit: 4
   slack: array([1.00000000e+01, 6.11692030e-10, 1.04921938e-09])
  status: 0
 success: True
       x: array([6., 3.])
In [42]:
plt.figure(figsize=(10, 7))
plt.title("Valeur de l'objectif étape par étape (méthode point intérieur)")
plt.plot(list_of_fun, "ro-", lw=3, ms=14)
plt.show()
[Figure: objective value at each step (interior-point method)]
In [43]:
plt.figure(figsize=(10, 7))
plt.title("Position des points étape par étape (méthode point intérieur)")
list_of_X, list_of_Y = [x for (x,y) in list_of_x], [y for (x,y) in list_of_x]
# plt.plot(list_of_X, list_of_Y, 'bo-')
plt.plot(list_of_X, 'bo-', label="Valeur de x", lw=3, ms=14)
plt.plot(list_of_Y, 'gd-', label="Valeur de y", lw=3, ms=14)
plt.legend()
plt.show()
[Figure: values of x and y at each step (interior-point method)]

Bonus: a manual implementation of the simplex algorithm

Algorithm

In [44]:
import numpy as np
import heapq
In [89]:
def identity(numRows, numCols, val=1, rowStart=0):
    """ Return a rectangular identity matrix with the specified diagonal entries, possibly starting in the middle.
    """
    return [
        [
            (val if i == j else 0)
            for j in range(numCols)
        ]
        for i in range(rowStart, numRows)
    ]
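
A quick sanity check of what this helper produces (a toy call of our own): the diagonal starts at row rowStart, and the rows before it are dropped:

print(identity(3, 4, val=-1, rowStart=1))
# -> [[0, -1, 0, 0], [0, 0, -1, 0]]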

Conversion to standard form

In [90]:
def standardForm(cost,
                 greaterThans=None, gtThreshold=None,
                 lessThans=None, ltThreshold=None,
                 equalities=None, eqThreshold=None,
                 maximization=True):
    """
       standardForm: [float], [[float]], [float], [[float]], [float], [[float]], [float] -> [float], [[float]], [float]
       Convert a linear program in general form to the standard form for the
       simplex algorithm. The inputs are assumed to have the correct dimensions:
       cost is a length-n list, greaterThans is an m-by-n matrix, gtThreshold is
       a vector of length m, with the same pattern holding for the remaining
       inputs. No dimension errors are caught, and we assume there are no
       unrestricted variables.
    """
    newVars = 0
    numRows = 0
    if gtThreshold:
        newVars += len(gtThreshold)
        numRows += len(gtThreshold)
    if ltThreshold:
        newVars += len(ltThreshold)
        numRows += len(ltThreshold)
    if eqThreshold:
        numRows += len(eqThreshold)

    if not maximization:
        cost = [-x for x in cost]

    if newVars == 0:
        return cost, equalities, eqThreshold

    newCost = list(cost) + ([0] * newVars)

    constraints = [ ]
    threshold   = [ ]

    oldConstraints = [(greaterThans, gtThreshold, -1), (lessThans, ltThreshold, 1),
                      (equalities, eqThreshold, 0)]

    offset = 0
    for constraintList, oldThreshold, coefficient in oldConstraints:
        if not constraintList:
            # skip constraint groups that were not provided (None defaults)
            continue
        constraints += [c + r for c, r in zip(constraintList,
             identity(numRows, newVars, coefficient, offset))]

        threshold += oldThreshold
        offset += len(oldThreshold)

    return newCost, constraints, threshold
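
A quick illustration (a toy problem of our own, relying on the guard above for absent constraint groups): convert "maximize $x_1 + x_2$ subject to $x_1 + 2 x_2 \leq 4$" to standard form; a single slack variable is appended:

c, A, b = standardForm([1, 1], lessThans=[[1, 2]], ltThreshold=[4])
print(c)  # [1, 1, 0]
print(A)  # [[1, 2, 1]]
print(b)  # [4]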

Matrix utilities

In [91]:
def dot(a, b):
    return sum(x*y for x, y in zip(a, b))
In [92]:
def column(A, j):
    return [row[j] for row in A]
In [93]:
def transpose(A):
    return [column(A, j) for j in range(len(A[0]))]
In [94]:
def isPivotCol(col):
    # a pivot column has a single nonzero entry, equal to 1
    return (len([c for c in col if c == 0]) == len(col) - 1) and sum(col) == 1

def variableValueForPivotColumn(tableau, column):
    # the value of a basic variable is the RHS entry of its pivot row
    pivotRow = [i for (i, x) in enumerate(column) if x == 1][0]
    return tableau[pivotRow][-1]
In [95]:
# assume the last m columns of A are the slack variables; the initial basis is
# the set of slack variables
def initialTableau(c, A, b):
    tableau = [row[:] + [x] for row, x in zip(A, b)]
    tableau.append([ci for ci in c] + [0])
    return tableau
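
For instance (toy data of our own, with the slack column already included as the comment above assumes), the initial tableau stacks the constraint rows with b as the last column, plus the objective row with a trailing 0:

print(initialTableau([1, 1, 0], [[1, 1, 1]], [2]))
# -> [[1, 1, 1, 2], [1, 1, 0, 0]]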
In [96]:
def primalSolution(tableau):
    # the pivot columns denote which variables are used
    columns = transpose(tableau)
    indices = [j for j, col in enumerate(columns[:-1]) if isPivotCol(col)]
    return [(colIndex, variableValueForPivotColumn(tableau, columns[colIndex]))
            for colIndex in indices]
In [97]:
def objectiveValue(tableau):
    # the bottom-right entry of the tableau stores the negated objective value
    return -(tableau[-1][-1])
In [98]:
def canImprove(tableau):
    # any positive coefficient left in the objective row means another pivot can improve the objective
    lastRow = tableau[-1]
    return any(x > 0 for x in lastRow[:-1])
In [99]:
# this can be slightly faster
def moreThanOneMin(L):
    if len(L) <= 1:
        return False

    x, y = heapq.nsmallest(2, L, key=lambda x: x[1])
    return x == y
In [100]:
def findPivotIndex(tableau):
    # pick the column with the smallest positive coefficient in the objective row
    column_choices = [(i, x) for (i, x) in enumerate(tableau[-1][:-1]) if x > 0]
    column = min(column_choices, key=lambda a: a[1])[0]

    # check if unbounded: no constraint row limits growth along this column
    # (exclude the objective row, whose entry in this column is positive by choice)
    if all(row[column] <= 0 for row in tableau[:-1]):
        raise Exception('Linear program is unbounded.')

    # check for degeneracy: more than one minimizer of the quotient
    quotients = [
        (i, r[-1] / r[column])
        for i, r in enumerate(tableau[:-1])
        if r[column] > 0
    ]

    if moreThanOneMin(quotients):
        raise Exception('Linear program is degenerate.')

    # pick row index minimizing the quotient
    row = min(quotients, key=lambda x: x[1])[0]

    return row, column
In [101]:
def pivotAbout(tableau, pivot):
    i, j = pivot

    # normalize the pivot row so that the pivot entry becomes 1
    pivotDenom = tableau[i][j]
    tableau[i] = [x / pivotDenom for x in tableau[i]]

    # eliminate the pivot column from every other row
    for k, row in enumerate(tableau):
        if k != i:
            pivotRowMultiple = [y * tableau[k][j] for y in tableau[i]]
            tableau[k] = [x - y for x, y in zip(tableau[k], pivotRowMultiple)]

The simplex algorithm

In [102]:
def simplex(c, A, b):
    """
    simplex: c: [float], A: [[float]], b: [float] -> [[float]], [(int, float)], float
    Solve the given standard-form linear program:
        max <c, x>
        s.t. Ax = b
             x >= 0
    Return the final tableau, the optimal solution x* (as a list of
    (variable index, value) pairs), and the optimal objective value.
    """
    tableau = initialTableau(c, A, b)
    print("Initial tableau:")
    for row in tableau:
        print(row)
    print()

    while canImprove(tableau):
        pivot = findPivotIndex(tableau)
        print("Next pivot index is={}\n".format(pivot))
        pivotAbout(tableau, pivot)
        print("Tableau after pivot:")
        for row in tableau:
            print(row)
        print()

    return tableau, primalSolution(tableau), objectiveValue(tableau)

A first example

In [103]:
c = [300, 250, 450]
A = [[15, 20, 25], [35, 60, 60], [20, 30, 25], [0, 250, 0]]
b = [1200, 3000, 1500, 500]

# add slack variables by hand: +1 for the three "<=" rows,
# and a -1 surplus column for the ">=" row (250*x2 >= 500)
A[0] += [1, 0, 0, 0]
A[1] += [0, 1, 0, 0]
A[2] += [0, 0, 1, 0]
A[3] += [0, 0, 0, -1]
c += [0, 0, 0, 0]

t, s, v = simplex(c, A, b)
print(s)
print(v)
Initial tableau:
[15, 20, 25, 1, 0, 0, 0, 1200]
[35, 60, 60, 0, 1, 0, 0, 3000]
[20, 30, 25, 0, 0, 1, 0, 1500]
[0, 250, 0, 0, 0, 0, -1, 500]
[300, 250, 450, 0, 0, 0, 0, 0]

Next pivot index is=(3, 1)

Tableau after pivot:
[15.0, 0.0, 25.0, 1.0, 0.0, 0.0, 0.08, 1160.0]
[35.0, 0.0, 60.0, 0.0, 1.0, 0.0, 0.24, 2880.0]
[20.0, 0.0, 25.0, 0.0, 0.0, 1.0, 0.12, 1440.0]
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, -0.004, 2.0]
[300.0, 0.0, 450.0, 0.0, 0.0, 0.0, 1.0, -500.0]

Next pivot index is=(1, 6)

Tableau after pivot:
[3.333333333333332, 0.0, 5.0, 1.0, -0.33333333333333337, 0.0, 0.0, 200.0]
[145.83333333333334, 0.0, 250.0, 0.0, 4.166666666666667, 0.0, 1.0, 12000.0]
[2.5, 0.0, -5.0, 0.0, -0.5, 1.0, 0.0, 0.0]
[0.5833333333333334, 1.0, 1.0, 0.0, 0.01666666666666667, 0.0, 0.0, 50.0]
[154.16666666666666, 0.0, 200.0, 0.0, -4.166666666666667, 0.0, 0.0, -12500.0]

Next pivot index is=(2, 0)

Tableau after pivot:
[0.0, 0.0, 11.666666666666664, 1.0, 0.33333333333333315, -1.333333333333333, 0.0, 200.0]
[0.0, 0.0, 541.6666666666667, 0.0, 33.333333333333336, -58.33333333333334, 1.0, 12000.0]
[1.0, 0.0, -2.0, 0.0, -0.2, 0.4, 0.0, 0.0]
[0.0, 1.0, 2.166666666666667, 0.0, 0.13333333333333336, -0.23333333333333336, 0.0, 50.0]
[0.0, 0.0, 508.3333333333333, 0.0, 26.666666666666664, -61.666666666666664, 0.0, -12500.0]

Next pivot index is=(1, 4)

Tableau after pivot:
[0.0, 0.0, 6.250000000000001, 1.0, 0.0, -0.75, -0.009999999999999993, 80.00000000000007]
[0.0, 0.0, 16.25, 0.0, 1.0, -1.7500000000000002, 0.03, 360.0]
[1.0, 0.0, 1.25, 0.0, 0.0, 0.04999999999999993, 0.006, 72.0]
[0.0, 1.0, 0.0, 0.0, 0.0, 5.551115123125783e-17, -0.004000000000000001, 1.999999999999993]
[0.0, 0.0, 75.0, 0.0, 0.0, -14.999999999999993, -0.7999999999999999, -22100.0]

Next pivot index is=(0, 2)

Tableau after pivot:
[0.0, 0.0, 1.0, 0.15999999999999998, 0.0, -0.11999999999999998, -0.0015999999999999988, 12.80000000000001]
[0.0, 0.0, 0.0, -2.5999999999999996, 1.0, 0.1999999999999995, 0.05599999999999998, 151.99999999999986]
[1.0, 0.0, 0.0, -0.19999999999999996, 0.0, 0.1999999999999999, 0.007999999999999998, 55.999999999999986]
[0.0, 1.0, 0.0, 0.0, 0.0, 5.551115123125783e-17, -0.004000000000000001, 1.999999999999993]
[0.0, 0.0, 0.0, -11.999999999999998, 0.0, -5.999999999999995, -0.68, -23060.0]

[(0, 55.999999999999986), (1, 1.999999999999993), (2, 12.80000000000001), (4, 151.99999999999986)]
23060.0

And to compare with the answer given by scipy.optimize.linprog:

In [104]:
opt_res = opt.linprog(-np.array(c), A_ub=A, b_ub=b, method="simplex")
opt_res
Out[104]:
     con: array([], dtype=float64)
     fun: -23400.0
 message: 'Optimization terminated successfully.'
     nit: 5
   slack: array([  0.,   0.,   0., 500.])
  status: 0
 success: True
       x: array([ 60.,   0.,  12.,   0., 180.,   0.,   0.])

The objective value reported by scipy.optimize.linprog is better than the one found by our algorithm, but the comparison is not quite fair: passing the augmented matrix as A_ub treats every row as a $\leq$ constraint, so the fourth row, which we intended as $250 x_2 \geq 500$ (hence its $-1$ surplus column), is relaxed. Indeed, scipy's solution has $x_2 = 0$, which violates that intended constraint.

In [105]:
v, - opt_res.fun
Out[105]:
(23060.0, 23400.0)

Our implementation returns the solution:

In [106]:
s
Out[106]:
[(0, 55.999999999999986),
 (1, 1.999999999999993),
 (2, 12.80000000000001),
 (4, 151.99999999999986)]

This can be read as being fairly close to the solution found by scipy.optimize.linprog.

In [107]:
opt_res.x
Out[107]:
array([ 60.,   0.,  12.,   0., 180.,   0.,   0.])
In [108]:
s2 = np.array([56, 2, 12, 0, 152, 0, 0])
s2
Out[108]:
array([ 56,   2,  12,   0, 152,   0,   0])
In [109]:
np.linalg.norm(opt_res.x - s2) / np.linalg.norm(opt_res.x)
Out[109]:
0.1491454186681393

That is a relatively small difference…
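
As a sanity check, here is a minimal sketch that feeds the original problem to linprog directly (assuming the intended constraints are the four listed above, with the fourth one being $250 x_2 \geq 500$); the $\geq$ row is flipped into a $\leq$ row by negating it:

import numpy as np
import scipy.optimize as opt

c0 = np.array([300, 250, 450])          # profit to maximize
A_ub0 = np.array([[15, 20, 25],         # <= 1200
                  [35, 60, 60],         # <= 3000
                  [20, 30, 25],         # <= 1500
                  [0, -250, 0]])        # 250*x2 >= 500  <=>  -250*x2 <= -500
b_ub0 = np.array([1200, 3000, 1500, -500])
res = opt.linprog(-c0, A_ub=A_ub0, b_ub=b_ub0, method="simplex")
print(res.x, -res.fun)  # expected: close to [56, 2, 12.8] and 23060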

Tests

In [110]:
def test(expected, actual):
    e, a = np.array(expected), np.array(actual)
    if not np.isclose(np.linalg.norm(e - a), 0):
        import sys, traceback
        (filename, lineno, container, code) = traceback.extract_stack()[-2]
        print("Test: {} failed on line {} in file {}.\nExpected {} but got {}\n".format(code, lineno, filename, expected, actual))
In [111]:
def testFromPost():
    cost = [1, 1, 1]
    gts = [[0, 1, 4]]
    gtB = [10]
    lts = [[3, -2, 0]]
    ltB = [7]
    eqs = [[1, 1, 0]]
    eqB = [2]

    expectedCost = [1,1,1,0,0]
    expectedConstraints = [[0,1,4,-1,0], [3,-2,0,0,1], [1,1,0,0,0]]
    expectedThresholds = [10,7,2]
    c, cs, ts = standardForm(cost, gts, gtB, lts, ltB, eqs, eqB)
    test(expectedCost, c)
    test(expectedConstraints, cs)
    test(expectedThresholds, ts)
    
    # NB: a ">=" constraint must be negated to fit the A_ub @ x <= b_ub form;
    # the signs below look flipped relative to gts/lts above, so this scipy
    # cross-check is only indicative.
    A_ub = np.array([
        [0, 1, 4],
        [-3, 2, 0]
    ])
    b_ub = np.array([10, -7])
    opt_res = opt.linprog(-np.array(cost),
                          A_eq=np.array(eqs), b_eq=np.array(eqB),
                          A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          method="simplex")
    print("Expected cost", expectedCost)
    print("scipy.optimize.linprog gives a solution =", opt_res.x)
In [112]:
testFromPost()
Expected cost [1, 1, 1, 0, 0]
scipy.optimize.linprog gives a solution = [2.  0.  2.5]

A second test:

In [113]:
def test2():
    cost = [1, 1, 1]
    lts = [[3, -2, 0]]
    ltB = [7]
    eqs = [[1, 1, 0]]
    eqB = [2]

    expectedCost = [1, 1, 1, 0]
    expectedConstraints = [[3, -2, 0, 1], [1, 1, 0, 0]]
    expectedThresholds = [7, 2]
    # compare component by component, as in testFromPost, since the three
    # pieces have different shapes
    c, cs, ts = standardForm(cost, lessThans=lts, ltThreshold=ltB, equalities=eqs, eqThreshold=eqB)
    test(expectedCost, c)
    test(expectedConstraints, cs)
    test(expectedThresholds, ts)
In [ ]:
test2()

A last test:

In [115]:
def test3():
    cost = [1, 1, 1]
    eqs = [[1, 1, 0], [2, 2, 2]]
    eqB = [2, 5]

    # with equalities only, standardForm returns the inputs unchanged
    expectedCost = [1, 1, 1]
    expectedConstraints = [[1, 1, 0], [2, 2, 2]]
    expectedThresholds = [2, 5]
    c, cs, ts = standardForm(cost, equalities=eqs, eqThreshold=eqB)
    test(expectedCost, c)
    test(expectedConstraints, cs)
    test(expectedThresholds, ts)
In [ ]:
test3()

That's enough for these examples.

Conclusion

That's all for today!