PyOptim: Python Package for Optimization and Matrix Operations

A Python package providing around ten unconstrained optimization algorithms, with 2D/3D visualizations for comparative analysis, and built-in matrix operations.

This project grew out of the Numerical Analysis & Optimization course at ENSIAS, Mohammed V University, where Professor M. Naoum suggested packaging the algorithms practiced during the laboratory sessions as a Python library. It covers about ten unconstrained optimization algorithms, offers 2D and 3D visualizations of their iterations, and compares their computational efficiency and accuracy. The package also includes matrix operations such as inversion, decomposition, and solving linear systems, alongside the lab-derived algorithms.

GIF Sources: https://www.nsf.gov/news/mmg/mmg_disp.jsp?med_id=78950&from= and https://aria42.com/blog/2014/12/understanding-lbfgs

Every part of this project is sample code which shows how to do the following:

  • Implementation of one-variable optimization algorithms, encompassing fixed_step, accelerated_step, exhaustive_search, dichotomous_search, interval_halving, Fibonacci, golden_section, Armijo_backward, and Armijo_forward.

  • Implementation of multivariable optimization algorithms, including gradient_descent, gradient_conjugate, newton, quasi_newton_dfp, and stochastic gradient descent (with and without backtracking line search).

  • Visualization of each iteration step for all algorithms in 2D, contour, and 3D formats.

  • Conducting a comparative analysis focusing on their runtime and accuracy metrics.

  • Implementation of matrix operations such as inversion (e.g., Gauss-Jordan), decomposition (e.g., LU, Choleski), and solutions for linear systems (e.g., Gauss-Jordan, LU, Choleski).

    Getting Started

To install this Python Package:

pip install pyoptim

Run this demo notebook

Documentation & Comparative Analysis

Single Variable Optimization Algorithms

        Importing Libraries

import pyoptim.my_scipy.onevar_optimize.minimize as soom
import pyoptim.my_plot.onevar._2D as po2

        Objective Function

def f(x):
    return x*(x-1.5)   # analytically, argmin(f) = 0.75
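
As a quick sanity check outside pyoptim, the analytic minimizer follows from f(x) = x^2 - 1.5x, whose derivative 2x - 1.5 vanishes at x = 0.75; SciPy's bounded scalar minimizer (assuming SciPy is installed) agrees:

# Optional cross-check with SciPy (not part of pyoptim)
from scipy.optimize import minimize_scalar

res = minimize_scalar(f, bounds=(-10, 10), method='bounded')
print(round(res.x, 2))   # 0.75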

        Parameter Initialization

xs = -10         # starting point (also the lower bound for the interval-based methods)
xf = 10          # upper bound of the search interval
epsilon = 1.e-2  # required precision

        Fixed Step

print('x* =',soom.fixed_step(f,xs,epsilon))
po2.fixed_step(f,xs,epsilon)
x* = 0.75


The sequence of steps taken by the Fixed Step algorithm before reaching the minimum

        Accelerated Step

print('x* =',soom.accelerated_step(f,xs,epsilon))
po2.accelerated_step(f,xs,epsilon)
x* = 0.86


The sequence of steps taken by the Accelerated Step algorithm before reaching the minimum

        Exhaustive Search

print('x* =',soom.exhaustive_search(f,xs,xf,epsilon))
po2.exhaustive_search(f,xs,xf,epsilon)
x* = 0.75


The sequence of steps taken by the Exhaustive Search algorithm before reaching the minimum

        Dichotomous Search

mini_delta = 1.e-3   # small probe offset around the interval midpoint
print('x* =',soom.dichotomous_search(f,xs,xf,epsilon,mini_delta))
po2.dichotomous_search(f,xs,xf,epsilon,mini_delta)
x* = 0.7494742431640624


The sequence of steps taken by the Dichotomous Search algorithm before reaching the minimum

        Interval Halving

print('x* =',soom.interval_halving(f,xs,xf,epsilon))
po2.interval_halving(f,xs,xf,epsilon)
x* = 0.75


The sequence of steps taken by the Interval Halving algorithm before reaching the minimum.

        Fibonacci

n = 15   # number of Fibonacci-search iterations
print('x* =',soom.fibonacci(f,xs,xf,n)) 
po2.fibonacci(f,xs,xf,n)
x* = 0.76


The sequence of steps taken by the Fibonacci algorithm before reaching the minimum.

        Golden Section

print('x* =',soom.golden_section(f,xs,xf,epsilon))
po2.golden_section(f,xs,xf,epsilon)
x* = 0.75


The sequence of steps taken by the Golden Section algorithm before reaching the minimum

        Armijo Backward

ŋ=2
xs=100
print('x* =',soom.armijo_backward(f,xs,ŋ,epsilon))
po2.armijo_backward(f,xs,ŋ,epsilon)
x* = 0.78


The sequence of steps taken by the Armijo Backward algorithm before reaching the minimum

        Armijo Forward

xs=0.1
epsilon = 0.5
ŋ=2
print('x* =',soom.armijo_forward(f,xs,ŋ,epsilon))
po2.armijo_forward(f,xs,ŋ,epsilon)
x* = 0.8


The sequence of steps taken by the Armijo Forward algorithm before reaching the minimum

        Comparative Analysis

po2.compare_all_time(f,0,2,1.e-2,1.e-3,10,2,0.1,100)

Runtime

po2.compare_all_precision(f,0,2,1.e-2,1.e-3,10,2,0.1,100)

Gap between true and computed minimum

The runtime and accuracy graphs indicate that, for this convex single-variable function, the Golden Section method is the best choice among the algorithms tested, combining high accuracy with a notably low runtime.
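
For a standalone spot-check of the runtime ranking, the soom routines used above can also be timed directly with the standard library; a minimal sketch (absolute timings are machine-dependent):

# Rough standalone timing of two of the single-variable routines (machine-dependent)
import time

for name, run in [('golden_section',    lambda: soom.golden_section(f, 0, 2, 1.e-2)),
                  ('exhaustive_search', lambda: soom.exhaustive_search(f, 0, 2, 1.e-2))]:
    t0 = time.perf_counter()
    x_star = run()
    print(name, x_star, round(time.perf_counter() - t0, 6), 'seconds')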

Multivariable Optimization Algorithms

         Importing Libraries

import pyoptim.my_plot.multivar._3D as pm3
import pyoptim.my_plot.multivar.contour2D as pmc

         Objective Function

def h(x):
    return x[0] - x[1] + 2*(x[0]**2) + 2*x[1]*x[0] + x[1]**2
# analytically, argmin(h) = [-1, 1.5]

Analytical solution
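
That minimizer can be verified by setting the gradient of h to zero: dh/dx0 = 1 + 4*x0 + 2*x1 = 0 and dh/dx1 = -1 + 2*x0 + 2*x1 = 0, a 2x2 linear system whose solution is [-1, 1.5]. A quick NumPy check:

# Solving grad h = 0 to confirm the analytic minimizer
import numpy as np

H_mat = np.array([[4., 2.], [2., 2.]])   # Hessian of h
rhs   = np.array([-1., 1.])              # constant gradient terms moved to the right-hand side
print(np.linalg.solve(H_mat, rhs))       # [-1.   1.5]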

        Parameter Initialization

X = [1000, 897]  # starting point
alpha = 1.e-2    # step size
tol = 1.e-2      # stopping tolerance

        Gradient Descent

pmc.gradient_descent(h,X,tol,alpha)
pm3.gradient_descent(h,X,tol,alpha)
Y* =  [-1.01  1.51]

The sequence of steps taken by the Gradient Descent algorithm before reaching the minimum

Contour Plot

3D Plot
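
For reference, the behaviour plotted above can be reproduced outside pyoptim with a bare-bones gradient descent loop; the sketch below uses a central-difference gradient and the same X, alpha, and tol (an illustration, not the package's implementation):

# Minimal gradient descent on h with a numerical gradient (illustrative only)
import numpy as np

def num_grad(func, x, eps=1e-6):
    """Central-difference approximation of the gradient of func at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (func(x + step) - func(x - step)) / (2 * eps)
    return g

x = np.array(X, dtype=float)
g = num_grad(h, x)
while np.linalg.norm(g) > tol:      # stop once the gradient is small enough
    x = x - alpha * g
    g = num_grad(h, x)
print(np.round(x, 2))               # close to [-1.   1.5]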

        Gradient Conjugate

pmc.gradient_conjugate(h,X,tol)
pm3.gradient_conjugate(h,X,tol)
Y* =  [-107.38  172.18]

The sequence of steps taken by the Gradient Conjugate algorithm before reaching the minimum

Contour Plot

3D Plot

        Newton

pmc.newton(h,X,tol)
pm3.newton(h,X,tol)
Y* =  [-1.   1.5]

The sequence of steps taken by the Newton algorithm before reaching the minimum

Contour Plot

3D Plot

        Quasi Newton DFP

pmc.quasi_newton_dfp(h,X,tol)
pm3.quasi_newton_dfp(h,X,tol)
Y* =  [-1.   1.5]

The sequence of steps taken by the Quasi Newton DFP algorithm before reaching the minimum

Contour Plot

3D Plot

        Stochastic Gradient Descent

pmc.sgd(h,X,tol,alpha)
pm3.sgd(h,X,tol,alpha)
Y* =  [-1.01  1.52]

The sequence of steps taken by the Stochastic Gradient Descent algorithm before reaching the minimum

Contour Plot

3D Plot

        Stochastic Gradient Descent with BLS

alpha = 100   # initial step size; set high because BLS only shrinks it
c = 2
pmc.sgd_with_bls(h,X,tol,alpha,c) 
pm3.sgd_with_bls(h,X,tol,alpha,c)
Y* =  [-1.   1.5]

The sequence of steps taken by the Stochastic Gradient Descent with BLS algorithm before reaching the minimum

Contour Plot

3D Plot
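
The reason a large initial alpha is safe here is that a backtracking (Armijo) line search only shrinks the step until a sufficient-decrease condition holds. A generic sketch of that rule (illustrative; the shrink factor and Armijo constant below are assumptions, not pyoptim's exact parameters):

# Generic backtracking (Armijo) line search -- illustrative, not pyoptim's implementation
import numpy as np

def backtracking_step(func, x, g, alpha0=100.0, shrink=0.5, c1=1e-4):
    """Shrink alpha0 until func decreases enough along the steepest-descent direction."""
    d = -g                       # descent direction
    alpha = alpha0
    while func(x + alpha * d) > func(x) + c1 * alpha * (g @ d):
        alpha *= shrink
    return alpha

x0 = np.array([1000., 897.])
g0 = np.array([1 + 4*x0[0] + 2*x0[1], -1 + 2*x0[0] + 2*x0[1]])   # gradient of h at x0
print(backtracking_step(h, x0, g0))   # a usable step size well below 100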

        Comparative Analysis

pm3.compare_all_time(h,X,1.e-2,1.e-1,100,2)

Runtime

pm3.compare_all_precision(h,X,1.e-2,1.e-1,100,2)

Gap between true and computed minimum

The runtime and accuracy graphs indicate that, for this convex multivariable function, the Quasi Newton DFP method is the best choice among the algorithms tested, combining high accuracy with a notably low runtime.
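
The accuracy ranking can also be spot-checked directly from the Y* values printed above, by measuring each method's distance to the analytic minimizer [-1, 1.5] (values copied from the outputs in this README):

# Distance of each reported Y* to the analytic minimizer [-1, 1.5]
import numpy as np

x_true = np.array([-1., 1.5])
reported = {
    'gradient_descent':   [-1.01, 1.51],
    'gradient_conjugate': [-107.38, 172.18],
    'newton':             [-1., 1.5],
    'quasi_newton_dfp':   [-1., 1.5],
    'sgd':                [-1.01, 1.52],
    'sgd_with_bls':       [-1., 1.5],
}
for name, y in reported.items():
    print(name, round(float(np.linalg.norm(np.array(y) - x_true)), 2))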

Matrix Inverse

        Import Libraries

import numpy as np
import pyoptim.my_numpy.inverse as npi

        Test Matrix

A = np.array([[1.,2.,3.],[0.,1.,4.],[5.,6.,0.]])

        Gauss Jordan

A_1=npi.gaussjordan(A.copy())
I=A@A_1
I=np.around(I,1)
print('A_1 =\n\n',A_1)
print('\nA_1*A =\n\n',I)
A_1 =

 [[-24.  18.   5.]
 [ 20. -15.  -4.]
 [ -5.   4.   1.]]

A_1*A =

 [[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
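
The result can be cross-checked against NumPy's built-in inverse (not part of pyoptim):

# Cross-check against NumPy's built-in inverse
print(np.around(np.linalg.inv(A), 1))   # same matrix as A_1 above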

Matrix Decomposition

        Imports

import numpy as np
import pyoptim.my_numpy.decompose as npd

        Test Matrix

A = np.array([[1.,2.,3.],[0.,1.,4.],[5.,6.,0.]])      #A is not positive definite
B = np.array([[2.,-1.,0.],[-1.,2.,-1.],[0.,-1.,2.]])  #B is positive definite
Y=np.array([45,-78,95])                               #randomly chosen column vector

        LU

L,U,P=npd.LU(A)
print("P =\n",P,"\n\nL =\n",L,"\n\nU =\n",U)
print("\n",A==P@L@U)
P =
 [[0. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]] 

L =
 [[1.  0.  0. ]
 [0.  1.  0. ]
 [0.2 0.8 1. ]] 

U =
 [[ 5.   6.   0. ]
 [ 0.   1.   4. ]
 [ 0.   0.  -0.2]]

 [[ True  True  True]
 [ True  True  True]
 [ True  True  True]]
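
For comparison, SciPy's LU factorization (assuming SciPy is installed) produces factors with the same property, although its pivoting convention may differ:

# Optional cross-check with SciPy's LU factorization
from scipy.linalg import lu

P_s, L_s, U_s = lu(A)
print(np.allclose(P_s @ L_s @ U_s, A))   # True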

        Choleski

L=npd.choleski(A)          # A is not positive definite
print(L)
print("--------------------------------------------------")
L=npd.choleski(B)          # B is positive definite 
print('L =\n',L,'\n')

C=np.around(L@(L.T),1)
print('B = L@(L.T) \n\n',B==C)
A must be positive definite !
None
--------------------------------------------------
L =
 [[ 1.41421356  0.          0.        ]
 [-0.70710678  1.22474487  0.        ]
 [ 0.         -0.81649658  1.15470054]] 

B = L@(L.T) 

 [[ True  True  True]
 [ True  True  True]
 [ True  True  True]]
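
NumPy's Cholesky routine returns the same lower-triangular factor for B:

# Cross-check against NumPy's Cholesky factorization
print(np.linalg.cholesky(B))   # same L as above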

Solving Linear Systems

        Imports

import numpy as np
import pyoptim.my_numpy.solve as nps

        Test Matrix

A = np.array([[1., 2., 3.], [0., 1., 4.], [5., 6., 0.]]) # A is not positive definite
B = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]]) # B is positive definite
Y = np.array([45, -78, 95]) # randomly chosen column vector

        Gauss Jordan

X=nps.gaussjordan(A,Y)
print("X =\n",X)
print("\n A@X=Y \n",A@X==Y,'\n')

print('---------------------------------------------------------------')
X=nps.gaussjordan(B,Y)
print("X =\n",X)
Y_=np.around(B@X,1)
print("\n B@X=Y \n",Y_==Y,'\n')
X =
 [[-2009.]
 [ 1690.]
 [ -442.]]

 A@X=Y 
 [[ True]
 [ True]
 [ True]] 

---------------------------------------------------------------
X =
 [[18.5]
 [-8. ]
 [43.5]]

 B@X=Y 
 [[ True]
 [ True]
 [ True]]

        LU

X=nps.LU(A,Y)
print("X* =\n",X)
print("\nAX*=Y \n",A@X==Y)
print("-------------------------------------------------------------------------------");
X=nps.LU(B,Y)
print("X* =\n",X)
Y_=np.around(B@X,1)
print("\nBX*=Y\n",Y_==Y)
X* =
 [[-2009.]
 [ 1690.]
 [ -442.]]

AX*=Y 
 [[ True]
 [ True]
 [ True]]
-------------------------------------------------------------------------------
X* =
 [[18.5]
 [-8. ]
 [43.5]]

BX*=Y
 [[ True]
 [ True]
 [ True]]

        Choleski

X=nps.choleski(A,Y)
print("-------------------------------------------------------------------------------")
X=nps.choleski(B,Y)
print("X =\n",X)
Y_=np.around(B@X,1)
print("\nBX*=Y\n",Y_==Y)
!! A must be positive definite !!
-------------------------------------------------------------------------------
X =
 [[18.5]
 [-8. ]
 [43.5]]

BX*=Y
 [[ True]
 [ True]
 [ True]]
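
As with the other routines, both solutions can be verified against NumPy's solver:

# Cross-check of both systems against NumPy's solver
print(np.linalg.solve(A, Y))   # [-2009.  1690.  -442.]
print(np.linalg.solve(B, Y))   # [ 18.5  -8.   43.5]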
