Optimization with Constraints

maximize column sum with constraints in python

As already mentioned in the comments, this can easily be formulated as an integer linear program:

max sum(gain[c,v]*b[c,v] for c in customers for v in visits[c])

s.t.

# visit each customer at most once
sum(b[c,v] for v in visits[c]) <= 1 for c in customers

# total number of visits
sum(b[c,v]*v for c in customers for v in visits[c]) == 4

Here,

  • gain[c,v] is the gain if you visit customer c exactly v times.
  • b[c,v] is a binary variable that equals 1 if customer c is visited v times and 0 otherwise.
  • customers is the list of customers.
  • visits[c] gives the possible numbers of visits for customer c.
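
For concreteness, with the sample data used in the PuLP snippet below, these containers would look like this (purely illustrative):

customers = ['A', 'B', 'C', 'D']

# gain[c, v]: gain if customer c is visited exactly v times
gain = {('A', 1): 2, ('A', 2): 5,
        ('B', 1): 3, ('B', 2): 4,
        ('C', 1): 6, ('C', 2): 8,
        ('D', 1): 5, ('D', 2): 10}

# visits[c]: the possible numbers of visits for customer c
visits = {'A': [1, 2], 'B': [1, 2], 'C': [1, 2], 'D': [1, 2]}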

Currently, there's no support for ILPs in scipy.optimize, but it's on the way :). In the meantime, you can use a modelling framework like PuLP:

import pandas as pd
import numpy as np
import pulp

df = pd.DataFrame({'customer': ['A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'],
                   'num_of_visits': [1, 2, 1, 2, 1, 2, 1, 2],
                   'gain': [2, 5, 3, 4, 6, 8, 5, 10]})

# Sets and parameters
customers = df.customer.unique()
gain = {(row.customer, row.num_of_visits): row.gain for idx, row in df.iterrows()}
visits = {c: df[df.customer == c].num_of_visits.values for c in customers}

# pulp model
mdl = pulp.LpProblem("milp", pulp.LpMaximize)

# define the binary variables
b = {}
for c in customers:
    for v in visits[c]:
        b[c, v] = pulp.LpVariable(f"b[{c}, {v}]", cat="Binary")

# define the objective
mdl.setObjective(sum(gain[c, v]*b[c, v] for c in customers for v in visits[c]))

# Visit each customer at most once
for c in customers:
    mdl.addConstraint(sum(b[c, v] for v in visits[c]) <= 1)

# total number of visits
mdl.addConstraint(sum(b[c, v]*v for c in customers for v in visits[c]) == 4)

# solve the problem
mdl.solve()

# output the solution
print(f"Status: {pulp.LpStatus[mdl.status]}")
for c in customers:
    for v in visits[c]:
        if b[c, v].varValue > 0:
            print(f"Visit customer {c} exactly {v} times")
print(f"objective value: {mdl.objective.value()}")

which yields

Status: Optimal
Visit customer B exactly 1 times
Visit customer C exactly 1 times
Visit customer D exactly 2 times
objective value: 19.0
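
As a side note, newer SciPy releases (1.9 and later) have since added a mixed-integer solver, scipy.optimize.milp, so the same model can also be written without PuLP. A minimal sketch for the sample data above, assuming SciPy >= 1.9 (milp minimizes, hence the negated gains; the variable order follows the rows of the dataframe):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# one binary variable per (customer, number_of_visits) pair
customer   = np.repeat(['A', 'B', 'C', 'D'], 2)
num_visits = np.array([1, 2, 1, 2, 1, 2, 1, 2])
gain       = np.array([2, 5, 3, 4, 6, 8, 5, 10])

# visit each customer at most once: one row per customer, selecting its two variables
A_customer = np.kron(np.eye(4), np.ones(2))

constraints = [LinearConstraint(A_customer, -np.inf, 1),
               LinearConstraint(num_visits.reshape(1, -1), 4, 4)]  # total visits == 4

res = milp(c=-gain, constraints=constraints,
           integrality=np.ones(gain.size), bounds=Bounds(0, 1))

for c, v, chosen in zip(customer, num_visits, res.x.round().astype(bool)):
    if chosen:
        print(f"Visit customer {c} exactly {v} times")
print(f"objective value: {-res.fun}")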

SciPy optimize with additional variables in constraint function

Interesting problem. IMO, it's best practice to write the objective function and all the constraints as functions of one (vectorial) optimization variable z. Inside your functions, you then treat z as the concatenation of en and x2 and split it back into the two parts. Thus, your objective function can be written as follows:

def en_sum(z):
    en, x2 = z[:m], z[m:]
    return np.sum(en**2)

Note also that en_sum(en) and min_eigenvalues(use_mat(en, x2)) are function calls, not functions, and thus aren't callables that you can pass to minimize or NonlinearConstraint. Furthermore, your constraint that the minimal eigenvalue has to be positive (i.e. that the matrix returned by use_mat is positive definite) is equivalent to all eigenvalues being positive. This part is a bit messy, since it's not trivial to find a feasible starting point at which this constraint is fulfilled. However, we can cheat by only constraining the real parts of the (possibly complex) eigenvalues. As a consequence, the relaxed constraint can be fulfilled even at points that aren't actually feasible, and we basically hope that the solver still converges into the feasible region (where all eigenvalues are real and positive) during the first few iterations.

In code:

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def en_min(varia):
    # en_min (aims to) return the optimised values of en and x2.
    m = 2
    n = 2
    s = 2
    en = varia[0:m]
    x2 = varia[m:]

    # use_mat(z) builds the matrix, which depends on the en and x2 variables.
    def use_mat(z):
        a, b = z[:m], z[m:]
        mat = np.empty([m * s, n * s])
        for i in range(0, m * s):
            for j in range(0, m * s):
                mat[i, j] = (b[j]*a[i-s])**2 + a[j-s]**2 + (b[j]+b[i])**2
        return mat

    # Defining the objective function.
    def en_sum(z):
        en, x2 = z[:m], z[m:]
        return np.sum(en**2)

    # Returns the real parts of all eigenvalues of a given matrix
    # (not just the minimal one; see the remark on complex eigenvalues above).
    def min_eigenvalue(a):
        return np.real(np.linalg.eigvals(a))

    # constraint
    nlc = NonlinearConstraint(lambda z: min_eigenvalue(use_mat(z)), 1e-12, np.inf)

    # Bounds on the variables
    bounds = [(None, None)]*en.size + [(0, None)]*x2.size

    # initial guess
    x0 = np.hstack((en, 0.1*np.ones(x2.size)))

    opt_res2 = minimize(en_sum, x0=x0, bounds=bounds, constraints=nlc)
    return opt_res2

print(en_min(np.array([1, 3, 1.5, 0, 0, 4])))

Alternatively, since a matrix A is positive definite if and only if there's a nonsingular matrix V such that A = V'V, one could solve an auxiliary problem in order to find a feasible initial guess.
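
One way such an auxiliary problem could look (a rough sketch only; find_feasible_start, build_mat, dim and n_z are illustrative names, with build_mat standing in for the use_mat above): search simultaneously over z and a factor V, drive the matrix towards the Gram form V'V, and only accept the candidate if the residual is (almost) zero and the eigenvalues are indeed positive.

import numpy as np
from scipy.optimize import minimize

def find_feasible_start(build_mat, dim, n_z, z0):
    # p stacks the original variables z and the entries of the factor V
    def residual(p):
        z, V = p[:n_z], p[n_z:].reshape(dim, dim)
        return np.sum((build_mat(z) - V.T @ V) ** 2)

    p0 = np.concatenate([z0, np.eye(dim).ravel()])
    res = minimize(residual, p0)

    z_cand = res.x[:n_z]
    eigs = np.real(np.linalg.eigvals(build_mat(z_cand)))
    # accept only if build_mat(z_cand) is (numerically) a Gram matrix with positive eigenvalues
    if res.fun < 1e-8 and np.all(eigs > 0):
        return z_cand
    return None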

It's also worth mentioning that the above code isn't really efficient and will be quite slow for large optimization problems due to the finite-difference approximation of the Jacobian (i.e. the derivatives of our constraint function): each approximation of the Jacobian requires on the order of N additional evaluations of min_eigenvalue (one per variable), where N is the total number of variables, so the full eigendecomposition is recomputed roughly N times per iteration. All in all, for larger problems it's definitely worth providing the exact Jacobian or using algorithmic differentiation.
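
To sketch what such an exact(-ish) constraint Jacobian could look like: if the matrix returned by the constraint builder were symmetric, the classical first-order perturbation result dlambda_k/dz_j = v_k' (dA/dz_j) v_k (valid for simple eigenvalues) applies, so a single eigendecomposition plus one cheap matrix rebuild per variable suffices per Jacobian evaluation. This is only an illustration under that symmetry assumption (build_mat is a placeholder here, not the use_mat above, which is not symmetric):

import numpy as np
from scipy.optimize import NonlinearConstraint

def eig_constraint(z, build_mat):
    # all eigenvalues of the (assumed symmetric) matrix, in ascending order
    return np.linalg.eigvalsh(build_mat(z))

def eig_jacobian(z, build_mat, eps=1e-7):
    # one eigendecomposition per Jacobian evaluation instead of one per variable
    A = build_mat(z)
    w, V = np.linalg.eigh(A)
    J = np.empty((w.size, z.size))
    for j in range(z.size):
        z_pert = z.copy()
        z_pert[j] += eps
        dA = (build_mat(z_pert) - A) / eps                  # forward difference of the matrix
        J[:, j] = np.einsum("ik,ij,jk->k", V, dA, V)        # v_k' (dA/dz_j) v_k
    return J

# nlc = NonlinearConstraint(lambda z: eig_constraint(z, build_mat), 1e-12, np.inf,
#                           jac=lambda z: eig_jacobian(z, build_mat))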


