
gurobi / modeling-examples


Gurobi modeling examples

Home Page: https://gurobi.github.io/modeling-examples/

License: Apache License 2.0

Jupyter Notebook 99.94% HTML 0.01% Python 0.04% SCSS 0.01%

modeling-examples's People

Contributors

cozad-gurobi, erothberg, fnxcorp, gglockner, jaczynski, lindsaynmontanari, maliheha, marika-k, mattmilten, orojuan, panitzin, venaturum, yurchisin


modeling-examples's Issues

Multiple Lineups from Pool fantasy_basketball_1_2

Hello,
This is a really great and detailed example of how to use Gurobi to solve fantasy sports problems. I actually play DraftKings and have been trying to teach myself how to optimize using Gurobi... Anyway, I have been attempting to use PoolSolutions to return the other solutions. Example:

m.setObjective(obj, sense=GRB.MAXIMIZE)
m.setParam(GRB.Param.PoolSolutions, 500)  # finds the 500 best lineups
m.setParam(GRB.Param.PoolSearchMode, 2)   # makes sure these are the best solutions
m.update()

m.optimize()
nSolutions = m.SolCount
print('Number of solutions found: ' + str(nSolutions))
for e in range(nSolutions):
    m.setParam(GRB.Param.SolutionNumber, e)

    for v in m.getVars():
        if v.Xn == 1:   # Help, I'm stuck here

How would I modify your Colab notebook to print and save all 500 lineups? I have tried for hours, but I just don't have a deep enough understanding of the Gurobi syntax, or of building the right DataFrame calls for this. This isn't so much an issue as a question. Thanks so much in advance, as I know there's no benefit to you in helping me. Again, great work and thank you for sharing on GitHub.
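For what it's worth, here is a minimal sketch of one way the loop could be completed, assuming m is the lineup model from the notebook with the pool parameters set as above; the DataFrame columns and the output filename are illustrative:

import pandas as pd
from gurobipy import GRB

m.optimize()
n_solutions = m.SolCount
print(f"Number of solutions found: {n_solutions}")

lineups = []
for e in range(n_solutions):
    m.setParam(GRB.Param.SolutionNumber, e)   # select the e-th pool solution
    # Xn holds a variable's value in the currently selected pool solution
    picked = [v.VarName for v in m.getVars() if v.Xn > 0.5]
    lineups.append({"lineup": e, "objective": m.PoolObjVal, "players": picked})

pd.DataFrame(lineups).to_csv("lineups.csv", index=False)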

traveling_salesman "Model Code": RuntimeError: dictionary changed size during iteration

In tsp.ipynb, when executing the "Model Code" part, you may get a RuntimeError: dictionary changed size during iteration on

for i, j in vars.keys():
    vars[j, i] = vars[i, j]  # edge in opposite direction

This is because in the "Data computation" part, when creating variable dist, the following code

dist = {(c1, c2): distance(c1, c2) for c1, c2 in combinations(capitals, 2)}

creates a dictionary with n(n-1)/2 entries, which means dist only contains dist[i, j] and not dist[j, i];
for example, it contains dist[('Montgomery', 'Phoenix')] but not dist[('Phoenix', 'Montgomery')].
This leads to a further problem when creating and initializing the decision variables for the Gurobi model:

m = gp.Model()

# Variables: is city 'i' adjacent to city 'j' on the tour?
vars = m.addVars(dist.keys(), obj=dist, vtype=GRB.BINARY, name='x')

# Symmetric direction: Copy the object
for i, j in vars.keys():
    vars[j, i] = vars[i, j]  # edge in opposite direction

As stated above, dist[j, i] does not exist, so vars[j, i] does not exist either. For example, the decision variable vars[('Montgomery', 'Phoenix')] exists but vars[('Phoenix', 'Montgomery')] does not, so the assignment vars[j, i] = vars[i, j] inserts new keys while iterating over vars.keys(), which raises the RuntimeError.

So to fix this, the code
dist = {(c1, c2): distance(c1, c2) for c1, c2 in combinations(capitals, 2)}
should be changed to
dist = {(c1, c2): distance(c1, c2) for c1, c2 in permutations(capitals, 2)}
And of course you need to import permutations from itertools before using it.
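For completeness, the corrected data-computation cell with the required import (assuming capitals and distance are defined as in the notebook):

from itertools import permutations

# Build distances in both directions so that vars[j, i] already exists
# when the model code later copies the edge in the opposite direction.
dist = {(c1, c2): distance(c1, c2) for c1, c2 in permutations(capitals, 2)}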

@yurchisin

I have an issue using this model with different data. Can anyone help?

Hi everyone!
I'm working with Gurobi and trying to model an efficiency analysis using a different set of data.
I can't solve the model because it fails with "ValueError: not enough values to unpack (expected 3, got 2)".
I attach my model; can anyone see why it doesn't work?
Thank you in advance!
Matteo

import pandas as pd
from itertools import product

import gurobipy as gp
from gurobipy import GRB

def solve_DEA(target, verbose=True):
    # input-output values for the garages
    inattr = ['Cash', 'LEV']
    outattr = ['EPS', 'ROA']
    dmus, inputs, outputs = gp.multidict({
        'DMU1': [{'Cash': 0.2485, 'LEV': 0.4688, 'EPS': 4.00, 'ROA': 0.2002}],
        'DMU2': [{'Cash': 0.1284, 'LEV': 0.6539, 'EPS': 3.70, 'ROA': 0.0902}],
        'DMU3': [{'Cash': 0.2930, 'LEV': 0.5644, 'EPS': 4.20, 'ROA': 0.1627}]
    })

    # Create LP model
    model = gp.Model('DEA')

    # Decision variables
    wout = model.addVars(outattr, name="outputWeight")
    win = model.addVars(inattr, name="inputWeight")

    # Constraints
    ratios = model.addConstrs((gp.quicksum(outputs[h][r]*wout[r] for r in outattr)
                               - gp.quicksum(inputs[h][i]*win[i] for i in inattr)
                               <= 0 for h in dmus), name='ratios')

    normalization = model.addConstr((gp.quicksum(inputs[target][i]*win[i] for i in inattr) == 1),
                                    name='normalization')

    # Objective function
    model.setObjective(gp.quicksum(outputs[target][r]*wout[r] for r in outattr), GRB.MAXIMIZE)

    # Run optimization engine
    if not verbose:
        model.params.OutputFlag = 0
    model.optimize()

    # Print results
    print(f"\nThe efficiency of target DMU {target} is {round(model.objVal, 3)}")

    print("__________________________________________________________________")
    print("The weights for the inputs are:")
    for i in inattr:
        print(f"For {i}: {round(win[i].x, 3)}")

    print("__________________________________________________________________")
    print("The weights for the outputs are:")
    for r in outattr:
        print(f"For {r} is: {round(wout[r].x, 3)}")
    print("__________________________________________________________________\n\n")

    return model.objVal

dmus = ['DMU1', 'DMU2', 'DMU3']

performance = {}
for h in dmus:
    performance[h] = solve_DEA(h, verbose=False)
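For reference, gp.multidict({key: [a, b]}) returns the list of keys plus one dictionary per position of the value lists, so a three-way unpacking requires value lists of length two. A minimal sketch of how the data could be packed so the unpacking above yields three values (numbers copied from the issue):

import gurobipy as gp

# gp.multidict({key: [a, b]}) returns (keys, dict_of_a, dict_of_b), so two entries
# per DMU (one dict of inputs, one of outputs) give three values to unpack.
dmus, inputs, outputs = gp.multidict({
    'DMU1': [{'Cash': 0.2485, 'LEV': 0.4688}, {'EPS': 4.00, 'ROA': 0.2002}],
    'DMU2': [{'Cash': 0.1284, 'LEV': 0.6539}, {'EPS': 3.70, 'ROA': 0.0902}],
    'DMU3': [{'Cash': 0.2930, 'LEV': 0.5644}, {'EPS': 4.20, 'ROA': 0.1627}],
})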

Large dataset for power schedule example is incomplete

In this example, we state that the full dataset can simply be read by replacing small_plant_data with large_plant_data. However, in that folder plant_capacities.csv is missing, and the file plant_info_update.csv does not have the same structure, so it can't serve as an immediate replacement either. What is needed to import the large dataset?

Both the training MSE and RSS decrease monotonically as more features are considered

This model, by means of constraint 2, implicitly considers all ${{d-1} \choose s}$ feature subsets at once. However, we also need to find the value for $s$ that maximizes the performance of the regression on unseen observations. Notice that the training RSS decreases monotonically as more features are considered (which translates to relaxing the MIQP), so it is not advisable to use it as the performance metric. Instead, we will estimate the Mean Squared Error (MSE) via cross-validation. This metric is defined as $\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}{(y_i-\hat{y}_i)^2}$, where $y_i$ and $\hat{y}_i$ are the observed and predicted values for the ith observation, respectively. Then, we will fine-tune $s$ using grid search, provided that the set of possible values is quite small.

RSS vs MSE

This paragraph says that it is not advisable to use RSS as the performance metric and that MSE via cross-validation should be used instead.

I think the highlight on MSE over RSS is misleading. Note that, given estimate $\hat\beta$,

$$ \mathrm{RSS} = (y-X\hat\beta)^T(y-X\hat\beta) = \sum (y_i - \hat{y}_i)^2 = n \cdot \mathrm{MSE} $$

So, both the training MSE and RSS decrease monotonically as more features are considered, not only RSS.
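A quick numerical check of this identity (illustrative random values only):

import numpy as np

# Tiny check that training RSS is exactly n times training MSE for any fitted values.
rng = np.random.default_rng(0)
y = rng.normal(size=10)                      # observed values (illustrative)
y_hat = y + rng.normal(scale=0.1, size=10)   # stand-in predictions

rss = np.sum((y - y_hat) ** 2)
mse = np.mean((y - y_hat) ** 2)
assert np.isclose(rss, len(y) * mse)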

Cross-validation

The cross-validation part should still be correct, though: we use grid search to find the best $s$.

How Indicator constraint can be triggered with multiple variables in Gurobi?

Dear Sir/Madam,

I am trying to write a constraint whose trigger involves multiple variables. However, Gurobi raises the error "Indicator constraints can only be triggered by a single binary variable at a given value". Can you help me fix this error? Thank you very much!
The code is the following:
mdl.addConstrs((x[i, j, k] - t[i, j, k] == 1) >> (d2[j, k] == d2[i, k] - d[i, j]) for i, j, k in arcos2 if i != 0 and j != 0)

Where:

  • x[i,j,k], t[i,j,k] are binary variables
  • d2[j,k], d2[i,k] are continuous variables
  • d[i,j] is a parameter
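One common workaround, sketched below on a toy model rather than the full model from the question: since x - t == 1 holds exactly when x = 1 and t = 0, an auxiliary binary variable y with y = x AND (1 - t) can serve as the single binary trigger that Gurobi requires (all names and the value of d_ij are illustrative):

import gurobipy as gp
from gurobipy import GRB

mdl = gp.Model("indicator_sketch")
x = mdl.addVar(vtype=GRB.BINARY, name="x")
t = mdl.addVar(vtype=GRB.BINARY, name="t")
d2_i = mdl.addVar(name="d2_i")
d2_j = mdl.addVar(name="d2_j")
d_ij = 3.5  # parameter (illustrative value)

# Auxiliary binary y that equals 1 exactly when x = 1 and t = 0
y = mdl.addVar(vtype=GRB.BINARY, name="y")
mdl.addConstr(y <= x)          # y can be 1 only if x = 1
mdl.addConstr(y <= 1 - t)      # y can be 1 only if t = 0
mdl.addConstr(y >= x - t)      # y forced to 1 when x = 1 and t = 0

# y is a single binary variable, so it is a valid indicator trigger
mdl.addConstr((y == 1) >> (d2_j == d2_i - d_ij))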

technician_routing_scheduling: Possible typo or missing concept.

Hi there, thanks for this great example which I am following - it is really good!

In the model formulation, we have:
[image: model formulation from the notebook]

I see that we have:
d ∈ D = {n+1, n+2, ..., n+m}: Index and set of depots (service centers), where m is the number of depots.
I am confused as to how there can be n+m depots.
I am also confused as to why the set starts at n+1 instead of 1. In the example there are two depots, not 9 (i.e., 2+7).

I have run into something similar when studying the literature (Ramkumar et al. 2012; their formulation also has similarly confusing indices for the depots).
This makes me think I am missing some key concept in the formulation (as opposed to a typo).
Any enlightenment would be greatly appreciated.

Edit: Indeed, the paper referenced in the notebook also notes "(1,..,n+m) where the m depots are represented by n+1,...,n+m" but does not state their reasoning for doing so. (S. Salhi, A. Imran, N. A. Wassan. The multi-depot vehicle routing problem with heterogeneous vehicle fleet: Formulation and a variable neighborhood search implementation. Computers & Operations Research 52 (2014) 315-325.)

Edit 2: However, in the Supply Network I example, we have something that makes more sense (to me): [image: Supply Network I formulation]
Here there is no (n+1, ..., n+m) indexing.

TypeError: must be real number, not Var

My objective function is this

$$\text{Average Profit under }(s_1,s_2,\dots,s_5)=\max \quad \frac{1}{5}\sum_{k=1}^{5}\left(\sum_{i=1}^{m}\sum_{j=1}^{n}(12.25-c_{ij})\,x_{ij}^{k}\right)$$

Here's what I wrote
model.setObjective(quicksum((r - c[i, j]) * math.pow(x[i, j, k], k) for i in range(n) for j in range(m) for k in range(5)), GRB.MAXIMIZE)

Error:

TypeError: must be real number, not Var
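For reference, in the formula the superscript k on $x_{ij}^k$ is a scenario index, not an exponent, so math.pow is not needed. A minimal sketch of the intended objective, assuming r, c, x, m, and n are defined as in the question and indices follow the formula:

from gurobipy import GRB, quicksum

# x[i, j, k] is the variable for scenario k, not x raised to the power k,
# so the term enters linearly; the average divides by the 5 scenarios.
model.setObjective(
    (1.0 / 5) * quicksum(
        (r - c[i, j]) * x[i, j, k]
        for i in range(m) for j in range(n) for k in range(5)
    ),
    GRB.MAXIMIZE,
)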

Lack of control in Google Colab environment

Some notebooks have been broken recently by changes in packages, e.g.:

gurobipy 10 -> gurobipy 11
pandas 1 -> pandas 2
gensim not compatible with SciPy 1.13.0

We need to have tight control over the Python environment these notebooks run under, especially on Google Colab. Unfortunately, Colab does not provide a nice way of facilitating this.

I'm proposing we create a public repo modeling-examples-requirements in which we set up a codeless Python package with pinned dependencies, essentially creating a lock file in package form. The sole purpose of this package is to tightly control the package versions in any environment it is installed into. Then in notebooks we only need %pip install /path/to/modeling-examples-requirements.git (or something like that).

We could even get fancy and define a separate set of requirements on different branches to cater for certain notebooks, and avoid having one huge environment. There would probably be fewer than 10 such branches needed.
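A minimal sketch of what such a codeless lock package's setup.py might look like; the package name matches the proposal, but the version pins below are purely illustrative:

# setup.py for the proposed "modeling-examples-requirements" lock package.
# The package ships no code; it exists only to pin dependency versions.
from setuptools import setup

setup(
    name="modeling-examples-requirements",
    version="0.1.0",
    packages=[],  # codeless on purpose
    install_requires=[
        "gurobipy==10.0.3",
        "pandas==1.5.3",
        "scipy==1.12.0",
        "gensim==4.3.2",
    ],
)

Notebooks would then only need the single %pip install line pointing at this package, as described above.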

[Question] Why x.sum() on constraint takes so long?

Hello there!
Using this as base example

I am probably doing something wrong here. I have a distance matrix with the costs of traveling from point to point. I am not using a facility fixed cost, so I want to limit the number of open facilities to just enough to fulfill the total demand, for example:

open = m.addVars(list(dict_capacity.keys()), vtype=GRB.BINARY, name="open")  # dict_capacity has the capacities as values

open_max = m.addVar(vtype=GRB.INTEGER, name="max_open_opportunities", lb=1, ub=32)
m.addConstr(open_max == open.sum())

Assuming that 32 is the maximum number of facilities I want open, the model takes too long. The way I understood it, open.sum() gives me the sum of the binary variables in the solution, i.e. the number of open facilities. Isn't this quicksum supposed to be quite fast?

m.addConstrs(
            (
                open.sum() * dict_capacity[capacity] <= total_demand + 100
                for capacity in list(dict_capacity.keys())
            )
        )

This is a less hard-coded version I made to test. It works (it opens 31 facilities), but only because all facilities have 80 units to offer. And it takes so long that I need to Ctrl+C to get the objective value and the opened facilities.

open has 628 items
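For reference, a sketch of one more direct way to state "the opened capacity must cover total demand", assuming dict_capacity maps each candidate facility to its capacity and total_demand is a number, as in the question (this is only a formulation sketch, not a claim about why open.sum() is slow):

import gurobipy as gp

# Single linear constraint: total capacity of opened facilities covers demand,
# instead of one constraint per capacity value.
m.addConstr(
    gp.quicksum(open[f] * dict_capacity[f] for f in dict_capacity) >= total_demand,
    name="cover_demand",
)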

Thanks for your time!

Broken link in predict_bike_flow.ipynb

In modeling-examples/optimization101/bike_share/predict_bike_flow.ipynb we try to download a dataset at:

https://s3.amazonaws.com/tripdata/202207-citbike-tripdata.csv.zip

It no longer exists. I'll see if I can find it somewhere else.

proposal to avoid nx.nx_pydot.graphviz_layout

In modeling-examples/food_program/food_supply.ipynb we draw a network with nx.nx_pydot.graphviz_layout which requires an external program called Graphviz to be installed.

We can get essentially the same plot with nx.kamada_kawai_layout which is implemented entirely within the networkx package.

[image: the food-supply network drawn with nx.kamada_kawai_layout]

Moving to nx.kamada_kawai_layout will make it easier for automated testing.
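A minimal sketch of the proposed change, with a stand-in graph since the notebook's food-supply network is not reproduced here:

import networkx as nx
import matplotlib.pyplot as plt

# Stand-in graph; in the notebook, G would be the food-supply network.
G = nx.petersen_graph()

# kamada_kawai_layout is computed inside networkx, so no Graphviz install is needed.
pos = nx.kamada_kawai_layout(G)
nx.draw(G, pos, with_labels=True, node_color="lightblue")
plt.show()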

music_recommendation.ipynb cannot run under python 3.12

modeling-examples/music_recommendation/music_recommendation.ipynb relies on lightfm.

The following works in an empty py311 environment:

python -m pip install --upgrade pip setuptools wheel
python -m pip install lightfm

but fails to build in an empty py312 environment. Consequently the notebook can't be used with py312.
