
dparo / master-thesis

A Branch-and-Cut based Pricer for the Capacitated Vehicle Routing Problem

License: MIT License

CMake 3.24% C 94.00% Shell 0.36% Python 2.40%
c mit-license cplex branch-and-cut branch-and-cut-and-price capacitated-vehicle-routing-problem capacitated-profitable-tour-problem elementary-shortest-path

master-thesis's People

Contributors: dparo

master-thesis's Issues

CPTP as BaPCod plugin

TODO

Develop an API to use CPTP as a BaPCod pricer plugin, and test the new performance.

WHY

  • Having CPTP in a separate executable incurs unavoidable costs: for each pricing iteration CPTP needs to spawn its process, parse command-line arguments, set up the CPLEX environment, allocate memory, set up the MIP model, etc. By contrast, a pricer living inside the same executable can potentially reuse cuts generated at previous iterations (no wasted work). The performance comparison will therefore always be skewed in favour of BaPCod, even if CPTP does a fantastic job with what it has available.
  • An iteration-by-iteration comparison is not possible because the BaPCod pricer is not guaranteed to generate elementary paths. Even when the NG-path set size is set to the maximum allowed value (e.g. 63), the set size is dynamically lowered/raised whenever the RCSP pricer deems it important, so even at the last pricing iteration the NG-path set size is not guaranteed to equal the maximum settable value. On top of that, there is no way for client code (e.g. the BcRCSPFunctor) to know the current value of the NG set size. Elementarity can therefore be guaranteed in BaPCod only for instances with fewer than K customers, where K <= 63 is an internal value that could be anything. CPTP solves a much harder problem by enforcing the elementarity constraint, so comparing solution times on an iteration-by-iteration basis is not fair. The far more RELIABLE total solution time, measured as the time BaPCod takes to solve the entire CVRP instance, is the approach that should be taken.
  • Running as a pricer inside BaPCod also gives us an absolute performance metric which is RELIABLE (e.g. total solution time for a CVRP instance). Imagine the following scenario for an example input CVRP instance:
    • the BaPCod pricer needs 1000 iterations taking 10 ms each before concluding that no reduced-cost tour exists;
    • the CPTP pricer needs a single iteration taking 0.5 s before concluding that no reduced-cost tour exists.

Under an iteration-by-iteration comparison CPTP always loses (0.5 s vs 10 ms per iteration), but in the big scheme of things it wins (0.5 s vs 1000 × 10 ms = 10 s of total pricing time).

Improve running time of `GSEC` integral separation, and of fractional separation of `GLM` and `RCI`

What

When separating these cuts, we currently pay Theta(N^2) work to generate the index and value arrays, and only afterwards check whether the cut is violated.

How

Fix these separations by generating the index and value arrays only when absolutely needed.
Try to short-circuit the separation function as early as possible by first determining whether any cut is going to be violated. If no cut is violated, there is no point paying for the Theta(N^2) iteration that generates index and value arrays which will never be used.

This should also improve cache locality.

NOTE: this is already partially fixed in the GSEC fractional separation, but not in the integral separation.

Double check the set S used for cut separation

Double check that the set S used for the cut inequalities satisfies |S| >= 2, specifically for the following separations:

  • integral separation of GSECs
  • fractional separation of GSEC
  • GLM
  • RCI

Relax cut separation by adding cuts only if they are violated above a certain threshold

TODO

Relax the cut separation criterion (require a larger violation before adding a cut) so as to control and facilitate branching events.

Each cut should accept a double violation_amt (exposed on the command line) that relaxes the conditions under which cuts are considered violated.

Reasoning behind this idea:

It may be interesting to test the MIP solver performance by not always forcing cut generation (e.g. when the cuts are barely effective), so as to anticipate branching.

  • PROS: if the resolution process spends tremendous time at the root node creating ineffective cuts, it may be time to prefer branching.
  • CONS: branching usually leads to more memory consumption; large CPTP instances become much harder to solve under low memory availability.

Add support for arbitrary statistics handling and plotting in the `perfprof` executable

Refactor the perfprof code to handle arbitrary statistics from the JSON file

Each statistic should have:

  • A stat name
  • An associated JSON entry name for both the BaPCod output (if applicable) and the CPTP JSON output
  • A default value used when the statistic is not found in the JSON file
  • A transformation function to create ratios and such for outputting the CSV used for plotting

The CPTP main executable should:

  • Expose most of the CPLEX statistics in the JSON output file (examples: number of simplex iterations, number of cuts generated, number of branch-and-bound nodes, etc.)

2-OPT refinement for warm start solutions

Implement 2-OPT refinement step for the solutions generated from the insertion heuristic algorithm.

For the ESPPRSC, a 2-OPT exchange maintains feasibility (the same vertices are visited after an exchange) and can improve the solution cost.

Add capacity lower bound constraint to the MIP model

The capacity lower bound constraint, although not strictly necessary to formulate the problem, allows improving the linear relaxation, likely leading to far fewer cuts being generated.

Unfortunately, the capacity lower bound is usually not very tight, so its benefit is debatable.

Study the CPTP MIP performance under abundant capacity availability

TODO

Generate new CPTP instances by raising (by a multiplicative factor) the vehicle capacity.
Solve the new instances using BaPCod and the CPTP solver, and generate all the necessary performance profiles.

HOW

Use a batch program that takes a .vrp file as input and a command-line argument specifying the factor by which vehicle_cap is multiplied, and outputs the new instance to disk.

WHY

  • Dynamic-programming-based labelling algorithms are known to suffer on instances whose optimal solutions have long routes (e.g. when vehicle_cap is high).
  • These instances are in fact more similar to real-world problems (e.g. think of Amazon order delivery).
  • High vehicle capacities, and therefore long routes, work better under a Branch-and-Cut approach.

Generate final performance profiles

TODO

In order to assess the performance of our CPTP executable we need to generate the performance profiles for many pricing iterations induced by solving common CVRP instances...

Due to BaPCod's non-guaranteed elementarity and the Branch&Cut nature of our approach, we can distinguish three main families of benchmark comparisons:

  • Only CVRP instances with fewer customers than the NG-path set size assumed in the BaPCod pricer (63 by default)
  • New CVRP instances obtained from the original CVRP instances, but with vehicle_cap multiplied by a constant factor (this favours solutions with long tours, where B&C approaches seem to work better than dynamic labelling algorithms)
  • All standard CVRP instances, independently of the number of customers. The main CVRP families to use for comparison are: family E (short routes, hard for a B&C) and family F (long routes, easier for a B&C)

And combinations of the above.

WHY

Even though it is unlikely that our CPTP solver will beat the dynamic programming labelling algorithm, it can still provide a non-negligible contribution when vehicle capacities are high and the instances are similar to real-world problems (e.g. Amazon order delivery).
