kahypar.jl's Introduction

KaHyPar - Karlsruhe Hypergraph Partitioning

What is a Hypergraph? What is Hypergraph Partitioning?

Hypergraphs are a generalization of graphs, where each (hyper)edge (also called net) can connect more than two vertices. The k-way hypergraph partitioning problem is the generalization of the well-known graph partitioning problem: partition the vertex set into k disjoint blocks of bounded size (at most 1 + ε times the average block size), while minimizing an objective function defined on the nets.
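
Written out, the balance constraint is commonly formulated as follows (with c(·) denoting the total vertex weight of a set; for unweighted hypergraphs c(V) = |V|):

c(Vᵢ) ≤ (1 + ε) · ⌈c(V) / k⌉ for all blocks Vᵢ, i ∈ {1, …, k}.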

The two most prominent objective functions are the cut-net and the connectivity (or λ − 1) metrics. Cut-net is a straightforward generalization of the edge-cut objective in graph partitioning (i.e., minimizing the sum of the weights of those nets that connect more than one block). The connectivity metric additionally takes into account the actual number λ of blocks connected by a net. By summing the (λ − 1)-values of all nets, one accurately models the total communication volume of parallel sparse matrix-vector multiplication and once more gets a metric that reverts to edge-cut for plain graphs.
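
As a small worked example (illustrative, not from the original text): consider k = 3 blocks and a net e with weight ω(e) = 2. If the pins of e end up in exactly two blocks (λ(e) = 2), both metrics charge the same amount: cut-net adds ω(e) = 2 for the cut net, and connectivity adds (λ(e) − 1) · ω(e) = 1 · 2 = 2. If e spans all three blocks (λ(e) = 3), the cut-net charge stays 2, while the connectivity charge grows to (3 − 1) · 2 = 4.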


What is KaHyPar?

KaHyPar is a multilevel hypergraph partitioning framework for optimizing the cut- and the (λ − 1)-metric. It supports both recursive bisection and direct k-way partitioning. As a multilevel algorithm, it consists of three phases: In the coarsening phase, the hypergraph is coarsened to obtain a hierarchy of smaller hypergraphs. After applying an initial partitioning algorithm to the smallest hypergraph in the second phase, coarsening is undone and, at each level, a local search method is used to improve the partition induced by the coarser level. KaHyPar instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine-grained n-level approach combined with strong local search heuristics, it computes solutions of very high quality. Its algorithms and detailed experimental results are presented in several research publications.
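
A schematic sketch of this n-level scheme (all helper functions here are hypothetical placeholders, not KaHyPar's actual API):

# Coarsen one vertex at a time, partition the coarsest hypergraph,
# then uncontract level by level and refine with local search.
function n_level_partition(H, k)
    history = []                           # stack of contraction mementos
    while nvertices(H) > coarsening_limit(k)
        u, v = select_contraction_pair(H)  # chosen by a rating function
        push!(history, (u, v))
        H = contract(H, u, v)              # one vertex removed per level
    end
    P = initial_partition(H, k)
    for (u, v) in reverse(history)         # undo the coarsening
        H, P = uncontract(H, P, u, v)
        P = local_search(H, P)             # refine the projected partition
    end
    return P
end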

Additional Features

  • Hypergraph partitioning with variable block weights

    KaHyPar has support for variable block weights. If the command line option --use-individual-part-weights=true is used, the partitioner tries to partition the hypergraph such that each block Vx has a weight of at most Bx, where Bx can be specified for each block individually using the command line parameter --part-weights= B1 B2 B3 ... Bk-1. Since the framework does not yet support perfectly balanced partitioning, the upper bounds need to sum to slightly more than the total weight of all vertices of the hypergraph. Note that this feature is still experimental.

  • Hypergraph partitioning with fixed vertices

    Hypergraph partitioning with fixed vertices is a variation of standard hypergraph partitioning. In this problem, there is an additional constraint on the block assignment of some vertices, i.e., some vertices are preassigned to specific blocks prior to partitioning, with the condition that, after partitioning the remaining “free” vertices, the fixed vertices are still in the block that they were assigned to. The command line parameter --fixed / -f can be used to specify a fix file in hMetis fix file format. For a hypergraph with V vertices, the fix file consists of V lines, one for each vertex. The ith line either contains -1 to indicate that the vertex is free to move or <part id> to indicate that this vertex should be preassigned to block <part id>. Note that part ids start from 0. (An example fix file is shown after this list.)

    KaHyPar currently supports three different contraction policies for partitioning with fixed vertices:

    1. free_vertex_only allows all contractions in which the contraction partner is a free vertex, i.e., it allows contractions of vertex pairs where either both vertices are free, or one vertex is fixed and the other vertex is free.
    2. fixed_vertex_allowed additionally allows contractions of two fixed vertices provided that both are preassigned to the same block. Based on preliminary experiments, this is currently the default policy.
    3. equivalent_vertices only allows contractions of vertex pairs that consist of either two free vertices or two fixed vertices preassigned to the same block.
  • Evolutionary framework (KaHyPar-E)

    KaHyPar-E enhances KaHyPar with an evolutionary framework as described in our GECCO'18 publication. Given a fairly large amount of running time, this memetic multilevel algorithm performs better than repeated executions of nonevolutionary KaHyPar configurations, hMetis, and PaToH. The command line parameter --time-limit=xxx can be used to set the maximum running time (in seconds). Parameter --partition-evolutionary=true enables evolutionary partitioning.

  • Improving existing partitions

    KaHyPar uses direct k-way V-cycles to try to improve an existing partition specified via parameter --part-file=</path/to/file>. The maximum number of V-cycles can be controlled via parameter --vcycles=.

  • Partitioning directed acyclic hypergraphs

    While the code has not been merged into the main repository yet, there exists a fork that supports acyclic hypergraph partitioning. More details can be found in the corresponding conference publication.
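
As a concrete illustration of the fix file format described in the fixed-vertices feature above, consider a hypothetical hypergraph with 5 vertices where vertex 0 is preassigned to block 1, vertex 3 is preassigned to block 0, and vertices 1, 2, and 4 are free. The fix file would then read:

1
-1
-1
0
-1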

Experimental Results

We use performance profiles to compare KaHyPar to other partitioning algorithms in terms of solution quality. For a set of algorithms 𝒜 and a benchmark set ℐ containing |ℐ| instances, the performance ratio r(p, i) relates the cut computed by partitioner p for instance i to the smallest minimum cut of all algorithms, i.e.,

r(p, i) = cut(p, i) / min { cut(p′, i) : p′ ∈ 𝒜 }.

The performance profile of algorithm p is then given by the function

ρ_p(τ) = |{ i ∈ ℐ : r(p, i) ≤ τ }| / |ℐ|.

For connectivity optimization, the performance ratios are computed using the connectivity values instead of the cut values. The value of ρ_p(1) corresponds to the fraction of instances for which partitioner p computed the best solution, while ρ_p(τ) is the probability that a performance ratio r(p, i) is within a factor of τ of the best possible ratio. Note that since performance profiles only allow assessing the performance of each algorithm relative to the best algorithm, the ρ_p(τ) values cannot be used to rank algorithms (i.e., to determine which algorithm is the second best etc.).

In our experimental analysis, the performance profile plots are based on the best solutions (i.e., minimum connectivity/cut) each algorithm found for each instance. Furthermore, we reserve two sentinel values of r(p, i): one that is assigned if and only if algorithm p computed an infeasible solution for instance i, and another that is assigned if and only if the algorithm could not compute a solution for instance i within the given time limit. In our performance profile plots, performance ratios corresponding to infeasible solutions are shown on a dedicated x-tick on the x-axis, while instances that could not be partitioned within the time limit are shown implicitly by a line that exits the plot below ρ_p(τ) = 1. Since the performance ratios are heavily right-skewed, the performance profile plots are divided into three segments with different ranges for the parameter τ to reflect various areas of interest. The first segment highlights small values of τ close to 1, the second segment contains results for all instances that are up to a moderate constant factor worse than the best possible ratio, and the last segment contains all remaining ratios, i.e., instances for which some algorithms performed considerably worse than the best algorithm, instances for which algorithms produced infeasible solutions, and instances which could not be partitioned within the given time limit.

In the figures, we compare KaHyPar with PaToH in quality (PaToH-Q) and default mode (PaToH-D), the k-way (hMETIS-K) and the recursive bisection variant (hMETIS-R) of hMETIS 2.0 (p1), Zoltan using algebraic distance-based coarsening (Zoltan-AlgD), Mondriaan v.4.2.1 and the recently published HYPE algorithm.

Solution Quality (figure)

Running Time (figure)

Additional Resources

We provide additional resources for all KaHyPar-related publications:

  • kKaHyPar-SEA20 / rKaHyPar-SEA20 (SEA'20): Paper, TR, Slides, Experimental Results
  • kKaHyPar / rKaHyPar (dissertation): Slides, Experimental Results
  • KaHyPar-MF / KaHyPar-R-MF (SEA'18 / JEA'19): SEA Paper, JEA Paper, TR, Slides, Experimental Results (SEA / JEA)
  • KaHyPar-E (EvoHGP) (GECCO'18): Paper, TR, Slides, Experimental Results
  • KaHyPar-CA (SEA'17): Paper, Slides, Experimental Results
  • KaHyPar-K (ALENEX'17): Paper, Slides, Experimental Results
  • KaHyPar-R (ALENEX'16): Paper, TR, Slides, Experimental Results

Projects using KaHyPar

Requirements

The Karlsruhe Hypergraph Partitioning Framework requires:

  • A 64-bit operating system. Linux, Mac OS X and Windows (through the Windows Subsystem for Linux) are currently supported.
  • A modern, C++14-ready compiler such as g++ version 9 or higher or clang version 11.0.3 or higher.
  • The cmake build system.
  • The Boost.Program_options library and the boost header files. If you don't want to install boost yourself, you can add the -DKAHYPAR_USE_MINIMAL_BOOST=ON flag to the cmake command to download, extract, and build the necessary dependencies automatically.

Building KaHyPar

  1. Clone the repository including submodules:

    git clone --depth=1 --recursive git@github.com:SebastianSchlag/kahypar.git

  2. Create a build directory: mkdir build && cd build

  3. Run cmake: cmake .. -DCMAKE_BUILD_TYPE=RELEASE

  4. Run make: make

Testing and Profiling

Tests are automatically executed while the project is built. Additionally, a test target is provided. End-to-end integration tests can be started with make integration_tests. Profiling can be enabled via the cmake flag -DENABLE_PROFILE=ON.

Running KaHyPar

The standalone program can be built via make KaHyPar. The binary will be located at: build/kahypar/application/.

KaHyPar has several configuration parameters. For a list of all possible parameters please run: ./KaHyPar --help. We use the hMetis format for the input hypergraph file as well as the partition output file.
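
For illustration, here is the example hypergraph from page 14 of the hMetis manual (the same hypergraph used in the C interface example below) in this format; as described in the hMetis manual, the first line gives the number of nets and the number of vertices, followed by one line of 1-based pins per net:

4 7
1 3
1 2 4 5
4 5 7
3 6 7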

The command line parameter --quiet=1 can be used to suppress all logging output. If you are using the library interfaces, adding quiet=1 to the corresponding .ini configuration file has the same effect.
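
For example, appending the following line to the preset you are loading (e.g., km1_kKaHyPar_sea20.ini) suppresses all logging output when partitioning through the library interfaces:

quiet=1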

Default / Most Recent Presets

We provide two default framework configurations - one for recursive bipartitioning (rKaHyPar) and one for direct k-way partitioning (kKaHyPar).

In general, we recommend using kKaHyPar as it performs better than rKaHyPar in terms of both running time and solution quality.

To start kKaHyPar optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyPar_sea20.ini

To start kKaHyPar optimizing the cut net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/cut_kKaHyPar_sea20.ini

To start rKaHyPar optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m recursive -p ../../../config/km1_rKaHyPar_sea20.ini

To start rKaHyPar optimizing the cut net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/cut_rKaHyPar_sea20.ini

To start the memetic algorithm kKaHyPar-E optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/km1_kKaHyPar-E_sea20.ini

Old Presets

Additionally, we provide different presets that correspond to the configurations used in the publications at ALENEX'16, ALENEX'17, SEA'17, SEA'18, GECCO'18, as well as in our JEA journal paper and in the dissertation of Sebastian Schlag. These configurations are located in the config/old_reference_configs folder. In order to use these configurations, you have to check out KaHyPar release 1.1.0, since some old code has been removed in the most recent release.

To start KaHyPar-MF (using flow-based refinement) optimizing the (connectivity - 1) objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_kahypar_mf_jea19.ini

To start KaHyPar-MF (using flow-based refinement) optimizing the cut-net objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m direct -p ../../../config/old_reference_configs/cut_kahypar_mf_jea19.ini

To start EvoHGP/KaHyPar-E optimizing the (connectivity - 1) objective using direct k-way mode run

 ./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_gecco18.ini

Note that the configuration km1_direct_kway_gecco18.ini is based on KaHyPar-CA. However, KaHyPar-E also works with flow-based local improvements. In our JEA publication the km1_kahypar_e_mf_jea19.ini configuration was used.

To start KaHyPar-CA (using community-aware coarsening) optimizing the (connectivity - 1) objective using direct k-way mode run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_sea17.ini

To start KaHyPar in direct k-way mode (KaHyPar-K) optimizing the (connectivity - 1) objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o km1 -m direct -p ../../../config/old_reference_configs/km1_direct_kway_alenex17.ini

To start KaHyPar in recursive bisection mode (KaHyPar-R) optimizing the cut-net objective run:

./KaHyPar -h <path-to-hgr> -k <# blocks> -e <imbalance (e.g. 0.03)> -o cut -m recursive -p ../../../config/old_reference_configs/cut_rb_alenex16.ini

All preset parameters can be overwritten by using the corresponding command line options.

Input Validation

When creating a hypergraph, KaHyPar validates that the input actually is a correct hypergraph; otherwise it prints an error and aborts. This applies to hgr input files, the C interface, and the Python interface. The runtime cost of the validation should be negligible in almost all cases. However, the input validation can also be disabled using the cmake flag -DKAHYPAR_INPUT_VALIDATION=OFF.

Additionally, warnings are printed for non-fatal issues (e.g. hyperedges with duplicate pins). To treat non-fatal issues as hard errors instead, use the cmake flag -DKAHYPAR_INPUT_VALIDATION_PROMOTE_WARNINGS_TO_ERRORS=ON.
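
For example, to configure a release build with input validation disabled, combine the flag with the cmake call from the build instructions above:

cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_INPUT_VALIDATION=OFF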

Using the Library Interfaces

The C-Style Interface

We provide a simple C-style interface to use KaHyPar as a library. Note that this interface is not thread-safe yet. However, there are some existing workarounds. The library can be built and installed via

make install.library

and can be used like this:

#include <memory>
#include <vector>
#include <iostream>

#include <libkahypar.h>

int main(int argc, char* argv[]) {

  kahypar_context_t* context = kahypar_context_new();
  kahypar_configure_context_from_file(context, "/path/to/config.ini");
  
  kahypar_set_seed(context, 42);

  const kahypar_hypernode_id_t num_vertices = 7;
  const kahypar_hyperedge_id_t num_hyperedges = 4;

  std::unique_ptr<kahypar_hyperedge_weight_t[]> hyperedge_weights = std::make_unique<kahypar_hyperedge_weight_t[]>(4);

  // force the cut to contain hyperedge 0 and 2
  hyperedge_weights[0] = 1;  hyperedge_weights[1] = 1000;
  hyperedge_weights[2] = 1;  hyperedge_weights[3] = 1000;

  std::unique_ptr<size_t[]> hyperedge_indices = std::make_unique<size_t[]>(5);
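  // CSR-style offset array: the pins of hyperedge e are stored in
  // hyperedges[hyperedge_indices[e]] .. hyperedges[hyperedge_indices[e + 1] - 1].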

  hyperedge_indices[0] = 0; hyperedge_indices[1] = 2;
  hyperedge_indices[2] = 6; hyperedge_indices[3] = 9;
  hyperedge_indices[4] = 12;

  std::unique_ptr<kahypar_hyperedge_id_t[]> hyperedges = std::make_unique<kahypar_hyperedge_id_t[]>(12);

  // hypergraph from hMetis manual page 14
  hyperedges[0] = 0;  hyperedges[1] = 2;
  hyperedges[2] = 0;  hyperedges[3] = 1;
  hyperedges[4] = 3;  hyperedges[5] = 4;
  hyperedges[6] = 3;  hyperedges[7] = 4;
  hyperedges[8] = 6;  hyperedges[9] = 2;
  hyperedges[10] = 5; hyperedges[11] = 6;

  const double imbalance = 0.03;
  const kahypar_partition_id_t k = 2;

  kahypar_hyperedge_weight_t objective = 0;

  std::vector<kahypar_partition_id_t> partition(num_vertices, -1);

  kahypar_partition(num_vertices, num_hyperedges,
                    imbalance, k,
                    /*vertex_weights */ nullptr, hyperedge_weights.get(),
                    hyperedge_indices.get(), hyperedges.get(),
                    &objective, context, partition.data());

  for (kahypar_hypernode_id_t i = 0; i != num_vertices; ++i) {
    std::cout << i << ":" << partition[i] << std::endl;
  }

  kahypar_context_free(context);
}

To compile the program using g++ run:

g++ -std=c++14 -DNDEBUG -O3 -I/usr/local/include -L/usr/local/lib program.cc -o program -lkahypar

Note: If boost is not found during linking, you might need to add -L/path/to/boost/lib -I/path/to/boost/include -lboost_program_options to the command.

To remove the library from your system use the provided uninstall target:

make uninstall-kahypar

The Python Interface

To compile the Python interface, do the following:

  1. Create a build directory: mkdir build && cd build
  2. If you have boost installed on your system, run cmake: cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_PYTHON_INTERFACE=1. If you don't have boost installed, run: cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DKAHYPAR_PYTHON_INTERFACE=1 -DKAHYPAR_USE_MINIMAL_BOOST=ON instead. This will download, extract, and build the necessary dependencies automatically.
  3. Go to the library folder: cd python
  4. Compile the library: make
  5. Copy the library to your site-packages directory: cp kahypar.so <path-to-site-packages>

After that, you can use the KaHyPar library like this:

import os
import kahypar as kahypar

num_nodes = 7
num_nets = 4

hyperedge_indices = [0,2,6,9,12]
hyperedges = [0,2,0,1,3,4,3,4,6,2,5,6]
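# The two lists above form a CSR-style representation:
# the pins of net e are hyperedges[hyperedge_indices[e]:hyperedge_indices[e+1]]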

node_weights = [1,2,3,4,5,6,7]
edge_weights = [11,22,33,44]

k=2

hypergraph = kahypar.Hypergraph(num_nodes, num_nets, hyperedge_indices, hyperedges, k, edge_weights, node_weights)

context = kahypar.Context()
context.loadINIconfiguration("<path/to/config>/km1_kKaHyPar_sea20.ini")

context.setK(k)
context.setEpsilon(0.03)

kahypar.partition(hypergraph, context)

For more information about the Python library functionality, please see module.cpp.

We also provide a precompiled version on PyPI, which can be installed via:

python3 -m pip install --index-url https://pypi.org/simple/ --no-deps kahypar

The Julia Interface

Thanks to Jordan Jalving (@jalving) KaHyPar now also offers a Julia interface, which can currently be found here: kahypar/KaHyPar.jl.

The corresponding dependency can be installed via:

using Pkg
Pkg.add(PackageSpec(url="https://github.com/jalving/KaHyPar.jl.git"))
Pkg.test("KaHyPar")

After that, you can use KaHyPar to partition your hypergraphs like this:

using KaHyPar
using SparseArrays

I = [1,3,1,2,4,5,4,5,7,3,6,7]
J = [1,1,2,2,2,2,3,3,3,4,4,4]
V = Int.(ones(length(I)))

A = sparse(I,J,V)

h = KaHyPar.hypergraph(A)

KaHyPar.partition(h,2,configuration = :edge_cut)

KaHyPar.partition(h,2,configuration = :connectivity)

KaHyPar.partition(h,2,configuration = joinpath(@__DIR__,"../src/config/km1_kKaHyPar_sea20.ini"))
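
KaHyPar.partition also accepts keyword arguments. For example, the allowed imbalance can be set explicitly (the pattern below mirrors a call shown in one of the issue reports further down; 0.03 is just an illustrative value):

KaHyPar.partition(h, 2, configuration = :connectivity, imbalance = 0.03)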

The Java Interface

Romain Wallon has created a Java interface for KaHyPar. Please refer to the readme for a detailed description on how to build and use the interface.

Bug Reports

We encourage you to report any problems with KaHyPar via the github issue tracking system of the project.

Licensing

KaHyPar is free software provided under the GNU General Public License (GPLv3). For more information see the COPYING file. We distribute this framework freely to foster the use and development of hypergraph partitioning tools. If you use KaHyPar in an academic setting please cite the appropriate papers. If you are interested in a commercial license, please contact me.

// Overall KaHyPar framework
@phdthesis{DBLP:phd/dnb/Schlag20,
  author    = {Sebastian Schlag},
  title     = {High-Quality Hypergraph Partitioning},
  school    = {Karlsruhe Institute of Technology, Germany},
  year      = {2020}
}

@article{10.1145/3529090, 
  author = {Schlag, Sebastian and 
            Heuer, Tobias and 
            Gottesb\"{u}ren, Lars and 
            Akhremtsev, Yaroslav and 
            Schulz, Christian and 
            Sanders, Peter}, 
  title = {High-Quality Hypergraph Partitioning}, 
  url = {https://doi.org/10.1145/3529090}, 
  doi = {10.1145/3529090}, 
  journal = {ACM J. Exp. Algorithmics},
  year = {2022}, 
  month = {mar}
}

// KaHyPar-R
@inproceedings{shhmss2016alenex,
 author    = {Sebastian Schlag and
              Vitali Henne and
              Tobias Heuer and
              Henning Meyerhenke and
              Peter Sanders and
              Christian Schulz},
 title     = {k-way Hypergraph Partitioning via \emph{n}-Level Recursive
              Bisection},
 booktitle = {18th Workshop on Algorithm Engineering and Experiments, (ALENEX 2016)},
 pages     = {53--67},
 year      = {2016},
}

// KaHyPar-K
@inproceedings{ahss2017alenex,
 author    = {Yaroslav Akhremtsev and
              Tobias Heuer and
              Peter Sanders and
              Sebastian Schlag},
 title     = {Engineering a direct \emph{k}-way Hypergraph Partitioning Algorithm},
 booktitle = {19th Workshop on Algorithm Engineering and Experiments, (ALENEX 2017)},
 pages     = {28--42},
 year      = {2017},
}

// KaHyPar-CA
@inproceedings{hs2017sea,
 author    = {Tobias Heuer and
              Sebastian Schlag},
 title     = {Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure},
 booktitle = {16th International Symposium on Experimental Algorithms, (SEA 2017)},
 pages     = {21:1--21:19},
 year      = {2017},
}

// KaHyPar-MF
@inproceedings{heuer_et_al:LIPIcs:2018:8936,
 author ={Tobias Heuer and Peter Sanders and Sebastian Schlag},
 title ={{Network Flow-Based Refinement for Multilevel Hypergraph Partitioning}},
 booktitle ={17th International Symposium on Experimental Algorithms  (SEA 2018)},
 pages ={1:1--1:19},
 year ={2018}
}


@article{KaHyPar-MF-JEA,
  author = {Heuer, T. and Sanders, P. and Schlag, S.},
  title = {Network Flow-Based Refinement for Multilevel Hypergraph Partitioning},
  journal = {ACM Journal of Experimental Algorithmics (JEA)},
  volume = {24},
  number = {1},
  month = {09},
  year = {2019},
  pages = {2.3:1--2.3:36},
  publisher = {ACM}
}

// KaHyPar-E (EvoHGP)
@inproceedings{Andre:2018:MMH:3205455.3205475,
 author = {Robin Andre and Sebastian Schlag and Christian Schulz},
 title = {Memetic Multilevel Hypergraph Partitioning},
 booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference},
 series = {GECCO '18},
 year = {2018},
 pages = {347--354},
 numpages = {8}
}

// KaHyPar-SEA20 (KaHyPar-HFC)
@InProceedings{gottesbren_et_al:LIPIcs:2020:12085,
author = {Lars Gottesb{\"u}ren and Michael Hamann and Sebastian Schlag and Dorothea Wagner},
title =	{{Advanced Flow-Based Multilevel Hypergraph Partitioning}},
booktitle = {18th International Symposium on Experimental Algorithms (SEA)},
pages =	{11:1--11:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
year =	{2020}
}

Contributing

If you are interested in contributing to the KaHyPar framework feel free to contact me or create an issue on the issue tracking system.

kahypar.jl's People

Contributors

jalving, logankilpatrick, mofeing, nmoran, roger-luo, sebastianschlag


kahypar.jl's Issues

Cint limited to Int32

Hi :)

I am trying to create hypergraphs with pretty huge hyperedge weights. As a consequence, when these get too large I get this error log:

ERROR: InexactError: trunc(Int32, 1099511627776)
Stacktrace:
  [1] throw_inexacterror(f::Symbol, #unused#::Type{Int32}, val::Int64)
    @ Core ./boot.jl:634
  [2] checked_trunc_sint
    @ ./boot.jl:656 [inlined]
  [3] toInt32
    @ ./boot.jl:693 [inlined]
  [4] Int32(x::Int64)
    @ Core ./boot.jl:783
  [5] _broadcast_getindex_evalf
    @ ./broadcast.jl:683 [inlined]
  [6] _broadcast_getindex
    @ ./broadcast.jl:656 [inlined]
  [7] getindex
    @ ./broadcast.jl:610 [inlined]
  [8] copy
    @ ./broadcast.jl:912 [inlined]
  [9] materialize
    @ ./broadcast.jl:873 [inlined]
 [10] KaHyPar.HyperGraph(A::SparseArrays.SparseMatrixCSC{Float64, Int64}, vertex_weights::Vector{Float64}, edge_weights::Vector{Any})
    @ KaHyPar ~/.julia/packages/KaHyPar/63UEh/src/KaHyPar.jl:82

This is due to the fact that all constants in kahypar_h.jl, including kahypar_hyperedge_weight_t, are declared as Cint or Cuint, corresponding to Julia's Int32 (or UInt32), which does not have enough bits to represent these larger integers.
In kahypar_h.jl, lines 1-5:

const kahypar_hypernode_id_t = Cuint
const kahypar_hyperedge_id_t = Cuint
const kahypar_hypernode_weight_t = Cint
const kahypar_hyperedge_weight_t = Cint
const kahypar_partition_id_t = Cint

For now, I have temporarily solved this issue with a try-catch-else clause:

try
    Int32(my_huge_number)
catch
    # the value does not fit into Int32 and cannot be passed to the C library
else
    # my_huge_number can be used for computation
end

Could this be solved by passing a larger integer type, e.g. Int64, to C?
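
A guard along these lines would avoid the try/catch above; this is a minimal sketch assuming the weights must fit into kahypar_hyperedge_weight_t = Cint, with my_edge_weights as a hypothetical vector of hyperedge weights:

# Check representability before handing the weights to the C library.
if all(w -> typemin(Cint) <= w <= typemax(Cint), my_edge_weights)
    weights32 = Cint.(my_edge_weights)  # safe to convert
else
    # Weights exceed Cint: rescale them (e.g., divide by a common factor) or abort.
end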

Partition leads to invalid object identifier error [KaHyPar v1.3.0]

The Julia partition function fails when using KaHyPar v1.3.0. The following code produces the failure:

using KaHyPar
using SparseArrays

I = [1,3,1,2,4,5,4,5,7,3,6,7]
J = [1,1,2,2,2,2,3,3,3,4,4,4]
V = Int.(ones(length(I)))

A = sparse(I,J,V)

h = KaHyPar.HyperGraph(A)

KaHyPar.partition(h,2,configuration = :edge_cut)

Here is the output:

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
+                    _  __     _   _       ____                               + 
+                   | |/ /__ _| | | |_   _|  _ \ __ _ _ __                    + 
+                   | ' // _` | |_| | | | | |_) / _` | '__|                   + 
+                   | . \ (_| |  _  | |_| |  __/ (_| | |                      + 
+                   |_|\_\__,_|_| |_|\__, |_|   \__,_|_|                      + 
+                                    |___/                                    + 
+                 Karlsruhe Hypergraph Partitioning Framework                 + 
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
*******************************************************************************
*                            Partitioning Context                             *
*******************************************************************************
Partitioning Parameters:
  Hypergraph:                         
  Partition File:                     
  Mode:                               direct
  Objective:                          cut
  k:                                  2
  epsilon:                            0.03
  seed:                               -1
  # V-cycles:                         0
  time limit:                         -1s
  hyperedge size threshold:           1000
  use individual block weights:       false
  L_opt:                              4
  L_max:                              4
-------------------------------------------------------------------------------
Preprocessing Parameters:
  enable deduplication:               false
  enable min hash sparsifier:         true
  enable community detection:         true
-------------------------------------------------------------------------------
MinHash Sparsifier Parameters:
  max hyperedge size:                 1200
  max cluster size:                   10
  min cluster size:                   2
  number of hash functions:           5
  number of combined hash functions:  100
  active at median net size >=:       28
  sparsifier is active:               false
-------------------------------------------------------------------------------
Community Detection Parameters:
  use community detection in IP:      true
  maximum louvain-pass iterations:    100
  minimum quality improvement:        0.0001
  graph edge weight:                  degree
  reuse community structure:          false
-------------------------------------------------------------------------------
Coarsening Parameters:
  Algorithm:                          ml_style
  max-allowed-weight-multiplier:      1
  contraction-limit-multiplier:       160
  hypernode weight fraction:          0.003125
  max. allowed hypernode weight:      1
  contraction limit:                  320
  Rating Parameters:
    Rating Function:                  heavy_edge
    Use Community Structure:          true
    Heavy Node Penalty:               no_penalty
    Acceptance Policy:                best_prefer_unmatched
    Partition Policy:                 normal
    Fixed Vertex Acceptance Policy:   fixed_vertex_allowed
-------------------------------------------------------------------------------
Initial Partitioning Parameters:
  # IP trials:                        20
  Mode:                               recursive
  Technique:                          multilevel
  Algorithm:                          pool
  Bin Packing algorithm:              UNDEFINED
    early restart on infeasible:      false
    late restart on infeasible:       false
IP Coarsening:                        
Coarsening Parameters:
  Algorithm:                          ml_style
  max-allowed-weight-multiplier:      1
  contraction-limit-multiplier:       150
  hypernode weight fraction:          determined before IP
  max. allowed hypernode weight:      determined before IP
  contraction limit:                  determined before IP
  Rating Parameters:
    Rating Function:                  heavy_edge
    Use Community Structure:          true
    Heavy Node Penalty:               no_penalty
    Acceptance Policy:                best_prefer_unmatched
    Partition Policy:                 normal
    Fixed Vertex Acceptance Policy:   fixed_vertex_allowed
IP Local Search:                      
Local Search Parameters:
  Algorithm:                          twoway_fm
  iterations per level:               2147483647
  stopping rule:                      simple
  max. # fruitless moves:             50
  Flow Refinement Parameters:
    execution policy:                 UNDEFINED
-------------------------------------------------------------------------------
Local Search Parameters:
  Algorithm:                          kway_fm_hyperflow_cutter
  iterations per level:               2147483647
  stopping rule:                      adaptive_opt
  adaptive stopping alpha:            1
  Flow Refinement Parameters:
    execution policy:                 exponential
-------------------------------------------------------------------------------
 
Invalid object identifier 

@SebastianSchlag do you immediately notice any issues in the output? Did any of the C interface functions change for v1.3.0?

Segmentation fault for certain graphs

The following matrix causes a segmentation fault when trying to partition

CSV file with matrix elements problem_matrix.csv:

cols,rows,values
2,5,1
3,9,1
4,13,1
6,6,1
7,10,1
8,14,1
9,3,1
13,4,1
19,3,1
19,19,1
20,4,1
20,19,1
21,5,1
21,17,1
22,6,1
22,17,1
25,9,1
25,22,1
26,10,1
26,22,1
29,13,1
29,18,1
30,14,1
30,18,1
33,17,1
33,27,1
34,17,1
34,28,1
35,18,1
35,31,1
36,18,1
36,32,1
37,19,1
37,25,1
38,19,1
38,26,1
43,22,1
43,38,1
44,22,1
44,33,1
49,25,1
50,26,1
51,27,1
52,28,1
52,33,1
55,31,1
56,32,1
57,33,1
58,33,1
64,38,1

Script to reproduce

using KaHyPar
using CSV
using DataFrames
using SparseArrays

df = CSV.read("problem_matrix.csv", DataFrame)
A = sparse(df.rows, df.cols, df.values)

h = KaHyPar.hypergraph(A)
partitions = KaHyPar.partition(h, 2, configuration=:edge_cut)

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

ERROR: LoadError: UndefVarError: libkahypar not defined

While installing the package in VS Code on Windows, I ran into the following problem (for the dependent package KaHyPar). I have no idea what is going on; could you please give me some hints? Many thanks for your help.

Precompiling project...
✗ KaHyPar
0 dependencies successfully precompiled in 1 seconds. 10 already precompiled.
1 dependency errored. To see a full report either run import Pkg; Pkg.precompile() or load the package
Testing Running tests...
ERROR: LoadError: UndefVarError: libkahypar not defined

How to turn off runtime messages

Hello everyone,

I'd like to know how to turn off the runtime messages such as "Partitioning Context" and "Partitioning Result". For example, could you please tell me how I can modify the following code to achieve this? Thank you.

using KaHyPar
using SparseArrays

I = [1,3,1,2,4,5,4,5,7,3,6,7]
J = [1,1,2,2,2,2,3,3,3,4,4,4]
V = Int.(ones(length(I)))

A = sparse(I,J,V)

h = KaHyPar.hypergraph(A)

KaHyPar.partition(h,2,configuration = :edge_cut)

macOS support

Got this working on macOS (tests passing), but it required a few undocumented steps. I can prepare a PR with these if that would be useful. It requires:

  1. Building C++ lib and setting JULIA_KAHYPAR_LIBRARY_PATH to point to lib folder
  2. Creating a symlink named libboost_program_options.so to point to libboost_program_options.dylib

To make this easier for others, we could add details about setting JULIA_KAHYPAR_LIBRARY_PATH to the README.md and could update the build.jl file to try to load libboost_program_options.dylib as well as libboost_program_options.so.

I tried it on Linux and still have the problem. I tried Julia 1.0.5 and Julia 1.5.4; it failed to clone.

julia> Pkg.add(PackageSpec(url="https://github.com/kahypar/KaHyPar.jl.git"))
Cloning git-repo https://github.com/kahypar/KaHyPar.jl.git
ERROR: failed to clone from https://github.com/kahypar/KaHyPar.jl.git, error: GitError(Code:ERROR, Class:SSL, SSL error: 0xffff8880 - SSL - A fatal alert message was received from our peer)
Stacktrace:
[1] pkgerror(::String) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/Types.jl:120
[2] #clone#2(::Nothing, ::Base.Iterators.Pairs{Symbol,Any,Tuple{Symbol,Symbol},NamedTuple{(:isbare, :credentials),Tuple{Bool,LibGit2.CachedCredentials}}}, ::Function, ::String, ::String) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/GitTools.jl:107
[3] #handle_repos_add!#32(::Bool, ::Nothing, ::Function, ::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at ./none:0
[4] #handle_repos_add! at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/API.jl:0 [inlined]
[5] #add_or_develop#15(::Symbol, ::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/API.jl:59
[6] #add_or_develop#14 at ./none:0 [inlined]
[7] #add_or_develop at ./none:0 [inlined]
[8] #add_or_develop#10 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/API.jl:32 [inlined]
[9] #add_or_develop at ./none:0 [inlined]
[10] #add#20 at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/API.jl:74 [inlined]
[11] add(::Pkg.Types.PackageSpec) at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Pkg/src/API.jl:74
[12] top-level scope at none:0

Now I download it and try to install it offline.

Improve initial partition with fixed vertices

Hi,

I am aiming to improve an initial partition using the improve_partition function, however, this initial partition has a bunch of fixed vertices. Is there a way the existing Julia interface allows me to pass that info to the partitioner so that the refinement algorithm is aware of these fixed vertices?

Thanks

Testing KaHyPar mistakes

julia> Pkg.test("KaHyPar")
Testing KaHyPar
Status C:\Users\2544035L\AppData\Local\Temp\jl_b45JpT\Manifest.toml
[b99e7846] BinaryProvider v0.5.10
[2a6221f6] KaHyPar v0.1.0 #master (https://github.com/kahypar/KaHyPar.jl.git)
[2a0f44e3] Base64
[8ba89e20] Distributed
[b77e0a4c] InteractiveUtils
[8f399da3] Libdl
[37e2e46d] LinearAlgebra
[56ddb016] Logging
[d6f4376e] Markdown
[9a3f8284] Random
[ea8e919c] SHA
[9e88b42a] Serialization
[6462fe0b] Sockets
[2f01184e] SparseArrays
[8dfed614] Test
ERROR: LoadError: C:\Users\2544035L.julia\packages\KaHyPar\52iYf\src..\deps\deps.jl does not exist, Please re-run Pkg.build("KaHyPar"), and restart Julia.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] top-level scope at C:\Users\2544035L.julia\packages\KaHyPar\52iYf\src\KaHyPar.jl:13
[3] include(::Module, ::String) at .\Base.jl:377
[4] top-level scope at none:2
[5] eval at .\boot.jl:331 [inlined]
[6] eval(::Expr) at .\client.jl:449
[7] top-level scope at .\none:3
in expression starting at C:\Users\2544035L.julia\packages\KaHyPar\52iYf\src\KaHyPar.jl:9
ERROR: LoadError: Failed to precompile KaHyPar [2a6221f6-aa48-11e9-3542-2d9e0ef01880] to C:\Users\2544035L.julia\compiled\v1.4\KaHyPar\CqBtU_5cVsk.ji.
Stacktrace:
[1] error(::String) at .\error.jl:33
[2] compilecache(::Base.PkgId, ::String) at .\loading.jl:1272
[3] _require(::Base.PkgId) at .\loading.jl:1029
[4] require(::Base.PkgId) at .\loading.jl:927
[5] require(::Module, ::Symbol) at .\loading.jl:922
[6] include(::String) at .\client.jl:439
[7] top-level scope at none:6
in expression starting at C:\Users\2544035L.julia\packages\KaHyPar\52iYf\test\runtests.jl:2
ERROR: Package KaHyPar errored during testing
Stacktrace:
[1] pkgerror(::String, ::Vararg{String,N} where N) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\Types.jl:53
[2] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, julia_args::Cmd, test_args::Cmd, test_fn::Nothing) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\Operations.jl:1510
[3] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, test_fn::Nothing, julia_args::Cmd, test_args::Cmd, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:316
[4] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:303
[5] #test#68 at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:297 [inlined]
[6] test at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:297 [inlined]
[7] #test#67 at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:296 [inlined]
[8] test at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:296 [inlined]
[9] test(::String; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:295
[10] test(::String) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\API.jl:295
[11] top-level scope at none:0

Bug report. Platform not support 'x86_64-w64-mingw32'. Thank you.

┌ Error: Error building KaHyPar:
│ ERROR: LoadError: Your platform ("x86_64-w64-mingw32", parsed as "x86_64-w64-mingw32-gcc8-cxx11") is not supported by this package!
│ Stacktrace:
│ [1] error(::String) at .\error.jl:33
│ [2] top-level scope at C:\Users\2544035L.julia\packages\KaHyPar\52iYf\deps\build.jl:47
│ [3] include(::String) at .\client.jl:439
│ [4] top-level scope at none:5
│ in expression starting at C:\Users\2544035L.julia\packages\KaHyPar\52iYf\deps\build.jl:39
└ @ Pkg.Operations D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.4\Pkg\src\Operations.jl:899

in partition occurs signal (11): Segmentation fault: 11

Hi, I am using the partition function to partition a hypergraph, and every partition except the last one works fine. However, once I run the last partitioning step in VS Code, the terminal quits. When I then run Julia from my macOS terminal, an error occurs in partition: signal (11): Segmentation fault: 11. Could you please help me figure out what is wrong?

# Set up node and edge weights
n_vertices = length(vertices(hypergraph))
node_sizes = [num_variables(node) for node in all_nodes(gas_network)]
edge_weights = [num_linkconstraints(edge) for edge in all_edges(gas_network)]
hyperedges = edges(hypergraph)
hyperedge_indices = vertices(hypergraph)
k = n_processes

hedges = Int32[]
hedge_indices = Int64[0]
for k in keys(hyperedges)
    for e in k
        push!(hedges, e)
    end
    push!(hedge_indices, length(hedges))
end

hypergraph1 = KaHyPar.HyperGraph(n_vertices, hedge_indices, hedges, node_sizes, edge_weights)
node_vector = KaHyPar.partition(hypergraph1,n_parts;imbalance = max_imbalance, configuration = :edge_cut)

Here is the error:

822qiuqiu@huchenyudeMacBook-Pro gas_network % julia partition_and\ _solve.jl
signal (11): Segmentation fault: 11
in expression starting at /Users/822qiuqiu/Desktop/hypergraph/gas_network/partition_and _solve.jl:42
ZN7kahypar2ds17GenericHypergraphIjjiiiNS_4meta5EmptyES3_EC2EjjPKmPKjiPKiSA at /Users/822qiuqiu/.julia/artifacts/839361bcfc9430c57b56f0bad0577fe198831dae/lib/libkahypar.dylib (unknown line)
kahypar_partition at /Users/822qiuqiu/.julia/artifacts/839361bcfc9430c57b56f0bad0577fe198831dae/lib/libkahypar.dylib (unknown line)
kahypar_partition at /Users/822qiuqiu/.julia/packages/KaHyPar/YSKB3/src/kahypar_h.jl:48
unknown function (ip: 0x139db4f70)
#partition#2 at /Users/822qiuqiu/.julia/packages/KaHyPar/YSKB3/src/KaHyPar.jl:111
partition##kw at /Users/822qiuqiu/.julia/packages/KaHyPar/YSKB3/src/KaHyPar.jl:92
unknown function (ip: 0x139db4e7f)
jl_apply at /Users/julia/buildbot/worker/package_macos64/build/src/./julia.h:1690 [inlined]
do_call at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:117
eval_body at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:0
jl_interpret_toplevel_thunk at /Users/julia/buildbot/worker/package_macos64/build/src/interpreter.c:660
jl_toplevel_eval_flex at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:840
jl_parse_eval_all at /Users/julia/buildbot/worker/package_macos64/build/src/ast.c:913
jl_load_rewrite at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:914 [inlined]
jl_load at /Users/julia/buildbot/worker/package_macos64/build/src/toplevel.c:919
include at ./Base.jl:380
include at ./Base.jl:368
exec_options at ./client.jl:296
_start at ./client.jl:506
jfptr__start_57330.clone_1 at /Applications/Julia-1.5.app/Contents/Resources/julia/lib/julia/sys.dylib (unknown line)
true_main at /Applications/Julia-1.5.app/Contents/Resources/julia/bin/julia (unknown line)
main at /Applications/Julia-1.5.app/Contents/Resources/julia/bin/julia (unknown line)
Allocations: 114763534 (Pool: 114728454; Big: 35080); GC: 85
zsh: segmentation fault julia partition_and\ _solve.jl

I checked the edge and vertex indices; they match and are not out of bounds.

Partition based on minimizing the number of edge cuts

How can I get KaHyPar.partition to partition based on minimizing the number of edge cuts? By default it seems to maximize the number of edge cuts:

using Graphs
using KaHyPar
using Metis
using Suppressor

g = grid((16,))
npartitions = 2

kahypar_partitions = @suppress KaHyPar.partition(adjacency_matrix(g), npartitions)
metis_partitions = Metis.partition(g, npartitions)

@show kahypar_partitions
@show metis_partitions

which outputs:

kahypar_partitions = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
metis_partitions = Int32[1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2]

I'm hoping for partitioning like Metis gives by default. I tried playing around with different configuration files without any luck.

Can not load KaHyPar shared object file

Julia v1.6.1, KaHyPar 0.2.0, Ubuntu 20.04

julia> using KaHyPar
[ Info: Precompiling KaHyPar [2a6221f6-aa48-11e9-3542-2d9e0ef01880]
ERROR: LoadError: InitError: could not load library "/home/leo/.julia/artifacts/1f9c17d74a12ed0de413318dd1f11e2379e4716f/lib/libkahypar.so"
libboost_program_options.so.1.71.0: cannot open shared object file: No such file or directory
Stacktrace:
  [1] dlopen(s::String, flags::UInt32; throw_error::Bool)
    @ Base.Libc.Libdl ./libdl.jl:114
  [2] dlopen(s::String, flags::UInt32)
    @ Base.Libc.Libdl ./libdl.jl:114
  [3] macro expansion
    @ ~/.julia/packages/JLLWrappers/bkwIo/src/products/library_generators.jl:54 [inlined]
  [4] __init__()
    @ KaHyPar_jll ~/.julia/packages/KaHyPar_jll/6UpOK/src/wrappers/x86_64-linux-gnu.jl:9
  [5] _include_from_serialized(path::String, depmods::Vector{Any})
    @ Base ./loading.jl:674
  [6] _require_search_from_serialized(pkg::Base.PkgId, sourcepath::String)
    @ Base ./loading.jl:760
  [7] _require(pkg::Base.PkgId)
    @ Base ./loading.jl:998
  [8] require(uuidkey::Base.PkgId)
    @ Base ./loading.jl:914
  [9] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:901
 [10] include
    @ ./Base.jl:386 [inlined]
 [11] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt64}}, source::Nothing)
    @ Base ./loading.jl:1213
 [12] top-level scope
    @ none:1
 [13] eval
    @ ./boot.jl:360 [inlined]
 [14] eval(x::Expr)
    @ Base.MainInclude ./client.jl:446
 [15] top-level scope
    @ none:1
during initialization of module KaHyPar_jll
in expression starting at /home/leo/.julia/packages/KaHyPar/YSKB3/src/KaHyPar.jl:1
ERROR: Failed to precompile KaHyPar [2a6221f6-aa48-11e9-3542-2d9e0ef01880] to /home/leo/.julia/compiled/v1.6/KaHyPar/jl_1aVynj.
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] compilecache(pkg::Base.PkgId, path::String, internal_stderr::Base.TTY, internal_stdout::Base.TTY)
   @ Base ./loading.jl:1360
 [3] compilecache(pkg::Base.PkgId, path::String)
   @ Base ./loading.jl:1306
 [4] _require(pkg::Base.PkgId)
   @ Base ./loading.jl:1021
 [5] require(uuidkey::Base.PkgId)
   @ Base ./loading.jl:914
 [6] require(into::Module, mod::Symbol)
   @ Base ./loading.jl:901

Note: I can fix it by typing

$ sudo apt-get install libboost-program-options-dev

Still wondering why the error took place. I just updated some packages.
