vtraag / leidenalg
Implementation of the Leiden algorithm for various quality functions to be used with igraph in Python.
License: GNU General Public License v3.0
Hi Vincent,
Thanks for the excellent work!
I'm running into a reproducibility issue. I was initially running find_partition_temporal without any seed, which should use a random seed for each run and thus give different results each run. However, I am getting the same results, at least on the same computer, so it seems to be using a fixed but unknown seed rather than randomizing it.
I'm running macOS 10.15.7 and calling the algorithm from Python 3.7.1.
Do you know what seed it's using? Python 3 should be generating a random hash seed for each run...
Thanks!
When I compare the values returned by the quality() method of RBConfigurationVertexPartition in the louvain and leidenalg packages, I see that the values output by louvain are always much higher.
For example, in the network below I ran louvain and leiden with resolution=1, and then also ran leiden with resolution=0.44 to get the same number of partitions:
Although the partitions look very similar, the quality values for these three partitions are 45566.48, 16023.20 and 14271.00, respectively. I was wondering whether the modularity values are reported on different scales.
Next, I ran louvain and leiden on the same network with different resolution parameters, 10 times with the same seeds, and plotted the quality values:
Here the values do not seem to depend much on the seed, but there is still a huge difference between the two.
There is no standalone version of leidenalg; you will always need Python to access it. There are no plans for developing a standalone version or R support, so use Python.
FWIW, we have a pared-down version in R here:
https://github.com/kharchenkolab/leidenAlg
https://cran.r-project.org/web/packages/leidenAlg/index.html
It's possible this is worth mentioning, or not :)
Just letting you know. Feel free to close issue.
Best, Evan
Hello,
I am using the Seurat package to analyse single-cell data with more than 50k cells.
When I run the clustering on a low number of cells it works fine; however, when I run it on the whole dataset, I get:
Error in asMethod(object) :
Cholmod error 'problem too large' at file ../Core/cholmod_dense.c, line 105
> traceback()
7: asMethod(object)
6: as(object, "matrix")
5: RunLeiden(object = object, method = method, partition.type = "RBConfigurationVertexPartition",
initial.membership = initial.membership, weights = weights,
node.sizes = node.sizes, resolution.parameter = r, random.seed = random.seed,
n.iter = n.iter)
4: FindClusters.default(object = object[[graph.name]], modularity.fxn = modularity.fxn,
initial.membership = initial.membership, weights = weights,
node.sizes = node.sizes, resolution = resolution, algorithm = algorithm,
n.start = n.start, n.iter = n.iter, random.seed = random.seed,
group.singletons = group.singletons, temp.file.location = temp.file.location,
edge.file.name = edge.file.name, verbose = verbose, ...)
3: FindClusters(object = object[[graph.name]], modularity.fxn = modularity.fxn,
initial.membership = initial.membership, weights = weights,
node.sizes = node.sizes, resolution = resolution, algorithm = algorithm,
n.start = n.start, n.iter = n.iter, random.seed = random.seed,
group.singletons = group.singletons, temp.file.location = temp.file.location,
edge.file.name = edge.file.name, verbose = verbose, ...)
2: FindClusters.Seurat(merge, resolution = 0.5, algorithm = 4)
1: FindClusters(merge, resolution = 0.5, algorithm = 4)
Algorithm = 4 selects leidenalg.
I guess this error is linked to leidenalg rather than to Seurat.
Any help?
Thanks
Hi!
I want to know the complexity of the Leiden algorithm.
How can I determine Leiden's complexity when the input is an n × n KNN graph?
Hi, I've created a temporal network where each slice is a binary block adjacency matrix, so ideally the algorithm should find as many communities as there are blocks. I'm not sure which are the actual partitions after I run the code below, and I have some trouble exporting the output from leidenalg to, for example, numpy. The partitions below contain as many objects as there are timepoints. When I take np.array(interslice_partition.membership).reshape([node, time]), the outcome does not make sense given the input, since I find many more communities than should be there. I'd appreciate any guidance on this.
layers, interslice_layer, G_full = la.time_slices_to_layers(slices1, interslice_weight=0.1)
partitions = [la.CPMVertexPartition(H, node_sizes='node_size',
                                    weights='weight',
                                    resolution_parameter=gamma)
              for H in layers]
interslice_partition = la.CPMVertexPartition(interslice_layer, resolution_parameter=0,
                                             node_sizes='node_size', weights='weight')
optimiser = la.Optimiser()  # was undefined in the original snippet
diff = optimiser.optimise_partition_multiplex(partitions + [interslice_partition])
Hi Vincent,
I have been playing around with the package for a bit and I have a question: how should one proceed when the network is both temporal and multiplex? By this I mean I have a number of temporal slices, and each slice consists of different aspects of the network. Going back to the example in the documentation: how would you proceed if you had email and telephone graphs of the same users at different moments in time? I have been going through the documentation but I am not able to find anything close to this case. Is there a way for leidenalg to handle this situation?
Thanks for the awesome code!
Not all **kwargs should be passed to the constructor for the partitions in leidenalg/src/VertexPartition.py (line 1031 in c012a03; see also line 1037 in c012a03), in particular not the weights argument.
In fact, it may be better to simply make the arguments explicit instead of using a keyword dictionary.
I use Fedora 30; python and gcc versions are below. The install went fine, although I used the --user option to avoid installing as root; that shouldn't matter, though.
When I use the find_partition function, sometimes it works and sometimes it doesn't. See below (the first time it worked; the second time, with identical input, it didn't).
I could investigate the error further, but maybe you have an idea right away. Am I doing something wrong, or is it a legitimate issue? How can I circumvent it? Thanks!
$ python3
Python 3.7.3 (default, May 11 2019, 00:38:04)
[GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import leidenalg
>>> import igraph as ig
>>> G = ig.Graph.Erdos_Renyi(100,0.1)
>>> part = leidenalg.find_partition(G, leidenalg.ModularityVertexPartition)
>>> part = leidenalg.find_partition(G, leidenalg.ModularityVertexPartition)
/usr/include/c++/9/bits/stl_vector.h:1042: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference = double&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
Aborted (core dumped)
Good morning, and thank you for your work.
I have a problem when I try to set the resolution parameter:
res = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition, seed=rands, n_iterations= n_iterations, resolution_parameter= parameter)
~.../lib/python3.6/site-packages/leidenalg/functions.py in find_partition(graph, partition_type, initial_membership, weights, n_iterations, seed, **kwargs)
     82     partition = partition_type(graph,
     83                                initial_membership=initial_membership,
---> 84                                **kwargs)
     85     optimiser = Optimiser()
     86
TypeError: __init__() got an unexpected keyword argument 'resolution_parameter'
How can I set this parameter?
Hi
I'm trying out different configurations of interlayer edges using leidenalg, on, say, k layers and n nodes.
When the interslice_weight is a constant c in the input to the time_slices_to_layers function, all interlayer edges get coupled diagonally, i.e. every node gets connected to its future self with an edge weight c.
If the interslice_weight is a list (of length k-1) of lists (of length n), i.e. interslice_weight = [[C(1,1),..,C(1,n)],..,[C(k-1,1),..,C(k-1,n)]], all interlayer edges again get coupled diagonally, but with the option of varying interlayer edge weights C(i,j), where i ≤ k-1 and j ≤ n.
My question is: is there a way to add interlayer edges to the slice graph such that nodes are coupled not only with their own future and past selves but also with other nodes' future and/or past selves, i.e. non-diagonal coupling?
Thank you
Ülgen
The demo in the readme uses an unweighted graph. Does the algorithm support weighted graphs? Thanks!
Hi,
I'm encountering a segmentation fault when running ModularityVertexPartition on the DBLP graph (https://snap.stanford.edu/data/com-DBLP.html). I'm not entirely sure if my install is correct (specifically the igraph install), but there seem to be no other errors that arise. I installed leidenalg using pip, and I'm testing on a GCP instance running Ubuntu, 16.04 LTS, amd64 (60 vCPUs, 240 GiB memory).
The Python code that I'm running is as follows:
import leidenalg
from igraph import *
def main():
    g = Graph.Read_Ncol('/home/jeshi/snap/dblp.edges', directed=False)
    part = leidenalg.find_partition(g, leidenalg.ModularityVertexPartition)

if __name__ == "__main__":
    main()
Note that the dblp.edges graph is exactly the graph downloaded from SNAP, but with the first four comment lines removed using sed.
Thanks for your help!
Hello,
I came across this issue when trying to install this in a Google Colab environment.
ERROR: Could not build wheels for leidenalg which use PEP 517 and cannot be installed directly
I assume this has to do with the new release, as 0.7 previously worked.
Thanks!
Your examples are great for regular community detection, but unfortunately I can't really wrap my head around how to do the same for bipartite networks. Could you perhaps give a working example (maybe in the documentation) of how to use the package to retrieve lists that group nodes into communities while taking the bipartite structure into account? That would be great!
I have tried Seurat::FindClusters using either louvain or leidenalg, and the results are very different.
library(Seurat)
pbmc_small <- FindClusters(pbmc_small, resolution = 0.5, algorithm = 1)
# 1 cluster
pbmc_small <- FindClusters(pbmc_small, resolution = 0.5, algorithm = 4)
# 8 clusters
Is this normal?
It would be great if there were some progress report or verbosity built into the resolution_profile() function.
Hi Vincent,
I discovered an issue which I don't fully understand but thought would be worth recording. When an empty graph (no nodes or edges) is supplied to find_partition_temporal (and I imagine other functions too), something happens whereby the C library uses excessive memory or addresses invalid memory, throwing a segmentation fault and causing the OS to kill the process (occasionally crashing my IDE, depending on how it's run).
The error message when running within joblib reads: "A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGSEGV(-11)}".
Having discovered the issue, I obviously removed the empty graphs from the analysis before running community detection (which I probably should have been doing anyway...). It would be good for the library to handle these cases in the future, before the empty graph reaches the C layer and throws this ugly, Python-crashing error.
Thank you for putting together the package.
Patrick
Hi there,
I'm applying Leiden and similar algorithms and find it extremely useful to fix the community of some nodes to their initial value, i.e. these never change. I have a version of this based on Louvain and Smart Local Moving here:
https://github.com/iosonofabio/slmpy
and would like to ask whether this is already present in the Leiden repo or whether you have an interest in adding it - I could also add it via a PR.
From what I understand of the code, the change would be simple: we add an additional property to the optimiser holding a list of fixed nodes, plus an if/else clause inside the while loop that moves nodes around, so that a fixed node is skipped instead of moved.
What do you think?
Collecting python-igraph>=0.7.1.0 (from leidenalg==0.7.0.post1+19.g0c2a937.dirty)
Building wheels for collected packages: leidenalg
Building wheel for leidenalg (setup.py) ... error
Complete output from command /Users/xiaoxiang/miniconda3/envs/py36/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-req-build-8_p5nia0/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-wheel-y7th2abw --python-tag cp36:
running bdist_wheel
running build
running build_py
UPDATING build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py
set build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py to '0.7.0.post1+19.g0c2a937.dirty'
running build_ext
Build type: dynamic extension
Include path: /usr/local/Cellar/igraph/0.7.1_6/include/igraph
Library path: /usr/local/Cellar/igraph/0.7.1_6/lib
Linked dynamic libraries: igraph
Linked static libraries:
Extra compiler options:
Extra linker options:
building 'leidenalg._c_leiden' extension
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/xiaoxiang/miniconda3/envs/py36/include -arch x86_64 -I/Users/xiaoxiang/miniconda3/envs/py36/include -arch x86_64 -Iinclude -I/usr/local/Cellar/igraph/0.7.1_6/include/igraph -I/Users/xiaoxiang/miniconda3/envs/py36/include/python3.6m -c src/SignificanceVertexPartition.cpp -o build/temp.macosx-10.7-x86_64-3.6/src/SignificanceVertexPartition.o -O3
warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]
In file included from src/SignificanceVertexPartition.cpp:1:
In file included from include/SignificanceVertexPartition.h:4:
include/MutableVertexPartition.h:4:10: fatal error: 'string' file not found
#include <string>
^~~~~~~~
1 warning and 1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for leidenalg
Running setup.py clean for leidenalg
Failed to build leidenalg
Installing collected packages: python-igraph, leidenalg
Running setup.py install for leidenalg ... error
Complete output from command /Users/xiaoxiang/miniconda3/envs/py36/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-req-build-8_p5nia0/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-record-65pc1iij/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.macosx-10.7-x86_64-3.6
creating build/lib.macosx-10.7-x86_64-3.6/leidenalg
copying src/functions.py -> build/lib.macosx-10.7-x86_64-3.6/leidenalg
copying src/Optimiser.py -> build/lib.macosx-10.7-x86_64-3.6/leidenalg
copying src/VertexPartition.py -> build/lib.macosx-10.7-x86_64-3.6/leidenalg
copying src/_version.py -> build/lib.macosx-10.7-x86_64-3.6/leidenalg
copying src/__init__.py -> build/lib.macosx-10.7-x86_64-3.6/leidenalg
Fixing build/lib.macosx-10.7-x86_64-3.6/leidenalg/functions.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/Optimiser.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/VertexPartition.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/__init__.py
Skipping optional fixer: buffer
Skipping optional fixer: idioms
Skipping optional fixer: set_literal
Skipping optional fixer: ws_comma
Fixing build/lib.macosx-10.7-x86_64-3.6/leidenalg/functions.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/Optimiser.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/VertexPartition.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py build/lib.macosx-10.7-x86_64-3.6/leidenalg/__init__.py
Skipping optional fixer: buffer
Skipping optional fixer: idioms
Skipping optional fixer: set_literal
Skipping optional fixer: ws_comma
UPDATING build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py
set build/lib.macosx-10.7-x86_64-3.6/leidenalg/_version.py to '0.7.0.post1+19.g0c2a937.dirty'
running build_ext
Build type: dynamic extension
Include path: /usr/local/Cellar/igraph/0.7.1_6/include/igraph
Library path: /usr/local/Cellar/igraph/0.7.1_6/lib
Linked dynamic libraries: igraph
Linked static libraries:
Extra compiler options:
Extra linker options:
building 'leidenalg._c_leiden' extension
creating build/temp.macosx-10.7-x86_64-3.6
creating build/temp.macosx-10.7-x86_64-3.6/src
gcc -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/xiaoxiang/miniconda3/envs/py36/include -arch x86_64 -I/Users/xiaoxiang/miniconda3/envs/py36/include -arch x86_64 -Iinclude -I/usr/local/Cellar/igraph/0.7.1_6/include/igraph -I/Users/xiaoxiang/miniconda3/envs/py36/include/python3.6m -c src/SignificanceVertexPartition.cpp -o build/temp.macosx-10.7-x86_64-3.6/src/SignificanceVertexPartition.o -O3
warning: include path for stdlibc++ headers not found; pass '-stdlib=libc++' on the command line to use the libc++ standard library instead [-Wstdlibcxx-not-found]
In file included from src/SignificanceVertexPartition.cpp:1:
In file included from include/SignificanceVertexPartition.h:4:
include/MutableVertexPartition.h:4:10: fatal error: 'string' file not found
#include <string>
^~~~~~~~
1 warning and 1 error generated.
error: command 'gcc' failed with exit status 1
----------------------------------------
Command "/Users/xiaoxiang/miniconda3/envs/py36/bin/python -u -c "import setuptools, tokenize;__file__='/private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-req-build-8_p5nia0/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-record-65pc1iij/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/l9/xq949z0n50q3nywwb3rm3w2h0000gn/T/pip-req-build-8_p5nia0/
I tried all the possible solutions in #1, but it still does not work.
For reference
(py36) xiaoxiang@ele:leidenalg$ gcc -v
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.5.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
and
(py36) xiaoxiang@ele:leidenalg$ brew info gcc
gcc: stable 9.2.0 (bottled), HEAD
GNU compiler collection
https://gcc.gnu.org/
/usr/local/Cellar/gcc/9.2.0 (1,462 files, 291.4MB) *
Poured from bottle on 2019-09-08 at 19:13:53
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/gcc.rb
==> Dependencies
Required: gmp ✔, isl ✔, libmpc ✔, mpfr ✔
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 75,462 (30 days), 206,663 (90 days), 923,390 (365 days)
install_on_request: 34,043 (30 days), 93,420 (90 days), 438,057 (365 days)
build_error: 0 (30 days)
Hi, similarly to other users, I cannot install leidenalg version 0.8.2, due to a PEP 517 error.
From my terminal:
`carloleonardi$ pip install leidenalg==0.8.2
Collecting leidenalg==0.8.2
Using cached leidenalg-0.8.2.tar.gz (4.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Requirement already satisfied: python-igraph>=0.8.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from leidenalg==0.8.2) (0.8.3)
Requirement already satisfied: texttable>=1.6.2 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from python-igraph>=0.8.0->leidenalg==0.8.2) (1.6.3)
Building wheels for collected packages: leidenalg
Building wheel for leidenalg (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /Library/Frameworks/Python.framework/Versions/3.9/bin/python3 /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /var/folders/c4/s69c9bcj2j5044vq4f7hh6gh0000gn/T/tmpz_t91kvw
cwd: /private/var/folders/c4/s69c9bcj2j5044vq4f7hh6gh0000gn/T/pip-install-42ipj4km/leidenalg
Complete output (56 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-10.9-x86_64-3.9
creating build/lib.macosx-10.9-x86_64-3.9/leidenalg
copying src/functions.py -> build/lib.macosx-10.9-x86_64-3.9/leidenalg
copying src/Optimiser.py -> build/lib.macosx-10.9-x86_64-3.9/leidenalg
copying src/VertexPartition.py -> build/lib.macosx-10.9-x86_64-3.9/leidenalg
copying src/version.py -> build/lib.macosx-10.9-x86_64-3.9/leidenalg
copying src/__init__.py -> build/lib.macosx-10.9-x86_64-3.9/leidenalg
/private/var/folders/c4/s69c9bcj2j5044vq4f7hh6gh0000gn/T/pip-build-env-wfam4mxf/overlay/lib/python3.9/site-packages/setuptools/lib2to3_ex.py:36: SetuptoolsDeprecationWarning: 2to3 support is deprecated. If the project still requires Python 2 support, please migrate to a single-codebase solution or employ an independent conversion process.
warnings.warn(
Fixing build/lib.macosx-10.9-x86_64-3.9/leidenalg/functions.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/Optimiser.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/VertexPartition.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/version.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/__init__.py
Skipping optional fixer: buffer
Skipping optional fixer: idioms
Skipping optional fixer: set_literal
Skipping optional fixer: ws_comma
Fixing build/lib.macosx-10.9-x86_64-3.9/leidenalg/functions.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/Optimiser.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/VertexPartition.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/version.py build/lib.macosx-10.9-x86_64-3.9/leidenalg/__init__.py
Skipping optional fixer: buffer
Skipping optional fixer: idioms
Skipping optional fixer: set_literal
Skipping optional fixer: ws_comma
running build_ext
running build_c_core
Finding out version number/string... fatal: /private/var/folders/c4/s69c9bcj2j5044vq4f7hh6gh0000gn/T/pip-install-42ipj4km/leidenalg/vendor/source/igraph/../../../.git/modules/vendor/source/igraph is not a Git repository
fatal: /private/var/folders/c4/s69c9bcj2j5044vq4f7hh6gh0000gn/T/pip-install-42ipj4km/leidenalg/vendor/source/igraph/../../../.git/modules/vendor/source/igraph is not a Git repository
-post+
+ /usr/local/bin/glibtoolize --force --copy
glibtoolize: putting auxiliary files in '.'.
glibtoolize: copying file './ltmain.sh'
glibtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
glibtoolize: copying file 'm4/libtool.m4'
glibtoolize: copying file 'm4/ltoptions.m4'
glibtoolize: copying file 'm4/ltsugar.m4'
glibtoolize: copying file 'm4/ltversion.m4'
glibtoolize: copying file 'm4/lt~obsolete.m4'
+ aclocal -I m4 --install
bootstrap.sh: line 27: aclocal: command not found
+ autoheader
bootstrap.sh: line 28: autoheader: command not found
+ autoconf
bootstrap.sh: line 29: autoconf: command not found
+ automake --foreign --add-missing --force-missing --copy
bootstrap.sh: line 31: automake: command not found
+ patch -N -p0 -r-
sh: ../../source/igraph/configure: No such file or directory
We are going to build the C core of igraph.
Source folder: vendor/source/igraph
Build folder: vendor/build/igraph
Install folder: vendor/install/igraph
Bootstrapping igraph...
Configuring igraph...
Could not compile the C core of igraph.
----------------------------------------
ERROR: Failed building wheel for leidenalg
Failed to build leidenalg
ERROR: Could not build wheels for leidenalg which use PEP 517 and cannot be installed directly`
Note that I am able to install leidenalg version 0.7.0, but when I run it through Seurat in R, it cannot find the Leiden algorithm:
FindClusters(PDAC.subset, resolution = 1e-4, algorithm = 4, method = "igraph")
Error: Cannot find Leiden algorithm, please install through pip (e.g. pip install leidenalg).
igraph is already installed (igraph 0.8.3 running inside Python 3.9.0).
Since PEP 517 errors are a common issue (see pydata/bottleneck#281), I tried installing Anaconda and installing leidenalg through Anaconda, but Seurat in R still does not recognize it, returning the same error (Cannot find Leiden algorithm).
I also tried to install Visual Studio as someone suggested (since it should update all the wheels), but that does not work either.
Any idea how to proceed?
Hi,
Thank you for continuously developing this algorithm! I previously used the louvain algorithm for my thesis analysis, and now I am redoing the analysis with the leiden algorithm and multiplex. I stumbled into an error in which partitioning with multiplex requires a max_comm_size (first cell in the photo), but when I add any max_comm_size value (second cell in the photo) it does not recognize it as an argument. This is not an issue with regular partitioning, though.
I worked around this for my purposes by removing the max_comm_size statements on lines 100 and 164. I'm sure there is a better way to address the bug.
Cheers,
Patrick
Hi,
I'm confused about the difference between these two partition methods. Both classes have the same description:
Implements Reichardt and Bornholdt’s Potts model with a configuration null model.
Is this a documentation error?
PS: Thanks for releasing this library! It runs much faster than the built-in igraph community detection methods.
Is this a dictionary mapping nodes to partitions? I can't find an example of this in the documentation.
Dear Vincent,
First, thanks for Leidenalg.
This might be more of a "feature-request", I am not sure.
I use a yml file (like the one below) to make conda environments from conda-forge, which works fine on Linux and Mac. But on Windows it doesn't work, since leidenalg exists only on your channel (https://anaconda.org/vtraag/leidenalg), and on conda-forge (https://anaconda.org/conda-forge/leidenalg) only for Mac and Linux.
name: testenv
channels:
- conda-forge
- defaults
dependencies:
- python=3.*
- numpy
- pandas
- dask
- pyarrow
- jupyterlab
- matplotlib
- networkx
- scipy
- seaborn
- scikit-learn
- statsmodels
- beautifulsoup4
- requests
- python-graphviz
- python-igraph
- leidenalg
When I remove leidenalg from the list above, install the environment, activate it, and then install leidenalg from your channel, it takes forever and a few times gives "Solving environment: failed with initial frozen solve. Retrying with flexible solve" (conda/conda#9367).
My workaround is to remove igraph from the yaml above and, after the environment is set up, install both igraph and leidenalg from your channel. But doing this downgrades my Python from 3.8 to 3.7.6 and igraph from 0.8 to 0.7, along with some other libraries (unfortunately the Anaconda prompt does not scroll up far enough to print the full list here, but it was around 15-20 libraries).
Now, I don't know whether my way is the best way to install on Windows, since (I have been told) the yml file takes care of conflicts and requirements.
What would be your suggestion on how to install on Windows?
Thanks.
Apple deprecated libstdc++ a few years ago, but finally eliminated it from the latest release of the command line tools. This means that installing leidenalg throws an error while compiling (i.e., libstdc++ not found, please use libc++). Changing the compiler manages to install the package but introduces more errors later.
The easiest solution I found was to delete Xcode 10 and install Xcode 9.4 with the command line tools; everything then works just fine. Technically, deleting Xcode 10 may not be necessary, but I haven't tried this. Moreover, installing just the command line tools (without Xcode) should work as well if you have no other need for Xcode.
The maximum cluster size option is a great feature! Would it be possible to add a minimum cluster size constraint as well? In particular, high-resolution clustering often results in singletons, which are not particularly useful for downstream analysis.
The function set_membership should make use of the vector functionality to simply copy the content.
The Leiden algorithm is working fine, as described in the tutorials. I can see the final community membership via partition.membership.
However, this is only the highest level of the community hierarchy. How can I access the hierarchical tree structure to see community memberships at other, intermediate levels, i.e. as shown in Fig. 3 of the original paper?
Hi Vincent,
I used the Leiden algorithm in a paper that my colleagues and I recently submitted to eLife (biorxiv preprint).
Thank you for making the leidenalg Python package!
I'm having some trouble when I try to cluster multiplex graphs; maybe you can help (this is not relevant to that publication).
I'm stuck because calls to leidenalg.find_partition_multiplex generate an error:
terminate called after throwing an instance of 'Exception'
what(): Node size vector not the same size as the number of nodes.
I've tried find_partition_multiplex with some dummy graphs:
import sys
import igraph
import leidenalg
## Example 1. Simple generated graphs.
# Create two graphs.
g0 = igraph.Graph.GRG(100, 0.2)
g1 = igraph.Graph.GRG(100, 0.1)
# Multiplex partition.
partition = leidenalg.find_partition_multiplex([g0,g1],
leidenalg.ModularityVertexPartition)
As well as some real graphs with actual data (source).
# Download pickled examples.
wget https://github.com/twesleyb/StackOverflow/blob/master/leidenalg-multiplex/example/g0.pickle
wget https://github.com/twesleyb/StackOverflow/blob/master/leidenalg-multiplex/example/g1.pickle
## Example 2. Some real graphs.
# Load the graphs.
g0 = igraph.Graph.Read_Pickle("g0.pickle")
g1 = igraph.Graph.Read_Pickle("g1.pickle")
# Enforce same vertex set.
subg0 = g0.induced_subgraph(g1.vs['name'])
subg1 = g1.induced_subgraph(subg0.vs['name'])
# Multiplex partition.
optimiser = leidenalg.Optimiser()
partition = leidenalg.find_partition_multiplex([subg0,subg1],
leidenalg.ModularityVertexPartition)
Both generate the same error:
terminate called after throwing an instance of 'Exception'
what(): Node size vector not the same size as the number of nodes.
The leidenalg multiplex documentation uses G_telephone and G_email, but without these graphs I am unable to reproduce the example.
Do you recognize this error message?
The error message is unhelpful here, because g0 and g1 should contain the same number of nodes in each example.
When I use CPMVertexPartition or ModularityVertexPartition to cluster an undirected, weighted network in which all edge weights are non-zero, they both return only singleton clusters (i.e. # clusters = # nodes). Is there a parameter I'm not seeing that needs to be changed when working with weighted graphs?
I'm trying to partition a temporal graph, but the example given in the documentation is not working. When running this example:
import igraph as ig
import leidenalg as la
n = 100
G_1 = ig.Graph.Lattice([n], 1)
G_1.vs['id'] = list(range(n))
G_2 = ig.Graph.Lattice([n], 1)
G_2.vs['id'] = list(range(n))
membership, improvement = la.find_partition_temporal([G_1, G_2],la.ModularityVertexPartition,interslice_weight=1)
I get the following error:
File "C:\Anaconda3\lib\site-packages\leidenalg\functions.py", line 283, in find_partition_temporal
partitions.append(partition_type(**arg_dict))
File "C:\Anaconda3\lib\site-packages\leidenalg\VertexPartition.py", line 462, in __init__
initial_membership, weights, node_sizes)
BaseException: Could not construct partition: Node size vector not the same size as the number of nodes.
I'm running versions 0.8.3 (leidenalg) and 0.9.0 (igraph) with Python 3.7.3 on Windows 10. I tried a clean install of both.
EDIT:
I noticed that if I just leave out the "node_sizes" variable (which is typically redundant anyway) from the C call in VertexPartition.py line 461, the code runs fine, i.e.,
self._partition = _c_leiden._new_ModularityVertexPartition(pygraph_t, initial_membership, weights)
However, maybe this has some bad side effects...?
What is the time complexity of this algorithm?
I have an initial community assignment that I'd like to further subcluster using modularity optimization - analogous to the "refinement" step in Leiden. I noticed there were two functions to support this: optimiser.move_nodes_constrained and optimiser.merge_nodes_constrained. The documentation for merge_nodes says "Merging in this case implies that a node will never be removed from a community, only merged with other communities". I'd like to get feedback on whether I'm understanding the differences correctly. On reading the code, it seems that the main difference between "move_nodes" and "merge_nodes" is that "merge_nodes" does a single pass over the nodes:
Line 1276 in 17eceef
Line 1281 in 17eceef
Line 1044 in 17eceef
Lines 731 to 736 in 17eceef
Am I understanding these differences correctly? If so, I am curious why one would prefer to use merge_nodes_constrained rather than move_nodes_constrained, given that move_nodes_constrained seems like it would consider more possible moves and might thus lead to a better value for the objective function. I understand that Leiden relies on "merge_nodes_constrained" for its refinement step based on the documentation at https://leidenalg.readthedocs.io/en/stable/advanced.html#optimiser. Is this simply for speed? Or perhaps because "move_nodes_constrained" can produce poorly-connected communities in the same way the original Louvain algorithm can, as discussed in the Leiden paper?
If someone wants to further subcluster ("refine") an initial community assignment, would you recommend running BOTH move_nodes_constrained AND merge_nodes_constrained, and using the partition that has the best modularity? And are there any other considerations that you'd recommend keeping in mind?
Thanks,
Avanti
Hi, thanks for this great package! Do you have a plan to develop a version for working with NetworkX? Lots of users would benefit.
Thanks for the wonderful library. I am currently trying to use it to cluster a dataset using its kNN graph. Here is how I am doing it:
from sklearn.neighbors import kneighbors_graph
import igraph as ig
import leidenalg as la
k = 10
A = kneighbors_graph(X, k)
sources, targets = A.nonzero()
G = ig.Graph(directed=True)
G.add_vertices(A.shape[0])
edges = list(zip(sources, targets))
G.add_edges(edges)
partition = la.find_partition(G, la.RBConfigurationVertexPartition, resolution_parameter = 1, seed = 42)
I am using scikit-learn to get a kNN graph A, then I need to manually convert it into an igraph object G, which is then passed into find_partition(). I hope I do the conversion correctly (the results make sense, so hopefully it's okay)... But it would be great if find_partition could accept A directly, which is simply a scipy sparse binary matrix in Compressed Sparse Row format. Or am I doing it the wrong way?
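For what it's worth, here is a dependency-free sketch of the conversion step, using a toy adjacency dict in place of A.nonzero(). One subtlety: a kNN graph is asymmetric, so if you want an undirected igraph graph instead of directed=True, each unordered pair should be kept once (this symmetrization choice is my assumption, not something find_partition requires):

```python
# Toy 0/1 kNN adjacency: row i -> neighbours of i (stands in for A.nonzero()).
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

# Directed edge list, as in the snippet above.
edges = [(i, j) for i, nbrs in adj.items() for j in nbrs]

# Undirected variant: kNN is asymmetric (0 lists 1, but 1 may not list 0),
# so symmetrize by keeping each unordered pair exactly once.
undirected_edges = sorted({tuple(sorted(e)) for e in edges})
print(undirected_edges)  # → [(0, 1), (0, 2), (2, 3)]
```

The deduplicated list can then be passed to G.add_edges() on an undirected igraph.Graph.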
Hi Leiden team,
Thank you so much for your great work. I am using leiden for clustering of 720K points with 32-dimensional input. It has been 24 hours and it is still running. I notice that after a certain point, my CPU usage is only 100% instead of multiple cores. I was wondering if this is normal, and if leiden has multi-core support. Thank you again!
Hi, I run the following code
import copy
import leidenalg as la
import pickle
G = pickle.load(open('G.pickle', "rb" ))
partition = la.find_partition(G, la.CPMVertexPartition,resolution_parameter = 0.0004)
partition_copy = copy.deepcopy(partition)
And I get this error
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-68-73b2513dd65d> in <module>()
1 partition = la.find_partition(G, la.CPMVertexPartition, resolution_parameter = 0.0004)
----> 2 partition_copy = copy.deepcopy(partition)
~\AppData\Local\Continuum\anaconda3\lib\copy.py in deepcopy(x, memo, _nil)
178 y = x
179 else:
--> 180 y = _reconstruct(x, memo, *rv)
181
182 # If is its own copy, don't memoize.
~\AppData\Local\Continuum\anaconda3\lib\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)
278 if state is not None:
279 if deep:
--> 280 state = deepcopy(state, memo)
281 if hasattr(y, '__setstate__'):
282 y.__setstate__(state)
~\AppData\Local\Continuum\anaconda3\lib\copy.py in deepcopy(x, memo, _nil)
148 copier = _deepcopy_dispatch.get(cls)
149 if copier:
--> 150 y = copier(x, memo)
151 else:
152 try:
~\AppData\Local\Continuum\anaconda3\lib\copy.py in _deepcopy_dict(x, memo, deepcopy)
238 memo[id(x)] = y
239 for key, value in x.items():
--> 240 y[deepcopy(key, memo)] = deepcopy(value, memo)
241 return y
242 d[dict] = _deepcopy_dict
~\AppData\Local\Continuum\anaconda3\lib\copy.py in deepcopy(x, memo, _nil)
167 reductor = getattr(x, "__reduce_ex__", None)
168 if reductor:
--> 169 rv = reductor(4)
170 else:
171 reductor = getattr(x, "__reduce__", None)
TypeError: can't pickle PyCapsule objects
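The partition object holds a reference to the underlying igraph graph, whose C handle (a PyCapsule) cannot be pickled, and deepcopy falls back to pickling. A workaround I would try (my suggestion, not an official API): copy only the plain-data attributes you need, such as the membership list. A stdlib-only sketch of the failure mode, with a generator standing in for the PyCapsule:

```python
import copy

class ToyPartition:
    """Stand-in for a leidenalg partition: plain membership data plus an
    un-picklable handle (a generator here, a PyCapsule in the real object)."""
    def __init__(self, membership):
        self.membership = membership
        self._handle = (i for i in range(3))  # cannot be pickled/deep-copied

p = ToyPartition([0, 0, 1, 1])

try:
    copy.deepcopy(p)            # fails, like deepcopy(partition) above
except TypeError as e:
    print("deepcopy failed:", e)

membership_copy = list(p.membership)  # copy just the data you need
membership_copy[0] = 99
print(p.membership)  # → [0, 0, 1, 1]; original list is untouched
```

Since the membership vector (plus the graph, which you already have) is enough to reconstruct a partition, copying it is usually all a deep copy would buy you anyway.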
Do you perhaps want to link to https://www.nature.com/articles/s41598-019-41695-z as Ref 1 in README.rst?
Hello again! I have a question: I have a network and run Leiden on it. After some calculations I may add or remove an edge from the network (an edge only, not a node), and after that I want to re-run Leiden to see if there is any change in the partitioning (which should only be around the affected area). Is it possible to re-run Leiden incrementally? One major problem is that I get different partitions every time I run the algorithm, even when I give the same seed.
Would using the temporal variant help, i.e. passing the initial graph together with the changed one (the one with the edge changes)?
The library supports a variety of methods (modularity, CPM, etc.) but I couldn't find any guidance as to what should be a reasonable default. The Scientific Reports paper only describes modularity and CPM, but the package implements many further methods. Is there a beginner's guide to how to choose one of them?
Here https://leidenalg.readthedocs.io/en/latest/intro.html you use la.ModularityVertexPartition for the first example. However, if I try to pass resolution_parameter, I get an error message. Is there any way to use resolution with modularity?
The same documentation page shows how to use la.CPMVertexPartition with a specified resolution. But I noticed that scanpy implements leidenalg with la.RBConfigurationVertexPartition by default, which also allows specifying a resolution. How can I choose between those and the other available methods?
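As I understand the documentation (so treat the scaling details as my reading, not gospel), these quality functions are closely related. RBConfiguration multiplies the configuration-model null term by a resolution parameter γ, and at γ = 1 it coincides, up to the constant factor 2m, with plain modularity, which is why ModularityVertexPartition accepts no resolution_parameter. CPM instead replaces the degree-based null model by a constant:

```latex
% RBConfiguration: resolution-scaled configuration null model
H_{\mathrm{RB}}(\gamma) = \sum_{ij} \Bigl( A_{ij} - \gamma\,\frac{k_i k_j}{2m} \Bigr)\,\delta(\sigma_i, \sigma_j),
\qquad H_{\mathrm{RB}}(1) = 2m\,Q_{\text{modularity}}

% CPM: constant null model; community c has e_c internal edges and n_c nodes
H_{\mathrm{CPM}}(\gamma) = \sum_{c} \Bigl( e_c - \gamma\binom{n_c}{2} \Bigr)
```

The 2m scaling factor would also explain reports elsewhere in this thread of quality() values that differ in magnitude between implementations even when the partitions look similar.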
Hi,
I have tried the Leiden package. When running it with the famous 'karate club' network, it produced an error: 'Graph' object has no attribute 'vcount'. What could be the reason?
import leidenalg
import igraph as ig
import networkx as nx
G = nx.karate_club_graph()
part = leidenalg.find_partition(G, leidenalg.ModularityVertexPartition)
It might actually make sense to rename membership to fixed_membership. Additionally, a sort of relabelling object might be more appropriate than the map, also in terms of speed. This would essentially consist of two vectors,
vector<size_t> fixed_nodes;
vector<size_t> fixed_memberships;
where fixed_nodes contains the nodes that are fixed, and fixed_memberships is of length the number of nodes in the graph (n) and contains for each node the fixed membership (which is only meaningful for a node in fixed_nodes). Possibly, an additional boolean vector is_node_fixed could be used to determine quickly whether a node is fixed or not, but I don't immediately know whether that is necessary. Or we could work with these two vectors directly.
In addition, we should probably rename fixed_nodes in Optimiser to is_node_fixed for clarity.
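A pure-Python sketch of the representation proposed above (the names are carried over from the discussion; the helper function is mine, just to make the invariants concrete):

```python
n = 6  # number of nodes in the graph

fixed_nodes = [1, 4]             # nodes whose community may not change
fixed_membership = [0] * n       # length n; meaningful only at fixed_nodes
fixed_membership[1] = 2
fixed_membership[4] = 0

# Optional boolean vector for O(1) "is this node fixed?" lookups.
is_node_fixed = [False] * n
for v in fixed_nodes:
    is_node_fixed[v] = True

def effective_membership(v, membership):
    """Fixed nodes keep their pinned community; others keep their own."""
    return fixed_membership[v] if is_node_fixed[v] else membership[v]

print([effective_membership(v, [9] * n) for v in range(n)])
# → [9, 2, 9, 9, 0, 9]
```

The trade-off is the usual one: the boolean vector costs O(n) memory but makes the per-node check constant time, whereas iterating fixed_nodes alone keeps memory proportional to the number of fixed nodes.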
I am trying to use leidenalg in the seurat::FindClusters function. The function runs correctly on the small dataset pbmc_small.
However, when running
dim(object)
[1] 23097 37996
object <- FindClusters(object, resolution = 0.2, algorithm = 4, method="igraph")
it throws an error
Error in sample.int(length(x), size, replace, prob) :
invalid first argument
In addition: Warning message:
In max(connectivity, na.rm = T) :
no non-missing arguments to max; returning -Inf
traceback()
7: sample.int(length(x), size, replace, prob)
6: sample(x = names(x = connectivity[mi]), 1)
5: GroupSingletons(ids = ids, SNN = object, group.singletons = group.singletons,
verbose = verbose)
4: FindClusters.default(object = object[[graph.name]], modularity.fxn = modularity.fxn,
initial.membership = initial.membership, weights = weights,
node.sizes = node.sizes, resolution = resolution, method = method,
algorithm = algorithm, n.start = n.start, n.iter = n.iter,
random.seed = random.seed, group.singletons = group.singletons,
temp.file.location = temp.file.location, edge.file.name = edge.file.name,
verbose = verbose, ...)
3: FindClusters(object = object[[graph.name]], modularity.fxn = modularity.fxn,
initial.membership = initial.membership, weights = weights,
node.sizes = node.sizes, resolution = resolution, method = method,
algorithm = algorithm, n.start = n.start, n.iter = n.iter,
random.seed = random.seed, group.singletons = group.singletons,
temp.file.location = temp.file.location, edge.file.name = edge.file.name,
verbose = verbose, ...)
2: FindClusters.Seurat(object, resolution = 0.2, algorithm = 4)
1: FindClusters(object, resolution = 0.2, algorithm = 4)
It seems to be related to random sampling of rows in the matrix.
Is there any solution to this issue?
Hello,
I noticed that when I updated to the latest leidenalg (0.8.0), the runtime of calling find_partition() on a graph of around 1M nodes was over an hour, compared to the usual 5-10 minutes.
I uninstalled v0.8.0 and reinstalled 0.7.0 (pip install leidenalg==0.7.0) and found that the runtime of find_partition() went back to the fast 5-10 minutes. I'm using Python 3.7.
This is how I call the function:
partition = leidenalg.find_partition(G_sim, leidenalg.ModularityVertexPartition, n_iterations=n_iter_leiden, seed=partition_seed)
Is there some parameter I am overlooking that needs to be set when using find_partition() in 0.8.0 ? The function (called from 0.8.0) runs to completion on smaller graphs where the runtime difference is less noticeable.
Thank you
After installing leidenalg with conda I got the following error:
ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found
I could solve it by downgrading to libgcc==5.2.0 (my original version was 7.2.0).
Initially, I tried to install with pip but got the error: Could not download and compile the C core of igraph. Probably, I could have solved this issue by setting the source of igraph, but I tried conda instead.
Maybe this is helpful if anyone has the same problem.
In the documentation for slices_to_layers, the parameter vertex_id_attr is not included.
If the membership is set to any number that is larger than the number of nodes (by move_nodes or set_membership or using initial_membership), this may lead to some problems.
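A simple guard (my own sketch, not part of the library) is to renumber an arbitrary membership vector to consecutive labels 0..k-1 before handing it over, so no label can exceed the number of nodes:

```python
def renumber(membership):
    """Map arbitrary community labels to consecutive integers 0..k-1,
    in order of first appearance."""
    relabel = {}
    return [relabel.setdefault(c, len(relabel)) for c in membership]

print(renumber([10, 10, 500, 3, 500]))  # → [0, 0, 1, 2, 1]
```

Applying this to any user-supplied initial_membership (or after set_membership) keeps the labels within a safe range without changing which nodes share a community.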
Hi,
I corresponded with you several years ago when I was writing an R interface (leidenbase) to your excellent leiden community detection algorithms and functions. Several users recently reported crashes on CentOS 8 systems. This is similar to issue #12 on GitHub; that is, an exception interrupts execution with the following message and crash:
/usr/include/c++/9/bits/stl_vector.h:1042: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference = double&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
I tend to write narratives, but I'll try to keep to the essentials for your sake, so I omit the multitude of dead-end paths that I explored before I became more confident that my code is not the immediate problem.
I reproduced the problem on R version 4.0.2 under CentOS Linux release 8.2.2004 and discovered that the file
/usr/lib64/R/etc/Makeconf
has C*FLAGS compiler options that include '-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions'. R on a Debian buster system initially runs without error but adding these options to the /usr/lib/R/etc/Makeconf C*FLAGS variables (and reinstalling leidenbase) causes the same crash as on CentOS 8.
I gathered just enough confidence to submit this issue when I found that I can get the same crash by adding resolution_parameter=0.5 to leidenalg.find_partition in the following leidenalg-based python program (when run on CentOS 8, which I hope uses the '-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions' compiler options):
(attachment: edgelist.edg.gz)
#!/usr/bin/env python3
import sys
import platform
import leidenalg
import igraph as ig
print('python version info: %s' % ( platform.python_version() ) )
print('leidenalg version: %s' % ( leidenalg.__version__ ) )
g = ig.read( filename='edgelist.edg', format='edgelist')
part = leidenalg.find_partition(g, partition_type=leidenalg.CPMVertexPartition, n_iterations=2, resolution_parameter=0.5)
print(part)
The output is
python version info: 3.6.8
leidenalg version: 0.8.3
/usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference = double&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
Aborted (core dumped)
I don't know how to set C*FLAGS for python package installation, which is why I ran this test program on the CentOS 8 system. (If you know how to set those flags and are willing to share the information, I would appreciate it greatly.) I attach the edgelist file, in case it may be of use to you.
Somewhere along the line of investigation, I returned to the Debian system with the modified C*FLAGS Makeconf variables (and removed the optimization flags) and ran R under the debugger. The stack dump after the crash is:
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff793b535 in __GI_abort () at abort.c:79
#2 0x00007fffe78668e9 in std::__replacement_assert (__file=0x7fffe7c2d258 "/usr/include/c++/8/bits/stl_vector.h", __line=932,
__function=0x7fffe7c2d500 <std::vector<double, std::allocator<double> >::operator[](unsigned long)::__PRETTY_FUNCTION__> "std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = double; _Alloc = std::allocator<double>; std::vector<_Tp, _Alloc>::reference ="...,
__condition=0x7fffe7c2d228 "__builtin_expect(__n < this->size(), true)") at /usr/include/x86_64-linux-gnu/c++/8/bits/c++config.h:447
#3 0x00007fffe79b1289 in std::vector<double, std::allocator<double> >::operator[] (this=0x555558a1f560, __n=190) at /usr/include/c++/8/bits/stl_vector.h:932
#4 0x00007fffe7bc99c5 in MutableVertexPartition::cache_neigh_communities (this=0x555558a1f410, v=452, mode=IGRAPH_ALL) at leidenalg/src/MutableVertexPartition.cpp:815
#5 0x00007fffe7bc9c91 in MutableVertexPartition::get_neigh_comms (this=0x555558a1f410, v=452, mode=IGRAPH_ALL) at leidenalg/src/MutableVertexPartition.cpp:880
#6 0x00007fffe7bd1a89 in Optimiser::move_nodes (this=0x7ffffffb88c0, partitions=std::vector of length 1, capacity 1 = {...},
layer_weights=std::vector of length 1, capacity 1 = {...}, is_membership_fixed=std::vector<bool> of length 1500, capacity 1536 = {...}, consider_comms=2,
consider_empty_community=1, renumber_fixed_nodes=false, max_comm_size=0) at leidenalg/src/Optimiser.cpp:595
#7 0x00007fffe7bcf2cf in Optimiser::optimise_partition (this=0x7ffffffb88c0, partitions=std::vector of length 1, capacity 1 = {...},
layer_weights=std::vector of length 1, capacity 1 = {...}, is_membership_fixed=std::vector<bool> of length 1500, capacity 1536 = {...}, max_comm_size=0)
at leidenalg/src/Optimiser.cpp:159
#8 0x00007fffe7bcec25 in Optimiser::optimise_partition (this=0x7ffffffb88c0, partition=0x555558a1f410,
is_membership_fixed=std::vector<bool> of length 1500, capacity 1536 = {...}, max_comm_size=0) at leidenalg/src/Optimiser.cpp:71
#9 0x00007fffe7bceafe in Optimiser::optimise_partition (this=0x7ffffffb88c0, partition=0x555558a1f410,
is_membership_fixed=std::vector<bool> of length 1500, capacity 1536 = {...}) at leidenalg/src/Optimiser.cpp:63
#10 0x00007fffe7bcea61 in Optimiser::optimise_partition (this=0x7ffffffb88c0, partition=0x555558a1f410) at leidenalg/src/Optimiser.cpp:58
#11 0x00007fffe7bb8be2 in leidenFindPartition (pigraph=0x7ffffffb8b20, partitionType="CPMVertexPartition", pinitialMembership=0x0, pedgeWeights=0x0, pnodeSizes=0x0,
seed=123456, resolutionParameter=0.5, numIter=2, pmembership=0x7ffffffb8aa0, pweightInCommunity=0x7ffffffb8ac0, pweightFromCommunity=0x7ffffffb8ae0,
pweightToCommunity=0x7ffffffb8b00, pweightTotal=0x7ffffffb89f0, pquality=0x7ffffffb89f8, pmodularity=0x7ffffffb8a00, psignificance=0x7ffffffb8a08, pstatus=0x7ffffffb89e8)
at leidenFindPartition.cpp:207
#12 0x00007fffe7bbc124 in _leiden_find_partition (igraph=0x55555b9a3998, partition_type=0x55555b5b3ad0, initial_membership=0x555555574590, edge_weights=0x555555574590,
node_sizes=0x555555574590, seed=0x55555a244f98, resolution_parameter=0x55555a245008, num_iter=0x55555ae0f1e0) at leidenFindPartitionR2C.cpp:184
#13 0x00007ffff7c26262 in ?? () from /usr/lib/R/lib/libR.so
#14 0x00007ffff7c26815 in ?? () from /usr/lib/R/lib/libR.so
#15 0x00007ffff7c718d8 in Rf_eval () from /usr/lib/R/lib/libR.so
#16 0x00007ffff7c76479 in ?? () from /usr/lib/R/lib/libR.so
.
.
.
It looks like the problem is detected somewhere in the system libraries after executing
#4 0x00007fffe7bc99c5 in MutableVertexPartition::cache_neigh_communities (this=0x555558a1f410, v=452, mode=IGRAPH_ALL) at leidenalg/src/MutableVertexPartition.cpp:815
// Reset cached communities
for (size_t c : *_cached_neighs_comms)
(*_cached_weight_tofrom_community)[c] = 0;
I hope that I am not misleading you and myself when I submit this issue.
As an aside, I notice that you contributed leiden-related functions to igraph and rigraph. Do you consider those to be ready for 'production' use? If so, I may consider using those rather than leidenbase.
Ever grateful,
Brent