gudhi / tda-tutorial
A set of Jupyter notebooks for practicing TDA with the Python Gudhi library, together with popular machine learning and data science libraries.
License: MIT License
Hello, thank you for your work.
When I run Tuto-GUDHI-alpha-complex-visualization.py, this error message appears:
ac = gudhi.AlphaComplex(off_file = 'datasets/tore3D_1307.off')
AttributeError: module 'gudhi' has no attribute 'AlphaComplex'
This error message also appears in other scripts. Do you know how to solve it? Did the problem occur when I installed the gudhi library?
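A common cause for a missing attribute like this is either an incomplete gudhi build (older source builds compiled without CGAL may not expose AlphaComplex) or a local file or folder named gudhi shadowing the installed package. A small stdlib diagnostic sketch, shown here with the stdlib math module as a stand-in (run the same two checks with "gudhi" and "AlphaComplex"):

```python
import importlib

# Stand-in diagnostic: where does the module load from, and does the
# attribute exist? Replace "math"/"sqrt" with "gudhi"/"AlphaComplex".
mod = importlib.import_module("math")

# If __file__ points into your working directory instead of site-packages,
# a local file is shadowing the real library.
print(getattr(mod, "__file__", "(built-in, no file)"))

# If this prints False for gudhi, the installed build lacks the class.
print(hasattr(mod, "sqrt"))
```

If the attribute is missing from a genuine site-packages install, reinstalling from the official wheels (pip install --upgrade gudhi) is usually the simplest fix.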
Hi all,
I got the following error when I execute the function gd.plot_persistence_diagram(BarCodes_Rips0)
in the notebook Tuto-GUDHI-persistence-diagrams.ipynb.
My version of GUDHI is 3.2.0.
Any hints on how to solve this issue?
Thanks!
Error in callback <function install_repl_displayhook.<locals>.post_execute at 0x7f8adcf0ef28> (for post_execute):
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex)
334 cwd=self.texcache,
--> 335 stderr=subprocess.STDOUT)
336 except subprocess.CalledProcessError as exc:
~/anaconda3/lib/python3.6/subprocess.py in check_output(timeout, *popenargs, **kwargs)
335 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 336 **kwargs).stdout
337
~/anaconda3/lib/python3.6/subprocess.py in run(input, timeout, check, *popenargs, **kwargs)
417 raise CalledProcessError(retcode, process.args,
--> 418 output=stdout, stderr=stderr)
419 return CompletedProcess(process.args, retcode, stdout, stderr)
CalledProcessError: Command '['latex', '-interaction=nonstopmode', '--halt-on-error', '/root/.cache/matplotlib/tex.cache/c1d089b6baf6a9ebfc28a7497a9f3957.tex']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py in post_execute()
147 def post_execute():
148 if matplotlib.is_interactive():
--> 149 draw_all()
150
151 # IPython >= 2
~/anaconda3/lib/python3.6/site-packages/matplotlib/_pylab_helpers.py in draw_all(cls, force)
134 for f_mgr in cls.get_all_fig_managers():
135 if force or f_mgr.canvas.figure.stale:
--> 136 f_mgr.canvas.draw_idle()
137
138 atexit.register(Gcf.destroy_all)
~/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py in draw_idle(self, *args, **kwargs)
2053 if not self._is_idle_drawing:
2054 with self._idle_draw_cntx():
-> 2055 self.draw(*args, **kwargs)
2056
2057 def draw_cursor(self, event):
~/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py in draw(self)
431 # if toolbar:
432 # toolbar.set_cursor(cursors.WAIT)
--> 433 self.figure.draw(self.renderer)
434 # A GUI class may be need to update a window using this draw, so
435 # don't forget to call the superclass.
~/anaconda3/lib/python3.6/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:
~/anaconda3/lib/python3.6/site-packages/matplotlib/figure.py in draw(self, renderer)
1473
1474 mimage._draw_list_compositing_images(
-> 1475 renderer, self, artists, self.suppressComposite)
1476
1477 renderer.close_group('figure')
~/anaconda3/lib/python3.6/site-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
139 if not_composite or not has_images:
140 for a in artists:
--> 141 a.draw(renderer)
142 else:
143 # Composite any adjacent images together
~/anaconda3/lib/python3.6/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:
~/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_base.py in draw(self, renderer, inframe)
2605 renderer.stop_rasterizing()
2606
-> 2607 mimage._draw_list_compositing_images(renderer, self, artists)
2608
2609 renderer.close_group('axes')
~/anaconda3/lib/python3.6/site-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
139 if not_composite or not has_images:
140 for a in artists:
--> 141 a.draw(renderer)
142 else:
143 # Composite any adjacent images together
~/anaconda3/lib/python3.6/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
53 renderer.start_filter()
54
---> 55 return draw(artist, renderer, *args, **kwargs)
56 finally:
57 if artist.get_agg_filter() is not None:
~/anaconda3/lib/python3.6/site-packages/matplotlib/axis.py in draw(self, renderer, *args, **kwargs)
1190 ticks_to_draw = self._update_ticks(renderer)
1191 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
-> 1192 renderer)
1193
1194 for tick in ticks_to_draw:
~/anaconda3/lib/python3.6/site-packages/matplotlib/axis.py in _get_tick_bboxes(self, ticks, renderer)
1128 for tick in ticks:
1129 if tick.label1On and tick.label1.get_visible():
-> 1130 extent = tick.label1.get_window_extent(renderer)
1131 ticklabelBoxes.append(extent)
1132 if tick.label2On and tick.label2.get_visible():
~/anaconda3/lib/python3.6/site-packages/matplotlib/text.py in get_window_extent(self, renderer, dpi)
920 raise RuntimeError('Cannot get window extent w/o renderer')
921
--> 922 bbox, info, descent = self._get_layout(self._renderer)
923 x, y = self.get_unitless_position()
924 x, y = self.get_transform().transform_point((x, y))
~/anaconda3/lib/python3.6/site-packages/matplotlib/text.py in _get_layout(self, renderer)
307 w, h, d = renderer.get_text_width_height_descent(clean_line,
308 self._fontproperties,
--> 309 ismath=ismath)
310 else:
311 w, h, d = 0, 0, 0
~/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
230 fontsize = prop.get_size_in_points()
231 w, h, d = texmanager.get_text_width_height_descent(
--> 232 s, fontsize, renderer=self)
233 return w, h, d
234
~/anaconda3/lib/python3.6/site-packages/matplotlib/texmanager.py in get_text_width_height_descent(self, tex, fontsize, renderer)
499 else:
500 # use dviread. It sometimes returns a wrong descent.
--> 501 dvifile = self.make_dvi(tex, fontsize)
502 with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi:
503 page = next(iter(dvi))
~/anaconda3/lib/python3.6/site-packages/matplotlib/texmanager.py in make_dvi(self, tex, fontsize)
363 self._run_checked_subprocess(
364 ["latex", "-interaction=nonstopmode", "--halt-on-error",
--> 365 texfile], tex)
366 for fname in glob.glob(basefile + '*'):
367 if not fname.endswith(('dvi', 'tex')):
~/anaconda3/lib/python3.6/site-packages/matplotlib/texmanager.py in _run_checked_subprocess(self, command, tex)
342 prog=command[0],
343 tex=tex.encode('unicode_escape'),
--> 344 exc=exc.output.decode('utf-8')))
345 _log.debug(report)
346 return report
RuntimeError: latex was not able to process the following string:
b'$0.0$'
Here is the full report generated by latex:
This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015/Debian) (preloaded format=latex)
restricted \write18 enabled.
entering extended mode
(/root/.cache/matplotlib/tex.cache/c1d089b6baf6a9ebfc28a7497a9f3957.tex
LaTeX2e <2016/02/01>
Babel <3.9q> and hyphenation patterns for 3 language(s) loaded.
(/usr/share/texlive/texmf-dist/tex/latex/base/article.cls
Document Class: article 2014/09/29 v1.4h Standard LaTeX document class
(/usr/share/texlive/texmf-dist/tex/latex/base/size10.clo))
! LaTeX Error: File `type1cm.sty' not found.
Type X to quit or <RETURN> to proceed,
or enter new name. (Default extension: sty)
Enter file name:
! Emergency stop.
<read *>
l.4 ^^M
No pages of output.
Transcript written on c1d089b6baf6a9ebfc28a7497a9f3957.log.
(The same CalledProcessError, RuntimeError, and `type1cm.sty` LaTeX report are then repeated a second time, entering through IPython's PNG figure formatter instead of the post_execute callback.)
<Figure size 432x288 with 1 Axes>
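For the record, the root cause above is the missing LaTeX style file type1cm.sty: matplotlib was configured to render text with external LaTeX (text.usetex), and the system TeX Live lacks that style. Two hedged ways out, assuming a Debian-like system: install the missing styles (type1cm.sty ships in packages such as texlive-latex-extra; cm-super and dvipng are commonly needed alongside it), or disable external LaTeX rendering so matplotlib falls back to its built-in mathtext, e.g. via a matplotlibrc fragment:

```
# matplotlibrc fragment: render labels with matplotlib's built-in mathtext
# instead of shelling out to an external latex binary
text.usetex : False
```

The same setting can be flipped at runtime with matplotlib.rcParams["text.usetex"] = False before plotting.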
I cannot use the DowkerComplex. I get the following error:
AttributeError: module 'gudhi' has no attribute 'DowkerComplex'.
Hello! Thanks for your great work!
When using the AlphaDTMFiltration in DTM_filtrations.py, an IndexError was reported: "This vertex is missing, maybe hidden by a duplicate or another heavier point." May I ask what causes this error? How can I fix it?
Hello!
Can you explain how to use the data in human.off? I have seen in Tuto-GUDHI-extended-persistence.ipynb that it denotes each group of 3 values as a triangle, and I am wondering what the data means.
triangles = np.loadtxt("datasets/human.off", dtype=int)[:,1:]
coords = np.loadtxt("datasets/human.txt", dtype=float)
Thanks in advance.
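For context, the OFF format is simple: a header line "OFF", then a line "n_vertices n_faces n_edges", then one x y z line per vertex, then one face line "k i1 ... ik" (k = 3 for triangles, which is why the notebook drops the first column with [:,1:]). A minimal parsing sketch (the read_off helper and the inline sample are illustrative, not part of the tutorial):

```python
# Minimal OFF reader (sketch). Header "OFF", then "nv nf ne",
# then nv vertex lines "x y z", then nf face lines "k i1 ... ik".
def read_off(text):
    lines = [l for l in text.strip().splitlines() if l.strip()]
    assert lines[0].strip() == "OFF"
    nv, nf, _ = map(int, lines[1].split())
    verts = [tuple(map(float, l.split())) for l in lines[2:2 + nv]]
    # Drop the leading vertex count on each face line, keep the indices.
    faces = [tuple(map(int, l.split()[1:])) for l in lines[2 + nv:2 + nv + nf]]
    return verts, faces

sample = """OFF
3 1 0
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2
"""
verts, faces = read_off(sample)
print(verts)  # three 3D points
print(faces)  # one triangle given by vertex indices: (0, 1, 2)
```

So in human.off each face row "3 i j k" means "the triangle on vertices i, j, k", with the vertex coordinates giving the 3D positions on the human surface mesh.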
People regularly ask how to visualize a simplicial complex; I think we should update the tutorial to show some ways to plot one. As examples with 2 different libraries, first an alpha complex:
import numpy as np
import gudhi
ac = gudhi.AlphaComplex(off_file='/home/glisse/repos/gudhi/data/points/tore3D_1307.off')
st = ac.create_simplex_tree()
triangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0])==3 and s[1] <= .1])
points = np.array([ac.get_point(i) for i in range(st.num_vertices())])
import plotly.graph_objects as go
fig = go.Figure(data=[
go.Mesh3d(
x=points[:,0],
y=points[:,1],
z=points[:,2],
i = triangles[:,0],
j = triangles[:,1],
k = triangles[:,2],
)
])
fig.show()
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, only needed on older matplotlib
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in matplotlib 3.6
ax.plot_trisurf(points[:,0], points[:,1], points[:,2], triangles=triangles)
plt.show()
and a Rips complex (we could pick coordinates other than the first 3 for the projection, or possibly multiply by a 5x3 matrix):
import numpy as np
import gudhi
points = np.array(gudhi.read_off('/home/glisse/repos/gudhi/data/points/Kl.off'))
rc = gudhi.RipsComplex(points=points,max_edge_length=.2)
st = rc.create_simplex_tree(max_dimension=2)
triangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0])==3])
import plotly.graph_objects as go
fig = go.Figure(data=[
go.Mesh3d(
x=points[:,0],
y=points[:,1],
z=points[:,2],
i = triangles[:,0],
j = triangles[:,1],
k = triangles[:,2],
)
])
fig.show()
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, only needed on older matplotlib
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in matplotlib 3.6
ax.plot_trisurf(points[:,0], points[:,1], points[:,2], triangles=triangles)
plt.show()
In Tuto-GUDHI-cubical-complexes.ipynb, 1st cell:
from sklearn.neighbors.kde import KernelDensity
FutureWarning: The sklearn.neighbors.kde module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.
Since gudhi 3.7.0, Tuto-GUDHI-representations.ipynb is failing on the model = model.fit(train_dgms, train_labs) cell.
Seems to come from GUDHI/gudhi-devel#719
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[22], line 1
----> 1 model = model.fit(train_dgms, train_labs)
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/model_selection/_search.py:875, in BaseSearchCV.fit(self, X, y, groups, **fit_params)
869 results = self._format_results(
870 all_candidate_params, n_splits, all_out, all_more_results
871 )
873 return results
--> 875 self._run_search(evaluate_candidates)
877 # multimetric is determined here because in the case of a callable
878 # self.scoring the return type is only known after calling
879 first_test_score = all_out[0]["test_scores"]
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/model_selection/_search.py:1389, in GridSearchCV._run_search(self, evaluate_candidates)
1387 def _run_search(self, evaluate_candidates):
1388 """Search all candidates in param_grid"""
-> 1389 evaluate_candidates(ParameterGrid(self.param_grid))
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/model_selection/_search.py:822, in BaseSearchCV.fit.<locals>.evaluate_candidates(candidate_params, cv, more_results)
814 if self.verbose > 0:
815 print(
816 "Fitting {0} folds for each of {1} candidates,"
817 " totalling {2} fits".format(
818 n_splits, n_candidates, n_candidates * n_splits
819 )
820 )
--> 822 out = parallel(
823 delayed(_fit_and_score)(
824 clone(base_estimator),
825 X,
826 y,
827 train=train,
828 test=test,
829 parameters=parameters,
830 split_progress=(split_idx, n_splits),
831 candidate_progress=(cand_idx, n_candidates),
832 **fit_and_score_kwargs,
833 )
834 for (cand_idx, parameters), (split_idx, (train, test)) in product(
835 enumerate(candidate_params), enumerate(cv.split(X, y, groups))
836 )
837 )
839 if len(out) < 1:
840 raise ValueError(
841 "No fits were performed. "
842 "Was the CV iterator empty? "
843 "Were there no candidates?"
844 )
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/parallel.py:1088, in Parallel.__call__(self, iterable)
1085 if self.dispatch_one_batch(iterator):
1086 self._iterating = self._original_iterator is not None
-> 1088 while self.dispatch_one_batch(iterator):
1089 pass
1091 if pre_dispatch == "all" or n_jobs == 1:
1092 # The iterable was consumed all at once by the above for loop.
1093 # No need to wait for async callbacks to trigger to
1094 # consumption.
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/parallel.py:901, in Parallel.dispatch_one_batch(self, iterator)
899 return False
900 else:
--> 901 self._dispatch(tasks)
902 return True
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/parallel.py:819, in Parallel._dispatch(self, batch)
817 with self._lock:
818 job_idx = len(self._jobs)
--> 819 job = self._backend.apply_async(batch, callback=cb)
820 # A job can complete so quickly than its callback is
821 # called before we get here, causing self._jobs to
822 # grow. To ensure correct results ordering, .insert is
823 # used (rather than .append) in the following line
824 self._jobs.insert(job_idx, job)
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/_parallel_backends.py:208, in SequentialBackend.apply_async(self, func, callback)
206 def apply_async(self, func, callback=None):
207 """Schedule a func to be run"""
--> 208 result = ImmediateResult(func)
209 if callback:
210 callback(result)
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/_parallel_backends.py:597, in ImmediateResult.__init__(self, batch)
594 def __init__(self, batch):
595 # Don't delay the application, to avoid keeping the input
596 # arguments in memory
--> 597 self.results = batch()
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/parallel.py:288, in BatchedCalls.__call__(self)
284 def __call__(self):
285 # Set the default nested backend to self._backend but do not set the
286 # change the default number of processes to -1
287 with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 288 return [func(*args, **kwargs)
289 for func, args, kwargs in self.items]
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/joblib/parallel.py:288, in <listcomp>(.0)
284 def __call__(self):
285 # Set the default nested backend to self._backend but do not set the
286 # change the default number of processes to -1
287 with parallel_backend(self._backend, n_jobs=self._n_jobs):
--> 288 return [func(*args, **kwargs)
289 for func, args, kwargs in self.items]
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/utils/fixes.py:117, in _FuncWrapper.__call__(self, *args, **kwargs)
115 def __call__(self, *args, **kwargs):
116 with config_context(**self.config):
--> 117 return self.function(*args, **kwargs)
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:672, in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, return_estimator, split_progress, candidate_progress, error_score)
670 cloned_parameters = {}
671 for k, v in parameters.items():
--> 672 cloned_parameters[k] = clone(v, safe=False)
674 estimator = estimator.set_params(**cloned_parameters)
676 start_time = time.time()
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/base.py:87, in clone(estimator, safe)
79 raise TypeError(
80 "Cannot clone object '%s' (type %s): "
81 "it does not seem to be a scikit-learn "
82 "estimator as it does not implement a "
83 "'get_params' method." % (repr(estimator), type(estimator))
84 )
86 klass = estimator.__class__
---> 87 new_object_params = estimator.get_params(deep=False)
88 for name, param in new_object_params.items():
89 new_object_params[name] = clone(param, safe=False)
File ~/miniconda3/envs/test_notebooks/lib/python3.8/site-packages/sklearn/base.py:170, in BaseEstimator.get_params(self, deep)
168 out = dict()
169 for key in self._get_param_names():
--> 170 value = getattr(self, key)
171 if deep and hasattr(value, "get_params") and not isinstance(value, type):
172 deep_items = value.get_params().items()
AttributeError: 'Landscape' object has no attribute 'sample_range'
https://github.com/GUDHI/TDA-tutorial/blob/master/Tuto-GUDHI-simplicial-complexes-from-data-points.ipynb contains a link to https://render.githubusercontent.com/view/Tuto-GUDHI-simplicial-complexes-from-distance-matrix.ipynb which doesn't work. What is the preferred way of linking to another tutorial?
I think it could be nice to have a notebook on ToMATo (so that all examples are in one place).
Maybe we can just copy-paste Marc's code in a notebook: https://gudhi.inria.fr/python/latest/clustering.html
Now that ATOL is in GUDHI, we should import the corresponding tutorial and tweak it so it uses the version of ATOL in GUDHI.
With gudhi 3.1.0, the axes should not be handled this way:
# Usual Rips complex on X
st_rips = gudhi.RipsComplex(X).create_simplex_tree(max_dimension=2) # create a Rips complex
diagram_rips = st_rips.persistence() # compute the persistence
# plot the persistence diagram
fig, ax = plt.subplots() # this Axes is never passed to the plot function
gudhi.plot_persistence_diagram(diagram_rips)
ax.set_title('Persistence diagram of the Rips complex') # titles an Axes the plot did not use
It is better to do:
# plot the persistence diagram
gudhi.plot_persistence_diagram(diagram_rips)
plt.title('Persistence diagram of the Rips complex')
For axes management, cf. the persistence density documentation.
Hello, when I add the sulc value of each node to the filtration, an error message appears:
for i in range(len(data_vertices)):
#print("filteration_value[0]:",filteration_value[0])#filteration_value[i]: 4.8394809709861875e-05
#print(len(data_vertices),filteration_value.shape)#264 (264,)
st.assign_filtration([i], filtration =filteration_value[i])
D:\anaconda3.4\python.exe F:/free_output/PH-brain/Brain_area/compute_betti_gudhi1.py
Process finished with exit code -1073741819 (0xC0000005)
I have found some solutions, but none of them helped. Do you know how to solve this?
Hello, sorry to bother you again.
Q1: I have used the brain cortical curvature of each vertex as the filtration value, but when I use the following code:
for i in range(len(data_vertices)):
st.assign_filtration([i], filtration = filteration_value[i])
#st.assign_filtration([0], filtration = filteration_value[0])#success!
#st.assign_filtration([0], filtration = filteration_value[i])#success!
error: Process finished with exit code -1073741819 (0xC0000005)
I have tried some solutions, but none of them work! I'm curious why filtration values can only be fed one node at a time.
Q2: I add the filtration values manually:
st.assign_filtration([0], filtration = filteration_value[0])
st.assign_filtration([1], filtration = filteration_value[1])
st.assign_filtration([2], filtration = filteration_value[2])
st.assign_filtration([3], filtration = filteration_value[3])
But you said to use extended_persistence. I have used it, but:
st2 = gd.SimplexTree()
st2.extend_filtration()
dgms2 = st2.extended_persistence(min_persistence=1e-5)
gd.plot_persistence_barcode(dgms2)#IndexError: list index out of range
plt.show()
the error message:
IndexError: list index out of range
but when I use gd.plot_persistence_barcode(dgm) with the ordinary persistence barcode, there is no problem. What code should I use to get the extended persistence barcode directly?
Q3: I've added filtration values to each node. What pairwise distances do I need to use to weight the edges?
Q4: After I get the extended persistence barcode, how do I get the Betti numbers?
My datasets and code are at: https://github.com/tanjia123456/Brain_area_PH/blob/main/compute_betti_gudhi3.py
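On Q4: a Betti number at a filtration value t can be read off a barcode by counting the intervals of the right dimension that contain t. A minimal stdlib sketch, using a small hypothetical barcode (not taken from the dataset above) in the (dimension, (birth, death)) format that gudhi's persistence() returns:

```python
# Betti number beta_k at filtration value t: count the dimension-k
# intervals (birth, death) that are alive at t, i.e. birth <= t < death.
def betti_at(barcode, dim, t):
    return sum(1 for d, (birth, death) in barcode if d == dim and birth <= t < death)

# Hypothetical barcode for illustration.
barcode = [
    (0, (0.0, float("inf"))),  # one connected component that never dies
    (0, (0.0, 0.5)),           # a component that merges at 0.5
    (1, (0.3, 0.8)),           # a loop alive on [0.3, 0.8)
]
print(betti_at(barcode, 0, 0.2))  # 2 components at t=0.2
print(betti_at(barcode, 1, 0.5))  # 1 loop at t=0.5
```

The same counting works on the intervals returned by extended_persistence, keeping in mind that its output is grouped into four sub-diagrams (ordinary, relative, extended+, extended-).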
I have POT, pymanopt, and autograd installed following the POT instructions at https://pythonot.github.io/index.html. I cloned this GitHub repo to try the tutorials; everything installed, and most of them work. But when I run Tuto-GUDHI-Barycenters-of-persistence.diagrams.ipynb, this happens when it calls bary:
b, log = bary(diags,
init=0,
verbose=True) # we initialize our estimation on the first diagram (the red one.)
NameError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_10868/2194333359.py in
----> 1 b, log = bary(diags,
2 init=0,
3 verbose=True) # we initialize our estimation on the first diagram (the red one.)
~\anaconda3\lib\site-packages\gudhi\wasserstein\barycenter.py in lagrangian_barycenter(pdiagset, init, verbose)
92 # the points of Y.
93 # If points disappear, there thrown
---> 94 # on [0,0] by default.
95 new_created_points = [] # will store potential new points.
96
~\anaconda3\lib\site-packages\gudhi\wasserstein\wasserstein.py in wasserstein_distance(X, Y, matching, order, internal_p, enable_autodiff, keep_essential_parts)
319 if matching:
320 assert not enable_autodiff, "matching and enable_autodiff are currently incompatible"
--> 321 P = ot.emd(a=a,b=b,M=M, numItermax=2000000)
322 ot_cost = np.sum(np.multiply(P,M))
323 P[-1, -1] = 0 # Remove matching corresponding to the diagonal
NameError: name 'ot' is not defined
I checked the files and they import ot correctly. I even tried to put "import ot" in the exact subprocess, but even then it does not recognize it.
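A NameError on ot despite POT being installed usually means the notebook kernel is running a different Python than the one pip installed POT into. A small stdlib sketch of the check (shown with the always-present stdlib module "json" as a stand-in; run it with "ot" inside the failing kernel):

```python
import importlib.util
import sys

# Which interpreter is this kernel actually running? If it differs from the
# Python that pip installed POT into, "import ot" will fail here.
print(sys.executable)

# Is a module visible to *this* interpreter? Use "ot" in the failing kernel;
# "json" is a stdlib stand-in that always exists.
spec = importlib.util.find_spec("json")
print(spec is not None)  # True means importable from this interpreter
```

If find_spec("ot") returns None in the kernel, installing POT with the kernel's own interpreter (e.g. running "%pip install POT" inside the notebook) typically resolves it.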
Hello, thank you for your work.
I want to use persistent homology (PH) to extract topological features and perform post-processing on the surface segmentation of the cerebral cortex. But I am currently a little confused as to whether I can apply PH to my subject. I see that the Tuto-GUDHI-extended-persistence.py file processes 3D data. For the brain, I also obtained a 3D trisurf based on the three-dimensional coordinate points (coord) and the triangle connectivity (triangle). For this brain structure, should I use the persistence barcode or the extended persistence barcode? Or some other kind of persistence? Do you have any suggestions for me?
Hi,
Thanks for the tutorial on point cloud optimization with tensorflow. I am wondering if there is currently a CUDA version for computing persistence diagrams? For now it's a bit slow when increasing the number of points, e.g. to one thousand. Thanks in advance.
Hello,
Thanks for maintaining this repo.
Two questions on processing image datasets (e.g. torchvision MNIST).
Example:
input (X1): 1x28x28
emb1 = conv(X1): 1x512
diag = RipsComplex(emb1)
I put RipsComplex, but any object for persistence would be OK.
Many thanks!
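The pipeline above treats each embedding as a point in R^d, so a Rips complex can be built from the pairwise distance matrix of a batch of embeddings. A minimal stdlib sketch of that intermediate step (the tiny 3-point "batch" of 4-dimensional vectors is illustrative; a real conv embedding would be 512-dimensional):

```python
import math

def distance_matrix(batch):
    """Pairwise Euclidean distances between embedding vectors."""
    n = len(batch)
    return [[math.dist(batch[i], batch[j]) for j in range(n)] for i in range(n)]

# Toy batch of three 4-dimensional "embeddings".
batch = [
    (0.0, 0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0, 0.0),
    (0.0, 2.0, 0.0, 0.0),
]
M = distance_matrix(batch)
print(M[0][1])  # 1.0
```

In gudhi this matrix could then feed RipsComplex(distance_matrix=M, max_edge_length=...) instead of the points= form.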
In README.md, in section '4 - Statistical tools for persistence', Tuto-GUDHI-ConfRegions-PersDiag-BottleneckBootstrap.ipynb is referenced.
This file is not in the GitHub repository.
Some tools like https://www.reviewnb.com/ may be helpful for reviews.
In Tuto-GUDHI-persistence-diagrams.ipynb, in the section 'Bottleneck distance', the image Images/MatchingDiag.png is referenced but is not present in the repository.
Hi all,
I got the following error when I execute the following line:
wd = WD.transform([acY.persistence_intervals_in_dimension(1)])
in the notebook Tuto-GUDHI-representations.ipynb. Does it mean some package related to ot is missing?
Could you again give some hints to solve this issue?
Thanks!
NameError Traceback (most recent call last)
<ipython-input-50-237b8d0a401c> in <module>()
----> 1 wd = WD.transform([acY.persistence_intervals_in_dimension(1)])
~/anaconda3/envs/gudhi/lib/python3.6/site-packages/gudhi/representations/metrics.py in transform(self, X)
371 Xfit = pairwise_persistence_diagram_distances(X, self.diagrams_, metric=self.metric, order=self.order, internal_p=self.internal_p, delta=self.delta)
372 else:
--> 373 Xfit = pairwise_persistence_diagram_distances(X, self.diagrams_, metric=self.metric, order=self.order, internal_p=self.internal_p, matching=False)
374 return Xfit
375
~/anaconda3/envs/gudhi/lib/python3.6/site-packages/gudhi/representations/metrics.py in pairwise_persistence_diagram_distances(X, Y, metric, **kwargs)
160 try:
161 from gudhi.wasserstein import wasserstein_distance as pot_wasserstein_distance
--> 162 return pairwise_distances(XX, YY, metric=_sklearn_wrapper(pot_wasserstein_distance, X, Y, **kwargs))
163 except ImportError:
164 print("POT (Python Optimal Transport) is not installed. Please install POT or use metric='wasserstein' or metric='hera_wasserstein'")
~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/pairwise.py in pairwise_distances(X, Y, metric, n_jobs, **kwds)
1245 func = partial(distance.cdist, metric=metric, **kwds)
1246
-> 1247 return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
1248
1249
~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/pairwise.py in _parallel_pairwise(X, Y, func, n_jobs, **kwds)
1088 if n_jobs == 1:
1089 # Special case to avoid picklability checks in delayed
-> 1090 return func(X, Y, **kwds)
1091
1092 # TODO: in some cases, backend='threading' may be appropriate
~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/pairwise.py in _pairwise_callable(X, Y, metric, **kwds)
1126 iterator = itertools.product(range(X.shape[0]), range(Y.shape[0]))
1127 for i, j in iterator:
-> 1128 out[i, j] = metric(X[i], Y[j], **kwds)
1129
1130 return out
~/anaconda3/envs/gudhi/lib/python3.6/site-packages/gudhi/representations/metrics.py in flat_metric(a, b)
126 else:
127 def flat_metric(a, b):
--> 128 return metric(X[int(a[0])], Y[int(b[0])], **kwargs)
129 return flat_metric
130
~/anaconda3/envs/gudhi/lib/python3.6/site-packages/gudhi/wasserstein/wasserstein.py in wasserstein_distance(X, Y, matching, order, internal_p, enable_autodiff)
177 # Note: it is the Wasserstein distance to the power q.
178 # The default numItermax=100000 is not sufficient for some examples with 5000 points, what is a good value?
--> 179 ot_cost = ot.emd2(a, b, M, numItermax=2000000)
180
181 return ot_cost ** (1./order)
NameError: name 'ot' is not defined
In Tuto-GUDHI-ConfRegions-PersDiag-datapoints.ipynb (section 'Confidence regions for persistence diagrams of filtrations based on pairwise distances'), you try to open a file (./datasets/trefoil_dist
that does not exist in the repository.
The optimisation tutorial relies on the tensorflow_addons module to run.
Consider updating the YAML script that sets up the GitHub Actions CI to avoid the failure here.
Hi,
I am trying to build a DTM filtration and need some clarification about the tutorial. Is the DTMFiltration() function from the notebook tutorial equivalent to the DTMRipsComplex() class in the reference manual?
Thanks in advance!
The tutorials Simplicial complexes from data points and Simplicial complexes from distance matrix are almost identical.
I propose that we merge them. The Rips complex takes one of three possible main arguments, so building a Rips complex from data points or from a distance matrix is immediate.
The case of the Alpha complex is less trivial. It theoretically needs a point cloud on which the Delaunay triangulation is computed, so it is natural that the class has a data_points main argument.
Gudhi currently chooses not to allow a distance matrix to be passed to the Alpha complex constructor. The reason seems to be that, for a purely metric space, one must resort to statistical techniques (such as MDS) that embed the data from that metric space into an approximate R^p vector space, producing a point cloud that approximately respects the original pairwise distances and can be used to build the Alpha complex.
However, we could add a distance_matrix argument to the Alpha complex constructor, with proper documentation. In detail, we would acknowledge that this is an approximation and that Gudhi uses sklearn's MDS to perform it.
What are your thoughts on this?
Hello, thank you for your work.
I want to take a closer look at these documents, but some functions are not very clear to me. Is there any documentation for the functions of this library?
... instead of starting the notebook with def DTM.