theislab / cellrank_notebooks

Tutorials and examples for CellRank.

Home Page: https://cellrank.org

License: BSD 3-Clause "New" or "Revised" License

Languages: Jupyter Notebook 100.00%
Topics: tutorials, cellrank, examples

cellrank_notebooks's People

Contributors

dependabot[bot], marius1311, michalk8, pre-commit-ci[bot]


cellrank_notebooks's Issues

Add an example for the transport map kernel

...
As we're working with optimal transport (OT) quite a bit in the group now, it would be nice to have an example showing people how to easily transition to a CellRank analysis, no matter how these transport maps were computed previously.
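
Whatever tool produced the coupling, the core step such an example would need to demonstrate is turning the transport map into a row-stochastic matrix that a CellRank kernel can consume. A minimal sketch of that normalisation, using plain numpy/scipy and assuming the coupling is given as a dense array (no CellRank API is assumed here):

import numpy as np
from scipy.sparse import csr_matrix

def to_transition_matrix(coupling: np.ndarray) -> csr_matrix:
    """Row-normalise an OT coupling so that every row sums to 1."""
    row_sums = coupling.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # keep empty rows at zero instead of dividing by zero
    return csr_matrix(coupling / row_sums)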

How to compute the "develop" probability of any specific group of cells?

Hi cellrank developers,

I found that CellRank can compute the likelihood that each cell develops into the final cells/root cells.

cr.tl.lineages(adata)
cr.pl.lineages(adata)

I wonder whether we could compute the transition probability to any specific group of cells, not only to the final cells or root cells.
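
For what it's worth, one possible route with the estimator API (rather than the high-level cr.tl.lineages wrapper) would be to declare the group of interest as a terminal state and compute absorption probabilities toward it. This is only a sketch on the user's existing adata object: it assumes the CellRank 1.x GPCCA estimator and that set_terminal_states accepts a mapping from a state name to cell IDs; please check the API reference for your version.

from cellrank.tl.kernels import VelocityKernel, ConnectivityKernel
from cellrank.tl.estimators import GPCCA

# Combined kernel as in the tutorials (assumed setup).
vk = VelocityKernel(adata).compute_transition_matrix()
ck = ConnectivityKernel(adata).compute_transition_matrix()
g = GPCCA(0.8 * vk + 0.2 * ck)

# Treat an arbitrary cluster as the target instead of the inferred final states
# (assumption: set_terminal_states accepts {state name: cell IDs}).
target_cells = adata.obs_names[adata.obs["clusters"] == "Beta"]
g.set_terminal_states({"Beta": target_cells})

# Fate/absorption probabilities toward that group.
g.compute_absorption_probabilities()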

Pancreas basic tutorial - Gene expression trends: Var annotations not found

I'm trying out some examples from the notebook and got stuck in the gene dynamics heatmap section right at the bottom of the tutorial. In my adata object, I cannot find the key adata.var['to TERMINAL_STATE'] (TERMINAL_STATE being Alpha in the tutorial). In my adata.var I only have the columns 'to TERMINAL_STATE corr' and 'to TERMINAL_STATE qval'.

Did I maybe miss a step in the pipeline that would have created the missing column?

This is the section I am referring to:

[Screenshot 2020-11-24 at 13:40:26 of the referenced tutorial section]
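
A quick way to see which annotations the pipeline actually wrote is to list the relevant columns; this is plain pandas/AnnData and makes no assumptions about CellRank internals:

# List all lineage-related columns written to adata.var.
print([c for c in adata.var.columns if c.startswith("to ")])

# The probabilities themselves may live in adata.obsm/adata.obs rather than adata.var,
# so also inspect the AnnData summary.
print(adata)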

Update time series tutorial

Currently, our tutorial subsamples the data to 25% of the cells to speed up computations. Rather than doing that, we should use our adaptive thresholding scheme, which is currently not used. It would be good to see whether this changes the results; it's okay if it does, since subsampling to a quarter of the data is quite a reduction in cell number.
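
For reference, the current subsampling step boils down to something like the following (a sketch; the exact call and seed in the notebook may differ):

import scanpy as sc

# Keep 25% of the cells to speed up downstream computations.
sc.pp.subsample(adata, fraction=0.25, random_state=0)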

Plotting smooth gene expression trends along lineages

Hello,
Thank you for the wonderful pipeline.

I am facing an issue while running the 'basics' tutorial, in the section on plotting smooth gene expression trends along lineages.

Before that, we need to run scv.tl.diffmap(adata).
When I run it, it shows an error:
TypeError: '>=' not supported between instances of 'tuple' and 'int'

Please help me understand what the possible reason for this could be. I have already run the dynamical mode of scVelo.

Thank you for the help in advance.

Use git lfs

@Marius1311
I think it would be best to recreate the repo/purge the history (the second option is preferred) and use git-lfs. I've tested whether it's possible to download the data using scanpy.read(..., backup_url=...), and it works.

Pros:

  • currently, I download 230MB, which is a bit too much for such a small repo
  • it will speed up the download (to run the tutorials, you don't need to download the datasets in the datasets directory)
    using GIT_LFS_SKIP_SMUDGE=1 git clone ...
  • gallery can be finished (I don't know what the rate limits on figshare are, but with gh, we've never had an issue before)

Cons:

  • 1GiB limit: currently, we'd be using ~180MiB (~30MiB for pancreas, ~150MiB for pancreas_preprocessed)
  • needs git-lfs to download without a warning (see last point in Todos)

Todos:

  • for gallery, we want to download the file only once, so the path should be absolute
  • I need to triple authenticate before I push - this will break the automatic Travis pipeline in the main repo,
    but this does not happen with a personal access token
  • git-lfs has to be installed to "properly" download the repo (i.e. the clone succeeds but the checkout fails - this automatically removes the files and adds the changes to the staging area; ergo we need git-lfs on the main repo's Travis, which is not an issue)
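
As noted above, running the tutorials does not require cloning the datasets at all; a minimal sketch of the scanpy.read(..., backup_url=...) route, using the raw-file URL that also appears in the test log below:

import scanpy as sc

# Downloads the file once (if missing), caches it locally, then reads it as AnnData.
adata = sc.read(
    "datasets/pancreas/endocrinogenesis_day15.5.h5ad",
    backup_url="https://github.com/theislab/cellrank_notebooks/raw/master/"
    "datasets/pancreas/endocrinogenesis_day15.5.h5ad",
)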

Notebook testing

======================================================= test session starts ========================================================
platform darwin -- Python 3.6.9, pytest-5.3.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/marius/Projects/cellrank_notebooks, inifile: tox.ini, testpaths: tests/
plugins: cov-2.9.0, parallel-0.1.0, notebook-0.6.0
collected 3 items                                                                                                                  
pytest-parallel: 2 workers (processes), 1 test per worker (thread)
..FFFF
============================================================= FAILURES =============================================================
_______________________________________________________ test_pancreas_basic ________________________________________________________

nb_regression = NBRegressionFixture(exec_notebook=True, exec_cwd='/var/folders/mx/0hyv8t2s26jdj79f55kvc_b80000gn/T', exec_allow_errors...tputs/*/data/application/vnd.jupyter.widget-view+json'), diff_use_color=True, diff_color_words=True, force_regen=False)

    def decorator(nb_regression) -> None:
        fpath = f"{(TUTORIALS / func.__name__[5:]).resolve()}.ipynb"
    
        nb = nbformat.read(fpath, as_version=4)
        _inject_sentinel(nb)
    
        with tempfile.NamedTemporaryFile("w", suffix=".ipynb") as tmpf:
            nbformat.write(nb, tmpf)
            result = nb_regression.check(tmpf.name, raise_errors=False)
    
>       _assert_execute_sentinel(result)

tests/test_tutorials.py:42: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

result = NBRegressionResult(diff_full_length=1,diff_filtered_length=1)

    def _assert_execute_sentinel(result) -> None:
        match = parse(
            f'$.[*].diff[*].diff[*].diff[*].valuelist[?(@.text = "{SENTINEL}\n")].text'
        )
        found = match.find(result.diff_filtered)
    
        if not found:
>           raise RuntimeError(result.diff_string)
E           RuntimeError: 
E           --- expected
E           +++ obtained
E           ## modified /cells/6/outputs/0/text:
E           @@ -1 +1,5 @@
E           try downloading from url
E           https://github.com/theislab/cellrank_notebooks/raw/master/datasets/pancreas/endocrinogenesis_day15.5.h5ad
E           ... this may take a while but only happens once
E           
E           Abundance of ['spliced', 'unspliced']: [0.81 0.19]
E           
E           ## inserted before /cells/6/outputs/1:
E           +  output:
E           +    output_type: display_data
E           +    data:
E           +      application/vnd.jupyter.widget-view+json:
E           +        model_id: a41d4aa7dd4d4c848f04034d03bcb2a1
E           +        version_major: 2
E           +        version_minor: 0
E           +      text/plain: HBox(children=(FloatProgress(value=1.0, bar_style='info', description='endocrinogenesis_day15.5.h5ad', max=1.0…
E           
E           ## deleted /cells/6/outputs/1:
E           -  output:
E           -    output_type: execute_result
E           -    execution_count: 2
E           -    data:
E           -      text/plain:
E           -        AnnData object with n_obs × n_vars = 2531 × 27998
E           -            obs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta'
E           -            var: 'highly_variable_genes'
E           -            uns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'
E           -            obsm: 'X_pca', 'X_umap'
E           -            layers: 'spliced', 'unspliced'
E           -            obsp: 'connectivities', 'distances'
E           
E           ## inserted before /cells/6/outputs/2:
E           +  output:
E           +    output_type: execute_result
E           +    execution_count: 2
E           +    data:
E           +      text/plain:
E           +        AnnData object with n_obs × n_vars = 2531 × 27998
E           +            obs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta'
E           +            var: 'highly_variable_genes'
E           +            uns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'
E           +            obsm: 'X_pca', 'X_umap'
E           +            layers: 'spliced', 'unspliced'
E           +            obsp: 'connectivities', 'distances'
E           
E           ## inserted before /cells/12/outputs/0:
E           +  output:
E           +    output_type: error
E           +    ename: PermissionError
E           +    evalue: [Errno 13] Permission denied: '../../cached_files'
E           +    traceback:
E           +      item[0]: ---------------------------------------------------------------------------
E           +      item[1]: PermissionError                           Traceback (most recent call last)
E           +      item[2]:
E           +        <ipython-input-4-bd53879ad1ad> in <module>
E           +              1 try:
E           +              2     import scachepy
E           +        ----> 3     c = scachepy.Cache('../../cached_files/basic_tutorial/')
E           +              4     c.tl.recover_dynamics(adata, force=False)
E           +              5 except ModuleNotFoundError:
E           +      item[3]:
E           +        ~/Projects/scachepy/scachepy/cache.py in __init__(self, root_dir, backend, compression, ext, separate_dirs)
E           +             62         else:
E           +             63             # shared backend
E           +        ---> 64             self._backend = backend_type(root_dir, self._ext, cache=self)
E           +             65             self.pp = PpModule(self._backend)
E           +             66             self.tl = TlModule(self._backend)
E           +      item[4]:
E           +        ~/Projects/scachepy/scachepy/backends.py in __init__(self, dirname, ext, cache)
E           +             14     def __init__(self, dirname, ext, *, cache):
E           +             15         if not os.path.exists(dirname):
E           +        ---> 16             os.makedirs(dirname)
E           +             17 
E           +             18         self._dirname = Path(dirname)
E           +      item[5]:
E           +        ~/miniconda3/envs/cellrank/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
E           +            208     if head and tail and not path.exists(head):
E           +            209         try:
E           +        --> 210             makedirs(head, mode, exist_ok)
E           +            211         except FileExistsError:
E           +            212             # Defeats race condition when another thread created the path
E           +      item[6]:
E           +        ~/miniconda3/envs/cellrank/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
E           +            218             return
E           +            219     try:
E           +        --> 220         mkdir(name, mode)
E           +            221     except OSError:
E           +            222         # Cannot rely on checking for EEXIST, since the operating system
E           +      item[7]: PermissionError: [Errno 13] Permission denied: '../../cached_files'
E           
E           ## deleted /cells/12/outputs/0:
E           -  output:
E           -    output_type: stream
E           -    name: stdout
E           -    text:
E           -      You don't seem to have scachepy installed, but that's fine, you just have to be a bit patient (~10min). 
E           -      recovering dynamics
E           -          finished (DURATION) --> added 
E           -          'fit_pars', fitted parameters for splicing dynamics (adata.var)
E           
E           ## modified /cells/21/outputs/0/text:
E           @@ -1 +1,21 @@
E           Computing transition matrix based on velocity correlations using mode `'deterministic'`
E           
E               Finish (DURATION)
E           Using a connectivity kernel with weight `0.2`
E           Computing transition matrix based on connectivities
E               Finish (DURATION)
E           Computing eigendecomposition of the transition matrix
E           Adding `.eigendecomposition`
E                  `adata.uns['eig_fwd']`
E           Adding `.eigendecomposition`
E                  `adata.uns['eig_fwd']`
E           Computing metastable states
E           INFO: Using pre-computed schur decomposition
E           Adding `.schur`
E                  `.coarse_T`
E                  `.coarse_stationary_distribution`
E               Finish (DURATION)
E           Adding `adata.obs['final_states_probs']`
E                  `adata.obs['final_states']`
E                  `.final_states_probabilities`
E                  `.final_states`
E           
E           ## deleted /cells/21/outputs/3:
E           
E           ## modified /cells/26/outputs/0/text:
E           @@ -1 +1,17 @@
E           Computing transition matrix based on velocity correlations using mode `'deterministic'`
E           
E               Finish (DURATION)
E           Computing eigendecomposition of the transition matrix
E           Adding `.eigendecomposition`
E                  `adata.uns['eig_bwd']`
E           Computing metastable states
E           WARNING: For `n_states=1`, stationary distribution is computed
E           Computing eigendecomposition of the transition matrix
E           Adding `.eigendecomposition`
E                  `adata.uns['eig_bwd']`
E           Adding `.metastable_states`
E               Finish (DURATION)
E           Adding `adata.obs['root_states_probs']`
E                  `adata.obs['root_states']`
E                  `.final_states_probabilities`
E                  `.final_states`
E           
E           ## inserted before /cells/26/outputs/2:
E           +  output:
E           +    output_type: display_data
E           +    data:
E           +      image/png: iVBORw0K...<snip base64, md5=63def2b2dfc95257...>
E           +      text/plain: <Figure size 600x400 with 1 Axes>
E           +    metadata (unknown keys):
E           +      image/png:
E           +        height: 345
E           +        width: 617
E           
E           ## deleted /cells/26/outputs/2:
E           -  output:
E           -    output_type: stream
E           -    name: stdout
E           -    text:
E           -      
E           -          Finish (DURATION)
E           -      Computing eigendecomposition of the transition matrix
E           -      Adding `.eigendecomposition`
E           -             `adata.uns['eig_bwd']`
E           -      Computing metastable states
E           -      WARNING: For `n_states=1`, stationary distribution is computed
E           -      Computing eigendecomposition of the transition matrix
E           -      Adding `.eigendecomposition`
E           -             `adata.uns['eig_bwd']`
E           -      Adding `.metastable_states`
E           -          Finish (DURATION)
E           -      Adding `adata.obs['root_states_probs']`
E           -             `adata.obs['root_states']`
E           -             `.final_states_probabilities`
E           -             `.final_states`
E           
E           ## deleted /cells/26/outputs/4:
E           
E           

tests/test_tutorials.py:27: RuntimeError
______________________________________________________ test_pancreas_advanced ______________________________________________________

nb_regression = NBRegressionFixture(exec_notebook=True, exec_cwd='/var/folders/mx/0hyv8t2s26jdj79f55kvc_b80000gn/T', exec_allow_errors...tputs/*/data/application/vnd.jupyter.widget-view+json'), diff_use_color=True, diff_color_words=True, force_regen=False)

    def decorator(nb_regression) -> None:
        fpath = f"{(TUTORIALS / func.__name__[5:]).resolve()}.ipynb"
    
        nb = nbformat.read(fpath, as_version=4)
        _inject_sentinel(nb)
    
        with tempfile.NamedTemporaryFile("w", suffix=".ipynb") as tmpf:
            nbformat.write(nb, tmpf)
            result = nb_regression.check(tmpf.name, raise_errors=False)
    
>       _assert_execute_sentinel(result)

tests/test_tutorials.py:42: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

result = NBRegressionResult(diff_full_length=1,diff_filtered_length=1)

    def _assert_execute_sentinel(result) -> None:
        match = parse(
            f'$.[*].diff[*].diff[*].diff[*].valuelist[?(@.text = "{SENTINEL}\n")].text'
        )
        found = match.find(result.diff_filtered)
    
        if not found:
>           raise RuntimeError(result.diff_string)
E           RuntimeError: 
E           --- expected
E           +++ obtained
E           ## modified /cells/6/outputs/0/text:
E           @@ -1 +1,5 @@
E           try downloading from url
E           https://github.com/theislab/cellrank_notebooks/raw/master/datasets/pancreas/endocrinogenesis_day15.5.h5ad
E           ... this may take a while but only happens once
E           
E           Abundance of ['spliced', 'unspliced']: [0.81 0.19]
E           
E           ## inserted before /cells/6/outputs/1:
E           +  output:
E           +    output_type: display_data
E           +    data:
E           +      application/vnd.jupyter.widget-view+json:
E           +        model_id: 2ed812db3df54097a598085fa1de204c
E           +        version_major: 2
E           +        version_minor: 0
E           +      text/plain: HBox(children=(FloatProgress(value=1.0, bar_style='info', description='endocrinogenesis_day15.5.h5ad', max=1.0…
E           
E           ## deleted /cells/6/outputs/1:
E           -  output:
E           -    output_type: execute_result
E           -    execution_count: 2
E           -    data:
E           -      text/plain:
E           -        AnnData object with n_obs × n_vars = 2531 × 27998
E           -            obs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta'
E           -            var: 'highly_variable_genes'
E           -            uns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'
E           -            obsm: 'X_pca', 'X_umap'
E           -            layers: 'spliced', 'unspliced'
E           -            obsp: 'connectivities', 'distances'
E           
E           ## inserted before /cells/6/outputs/2:
E           +  output:
E           +    output_type: execute_result
E           +    execution_count: 2
E           +    data:
E           +      text/plain:
E           +        AnnData object with n_obs × n_vars = 2531 × 27998
E           +            obs: 'day', 'proliferation', 'G2M_score', 'S_score', 'phase', 'clusters_coarse', 'clusters', 'clusters_fine', 'louvain_Alpha', 'louvain_Beta'
E           +            var: 'highly_variable_genes'
E           +            uns: 'clusters_colors', 'clusters_fine_colors', 'day_colors', 'louvain_Alpha_colors', 'louvain_Beta_colors', 'neighbors', 'pca'
E           +            obsm: 'X_pca', 'X_umap'
E           +            layers: 'spliced', 'unspliced'
E           +            obsp: 'connectivities', 'distances'
E           
E           ## inserted before /cells/12/outputs/0:
E           +  output:
E           +    output_type: error
E           +    ename: PermissionError
E           +    evalue: [Errno 13] Permission denied: '../../cached_files'
E           +    traceback:
E           +      item[0]: ---------------------------------------------------------------------------
E           +      item[1]: PermissionError                           Traceback (most recent call last)
E           +      item[2]:
E           +        <ipython-input-4-bd53879ad1ad> in <module>
E           +              1 try:
E           +              2     import scachepy
E           +        ----> 3     c = scachepy.Cache('../../cached_files/basic_tutorial/')
E           +              4     c.tl.recover_dynamics(adata, force=False)
E           +              5 except ModuleNotFoundError:
E           +      item[3]:
E           +        ~/Projects/scachepy/scachepy/cache.py in __init__(self, root_dir, backend, compression, ext, separate_dirs)
E           +             62         else:
E           +             63             # shared backend
E           +        ---> 64             self._backend = backend_type(root_dir, self._ext, cache=self)
E           +             65             self.pp = PpModule(self._backend)
E           +             66             self.tl = TlModule(self._backend)
E           +      item[4]:
E           +        ~/Projects/scachepy/scachepy/backends.py in __init__(self, dirname, ext, cache)
E           +             14     def __init__(self, dirname, ext, *, cache):
E           +             15         if not os.path.exists(dirname):
E           +        ---> 16             os.makedirs(dirname)
E           +             17 
E           +             18         self._dirname = Path(dirname)
E           +      item[5]:
E           +        ~/miniconda3/envs/cellrank/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
E           +            208     if head and tail and not path.exists(head):
E           +            209         try:
E           +        --> 210             makedirs(head, mode, exist_ok)
E           +            211         except FileExistsError:
E           +            212             # Defeats race condition when another thread created the path
E           +      item[6]:
E           +        ~/miniconda3/envs/cellrank/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
E           +            218             return
E           +            219     try:
E           +        --> 220         mkdir(name, mode)
E           +            221     except OSError:
E           +            222         # Cannot rely on checking for EEXIST, since the operating system
E           +      item[7]: PermissionError: [Errno 13] Permission denied: '../../cached_files'
E           
E           ## deleted /cells/12/outputs/0:
E           -  output:
E           -    output_type: stream
E           -    name: stdout
E           -    text:
E           -      You don't seem to have scachepy installed, but that's fine, you just have to be a bit patient (~10min). 
E           -      recovering dynamics
E           -          finished (DURATION) --> added 
E           -          'fit_pars', fitted parameters for splicing dynamics (adata.var)
E           
E           ## modified /cells/25/outputs/0/text:
E           @@ -1 +1,3 @@
E           Computing transition matrix based on velocity correlations using mode `'deterministic'`
E           
E               Finish (DURATION)
E           
E           ## inserted before /cells/25/outputs/2:
E           +  output:
E           +    output_type: execute_result
E           +    execution_count: 10
E           +    data:
E           +      text/plain: <Velo>
E           
E           ## deleted /cells/25/outputs/2:
E           -  output:
E           -    output_type: stream
E           -    name: stdout
E           -    text:
E           -      
E           -          Finish (DURATION)
E           
E           ## deleted /cells/25/outputs/4:
E           
E           

tests/test_tutorials.py:27: RuntimeError
============================================= 2 failed, 1 passed in 101.63s (0:01:41)

Kernel and estimator tutorial fails in several steps

Discussion

Running the "Kernels and estimators" tutorial with the latest developer version of CellRank fails at several steps:

  • ptk.compute_projection() causes AttributeError: 'PseudotimeKernel' object has no attribute 'compute_projection' (already reported in #41)

  • The cell
ptk = PseudotimeKernel(adata)
ptk.compute_transition_matrix(threshold_scheme='hard', frac_to_keep=0)
ptk.plot_projection()
scv.pl.velocity_embedding_stream(adata, vkey='T_fwd')

throws ValueError: Transition matrix is not row stochastic, 15 rows do not sum to 1. (the check behind this error is sketched at the end of this issue)

Traceback
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/7p/j67nx0vs6j968c2w2d9pqmzryt4ryq/T/ipykernel_87197/1839256166.py in <module>
      1 ptk = PseudotimeKernel(adata)
----> 2 ptk.compute_transition_matrix(threshold_scheme='hard', frac_to_keep=0)
      3 ptk.plot_projection()
      4 scv.pl.velocity_embedding_stream(adata, vkey='T_fwd')

~/code/cellrank/cellrank/tl/kernels/_pseudotime_kernel.py in compute_transition_matrix(self, threshold_scheme, frac_to_keep, b, nu, check_irreducibility, n_jobs, backend, show_progress_bar, **kwargs)
    170             logg.warning("Biased KNN graph is not irreducible")
    171 
--> 172         self.transition_matrix = biased_conn
    173         logg.info("    Finish", time=start)
    174 

~/code/cellrank/cellrank/tl/kernels/_base_kernel.py in transition_matrix(self, matrix)
    549     @transition_matrix.setter
    550     def transition_matrix(self, matrix: Any) -> None:
--> 551         KernelExpression.transition_matrix.fset(self, matrix)
    552 
    553     @property

~/code/cellrank/cellrank/tl/kernels/_base_kernel.py in transition_matrix(self, matrix)
    447             if should_norm(matrix):  # some rows are all 0s/contain invalid values
    448                 n_inv = np.sum(~np.isclose(np.asarray(matrix.sum(1)).squeeze(), 1.0, rtol=1e-12))
--> 449                 raise ValueError(f"Transition matrix is not row stochastic, {n_inv} rows do not sum to 1.")
    450         # fmt: on
    451 

ValueError: Transition matrix is not row stochastic, 15 rows do not sum to 1.

  • g.plot_schur(use=3) throws AttributeError: 'GPCCA' object has no attribute 'plot_schur'

  • g.plot_absorption_probabilities(state='Alpha') fails with AttributeError: 'PathCollection' object has no property 'state'
Traceback
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/var/folders/7p/j67nx0vs6j968c2w2d9pqmzryt4ryq/T/ipykernel_87197/2479001957.py in <module>
----> 1 g.plot_absorption_probabilities(state='Alpha')

~/code/cellrank/cellrank/tl/estimators/mixins/_utils.py in wrapper(wrapped, instance, args, kwargs)
    413             _colors = getattr(instance, colors, None)
    414 
--> 415         return wrapped(
    416             *args,
    417             _data=data,

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/scanpy/_utils/__init__.py in func_wrapper(*args, **kwargs)
    117             # reset filter
    118             warnings.simplefilter('default', DeprecationWarning)
--> 119             return func(*args, **kwargs)
    120 
    121         return func_wrapper

~/code/cellrank/cellrank/tl/estimators/mixins/_utils.py in _plot_dispatcher(self, states, color, discrete, mode, time_key, same_plot, title, cmap, **kwargs)
    340         )
    341 
--> 342     return _plot_continuous(
    343         self,
    344         states=states,

~/code/cellrank/cellrank/tl/estimators/mixins/_utils.py in _plot_continuous(self, _data, _colors, _title, states, color, mode, time_key, title, same_plot, cmap, **kwargs)
    276         _ = kwargs.pop("color_gradients", None)
    277 
--> 278     scv.pl.scatter(
    279         self.adata,
    280         title=title,

~/code/scvelo/scvelo/plotting/scatter.py in scatter(adata, basis, x, y, vkey, color, use_raw, layer, color_map, colorbar, palette, size, alpha, linewidth, linecolor, perc, groups, sort_order, components, projection, legend_loc, legend_loc_lines, legend_fontsize, legend_fontweight, legend_fontoutline, legend_align_text, xlabel, ylabel, title, fontsize, figsize, xlim, ylim, add_density, add_assignments, add_linfit, add_polyfit, add_rug, add_text, add_text_pos, add_margin, add_outline, outline_width, outline_color, n_convolve, smooth, normalize_data, rescale_color, color_gradients, dpi, frameon, zorder, ncols, nrows, wspace, hspace, show, save, ax, **kwargs)
    292                         _adata = adata[c_bool] if np.sum(~c_bool) > 0 else adata
    293                         mkwargs["color"] = c_vals[c_bool]
--> 294                         ax = scatter(
    295                             _adata, ax=ax, **mkwargs, **scatter_kwargs, **kwargs
    296                         )

~/code/scvelo/scvelo/plotting/scatter.py in scatter(adata, basis, x, y, vkey, color, use_raw, layer, color_map, colorbar, palette, size, alpha, linewidth, linecolor, perc, groups, sort_order, components, projection, legend_loc, legend_loc_lines, legend_fontsize, legend_fontweight, legend_fontoutline, legend_align_text, xlabel, ylabel, title, fontsize, figsize, xlim, ylim, add_density, add_assignments, add_linfit, add_polyfit, add_rug, add_text, add_text_pos, add_margin, add_outline, outline_width, outline_color, n_convolve, smooth, normalize_data, rescale_color, color_gradients, dpi, frameon, zorder, ncols, nrows, wspace, hspace, show, save, ax, **kwargs)
    612 
    613             marker = kwargs.pop("marker", ".")
--> 614             smp = ax.scatter(
    615                 x, y, c=c, alpha=alpha, marker=marker, zorder=zorder, **kwargs
    616             )

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs)
   1359     def inner(ax, *args, data=None, **kwargs):
   1360         if data is None:
-> 1361             return func(ax, *map(sanitize_sequence, args), **kwargs)
   1362 
   1363         bound = new_sig.bind(ax, *args, **kwargs)

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, edgecolors, plotnonfinite, **kwargs)
   4595                 )
   4596         collection.set_transform(mtransforms.IdentityTransform())
-> 4597         collection.update(kwargs)
   4598 
   4599         if colors is None:

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/matplotlib/artist.py in update(self, props)
   1060                     func = getattr(self, f"set_{k}", None)
   1061                     if not callable(func):
-> 1062                         raise AttributeError(f"{type(self).__name__!r} object "
   1063                                              f"has no property {k!r}")
   1064                     ret.append(func(v))

AttributeError: 'PathCollection' object has no property 'state'

  • The cell
alpha_drivers = g.compute_lineage_drivers(lineages="Alpha", return_drivers=True)
alpha_drivers.sort_values(by="Alpha corr", ascending=False)

results in KeyError: 'Alpha corr'

Traceback
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/var/folders/7p/j67nx0vs6j968c2w2d9pqmzryt4ryq/T/ipykernel_87197/2472769031.py in <module>
      1 alpha_drivers = g.compute_lineage_drivers(lineages="Alpha", return_drivers=True)
----> 2 alpha_drivers.sort_values(by="Alpha corr", ascending=False)

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/pandas/util/_decorators.py in wrapper(*args, **kwargs)
    309                     stacklevel=stacklevel,
    310                 )
--> 311             return func(*args, **kwargs)
    312 
    313         return wrapper

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/pandas/core/frame.py in sort_values(self, by, axis, ascending, inplace, kind, na_position, ignore_index, key)
   6252 
   6253             by = by[0]
-> 6254             k = self._get_label_or_level_values(by, axis=axis)
   6255 
   6256             # need to rewrap column in Series to apply key function

~/opt/miniconda3/envs/cellrank/lib/python3.8/site-packages/pandas/core/generic.py in _get_label_or_level_values(self, key, axis)
   1774             values = self.axes[axis].get_level_values(key)._values
   1775         else:
-> 1776             raise KeyError(key)
   1777 
   1778         # Check for duplicates

Versions:

cellrank==1.5.1+g10479a98.d20211116 scanpy==1.8.1 anndata==0.7.6 numpy==1.21.1 numba==0.52.0 scipy==1.7.0 pandas==1.3.1 pygpcca==1.0.2 scikit-learn==0.24.2 statsmodels==0.12.2 python-igraph==0.9.8 scvelo==0.2.5.dev81+g64c76b2.d20220202 pygam==0.8.0 matplotlib==3.4.2 seaborn==0.11.1
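
The row-stochasticity check that raises the ValueError above is, in essence, the following standalone test (sketched with plain numpy, independent of CellRank):

import numpy as np

def n_rows_not_stochastic(matrix) -> int:
    """Count rows whose entries do not sum to 1 (works for dense and sparse input)."""
    row_sums = np.asarray(matrix.sum(axis=1)).squeeze()
    return int(np.sum(~np.isclose(row_sums, 1.0, rtol=1e-12)))

With threshold_scheme='hard' and frac_to_keep=0, some cells presumably end up with no outgoing transitions at all, leaving all-zero rows that fail this check.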

Minor improvements for the docs

Hi Mike, here's a list of some minor improvements that would be nice:

  • Make sure the "Getting started" tutorial uses the dark-mode version of the concept figure (when in dark mode).
  • Make sure the tutorial hierarchy is also displayed in the sidebar and is collapsible (just like for the API)
  • Use the new dark-mode version of the CellRank logo when in dark mode

Reversed direction of trajectory

Hello,
I am facing an issue while running the pipeline. When running the basic scVelo workflow, the trajectory shows the correct direction, i.e., towards the mature cell cluster.
But when I run the CellRank part of the dynamical model, the PAGA plot shows the reverse direction, from mature to immature. What might be the reason for this?

Also, when I plot gene trends to check gene expression along a lineage, I get a blank plot; the heatmap of gene expression is blank as well.

Please suggest what the reason might be, and any alternative that could be implemented.
Thank you so much in advance.

Update the basic tutorial

I have updated the basic tutorial; however, I wasn't able to update the lineage trends. I was wondering whether you could maybe fit the gene expression trends using a Python module, so that I can also update them? Or is it the case that we always get much better results using R's mgcv package?
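
If a pure-Python fit is acceptable, one candidate would be a GAM via pygam. The following is only an illustrative sketch of fitting a smooth trend for a single gene along a pseudotime axis; the gene name 'Pdx1' and the 'latent_time' column are placeholders, not what the tutorial currently uses:

import numpy as np
from pygam import LinearGAM, s

# Illustrative inputs: pseudotime per cell and expression of one gene (placeholder names).
pseudotime = np.asarray(adata.obs["latent_time"]).reshape(-1, 1)
expression = adata[:, "Pdx1"].to_df().iloc[:, 0].to_numpy()

# Fit a spline trend and evaluate it on a regular grid along pseudotime.
gam = LinearGAM(s(0)).fit(pseudotime, expression)
grid = np.linspace(pseudotime.min(), pseudotime.max(), 200).reshape(-1, 1)
trend = gam.predict(grid)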

CytoTRACE tutorial outdated

Description

If I am not mistaken, the API for the CytoTRACE kernel changed with theislab/cellrank#791. Previously, all statistics were calculated when instantiating the kernel, i.e.,

ctk = CytoTRACEKernel(adata)

To achieve the same behaviour, the following code needs to be used

ctk = CytoTRACEKernel(adata).compute_cytotrace()

The tutorial has not yet been updated accordingly.
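
For completeness, a sketch of what the updated flow would look like, combining the call above with the usual kernel step; treat the exact import path and method chain as assumptions to be checked against the current API:

from cellrank.tl.kernels import CytoTRACEKernel

# New API (per this issue): compute the CytoTRACE statistics explicitly,
# then build the transition matrix as for any other kernel.
ctk = CytoTRACEKernel(adata).compute_cytotrace()
ctk.compute_transition_matrix()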
