oubounyt / scanet

SCANet is a Python package for the inference of gene co-expression networks from single-cell gene expression data.

Home Page: https://pypi.org/project/scanet/

License: MIT License

Languages: Python 84.69%, R 15.31%
Topics: co-expression, gcn, grn, network, single-cell

scanet's People

Contributors

oubounyt, tilmanimmisch


scanet's Issues

An error when running this code: cor = sn.co.modules_to_annotation_cor(adata_r, net, figsize=(12,6), cor_method="spearman")

Hi,

A nice job! I encountered a technical issue when running this code:
cor = sn.co.modules_to_annotation_cor(adata_r, net, figsize=(12,6), cor_method="spearman")
It raises the following error:
Traceback (most recent call last):
File "C:\Users\PycharmProjects\pythonProject\main.py", line 127, in
cor = sn.co.modules_to_annotation_cor(adata_r, net, figsize=(12,6), cor_method="spearman")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\scanet\coexpression.py", line 197, in modules_to_annotation_cor
MEtrait = module_to_annotation_cor_r(final_exp_, net, cor_method=cor_method)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\rpy2\robjects\functions.py", line 208, in call
return (super(SignatureTranslatedFunction, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\rpy2\robjects\functions.py", line 131, in call
res = super(Function, self).call(*new_args, **new_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\rpy2\rinterface_lib\conversion.py", line 45, in _
cdata = function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\PycharmProjects\pythonProject\venv\Lib\site-packages\rpy2\rinterface.py", line 873, in call
raise embedded.RRuntimeError(_rinterface._geterrmessage())
rpy2.rinterface_lib.embedded.RRuntimeError: Error in module_trait_cor(exp = final_exp, MEs = net$MEs, cor_method = cor_method, :
unused arguments (cex.lab.x = 0.01, cex.lab.y = 0.01, cex.text = 0.01)

Could you provide some information about this error? Thank you!
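
A minimal version-report sketch that may help with triage (assuming a standard pip install; the two package names are taken from the traceback above). Mismatched scanet/rpy2 versions, or a stale copy of the bundled R helper, would be consistent with the "unused arguments" error, though that is only a hypothesis:

import importlib.metadata as md  # Python 3.8+

# Report the installed versions of the packages named in the traceback.
for pkg in ("scanet", "rpy2"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")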

Databases for GRN inference

Hi,
Thank you for this cool tool. I managed to follow the tutorial up to the inference of the GRN.

Before running that function, we download data using sn.download_db(). However, I get an HTTP Error 404. I checked the URL from the script manually, but the website is no longer available. I am not sure which files and versions this link included. Could you provide a new link, or information on which files I need?
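
A small sketch for confirming the server-side failure before reporting further (the two URLs are taken from download_db.py as quoted in the "Cannot download human dataset" issue further down this page):

from urllib import request, error

# HEAD-request the two database archives referenced by download_db.py.
urls = (
    "https://wolken.zbh.uni-hamburg.de/index.php/s/DZ3XT76Z7xYAT64/download/human.zip",
    "https://wolken.zbh.uni-hamburg.de/index.php/s/6syqpRTgDLaQs4t/download/mouse.zip",
)
for url in urls:
    try:
        with request.urlopen(request.Request(url, method="HEAD")) as resp:
            print(url, "->", resp.status)
    except error.HTTPError as e:
        print(url, "-> HTTP error", e.code)
    except error.URLError as e:
        print(url, "-> unreachable:", e.reason)

An HTTP 404 here confirms the problem is with the hosting rather than with the local installation.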

Request for Assistance: ImportError with 'joblib' in Scikit-learn

Hello,

A very nice job! I'm truly impressed with this exceptional tool for single-cell analysis.

However, I've encountered an error while working on my project. Below is the traceback for your reference:
Traceback (most recent call last):
ImportError: cannot import name 'joblib' from 'sklearn.externals'

This looks like an ImportError involving the 'joblib' module within the scikit-learn package, possibly caused by a version issue with the UMAP package. Could you offer some guidance on resolving this matter?

Thank you very much!
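
For context, a general scikit-learn fact rather than anything SCANet-specific: the import path sklearn.externals.joblib was removed in scikit-learn 0.23, so any dependency that still uses it (older umap-learn releases are a frequently cited example) fails on newer scikit-learn. A minimal illustration of the modern import:

# Not SCANet-specific: code that still does
#   from sklearn.externals import joblib
# breaks on scikit-learn >= 0.23; updated libraries import joblib directly.
import joblib
import sklearn

print("joblib", joblib.__version__, "| scikit-learn", sklearn.__version__)

Until the affected dependency is updated, pinning it (or scikit-learn) to mutually compatible versions is the usual stopgap.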

Cannot visualize the drugs in a network

Hello,

I have been testing your method and everything seems to work except for one thing. I can run the full script and get the GRN, where I can visualize the TFs and their target genes. However, when trying to run the last step of the script to see the drugs within the network, I get the following error:

In [109]: sn.pl.plot_grn(df=grn_df, occurrence_pct=80, regulon="all", layout="None", drug_interaction="direct")

Out of 389 edges, 67 edges satisfying occurrence threshold 80% where kept

['NR2F2', 'CREB5', 'TBX1', 'NR2F1']
(67, 3)
None
Local cdn resources have problems on chrome/safari when used in jupyter-notebook.
INFO:root:degree progress is at: 0.0%
INFO:root:degree is done.

AttributeError Traceback (most recent call last)
Cell In[109], line 1
----> 1 sn.pl.plot_grn(df=grn_df, occurrence_pct=80, regulon="all", layout="None", drug_interaction="direct")

File ~/anaconda3/envs/scanet/lib/python3.9/site-packages/scanet/plottingNetworks.py:98, in Plot.plot_grn(df, occurrence_pct, regulon, name, drug_interaction, algorithm, layout)
96 # drug interaction
97 if drug_interaction:
---> 98 drug_interactions = DrugInteractions.get_drug_interactions(net_genes, algorithm)
99 for inter in drug_interactions:
100 network.add_node(inter[0], shape='star', color='darkgreen')

File ~/anaconda3/envs/scanet/lib/python3.9/site-packages/scanet/drug_interactions.py:24, in DrugInteractions.get_drug_interactions(cls, genes, algorithm)
22 r = task.get_result()
23 if r.get_genes() == {}:
---> 24 also_broken = cls.find_broken(genes, algorithm=algorithm)
25 genes = [gene for gene in genes if gene not in also_broken]
26 task = new_task(genes, parameters)

AttributeError: type object 'DrugInteractions' has no attribute 'find_broken'

The HTML file with the GRN plus drugs is then not generated. I am a bit lost about what is causing this problem, so any help would be highly appreciated!

Thank you very much

Best
Guillermo
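
A small diagnostic sketch (an assumption-driven check, not an official fix): the traceback shows the installed scanet/drug_interactions.py calling DrugInteractions.find_broken, a method that copy of the class does not define, which would be consistent with a version mismatch between installed files. Listing the class's public methods together with the package version can confirm this when reporting back:

import importlib.metadata as md
from scanet.drug_interactions import DrugInteractions

print("scanet", md.version("scanet"))
print(sorted(m for m in dir(DrugInteractions) if not m.startswith("_")))

As a stopgap, and only if the parameter accepts a falsy value (the "if drug_interaction:" guard visible in the traceback suggests it might), calling plot_grn without the drug-interaction step should at least produce the GRN HTML without drugs.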

GRN inference

Hi,

Thank you for this nice tool. I followed the tutorial without any issues until executing this cell:

grn_df = sn.grn.grn_inference(adata_processed=adata_processed,modules_df=modules_df, module=Mod_, groupby_=groupby_, anno_name=anno_name, specie=specie_, subsampling_pct=80, n_iteration=n_iteration, num_workers=num_workers)

I always get the response "0 edges where found". I tried changing the number of workers and iterations, but every time I get the same response. After some debugging, I found that the problem is in line 261 of grn.py, "result_list = pool.map(GRN.regulon, args_)": the regulon function is never executed. Could you please help resolve this matter? Thanks!
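
One hypothesis worth ruling out (an assumption, not a confirmed cause): on platforms where Python's multiprocessing uses the "spawn" start method (Windows, and macOS on recent Python versions), functions submitted via pool.map are never executed unless the calling script is protected by a main guard, which would match "regulon is never executed". A cheap test, reusing the call from the tutorial:

# Sketch under the spawn-start-method assumption. adata_processed, modules_df,
# Mod_, groupby_, anno_name, specie_, n_iteration and num_workers are the
# objects defined earlier in the tutorial and are not redefined here.
import scanet as sn

def main():
    return sn.grn.grn_inference(adata_processed=adata_processed,
                                modules_df=modules_df, module=Mod_,
                                groupby_=groupby_, anno_name=anno_name,
                                specie=specie_, subsampling_pct=80,
                                n_iteration=n_iteration,
                                num_workers=num_workers)

if __name__ == "__main__":
    grn_df = main()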

Error when importing the package

Hello,

Thank you for developing this package. I wanted to test it on my data; however, I am running into an issue when importing it. I am using Python 3.9.18 on a Mac. I get the following error when importing it:

AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. 
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
    https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations

Is there a preferred python version I should use?
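
For context, a general NumPy fact rather than anything specific to this package: the np.object alias was removed in NumPy 1.24, so a dependency that still references it fails regardless of the Python version once NumPy is new enough. Checking the installed NumPy narrows this down:

import numpy as np

print(np.__version__)           # the np.object alias was removed in NumPy 1.24
print(hasattr(np, "object"))    # False on NumPy >= 1.24

If that is the trigger, pinning NumPy below 1.24 until the affected dependency is updated is a common stopgap.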

An error about GRN inference

Hi,

A very nice tool! I get the following error:
usage: pyscenic ctx [-h] [-o OUTPUT] [-n] [--chunk_size CHUNK_SIZE]
[--mode {custom_multiprocessing,dask_multiprocessing,dask_cluster}]
[-a] [-t] [--rank_threshold RANK_THRESHOLD]
[--auc_threshold AUC_THRESHOLD]
[--nes_threshold NES_THRESHOLD]
[--min_orthologous_identity MIN_ORTHOLOGOUS_IDENTITY]
[--max_similarity_fdr MAX_SIMILARITY_FDR]
--annotations_fname ANNOTATIONS_FNAME
[--num_workers NUM_WORKERS]
[--client_or_address CLIENT_OR_ADDRESS]
[--thresholds THRESHOLDS [THRESHOLDS ...]]
[--top_n_targets TOP_N_TARGETS [TOP_N_TARGETS ...]]
[--top_n_regulators TOP_N_REGULATORS [TOP_N_REGULATORS ...]]
[--min_genes MIN_GENES]
[--expression_mtx_fname EXPRESSION_MTX_FNAME]
[--mask_dropouts] [--cell_id_attribute CELL_ID_ATTRIBUTE]
[--gene_attribute GENE_ATTRIBUTE] [--sparse]
module_fname database_fname [database_fname ...]
pyscenic ctx: error: argument module_fname: can't open 'C:\Users\AppData\Local\Temp\tmpgtaq66d0\grn.csv': [Errno 2] No such file or directory: 'C:\Users\AppData\Local\Temp\tmpgtaq66d0\grn.csv'
This happens when I run the following code:
grn_df = sn.grn.grn_inference(adata_processed=adata_processed, modules_df=modules_df, module=Mod_, groupby_=groupby_, anno_name=anno_name, specie=specie_, subsampling_pct=80, n_iteration=n_iteration, num_workers=num_workers)
I am looking forward to your help!
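
A note and a small check, both based on reading the error rather than on maintainer confirmation: the pyscenic ctx usage text shows that pyscenic itself runs, but the intermediate grn.csv it is handed was never written, which points at the upstream GRN step producing no output, possibly for the same reason as in the "GRN inference" issue above since this is also on Windows. Verifying that temporary files can be created at all rules out one mundane cause:

import os
import tempfile

# Confirm that temporary files can be created and seen in the default temp dir,
# where the missing grn.csv would have been written.
with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
    path = tmp.name
print(path, "exists:", os.path.exists(path))
os.remove(path)

If that passes, the if __name__ == "__main__": guard suggested under the "GRN inference" issue is worth trying here as well.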

Cannot download human dataset

Hello,

Thank you very much for developing this software. It seems quite interesting and I am looking forward to trying it on my data.

I have been following the full-example.ipynb document but I have encountered an error that blocks my progress.

When running sn.download_db() I get the following error:

In [113]: sn.download_db()
Do you want to install datasets for (human/mouse/both)?
human
human.zip: 0.00B [00:03, ?B/s]

HTTPError Traceback (most recent call last)
Cell In[113], line 1
----> 1 sn.download_db()

File ~/anaconda3/envs/scanet/lib/python3.9/site-packages/scanet/download_db.py:41, in download_db()
39 filename_ = os.path.join(MAIN_PATH, "databases",os.path.basename(dataset_human))
40 with DownloadProgressBar(unit='B', unit_scale=True, miniters=1, desc=dataset_human.split('/')[-1]) as t:
---> 41 request.urlretrieve(dataset_human, filename=filename_,
42 reporthook=t.update_to)
45 if answer == "mouse" or answer == "both":
46 if db_exists("mouse"):

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:239, in urlretrieve(url, filename, reporthook, data)
222 """
223 Retrieve a URL into a temporary location on disk.
224
(...)
235 data file as well as the resulting HTTPMessage object.
236 """
237 url_type, path = _splittype(url)
--> 239 with contextlib.closing(urlopen(url, data)) as fp:
240 headers = fp.info()
242 # Just return the local path and the "headers" for file://
243 # URLs. No sense in performing a copy unless requested.

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:214, in urlopen(url, data, timeout, cafile, capath, cadefault, context)
212 else:
213 opener = _opener
--> 214 return opener.open(url, data, timeout)

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:523, in OpenerDirector.open(self, fullurl, data, timeout)
521 for processor in self.process_response.get(protocol, []):
522 meth = getattr(processor, meth_name)
--> 523 response = meth(req, response)
525 return response

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:632, in HTTPErrorProcessor.http_response(self, request, response)
629 # According to RFC 2616, "2xx" code indicates that the client's
630 # request was successfully received, understood, and accepted.
631 if not (200 <= code < 300):
--> 632 response = self.parent.error(
633 'http', request, response, code, msg, hdrs)
635 return response

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:561, in OpenerDirector.error(self, proto, *args)
559 if http_err:
560 args = (dict, 'default', 'http_error_default') + orig_args
--> 561 return self._call_chain(*args)

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:494, in OpenerDirector._call_chain(self, chain, kind, meth_name, *args)
492 for handler in handlers:
493 func = getattr(handler, meth_name)
--> 494 result = func(*args)
495 if result is not None:
496 return result

File ~/anaconda3/envs/scanet/lib/python3.9/urllib/request.py:641, in HTTPDefaultErrorHandler.http_error_default(self, req, fp, code, msg, hdrs)
640 def http_error_default(self, req, fp, code, msg, hdrs):
--> 641 raise HTTPError(req.full_url, code, msg, hdrs, fp)

HTTPError: HTTP Error 404: Not Found

When looking into download_db.py I see that there are two links for the human and mouse datasets:

dataset_human = "https://wolken.zbh.uni-hamburg.de/index.php/s/DZ3XT76Z7xYAT64/download/human.zip"
dataset_mouse = "https://wolken.zbh.uni-hamburg.de/index.php/s/6syqpRTgDLaQs4t/download/mouse.zip"

I suspect that the problem is here, when trying to download the data from these links. I think there is some kind of access problem, as this is what I see when trying to open the link in a normal browser:

[screenshot of the error page returned in the browser]

Could you please provide some help to solve this? Or maybe an alternative way of downloading the dataset?

Thank you very much

Best
Guillermo
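
Should the archives become available again (or a mirror be provided), a hedged sketch for checking where a manually downloaded human.zip would need to be placed. The key assumption, not confirmed by the traceback above, is that MAIN_PATH in download_db.py resolves to the installed scanet package directory; verify against your copy of the file before relying on it:

import os
import scanet

# Assumption: MAIN_PATH in download_db.py points at the installed package folder.
db_dir = os.path.join(os.path.dirname(scanet.__file__), "databases")
print("Expected databases folder:", db_dir)
print("Contents:", os.listdir(db_dir) if os.path.isdir(db_dir) else "missing")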
