
emg-toolkit's Issues

Bulk download failing to download large numbers of files

bulk_download.py currently doesn't process all available files due to a relatively simple count-iteration bug/typo. I made pull request #14, which I think should fix the issue.

Example of a run that failed: mg-toolkit -d bulk_download -p 5.0 -a MGYS00002401 -g taxonomic_analysis_ssu_rrna

It stops after downloading 150 files: num_results_processed is incremented cumulatively on each loop iteration, so the running total becomes 25 + 50 + 75 + 100 + 125 + 150 = 525, which exceeds the total number of files to download and ends the loop early.
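For reference, here is a schematic of the counting pattern described above (illustrative only, not taken from the actual bulk_download.py source; names and page contents are made up):

def run_buggy(pages, total_results):
    num_results_processed = 0
    page_total = 0
    files_downloaded = 0
    for page in pages:                          # each API page holds e.g. 25 results
        page_total += len(page)                 # running per-page count is never reset
        files_downloaded += len(page)
        num_results_processed += page_total     # bug: adds 25, then 50, then 75, ...
        if num_results_processed >= total_results:
            break                               # exits before every page is fetched
    return files_downloaded

pages = [list(range(25))] * 20                  # 20 pages of 25 results = 500 files in total
print(run_buggy(pages, total_results=500))      # prints 150: the loop stops after only 6 pages

In this schematic the fix is simply to add only len(page) to num_results_processed on each iteration instead of re-adding the cumulative value.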

Bulk download issue with result pagination

I've been trying to download all functional analyses for a study (MGYS00000410). For some analyses that are accessible via the MGnify site, no files were downloaded; one example is MGYA00005084. From the console logging, it doesn't look like the tool attempts the downloads and fails; rather, it moves on after the request to the API without attempting any downloads at all.

For the analyses where no files were downloaded, I noticed that in the API results with the default page_size, items with the group-type 'Functional analysis' don't appear until after the first page. I'm not sure that's the cause, but it held for every case I checked.
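For what it's worth, the pagination can be checked independently of the toolkit with a small script. The group-type attribute is the one mentioned above; the endpoint path, the JSON:API-style "data"/"attributes" layout, and the "links"/"next" pagination key are assumptions about the MGnify API rather than anything taken from bulk_download.py:

import requests

url = "https://www.ebi.ac.uk/metagenomics/api/v1/analyses/MGYA00005084/downloads"
page = 1
while url:
    payload = requests.get(url).json()
    groups = {item["attributes"].get("group-type") for item in payload.get("data", [])}
    print(f"page {page}: group-types = {groups}")
    url = payload.get("links", {}).get("next")   # None once the last page is reached
    page += 1

If 'Functional analysis' only shows up from page 2 onwards, that would support the pagination theory above.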

Failed to download metadata

Hello,
I've been trying to download metadata for this project, ERP005534 / PRJEB6070 (I tried both), but it fails to download any metadata. I waited for 10+ minutes but nothing happened. I then tried the -d option and got this:

$ mg-toolkit -d original_metadata -a ERP005534
DEBUG: Accession ERP005534
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/portal/api/search?result=read_run&query=study_accession%3DERP005534+OR+secondary_study_accession%3DERP005534&fields=run_accession%2Csecondary_sample_accession%2Csample_accession%2Cdepth&format=json HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS433375 HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS433376 HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS433377 HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
...

Then I tried the example in the README and also got the same behavior:

$ mg-toolkit -d original_metadata -a ERP001736
DEBUG: Accession ERP001736
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/portal/api/search?result=read_run&query=study_accession%3DERP001736+OR+secondary_study_accession%3DERP001736&fields=run_accession%2Csecondary_sample_accession%2Csample_accession%2Cdepth&format=json HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS478017 HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS477998 HTTP/1.1" 200 None
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/browser/api/xml/ERS477979 HTTP/1.1" 200 None
....

Does this just mean that there isn't any metadata available for these projects (the "None" in the responses)?
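A quick way to check manually whether a sample has any attributes at all is to request the same ENA XML that appears in the debug log and count the SAMPLE_ATTRIBUTE entries (a diagnostic sketch only; this is not how mg-toolkit itself parses the response):

import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://www.ebi.ac.uk/ena/browser/api/xml/ERS433375")
root = ET.fromstring(resp.content)                 # parse the returned sample XML
attributes = root.findall(".//SAMPLE_ATTRIBUTE")   # SAMPLE_ATTRIBUTE elements hold the sample metadata
print(f"ERS433375: {len(attributes)} sample attributes found")

If this prints a non-zero count, the metadata exists on the ENA side and the problem is more likely in how the tool processes it.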

Regards

mg-toolkit for confidential MGnify study

Dear Ola,

I would like to try out mg-toolkit for retrieving the metadata etc. for our team's study, ERP116156 (I am following the tutorials we learnt at EMBL Metagenomics Bioinformatics 2018), but I receive a traceback error (please see attached metadata_error.txt) when trying the command:

$ mg-toolkit original_metadata -a ERP116156

With the bulk download command, no files are retrieved, but the output otherwise looks normal (please see bulk.txt attached).

I suspect this might be because my study is currently listed as confidential (the command appears to work fine on a public study). If that is the case, might there be a workaround (aside from making it public just yet)?

Any advice would be most appreciated.

Best wishes,

James

P.S. I am not sure whether this constitutes an issue with the package per se, and I apologize if this is not the forum for such a post (I am new to GitHub).

metadata_error.txt
bulk.txt

bulk download metadata

I see that the emg-toolkit can be used to download metadata for an individual study. Is there a way to bulk download metadata for metagenomes from all studies?
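If there is no built-in all-studies mode, the per-study command can be looped over a list of accessions. A minimal sketch using the documented CLI; "accessions.txt" (one study accession per line) is a hypothetical input file:

import subprocess

with open("accessions.txt") as handle:
    for accession in (line.strip() for line in handle if line.strip()):
        # same call as the documented CLI usage, one study at a time
        subprocess.run(["mg-toolkit", "original_metadata", "-a", accession], check=False)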

Add a "--resume" option for the bulk_downloader

MGnify has some very large studies, and downloading those is problematic. With the current implementation, if there is a network issue there is no way to restart the download process using the files that were already downloaded.

This feature will require (this is just a brain dump; a rough sketch of the simplest approach follows the list):

  • Store the tool's progress in an .sqlite DB or a text file (the pages, the download status for each page, how many pages there are, ...)
  • Add a "--resume" flag, or sniff the results folder before starting to download data
  • Use that stored state to resume downloading from where the previous run stopped
  • Verify the downloaded files against their checksums
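A rough sketch of the simplest variant, sniffing the output location and skipping files that already exist; the function name and arguments are illustrative, not the actual bulk_download.py API (which, per the tracebacks elsewhere on this page, downloads via urlretrieve):

import os
from urllib.request import urlretrieve

def download_if_missing(url, output_file_name):
    # skip the download when a non-empty copy of the file is already on disk
    if os.path.exists(output_file_name) and os.path.getsize(output_file_name) > 0:
        return False                        # left over from a previous (interrupted) run
    urlretrieve(url, output_file_name)
    return True

Checksum verification could then be layered on top before deciding whether an existing file really can be skipped.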

emg-toolkit as a python method

Hi,

Awesome work here!

I would like to know how I can use emg-toolkit as a Python method, for example:

from mg_toolkit import original_metadata
original_metadata('ERP001736')

Sorry if I missed this in the documentation.
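Based on the tracebacks quoted elsewhere on this page, the CLI entry point builds a metadata object and calls om.save_to_csv(om.fetch_metadata()), so a direct call from Python might look like the sketch below. The class name OriginalMetadata and its constructor argument are assumptions; check mg_toolkit/metadata.py in your installed version:

from mg_toolkit.metadata import OriginalMetadata

om = OriginalMetadata("ERP001736")          # assumed constructor: one study accession
om.save_to_csv(om.fetch_metadata())         # fetch the sample metadata and write the CSV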

Best regards,
Matin N.

Can't download metadata, 500 error

I am trying to download study metadata and have a list of all the study secondary IDs, in the form ERP###### or SRP######.
When I execute the following command to get data from study ERP001736, I see the following error.

mg-toolkit -d original_metadata -a ERP001736
DEBUG: Accession ERP001736
DEBUG: Starting new HTTPS connection (1): www.ebi.ac.uk:443
DEBUG: https://www.ebi.ac.uk:443 "GET /ena/portal/api/search?result=read_run&query=study_accession%3DERP001736+OR+secondary_study_accession%3DERP001736&fields=run_accession%2Csecondary_sample_accession%2Csample_accession%2Cdepth&format=json HTTP/1.1" 500 10633
ERROR: Error decoding ENA sample_metadata response for accession: ERP001736

I was able to execute the same command a few days ago and it seemed to work, generating a .csv file of useful metadata. I have replicated that workflow step by step and have no idea what the problem is now. Feel free to let me know if this is a temporary server-side issue, or if there is any other command I can try.
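To tell whether this is a transient server-side problem, the same ENA portal API query shown in the debug log can be issued directly and the raw response inspected (a diagnostic sketch, independent of mg-toolkit):

import requests

params = {
    "result": "read_run",
    "query": "study_accession=ERP001736 OR secondary_study_accession=ERP001736",
    "fields": "run_accession,secondary_sample_accession,sample_accession,depth",
    "format": "json",
}
resp = requests.get("https://www.ebi.ac.uk/ena/portal/api/search", params=params)
print(resp.status_code)        # a 500 here confirms the error comes from ENA, not the toolkit
print(resp.text[:500])         # the response body may contain an explanatory message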

Overwrites already downloaded data.

My bulk_download stopped because of an HTTP 500 error:
urllib.error.HTTPError: HTTP Error 500: Internal Server Error

But when I restart the download with the same command, it overwrites the data that was already downloaded. The download had already taken 30 minutes and wasn't even halfway done.

Would it be possible for the code to first check whether data from a failed download already exists and skip those files while downloading?

Command I used to download:
mg-toolkit bulk_download -a MGYS00001225 -g taxonomic_annotations

Versions:

  • python: 3.6.7 (conda-forge)
  • mg-toolkit: 0.6.4

error running example

Trouble running the listed example:


$ conda create -n py3.6 python=3.6

$ conda activate py3.6

$ pip install -U mg-toolkit

$ mg-toolkit original_metadata -a ERP001736
Traceback (most recent call last):
  File "/anaconda2/envs/py3.6/bin/mg-toolkit", line 8, in <module>
    sys.exit(main())
  File "/anaconda2/envs/py3.6/lib/python3.6/site-packages/mg_toolkit/__init__.py", line 198, in main
    return getattr(mg_toolkit, args.tool)(args)
  File "/anaconda2/envs/py3.6/lib/python3.6/site-packages/mg_toolkit/metadata.py", line 46, in original_metadata
    om.save_to_csv(om.fetch_metadata())
  File "/anaconda2/envs/py3.6/lib/python3.6/site-packages/mg_toolkit/metadata.py", line 106, in fetch_metadata
    _meta = self.get_metadata(sample['sample_accession'])
  File "/anaconda2/envs/py3.6/lib/python3.6/site-packages/mg_toolkit/metadata.py", line 71, in get_metadata
    for m in x['ROOT']['SAMPLE']['SAMPLE_ATTRIBUTES']['SAMPLE_ATTRIBUTE']:
KeyError: 'ROOT'
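The error path suggests the parsed XML for at least one sample lacks the expected ROOT/SAMPLE wrapper. A guard along these lines (a sketch of the access pattern shown in the traceback, not a patch to metadata.py) would skip such samples instead of crashing:

def iter_sample_attributes(parsed_xml):
    # walk the same key path as the traceback, but tolerate missing levels
    sample = parsed_xml.get("ROOT", {}).get("SAMPLE", {})
    attributes = sample.get("SAMPLE_ATTRIBUTES", {}).get("SAMPLE_ATTRIBUTE", [])
    if isinstance(attributes, dict):        # a single attribute may parse as a dict, not a list
        attributes = [attributes]
    return attributes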

Data not being found

I'm having the following problem when requesting bulk downloads for certain accession IDs:

0%| | 0/9 [00:00<?, ?it/s]
ERROR: HTTP Error 404: Not Found
Traceback (most recent call last):
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/bin/mg-toolkit", line 8, in <module>
    sys.exit(main())
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/__init__.py", line 198, in main
    return getattr(mg_toolkit, args.tool)(args)
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/bulk_download.py", line 44, in bulk_download
    program.run()
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/bulk_download.py", line 213, in run
    num_results_processed = self._process_page(res, progress_bar)
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/bulk_download.py", line 253, in _process_page
    self.download_file(
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/bulk_download.py", line 163, in download_file
    BulkDownloader.download_resource_by_url(
  File "/nfs/sw/ebi-metagenomics/ebi-metagenomics-0.6.5/python/lib/python3.8/site-packages/mg_toolkit/bulk_download.py", line 125, in download_resource_by_url
    urlretrieve(url, output_file_name)
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 640, in http_response
    response = self.parent.error(
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/nfs/sw/python/python-3.8.3/lib/python3.8/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

download metadata error

Hello,
I'm trying to use mg-toolkit (version 0.10.0) to fetch metadata for a large project, PRJEB11419. After hours of execution I get the following error:
(error screenshot attached: mg-toolkit_error)

I have tried with other projects, and I have only been able to reproduce this error with this particular dataset.
Thanks in advance for any help that you can provide!
