jlab-code / methylstar

A fast and robust pre-processing pipeline for bulk or single-cell whole-genome bisulfite sequencing (WGBS) data.

License: GNU General Public License v3.0

Languages: Python 43.93%, Shell 41.89%, R 12.07%, Dockerfile 2.11%
Topics: genome, bismark, methylation, dna-methylation, wgbs, pipeline

methylstar's Introduction

shahryary/MethylStar

A fast and robust pre-processing pipeline for bulk or single-cell whole-genome bisulfite sequencing (WGBS) data

To process a large number of WGBS samples in a consistent, documented and reproducible manner, it is advisable to use a pipeline system. MethylStar is a fast, stable and flexible pre-processing pipeline for bulk or single-cell (de-multiplexed) WGBS data.

Key features

(1) MethylStar provides a user-friendly interface for experts and non-experts alike and runs in a Unix-based environment.

(2) Offers efficient memory usage and multithreading/multi-core processing support during all pipeline steps.

(3) Gives the user greater flexibility to adjust parameters and to execute or re-execute individual steps.

(4) Generates standard outputs for downstream analysis (formats compatible with DMR callers such as methylKit and DMRcaller) and for visualisation in genome browsers (bedGraph/BigWig).

Pipeline Steps

In its current implementation, MethylStar comprises the following core NGS components:

(A) Trimmomatic: A flexible, Java-based read-trimming tool for processing raw FASTQ reads from both single- and paired-end data.

(B) Bismark: Alignment, removal of PCR duplicates and cytosine context extraction are performed with the Bismark software suite. Alignments can be performed for both WGBS and post-bisulfite adapter tagging (PBAT) approaches for single-cell libraries. Bisulfite-treated reads are mapped with the short-read aligner Bowtie 2, so Bowtie 1 or Bowtie 2 must also be installed on your machine. Bismark also requires SAMtools to be pre-installed.

(C) FastQC and bedtools: Tools for assessing data quality.

(D) METHimpute: Cytosine-level methylation calls can be obtained with METHimpute, a Bioconductor package for inferring the methylation status/level of individual cytosines, even in the presence of low sequencing depth and/or missing data.

Note: For information on specific software versions, please refer to the Installation and Configuration section.
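
For orientation, the steps above correspond roughly to the following manual command sequence. This is only a hedged sketch with placeholder file names and illustrative parameters, not the exact commands MethylStar issues internally:

  # (A) Adapter and quality trimming of paired-end reads with Trimmomatic
  java -jar trimmomatic-0.39.jar PE -threads 8 \
      sample_R1.fastq.gz sample_R2.fastq.gz \
      sample_R1.trim.fastq.gz sample_R1.unpaired.fastq.gz \
      sample_R2.trim.fastq.gz sample_R2.unpaired.fastq.gz \
      ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 LEADING:20 TRAILING:20 SLIDINGWINDOW:4:20 MINLEN:36

  # (B) Bisulfite alignment, deduplication and cytosine context extraction with Bismark
  bismark_genome_preparation /path/to/genome/
  bismark --genome /path/to/genome/ -1 sample_R1.trim.fastq.gz -2 sample_R2.trim.fastq.gz -o bismark-out/
  deduplicate_bismark -p --bam bismark-out/sample_pe.bam
  bismark_methylation_extractor -p --comprehensive --cytosine_report --CX \
      --genome_folder /path/to/genome/ sample_pe.deduplicated.bam

  # (C) Quality reports with FastQC
  fastqc sample_R1.trim.fastq.gz sample_R2.trim.fastq.gz

  # (D) Cytosine-level methylation calls with METHimpute, run from R on the CX report
  #     (e.g. methimpute::importBismark() followed by methimpute::callMethylation())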

Documentation

  1. Installation and Configuration
  2. Running The Pipeline
  3. MethylStar tutorial on YouTube (https://www.youtube.com/watch?v=ll8mbPjVwnM)
  4. Published paper - BMC Genomics

methylstar's People

Contributors

lardenoije, shahryary, talha-tum


methylstar's Issues

Bismark-mapper error

Hi team,
I have set up MethylStar on my system. Thanks for the simple and seamless process. While running the pipeline, there is an issue in the Bismark mapping step. The Trimmomatic and genome-indexing steps run fine. The output log is somewhat similar to issue #1, but that solution does not apply to my problem. I have attached the log file. Kindly let me know your thoughts.

Sample1.log
Thanks,
Jeff

Issue with Methimpute-Part

Hi,
thank you for your really helpful pipeline. I am trying to use the pipeline in your prepared Docker image, but I get stuck with an error message in the Methimpute part:

HMM: Convergence reached!
Time spent in Baum-Welch: 1574.5s
Compiling results ... 31.02s
ERROR : Can't join on 'seqnames' x 'seqnames' because of incompatible types (integer / character) 
Warning messages:
1: In nls.lm(par = start, fn = FCT, jac = jac, control = control, lower = lower,  :
  lmdif: info = -1. Number of iterations has reached `maxiter' == 50.

2: In nls.lm(par = start, fn = FCT, jac = jac, control = control, lower = lower,  :
  lmdif: info = -1. Number of iterations has reached `maxiter' == 50.

sort: cannot read: /mnt/meth2/Results_Methylstar/methimpute-out/file-processed.lst: No such file or directory

I use a customized Arabidopsis reference genome without mitochondrial and chloroplast sequences.
Can you tell me which file causes the error so I can look at it more closely? I just can't find this in your code. Or have you perhaps run into the same problem before?
Thank you very much
Laura

BigWig production file issue

Hi @shahryary,

I'm trying to convert bedGraph to BigWig using your pipeline, but I get an error, as you can see below, and no BigWig is produced at the end:

==================================================
Please choose from the menu:

	1. Convert Methimpute output to DMRCaller Format
	2. Convert Methimpute output to Methylkit Format
	3. Convert Methimpute output to bedGraph Format
	4. Convert bedGraph to BigWig Format

B. Back to main Menu

>>  4
converting bedGraph format to Bigwig format ...

Do you want continue to run? [y/n] y
Starting to convert to bigWig format ...
ls: cannot access '/NetScratch/cpichot/WGBS_analysis/Zebularine_treatment_out/rdata/*_chr_all.txt': No such file or directory
Running in single mode. (Parallel is disabled.)
Running for methylome_Mock_FDLM202341331-1a_All-CG ...
bedGraphToBigWig v 4 - Convert a bedGraph file to bigWig format.
usage:
   bedGraphToBigWig in.bedGraph chrom.sizes out.bw
where in.bedGraph is a four column file in the format:
      <chrom> <start> <end> <value>
and chrom.sizes is a two-column file/URL: <chromosome name> <size in bases>
and out.bw is the output indexed big wig file.
If the assembly <db> is hosted by UCSC, chrom.sizes can be a URL like
  http://hgdownload.soe.ucsc.edu/goldenPath/<db>/bigZips/<db>.chrom.sizes
or you may use the script fetchChromSizes to download the chrom.sizes file.
If not hosted by UCSC, a chrom.sizes file can be generated by running
twoBitInfo on the assembly .2bit file.
The input bedGraph file must be sorted, use the unix sort command:
  sort -k1,1 -k2,2n unsorted.bedGraph > sorted.bedGraph
options:
   -blockSize=N - Number of items to bundle in r-tree.  Default 256
   -itemsPerSlot=N - Number of data points bundled at lowest level. Default 1024
   -unc - If set, do not use compression.
Running for methylome_Mock_FDLM202341331-1a_All-CHG ...
(same bedGraphToBigWig usage message as above)
Running for methylome_Mock_FDLM202341331-1a_All-CHH ...
(same bedGraphToBigWig usage message as above)
Running for methylome_R150mM_FDLM202341332-1a_All-CG ...
(same bedGraphToBigWig usage message as above)
Running for methylome_R150mM_FDLM202341332-1a_All-CHG ...
(same bedGraphToBigWig usage message as above)
Running for methylome_R150mM_FDLM202341332-1a_All-CHH ...
(same bedGraphToBigWig usage message as above)
Converted files. finished in 0 minutes.
You can find the results in /NetScratch/cpichot/WGBS_analysis/Zebularine_treatment_out/bigwig-format folder.

Processing files is finished.

Please, press ENTER to continue ...

Do you have any suggestions, please?

Thanks
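
For reference, the two missing pieces (a chrom.sizes file and sorted bedGraph input) can also be prepared by hand and the conversion run outside the pipeline; a minimal sketch, assuming the reference FASTA is genome.fa and using placeholder file names:

  # Build a chrom.sizes file (chromosome name, length) from the reference FASTA
  samtools faidx genome.fa
  cut -f1,2 genome.fa.fai > chrom.sizes

  # bedGraphToBigWig requires the bedGraph to be sorted by chromosome, then start position
  sort -k1,1 -k2,2n methylome_All-CG.bedGraph > methylome_All-CG.sorted.bedGraph

  # Convert to BigWig
  bedGraphToBigWig methylome_All-CG.sorted.bedGraph chrom.sizes methylome_All-CG.bw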

Unable to install MethylStar

Hi,

There seems to be a problem with the Docker image; I am unable to download MethylStar:

wget www.jlabdata.org/methylstar.tar.gz
--2022-08-31 14:09:11-- http://www.jlabdata.org/methylstar.tar.gz
Resolving www.jlabdata.org (www.jlabdata.org)... 129.187.153.217
Connecting to www.jlabdata.org (www.jlabdata.org)|129.187.153.217|:80... failed: Connection timed out.
Retrying.

While trying to install through git clone, I am getting the following error:

$ python2 run.py
cp: cannot create regular file '/home/': Not a directory
chmod: cannot access '/home/ins_parallel.sh': No such file or directory
sh: 0: cannot open /home/ins_parallel.sh: No such file
Traceback (most recent call last):
File "run.py", line 35, in
os.remove('/home/' + user + '/ins_parallel.sh')
OSError: [Errno 2] No such file or directory: '/home/ins_parallel.sh'

Could you please help me with this?

No files found

Hi,

I found your MethylStar paper and pipeline while searching for straightforward methods for WGBS data preprocessing, and I was particularly drawn to it since it is specifically mentioned that it is suitable for both experts and non-experts (I fall in the latter category). I am very used to working with R, but I am completely new to the Ubuntu/bash environment, so my question might be rather dumb!

While changing the configuration to set the proper paths to each requested file, I noticed that MethylStar is not recognizing my .fastq files; any idea why that would be?
After specifying the corresponding path in the first configuration step, "1. Path: RAW files", I get the following messages:

Founded 0 files in the directory.

Also, the size of your data-set almost: 4.8G

So, my 920 .fastq files do take up almost 4.8G but MethylStar does not seem to be recognizing the individual files as suitable input raw files.

I have tried running the first step of the pipeline (trimmomatic) but I get the following error messages:
Starting Trimmomatic ...
Trimmomatic finished. Total time 0 Minutes.
sort: cannot read: /home/palmagudiel/export_results/trimmomatic-files/list-finished.lst: No such file or directory
rm: cannot remove '/home/palmagudiel/export_results/trimmomatic-files/list-finished.lst': No such file or directory
Something went wrong...
something is going wrong... please run again.

But I don't really know if that's related to the "founded 0 files" issue or if it's completely unrelated.

Thank you very much in advance!!
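
As a quick sanity check (the path below is a placeholder), it can help to count how many files in the RAW directory match each extension, since a pattern such as .fastq, .fastq.gz or .fq.gz has to match the file names exactly for them to be picked up:

  ls /path/to/raw/*.fastq    2>/dev/null | wc -l
  ls /path/to/raw/*.fastq.gz 2>/dev/null | wc -l
  ls /path/to/raw/*.fq.gz    2>/dev/null | wc -l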

How can I make TEs.Rdata in mouse?

Thanks for developing this good pipeline.
I am running MethylStar on mouse WGBS data, but I got the following error in the Methimpute step.

ERROR : undefined columns selected
[1] "Running...../cx-reports/301-100_2.CX_report.txt"
Reading file ../cx-reports/301-100_2.CX_report.txt ..../src/bash/methimpute.sh: line 9: 11405 Killed Rscript ./src/bash/methimpute.R $result_pipeline $genome_ref $genome_name $tmp_rdata $intermediate $fit_output $enrichment_plot $full_report $context_report $intermediate_mode --no-save --no-restore --verbose
sort: cannot read: /results/methimpute-out/file-processed.lst: No such file or directory

I suspect this problem might be due to the TEs.RData file format.
I changed the Arabidopsis thaliana genes.RData to a mouse genes.RData as you suggested in the manual, but I didn't find the corresponding code for TEs.RData.
So I made TEs.RData from GRCm38_Ensembl_rmsk_TE.gtf (http://hammelllab.labsites.cshl.edu/software/#TEtranscripts) like this:

library(rtracklayer)
file2 <- "GRCm38_Ensembl_rmsk_TE.gtf"
mygtf <- import(file2)                            # read the TE annotation GTF as a GRanges object
names(mygtf) <- elementMetadata(mygtf)$family_id  # label each range with its TE family ID
save(mygtf, file="TEs.RData")                     # save the GRanges object as TEs.RData

Could you let me know how to make TEs.RData?

Couldn't find any coverage file starting to run the bismark meth extractor ..

Hello,

After running deduplication, I ran Genome coverage & Sequencing depth.
I got the warning: WARNING: Genome (-g) files are ignored when BAM input is provided.

Now I run the Bismark Methylation Extractor and get:
Note: Cytosine Calls (cx-reports) will start automatically after Methylation Extractor
Couldn't find any coverage file, starting to run the bismark meth extractor ..

What is wrong, how do I obtain a 'coverage file', and why is the sorted BAM file deleted after 'Genome coverage & Sequencing depth' finishes?

Thank you for help,
Olaf.
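
For context (not a confirmed diagnosis of this particular run): Bismark coverage files (*.bismark.cov.gz) are normally produced by the methylation extractor itself when it is run with --bedGraph, so the pipeline message above appears to note that none existed yet before it starts the extractor. A minimal manual invocation on a deduplicated BAM might look like this (paths are placeholders):

  bismark_methylation_extractor -p --comprehensive --bedGraph --cytosine_report --CX \
      --genome_folder /path/to/genome/ sample_pe.deduplicated.bam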

NoSectionError: No section: 'GENERAL'

Unfortunately, on two different Linux machines, I am getting the same Python error with clean installs ...
By the way, I am getting the exact same error when I run the Docker instance through Singularity.

C >> 1


    *** In this part you can specify your data-set location ***

If you have data-set in pair-end mode, you have to give the pattern of extension.

ERROR:root:Traceback (most recent call last):
  File "/project/6029819/marcovth/ecoli/MethylStar/src/py/configuration.py", line 219, in raw_dataset
    if confirm("GENERAL", "raw_dataset", 3):
  File "/project/6029819/marcovth/ecoli/MethylStar/src/py/configuration.py", line 191, in confirm
    str_conf = read_config(config_section, config_value)
  File "/project/6029819/marcovth/ecoli/MethylStar/src/py/globalParameters.py", line 87, in read_config
    val_str = config.get(section, get_string)
  File "/home/marcovth/miniconda3/envs/MethylStar/lib/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
NoSectionError: No section: 'GENERAL'

Something is going wrong... please run again.
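
For what it's worth, NoSectionError from Python's ConfigParser generally means the configuration file it tried to read was missing, unreadable, or does not contain that section (ConfigParser.read() silently skips files it cannot open). A quick check is whether the checked-out copy contains a [GENERAL] section anywhere, searching the whole clone since the exact config path is not shown in the traceback:

  grep -rn '^\[GENERAL\]' MethylStar/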

Trimmomatic

Hi,
I am trying to run spruce WGBS through your pipeline.

Is it possible to run the pipeline without Trimmomatic? My reads are of fairly high quality and the adapters have already been trimmed. I am also having trouble with Trimmomatic not producing any output; see below.

Thanks!
Melissa

1
Configuration Summary:

Configured Java location: /usr/bin/java
Trimmomatic path: /home/software/Trimmomatic-0.38
Trimmomatic Adapter: /home/software/Trimmomatic-0.38/adapters/TruSeq3-PE.fa
Trimmomatic Running mode: PE
Trimmomatic ILLUMINACLIP: 1:30:9
Trimmomatic LEADING: 20
Trimmomatic TRAILING: 20
Trimmomatic SLIDINGWINDOW: 4:20
Trimmomatic MINLEN: 36
Trimmomatic Threading: 16
Parallel mode is: Disabled

Do you want continue to run? [y/n] y

Running Trimmomatic Part...

Starting Trimmomatic ...
Running Trimmomatic for /data/V300033699_L1_PL2003110001-2_2.fq.gz
V300033699_L1_PL2003110001-2_2 : at java.base/java.io.FileInputStream.(FileInputStream.java:157)
Trimmomatic finished. Duration 0 Minutes.
Running Trimmomatic for /data/V300033699_L1_PL2003110001-2_1.fq.gz
V300033699_L1_PL2003110001-2_1 : at java.base/java.io.FileInputStream.(FileInputStream.java:157)
Trimmomatic finished. Duration 0 Minutes.
Trimmomatic finished. Total time 0 Minutes.

Processing files is finished, You can check the log files in Menu, part 'Trimmomatic-log'

2


    *** Running FastQC Report Part ***

Configuration Summary:

  • Fastq Path: /home/software/FastQC/fastqc
  • Parallel mode is: Disabled

Do you want continue to run? [y/n] y

Running FastQC reports ...
ls: cannot access '/results/trimmomatic-files/*.gz': No such file or directory

Starting Fasqc-report ...
Running in single mode. (parallel mode disabled.)

QC report finished in 0 minutes.
You can find the results in /results/qc-fastq-reports folder.

sort: cannot read: /results/qc-fastq-reports/list-finished.lst: No such file or directory
rm: cannot remove '/results/qc-fastq-reports/list-finished.lst': No such file or directory

Processing files are finished.

Please, press ENTER to continue ...

Python NoSectionError while browsing menu

Hi,

I get an error when I try to configure the pipeline. When starting MethylStar, I go to C. Configuration > 1. Path: RAW files, after which I get this error:

ERROR:root:Traceback (most recent call last):
  File "MethylStar/src/py/configuration.py", line 219, in raw_dataset
    if confirm("GENERAL", "raw_dataset", 3):
  File "MethylStar/src/py/configuration.py", line 191, in confirm
    str_conf = read_config(config_section, config_value)
  File "MethylStar/src/py/globalParameters.py", line 87, in read_config
    val_str = config.get(section, get_string)
  File "/usr/lib64/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
NoSectionError: No section: 'GENERAL'

Other menu items give similar errors. I am using Python version 2.7.5 on a CentOS 7 cluster.
Do you maybe know how to resolve this?

Thanks,
Roy

Can I use methylstar in mouse?

It seems that only TAIR10_chr_all.fa is used as the reference genome. Do you have any plans to put the mm10 mouse genome or the hg38 human genome into your Docker image?

Running last step - Methimpute troubleshooting

Hi @shahryary,

Thanks for your last response, which I didn't get to reply to before the issue was closed.

I have continued the analysis using your pipeline. When I launch Methimpute, I get an error message directly after the launch; see below the error message from the shell console:

---------------------------------------------------------------------------

	*** Running Methimpute Part ***

Configuration Summary:
 
- Intermediate: Enabled
- Fit reports: Enabled
- Enrichment reports: Enabled
- Full reports: Enabled

================================================================================

Running Methimpute Part...
Found reference chromosome file.

Error in contrib.url(repos, type) : 
  trying to use CRAN without setting a mirror
Calls: source ... eval -> req_pkg -> install.packages -> grep -> contrib.url
Execution halted
sort: cannot read: /NetScratch/cpichot/WGBS_analysis/Zebularine_treatment_out/methimpute-out/file-processed.lst: No such file or directory
rm: cannot remove '/NetScratch/cpichot/WGBS_analysis/Zebularine_treatment_out/methimpute-out/file-processed.lst': No such file or directory
(535, '5.7.8 Username and Password not accepted. Learn more at\n5.7.8  https://support.google.com/mail/?p=BadCredentials o18sm8299173wme.19 - gsmtp')
Something went wrong...
something is going wrong... please run again. 

Please, press ENTER to continue ...

Do you have any idea?

Thanks in advance,
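
"trying to use CRAN without setting a mirror" is a generic R error raised when install.packages() runs non-interactively without a default CRAN mirror configured. One common workaround, independent of this pipeline, is to set a mirror in the user's ~/.Rprofile so that Rscript sessions can install packages:

  echo 'options(repos = c(CRAN = "https://cloud.r-project.org"))' >> ~/.Rprofile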

Error while running Bismark-Mapper

Dear madam/sir,

Currently, I am setting up all the pre-processing steps for analyzing my cell-free DNA sequencing data, which is expected to arrive any time soon. While exploring all the possibilities for processing these WGBS samples, I came across your paper. As I am still an inexperienced user of Ubuntu, I was quite impressed by your user-friendly interface, which is perfectly complemented by the associated GitHub protocol.

So far I have been able to run Trimmomatic and FastQC on some practice data. However, I am getting an error at the Bismark Mapper step. It looks like bismark_genome_preparation worked (I ran it separately from your pipeline because of memory issues), but when the single-end alignment starts, it keeps giving the same error; see the attached .log file:

SRR6294810.log

I think there should be an easy solution, as the error seems to occur with the simple "gzip" tool.

I used the following configuration set-up:

Configuration

Hoping to hear from you soon.

Kind regards,
Tim

Error: samtools broken pipe

Dear Sir,
I am running the deduplication step and encountered the error reported in the log file:

Output will be written into the directory: /520_info1/project/arora/2.pig/wgbs/wg/bismark-deduplicate/
Processing paired-end Bismark output file(s) (SAM format):
/520_info1/project/arora/2.pig/wgbs/wg/bismark-mappers/1_6.bam

If there are several alignments to a single position in the genome the first alignment will be chosen. Since the input files are not in any way sorted this is a near-enough random selection of reads.

Checking file >>/520_info1/project/arora/2.pig/wgbs/wg/bismark-mappers/1_6.bam<< for signs of file truncation...

samtools view: writing to standard output failed: Broken pipe
samtools view: error closing standard output: -1

Now testing Bismark result file /520_info1/project/arora/2.pig/wgbs/wg/bismark-mappers/1_6.bam for positional sorting (which would be bad...) ...passed!
Output file is: 1_6.deduplicated.bam

skipping header line: @hd VN:1.0 SO:unsorted
skipping header line: @sq SN:1 LN:274330532
skipping header line: @sq SN:2 LN:151935994

Although this finally produced an output BAM file, while running the Bismark methylation extractor I again encountered:

samtools view: writing to standard output failed: Broken pipe
samtools view: error closing standard output: -1

May I know what is going wrong in these steps? I am running SAMtools version 1.14.

Genome not found

Hello,

I have properly formatted all of the parameters necessary for running the program (see below), but somehow am still getting the error message:
./src/bash/preparing.sh: 5: ./src/bash/tmp.conf: genome/: not found

Any ideas on why this is happening? My ref genome is hg38 formatted as a .fa file from UCSC.

============================================================
Here is summary of configuration parameters:

  • RAW files location: /media/grayson/data1/grayson/WGBS
  • Number and Size of the data-set: 64 Files and Total size: 566.0 Gigabyte
  • The directory of results: /
  • Genome type: Human
  • Genome folder location: /media/grayson/data1/grayson/Reference genome/
    -- Genome Reference name: hg38.fa
  • Paired End: Enabled
  • Trimmomatic location: /bin/Trimmomatic-Src-0.39/trimmomatic-0.39
    -- JAVA path: /usr/bin/java
    -- ILLUMINACLIP: /bin/Trimmomatic-Src-0.39/trimmomatic-0.39/adapters/TruSeq3-PE.fa:1:30:9
    -- LEADING: 20
    -- TRAILING: 20
    -- SLIDINGWINDOW: 4:20
    -- MINLEN: 36
    -- Number of Threads: 8
  • QC-Fastq path: /bin/fastqc_v0.11.8/FastQC/fastqc
  • Bismark parameters: /bin/Bismark-0.19.1
    -- scBS-Seq (--pbat)? Disabled
    -- Nucleotide status: false
    -- Number of Parallel: 8 Threads.
    -- Buffer size: 40 Gigabyte.
    -- Samtools Path: /bin/samtools
    -- Bedtools Path: /usr/bin/bedtools
    -- Intermediate for MethExtractor: Disabled
  • Methylation extraction parameters( Only for quick run)
    -- Minimum read coverage: 1
  • Methimpute Part:
    -- Methimpute Intermediate : Enabled
    -- Methimpute probability(Intermediate): constrained
    -- Methimpute Fit reports: Disabled
    -- Methimpute Enrichment plots: Disabled
    -- Methimpute Full report: Disabled
    -- Methimpute Context: All
  • Parallel mode is: Disabled
  • E-mail notification: Disabled
  • MethylStar version: 1.1.1
Thanks!
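
One detail worth checking (a guess, not a confirmed diagnosis): the genome folder path contains a space ("Reference genome/"), and unquoted variables in shell scripts split on spaces, which can yield ": not found"-style errors. A minimal illustration with a hypothetical path:

  GENOME_DIR="/media/grayson/data1/grayson/Reference genome/"
  ls $GENOME_DIR      # unquoted: expands to two words, ".../Reference" and "genome/"
  ls "$GENOME_DIR"    # quoted: treated as a single path
  # Renaming the folder to avoid the space (e.g. reference_genome) sidesteps the issue entirely.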

MethImpute run error

Hello

I am running the MethylStar pipeline using mm10 as the reference. I got the error below:

[1] "It's first time you are running Methimpute for this data-set!"
Scanning for ambiguous nucleotides ... 263.42s
Extracting cytosines from forward strand ...Error: cannot allocate vector of size 1.8 Gb
Execution halted
Warning message:
system call failed: Cannot allocate memory
sort: cannot read: /results/methimpute-out/file-processed.lst: No such file or directory
./src/bash/methimpute.sh: line 12: [: too many arguments

I am running MethylStar in docker.

Any help regarding this error is appreciated. Thanks in advance.
