nf-core / airrflow

B-cell and T-cell Adaptive Immune Receptor Repertoire (AIRR) sequencing analysis pipeline using the Immcantation framework

Home Page: https://nf-co.re/airrflow

License: MIT License

Languages: HTML 0.89%, R 5.30%, Python 12.95%, Nextflow 75.75%, Shell 4.74%, CSS 0.37%
Topics: b-cell, immcantation, immunorepertoire, repseq, nf-core, nextflow, workflow, pipeline, airr

airrflow's Introduction

nf-core/airrflow


Introduction

nf-core/airrflow is a bioinformatics best-practice pipeline for analyzing B-cell and T-cell repertoire sequencing data. It makes use of the Immcantation toolset. The input data can be targeted amplicon bulk sequencing data of the V, D, J and C regions of the B/T-cell receptor generated with multiplex PCR or 5' RACE protocols, single-cell VDJ sequencing data from 10x Genomics libraries, or already assembled reads (bulk or single-cell).

nf-core/airrflow overview

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

nf-core/airrflow allows the end-to-end processing of BCR and TCR bulk and single-cell targeted sequencing data. Several protocols are supported; please see the usage documentation for more details on the supported protocols. The pipeline has been certified as AIRR compliant by the AIRR Community, which means that it is compatible with downstream analysis tools that also support this format.

nf-core/airrflow overview

  1. QC and sequence assembly
  • Bulk
    • Raw read quality control, adapter trimming and clipping (Fastp).
    • Filter sequences by base quality (pRESTO FilterSeq; see the example commands sketched after this list).
    • Mask amplicon primers (pRESTO MaskPrimers).
    • Pair read mates (pRESTO PairSeq).
    • For UMI-based sequencing:
      • Cluster sequences according to similarity, optionally used when UMI diversity is insufficient (pRESTO ClusterSets).
      • Build consensus of sequences with the same UMI barcode (pRESTO BuildConsensus).
    • Assemble R1 and R2 read mates (pRESTO AssemblePairs).
    • Remove and annotate read duplicates (pRESTO CollapseSeq).
    • Filter out sequences that do not have at least 2 duplicates (pRESTO SplitSeq).
  • Single-cell
    • cellranger vdj
      • Assemble contigs
      • Annotate contigs
      • Call cells
      • Generate clonotypes
  2. V(D)J annotation and filtering (bulk and single-cell)
  • Assign gene segments with IgBlast using a germline reference (Change-O AssignGenes).
  • Annotate alignments in AIRR format (Change-O MakeDb).
  • Filter by alignment quality (locus matching the v_call chain, at least 200 informative positions, at most 10% N nucleotides).
  • Filter for productive sequences (Change-O ParseDb split).
  • Filter for junction lengths that are a multiple of 3.
  • Annotate metadata (EnchantR).
  3. QC filtering (bulk and single-cell)
  • Bulk sequencing filtering:
    • Remove chimeric sequences (optional) (SHazaM, EnchantR)
    • Detect cross-contamination (optional) (EnchantR)
    • Collapse duplicates (Alakazam, EnchantR)
  • Single-cell QC filtering (EnchantR)
    • Remove cells without heavy chains.
    • Remove cells with multiple heavy chains.
    • Remove sequences in different samples that share the same cell_id and nucleotide sequence.
    • Modify cell_ids to ensure they are unique in the project.
  4. Clonal analysis (bulk and single-cell)
  • Find threshold for clone definition (SHazaM, EnchantR).
  • Create germlines and define clones, repertoire analysis (SCOPer, EnchantR).
  • Build lineage trees (Dowser, IgphyML, RAxML, EnchantR).
  5. Repertoire analysis and reporting
  • Custom repertoire analysis pipeline report (Alakazam).
  • Aggregate QC reports (MultiQC).
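
The pipeline runs all of these tools internally, but it can help to see what individual steps look like on the command line. A minimal sketch of two representative steps (quality filtering with pRESTO and gene assignment with Change-O), assuming illustrative file names and a pre-built IgBlast database under ~/share/igblast:

# Discard reads with mean Phred quality below 20 (pRESTO FilterSeq)
FilterSeq.py quality -s sample01_R1.fastq -q 20 --outname sample01

# Assign V(D)J gene segments against germline references (Change-O AssignGenes)
AssignGenes.py igblast -s sample01.fasta -b ~/share/igblast \
--organism human --loci ig --format blast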

Usage

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

First, ensure that the pipeline tests run on your infrastructure:

nextflow run nf-core/airrflow -profile test,<docker/singularity/podman/shifter/charliecloud/conda/institute> --outdir <OUTDIR>

To run nf-core/airrflow with your data, prepare a tab-separated samplesheet with your input data. Depending on the input data type (bulk or single-cell, raw reads or assembled reads), the input samplesheet will vary. Please follow the documentation on samplesheets for more details. An example samplesheet for running the pipeline on bulk BCR/TCR sequencing data in fastq format looks as follows:

sample_id filename_R1 filename_R2 filename_I1 subject_id species pcr_target_locus tissue sex age biomaterial_provider single_cell intervention collection_time_point_relative cell_subset
sample01 sample1_S8_L001_R1_001.fastq.gz sample1_S8_L001_R2_001.fastq.gz sample1_S8_L001_I1_001.fastq.gz Subject02 human IG blood NA 53 sequencing_facility FALSE Drug_treatment Baseline plasmablasts
sample02 sample2_S8_L001_R1_001.fastq.gz sample2_S8_L001_R2_001.fastq.gz sample2_S8_L001_I1_001.fastq.gz Subject02 human TR blood female 78 sequencing_facility FALSE Drug_treatment Baseline plasmablasts

Each row represents a sample with fastq files (paired-end).
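
Because the samplesheet must be tab-separated, a quick sanity check of the column counts can catch formatting problems (for instance, accidental spaces instead of tabs) before launching the pipeline; the file name matches the commands below:

# Report any row whose tab-separated column count differs from the header
awk -F'\t' 'NR==1 {n=NF} NF!=n {print "row " NR ": " NF " columns, expected " n}' input_samplesheet.tsv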

A typical command to run the pipeline from bulk raw fastq files is:

nextflow run nf-core/airrflow \
-r <release> \
-profile <docker/singularity/podman/shifter/charliecloud/conda/institute> \
--mode fastq \
--input input_samplesheet.tsv \
--library_generation_method specific_pcr_umi \
--cprimers CPrimers.fasta \
--vprimers VPrimers.fasta \
--umi_length 12 \
--umi_position R1 \
--outdir ./results

For common bulk sequencing protocols we provide preset profiles that specify primers, UMI length and other settings for commercially available kits. Please check the supported protocol profiles for a full list of available profiles. An example command running the NEBNext UMI protocol profile with Docker containers is:

nextflow run nf-core/airrflow \
-profile nebnext_umi,docker \
--mode fastq \
--input input_samplesheet.tsv \
--outdir results

A typical command to run the pipeline from single-cell raw fastq files (10x Genomics) is:

nextflow run nf-core/airrflow -r dev \
-profile <docker/singularity/podman/shifter/charliecloud/conda/institute> \
--mode fastq \
--input input_samplesheet.tsv \
--library_generation_method sc_10x_genomics \
--reference_10x reference/refdata-cellranger-vdj-GRCh38-alts-ensembl-5.0.0.tar.gz \
--outdir ./results
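
The --reference_10x parameter expects a Cell Ranger V(D)J reference archive like the one named above. If you do not have it locally, it can be downloaded from 10x Genomics; the exact URL below follows the usual 10x download pattern but is an assumption and should be verified against their documentation:

# Download the GRCh38 V(D)J reference used above (verify the URL on the 10x Genomics site)
wget https://cf.10xgenomics.com/supp/cell-vdj/refdata-cellranger-vdj-GRCh38-alts-ensembl-5.0.0.tar.gz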

A typical command to run the pipeline from single-cell AIRR rearrangement tables or assembled bulk sequencing fasta data is:

nextflow run nf-core/airrflow \
-r <release> \
-profile <docker/singularity/podman/shifter/charliecloud/conda/institute> \
--input input_samplesheet.tsv \
--mode assembled \
--outdir results

See the usage documentation and the parameter documentation for more details on how to use the pipeline and all the available parameters.

Warning

Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
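
For example, the bulk fastq parameters shown earlier can be collected in a YAML file and passed with -params-file; the file name and parameter values below are illustrative:

cat > airrflow_params.yaml << 'EOF'
mode: fastq
input: input_samplesheet.tsv
library_generation_method: specific_pcr_umi
cprimers: CPrimers.fasta
vprimers: VPrimers.fasta
umi_length: 12
umi_position: R1
outdir: ./results
EOF

nextflow run nf-core/airrflow -r <release> -profile docker -params-file airrflow_params.yaml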

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of a test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/airrflow was originally written by:

We thank the following people for their extensive assistance in the development of the pipeline:

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #airrflow channel (you can join with this invite).

Citations

If you use nf-core/airrflow for your analysis, please cite the preprint as follows:

nf-core/airrflow: an adaptive immune receptor repertoire analysis workflow employing the Immcantation framework

Gisela Gabernet, Susanna Marquez, Robert Bjornson, Alexander Peltzer, Hailong Meng, Edel Aron, Noah Y. Lee, Cole Jensen, David Ladd, Friederike Hanssen, Simon Heumos, nf-core community, Gur Yaari, Markus C. Kowarik, Sven Nahnsen, Steven H. Kleinstein.

BioRxiv. 2024. doi: 10.1101/2024.01.18.576147.

You can cite the specific pipeline version using the following DOI: 10.5281/zenodo.2642009

Please also cite all the tools that are being used by the pipeline. An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

airrflow's People

Contributors

adamrtalbot, apeltzer, dladd, drpatelh, ewels, friederikehanssen, ggabernet, kevinmenden, mapo9, maxulysse, mdeboth, nf-core-bot, ssnn-airr, subwaystation, tbugfinder


airrflow's Issues

Error when executing pipeline on AWS with fusion mounts

Hi, this is kojix2.

I'm a newbie and have very little understanding of what bcellmagic is for.
First, to understand what bcellmagic can do, I ran -profile test,docker in Nextflow Tower and got the following error.

What are the possible causes of this?

 Workflow execution completed unsuccessfully

The exit status of the task that caused the workflow execution to fail was: 1

Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:ALAKAZAM_SHAZAM_REPERTOIRES (report)'

Caused by:
  Essential container in task exited

Command executed:

  execute_report.R repertoire_comparison.Rmd
  Rscript -e "library(alakazam); write(x=as.character(packageVersion('alakazam')), file='alakazam.version.txt')"
  Rscript -e "library(shazam); write(x=as.character(packageVersion('shazam')), file='shazam.version.txt')"
  echo $(R --version 2>&1) | awk -F' '  '{print $3}' > R.version.txt

Command exit status:
  1

Command output:
  ... (repetitive knitr progress output omitted) ...
  
  /usr/local/bin/pandoc +RTS -K512m -RTS repertoire_comparison.knit.md --to html4 --from markdown+autolink_bare_uris+tex_math_single_backslash --output /tmp/nxf.XXXXj9Glzr/Bcellmagic_report.html --lua-filter /usr/local/lib/R/library/rmarkdown/rmarkdown/lua/pagebreak.lua --lua-filter /usr/local/lib/R/library/rmarkdown/rmarkdown/lua/latex-div.lua --self-contained --variable bs3=TRUE --standalone --section-divs --table-of-contents --toc-depth 3 --variable toc_float=1 --variable toc_selectors=h1,h2,h3 --variable toc_collapsed=1 --variable toc_smooth_scroll=1 --variable toc_print=1 --template /usr/local/lib/R/library/rmarkdown/rmd/h/default.html --highlight-style pygments --variable theme=bootstrap --css nf-core_style.css --include-in-header /tmp/Rtmp9ZwV8C/rmarkdown-str3a595dbe04.html --mathjax --variable 'mathjax-url:https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML' --citeproc 

Command error:
  
  
  processing file: repertoire_comparison.Rmd
  output file: repertoire_comparison.knit.md
  
  File ./references.bibtex not found in resource path
  Error: pandoc document conversion failed with error 99
  Execution halted

Work dir:
  /fusion/s3/ABCDEFG/bcellmagic/wd/25/96cdb23bc7f02ad73b2a5a2e087162

Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line

Clonal analysis

Clonal analysis results should consistently produce both PNG and SVG plots.

mv command in merge r1 umi

Original bug report from Susanna Marquez:

I have just now started trying bcellmagic. I get the error below. Any suggestions? .command.sh simply contains the same error message about moving the files.

[14/b08493] process > get_software_versions      [100%] 1 of 1 ✔ 
[-        ] process > multiqc                    - 
[a9/753701] process > output_documentation (1)   [100%] 1 of 1 ✔ 
Execution cancelled -- Finishing pending tasks before exit 
[nf-core/bcellmagic] Pipeline completed with errors
Error executing process > 'merge_r1_umi (HD07M)'

Caused by:
 Process `merge_r1_umi (HD07M)` terminated with an error exit status (1)

Command executed:

 gunzip -f "HD07M_R1.fastq.gz"
 mv "HD07M_R1.fastq" "HD07M_R1.fastq"
 gunzip -f "HD07M_R2.fastq.gz"
 mv "HD07M_R2.fastq" "HD07M_R2.fastq"

Command exit status:
 1

Command output:
 (empty)

Command error:
 mv: 'HD07M_R1.fastq' and 'HD07M_R1.fastq' are the same file

Work dir:
 /home/susanna/Documents/projects/collaborations/bcell-magic/work/e3/e7655ee3723008a802d5b887ea585a
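
The root cause is renaming a file onto itself when the staged file name already matches the target. A minimal guard that would avoid the failure, sketched with the file names from the report:

src="HD07M_R1.fastq"
dst="HD07M_R1.fastq"
# mv fails when source and destination are the same file, so only rename if they differ
if [ "$src" != "$dst" ]; then
  mv "$src" "$dst"
fi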

Linting tests failing

Hey, there are a couple of linting tests still failing in Travis; the rest passed!

INFO: ===========
LINTING RESULTS

82 tests passed
2 tests had warnings
2 tests failed
WARNING: Test Warnings:
http://nf-co.re/errors#4: Config variable not found: params.reads
http://nf-co.re/errors#4: Config variable not found: params.singleEnd
ERROR: Test Failures:
http://nf-co.re/errors#8: Conda environment name is incorrect (nf-core-bcellmagic-1.0, should be nf-core-bcellmagic-1.0dev)
http://nf-co.re/errors#8: Conda dependency did not have pinned version number: bioconda::biopython
ERROR: Sorry, some tests failed - exiting with a non-zero error code...

clones graphml not stored

The new release introduced an issue where clone graphml files are not stored properly. The problem is in the call to dnapars from the R script clonal_analysis.R.

Add log parsing as part of the pipeline

Currently there is process logging as part of the pipeline already, but the logs need to be parsed to extract the number and percentage of sequences that passed each process and the ones that were filtered out.

Add the python script to parse the logs as part of the pipeline.
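
pRESTO already ships a log parser that could be a starting point here; a minimal sketch, assuming a FilterSeq log file named FS_quality.log:

# Tabulate the ID and QUALITY fields from a pRESTO FilterSeq log (pRESTO ParseLog)
ParseLog.py -l FS_quality.log -f ID QUALITY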

Use `enchantr` validate input also for `Fastq` samplesheet

Description of feature

As of now, the input validation of the fastq samplesheet is done with a custom Python script. It would be nice if this input were also validated with the enchantr validate_input script.

Fastq tsv file

Reveal tsv file

The main difference is the filename column, which is not present in the fastq tsv because up to 3 files may be provided in the columns filename_R1, filename_R2 and filename_I1. Would it be possible to accept more than one filename_xx column, and also to allow .fq.gz and .fastq.gz extensions there?

GUNZIP fails on Google Cloud

Hi There!

I wanted to test out some cloud instances using nf-core airrflow. I launched the test pipeline on my laptop and that worked just fine.

But when I wanted to port this to the cloud, the test pipeline fails at the GUNZIP step. I noticed that the AWS results page is also not showing any results. Maybe this issue is similar on AWS?

Description of the bug

The airrflow test pipeline fails at the GUNZIP workflow on Google Cloud. It fails because it cannot find the FASTQ files that it needs to unzip.

Steps to reproduce

Steps to reproduce the behaviour:

  1. Command line:
    nextflow run nf-core/airrflow -r 2.0.0 -profile test,google --google_bucket gs://my-bucket
  2. See error:
Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:GUNZIP (Sample3)'

Caused by:
  Process `NFCORE_BCELLMAGIC:BCELLMAGIC:GUNZIP (Sample3)` terminated with an error exit status (9)

Command executed:

  gunzip -f "Sample3_UMI_R1.fastq.gz"
  gunzip -f "Sample3_R2.fastq.gz"
  echo $(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*$//' > gunzip.version.txt

Command exit status:
  9

Command output:
  (empty)

Command error:
  Execution failed: generic::failed_precondition: while running "nf-986be5683bbd72136fc79e9bc86b254e-main": unexpected exit status 1 was not ignored

Expected behaviour

Successful completion of the gunzip step.

Log files

Here is the .command.log file of the relevant process:

+ cd /light-test/e3/d02332d9d12ca6f93ddf5b85b853ad
+ gsutil -m -q cp gs://my-bucket//light-test/e3/d02332d9d12ca6f93ddf5b85b853ad/command.run .
+ bash .command.run nxf_stage
+ [[ '' -gt 0 ]]
+ true
+ cd /light-test/e3/d02332d9d12ca6f93ddf5b85b853ad
+ bash .command.run nxf_unstage
CommandException: No URLs matched: .command.out
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .command.err
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .command.trace
CommandException: 1 file/object could not be transferred.
CommandException: No URLs matched: .exitcode
CommandException: 1 file/object could not be transferred.
ls: cannot access '*_R1.fastq': No such file or directory
ls: cannot access '*_R2.fastq': No such file or directory
ls: cannot access '*.version.txt': No such file or directory

Nextflow Installation

21.10.6

Improving documentation

  • Include all parameters in documentation
  • Input file description
  • Better usage description
  • Short description about RNAseq experimental design

Database download

Currently the database download is a process inside the Nextflow pipeline, but it is also included in the Dockerfile as part of the container build.

We need to decide which is the better option and go with just one of them.

Dowser lineages `igphyml` when using `r-enchantr=0.0.3`

Description of the bug

There is an error in the Dowser lineages process when running the pipeline with the test profile (uses r-enchantr 0.0.3)

Command used and terminal output

nextflow run nf-core/airrflow -r airrflow -profile test,docker --outdir "results" -resume


label: unnamed-chunk-2
Quitting from lines 124-159 (_main.Rmd)
Error in buildIgphyml(data, igphyml = exec, temp_path = file.path(dir,  :
  The file /usr/local/share/igphyml/src/igphyml cannot be executed.
Calls: <Anonymous> ... eval_with_user_handlers -> eval -> eval -> getTrees -> buildIgphyml
In addition: Warning messages:
1: replacing previous import ‘data.table::last’ by ‘dplyr::last’ when loading ‘enchantr’
2: replacing previous import ‘data.table::first’ by ‘dplyr::first’ when loading ‘enchantr’
3: replacing previous import ‘data.table::between’ by ‘dplyr::between’ when loading ‘enchantr’
4: In buildIgphyml(data, igphyml = exec, temp_path = file.path(dir,  :
  Dowser igphyml doesn't mask split codons!


Relevant files

No response

System information

nextflow version 22.10.2.5832

Check whether file compressed or not

It would be nice if the pipeline didn't crash when using plain FASTQ files rather than gzipped ones :)

Not a huge deal, but it's not specified in the documentation (or I missed it).
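
One way the pipeline could detect this, sketched here with a hypothetical file name, is to check for the gzip magic bytes before deciding whether to decompress:

f="sample_R1.fastq.gz"
# gzip files start with the magic bytes 1f 8b
if [ "$(head -c 2 "$f" | od -An -tx1 | tr -d ' ')" = "1f8b" ]; then
  echo "$f is gzip-compressed"
else
  echo "$f is plain text"
fi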

Recommended parameters for Takara Bio SMARTer Human TCR v2 do not work; --index_file FALSE also needs to be set

Description of the bug

The instructions to run airrflow 2.3.0 according to the "Takara Bio SMARTer Human TCR v2" section at https://nf-co.re/airrflow/2.3.0/usage#dt-oligo-rt-and-5race-pcr are as follows:

nextflow run nf-core/airrflow -profile docker \
--input samplesheet.tsv \
--library_generation_method dt_5p_race_umi \
--cprimers CPrimers.fasta \
--race_linker linker.fasta \
--umi_length 12 \
--umi_position R2 \
--cprimer_start 5 \
--cprimer_position R1 \
--outdir ./results

However, when using these settings on my data, the pipeline rapidly exits with the following error:

Cannot invoke method and() on null object

 -- Check script '/home/ubuntu/.nextflow/assets/nf-core/airrflow/./workflows/bcellmagic.nf' at line: 100 or see '.nextflow.log' file for more details

Examining line 100 in bcellmagic.nf, as suggested, gives the following:

if (params.index_file & params.umi_position == 'R2') {exit 1, "Please do not set `--umi_position` option if index file with UMIs is provided."}

Setting the parameter --index_file FALSE resolves this issue and the pipeline launches successfully. This suggests the pipeline expects params.index_file to default to FALSE, but the parameter is in fact unset (null).

Command used and terminal output

No response

Relevant files

No response

System information

No response
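
For reference, the documented Takara Bio command with the workaround applied:

nextflow run nf-core/airrflow -profile docker \
--input samplesheet.tsv \
--library_generation_method dt_5p_race_umi \
--cprimers CPrimers.fasta \
--race_linker linker.fasta \
--umi_length 12 \
--umi_position R2 \
--cprimer_start 5 \
--cprimer_position R1 \
--index_file FALSE \
--outdir ./results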

BCELLMAGIC: Convert all parameter docs to JSON schema

Hi!

this is not necessarily an issue with the pipeline, but in order to streamline the documentation group next week for the hackathon, I'm opening issues in all repositories / pipeline repos that might need this update to switch from parameter docs to auto-generated documentation based on the JSON schema.

This will then supersede any further parameter documentation, thus making things a bit easier :-)

If this doesn't apply (anymore), please close the issue. Otherwise, I'm hoping to have some helping hands on this next week in the documentation team on Slack https://nfcore.slack.com/archives/C01QPMKBYNR

Add flexibility cprimers / vprimers in R1 or R2

As reported by @fabio-t , the pipeline currently expects C-region primers to be in the R1 fastq and V-region primers to be in the R2 fastq.

Better docs about this or the possibility of specifying R1/R2 for both primers would be needed.

Updating immcantation tool versions

Description of feature

Updating Immcantation tool versions for mulled containers (here the packages with the old versions are listed):

  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::changeo=1.2.0 bioconda::phylip=3.697 conda-forge::r-optparse=1.7.1
  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::r-shazam=1.1.0 conda-forge::r-kableextra=1.3.4 conda-forge::r-knitr=1.33 conda-forge::r-stringr=1.4.0 conda-forge::r-dplyr=1.0.6 conda-forge::r-optparse=1.7.1
  • bioconda::changeo=1.2.0 bioconda::igblast=1.17.1 conda-forge::wget=1.20.1
  • conda-forge::r-base=4.1.2 bioconda::r-alakazam=1.2.0 bioconda::changeo=1.2.0 bioconda::igphyml=1.1.3
  • bioconda::changeo=1.2.0 bioconda::igblast=1.17.1
  • conda-forge::r-base=4.1.2 bioconda::r-enchantr=0.0.1
  • enchantr
  • mulled
    And whatever containers are needed for the reveal processes :D

Adding tests

  • Need to add minimal tests for Travis in test.config.
  • Use a subset of the fastq file for one sample.
  • Need to replace real primer sequences with fakes.

Output organization

Currently, there are a lot of subfolders in the results folder. I want to organize this to have only the following subfolders:

  • preprocessing
  • clonal_analysis
  • repertoire_analysis
  • multiqc
  • pipeline_info

So main.nf and the output docs need to be changed accordingly.

Define clones per patient

  • Join samples belonging to the same patient before running define clones.
  • Check Nextflow's join operator to do this.

MakeDb igblast missing light chain reference germlines

The script part of the CHANGEO_MAKEDB process can be simplified by providing a folder path to -r instead of all individual fasta files:
-r ${imgt_base}/${params.species}/vdj/. This would also fix a bug where Ig light chain reference germlines (IGKV, IGKJ, IGLV, IGLJ) are not being passed to -r.
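
A sketch of the simplified call, with illustrative input file names and $IMGT_BASE standing in for the pipeline's ${imgt_base} path:

# Annotate IgBLAST output in AIRR format, passing the whole germline folder to -r
MakeDb.py igblast -i sample_igblast.fmt7 -s sample.fasta \
-r "$IMGT_BASE/human/vdj/" --extended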

Process labels and versions

  • Make sure all processes have resource labels.
  • Make sure all processes print their versions properly.
  • Define clones and create germlines results within changeo, not shazam.

Trouble running bcellmagic

Hi.

I'm a new user and I'm trying to follow the Takara Bio TCR workflow.
However, I'm having trouble with memory requirements.
I'm getting the following error:

Error executing process > 'NFCORE_BCELLMAGIC:BCELLMAGIC:FETCH_DATABASES (IMGT IGBLAST)'

Caused by:
Process requirement exceeds available memory -- req: 12 GB; avail: 8 GB

Command executed:

fetch_databases.sh
echo $(date "+%F") > IMGT.version.txt

Command exit status:

Command output:
(empty)

Work dir:
/Volumes/Hard drive/bcellmagic/work/db/c293fe251fb5decf6087e4940e22a7

Tip: view the complete command output by changing to the process work dir and entering the command cat .command.out

Would you have any ideas please?

Thanks so much.
Aislinn.
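
One workaround on machines with less than 12 GB is to lower the requirement for that process with a small custom config passed via -c. The selector below matches the process name from the error, and the 7 GB value is an untested assumption for this hardware:

cat > low_memory.config << 'EOF'
process {
    withName: 'FETCH_DATABASES' {
        memory = '7.GB'
    }
}
EOF

nextflow run nf-core/airrflow -profile test,docker -c low_memory.config --outdir results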

Automatic handling of NA threshold

Currently, when a threshold cannot be determined in the shazam step (e.g. for naive samples), the pipeline breaks. The workaround is to resubmit with a manually defined threshold. This should be handled automatically.
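
For reference, a manually defined threshold can be supplied on the command line; the --clonal_threshold parameter name is taken from recent airrflow versions (an assumption for older releases) and the value is only an example:

# Skip automatic threshold detection by fixing the clonal threshold
nextflow run nf-core/airrflow -profile docker \
--input input_samplesheet.tsv \
--mode assembled \
--clonal_threshold 0.14 \
--outdir results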

vsearch in dependencies, but usearch called

At startup I got a "vsearch missing" error and installed that tool. It turns out, however, that ClusterSets.py still uses usearch.

I would suggest that either the tool calls are converted to vsearch (possibly the best option?), or that a usearch version check is performed at the beginning.

Make docker image nf-core compliant

At the moment, most tools that are built into the docker image do not make use of any conda channel. The aim is to bring all tools in the docker image to conda. In the following, all tools in the container are listed. The bracket notation indicates whether the tool is already in conda.

  • protocol data, utility scripts, pipelines [--> not needed for now]
  • muscle muscle
  • vsearch vsearch
  • cd-hit cd-hit
  • blastx + executables blast
  • igblast igblast
  • phylip phylip
  • tbl2asn tbl2asn
  • airr reference libraries airr r-airr
  • presto presto
  • changeo changeo
  • alakazam alakazam
  • shazam shazam
  • tigger tigger
  • rdi [--> not needed for now]
  • scope [not needed for now]
  • prestor [not needed for now]
  • download and build reference databases [inside of the container?! --> not]

Fastq samplesheet auto single_cell FALSE

Description of the bug

As the fastq input samplesheet only supports single_cell FALSE, autogenerate this value instead of requiring the column in the samplesheet.

Command used and terminal output

No response

Relevant files

No response

System information

No response

Fetch databases separate from build igblast reference

As proposed by @ssnn-airr, it would be nice to split the fetch_databases process into pulling the references from IMGT and building the igblast references, so that users can also provide their own fasta databases and still have them built for igblast.

Currently, the fetch_databases.sh script is executed in the Fetch_databases process, which calls the underlying scripts:

  • fetch_imgt.sh pulls the references directly from IMGT for 4 species and stores them in fasta format.
  • fetch_igblastdb.sh gets some internal data needed for igblast.
  • imgt2igblast.sh uses makeblastdb to create the necessary igblast references.

That could be separated into two processes: one for pulling the fasta references, and one for building the necessary references for igblast.

So if the reference data is available in IMGT, one could adapt the first script to also pull it and then build the required igblast references. Or, if custom reference data is available in fasta format, it should also be possible to provide it directly to imgt2igblast to build the references.
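
For context, the pipeline already accepts pre-downloaded databases via parameters; a sketch assuming the --imgtdb_base and --igblast_base parameter names from the airrflow parameter docs, with illustrative paths:

# Reuse previously fetched IMGT and igblast references instead of downloading them
nextflow run nf-core/airrflow -profile docker \
--input input_samplesheet.tsv \
--mode assembled \
--imgtdb_base /refs/imgtdb_base \
--igblast_base /refs/igblast_base \
--outdir results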
