
sv-pipeline's Introduction

Cohort SV detection pipeline

Table of contents

  1. Overview
  2. WDL scripts
  3. Docker images

Overview

This repository contains pipeline scripts for structural variation detection in large cohorts. The pipeline is designed for Illumina paired-end whole genome sequencing data, preferably with at least 30x sequence coverage. Data inputs should be a set of sorted CRAM files, aligned with BWA-MEM.

This pipeline detects structural variation based on breakpoint sequence evidence using both the LUMPY and Manta algorithms. Structural variant (SV) breakpoints are then unified and merged using the SVTools workflow, followed by re-genotyping with SVTyper and read-depth annotation with CNVnator. Finally, SV types are reclassified based on the concordance between read-depth and breakpoint genotype.

Additional details on the SVTools pipeline are available in the SVTools tutorial.

Workflow

WDL scripts

Pipeline scripts (in WDL format) are available in the scripts directory. These scripts can be launched using Cromwell (version 25 or later).

While the SV pipeline can be run in its entirety via the SV_Pipeline_Full.wdl script, we recommend running the pipeline in three stages to enable intermediate quality control checkpoints.
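
The full pipeline or any single stage can be launched with Cromwell in run mode. The sketch below is illustrative only: the Cromwell jar version, the choice of WDL file, and the inputs JSON are placeholders, and older Cromwell releases (such as 25) took the inputs file as a positional argument rather than via --inputs.

    # Hypothetical launch of the full pipeline; substitute a stage-specific
    # WDL and your own inputs JSON for a staged run.
    java -jar cromwell-53.1.jar run scripts/SV_Pipeline_Full.wdl --inputs cohort.inputs.json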

Step 1: per-sample variant calling. For each sample (a command-level sketch follows this list):

  • SV discovery with LUMPY using the smoove wrapper
  • Preliminary SV genotyping with SVTyper (also done within the smoove wrapper)
  • SV discovery with Manta, including insertions
  • Generate CNVnator histogram files
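
The list above maps roughly onto the following per-sample commands. This is a minimal sketch, not the pipeline's exact WDL tasks: file names are placeholders and flags may differ by tool version, so check each tool's help before use.

    # LUMPY discovery plus preliminary SVTyper genotyping via the smoove wrapper
    smoove call --name sample1 --fasta ref.fa --outdir smoove_out --genotype sample1.cram

    # Manta discovery (includes insertion calls)
    configManta.py --bam sample1.cram --referenceFasta ref.fa --runDir manta_sample1
    manta_sample1/runWorkflow.py -m local -j 4

    # CNVnator read-depth histogram files (some CNVnator versions may require BAM input)
    cnvnator -root sample1.root -tree sample1.cram
    cnvnator -root sample1.root -his 100 -d chrom_fastas/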

After this step, we recommend performing quality control checks on each sample before merging them into the cohort-level VCF (step 2). To help with this, per-sample variant counts are generated for both LUMPY and Manta outputs.
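
As a quick sanity check (separate from the pipeline's own QC outputs), per-sample counts can be tallied directly from each VCF's SVTYPE tags; the file name below is a placeholder.

    # Count calls per SV type in one sample's VCF
    zcat sample1.vcf.gz | grep -v '^#' | grep -o 'SVTYPE=[A-Z]*' | sort | uniq -c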

Step 2: cohort-level merging. This step merges the sample-level VCF files from step 1 using the LUMPY breakpoint probability curves to produce a single cohort-level VCF.
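
In SVTools terms this corresponds roughly to sorting the per-sample VCFs together and then merging breakpoints, as in the sketch below (file names are placeholders; the -f window follows the SVTools tutorial, and flags may differ between svtools versions).

    # Sort all per-sample LUMPY VCFs together, then merge breakpoints across the cohort
    svtools lsort sample1.vcf.gz sample2.vcf.gz sample3.vcf.gz > sorted.vcf
    svtools lmerge -i sorted.vcf -f 20 > merged.vcf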

Step 3: re-genotyping, read-depth annotation, and reclassification. This step re-genotypes each sample at the sites in the cohort-level VCF from step 2, and then combines the results into a set of final VCFs, split by variant type for efficiency (deletions, insertions, breakends, and "other", i.e. duplications and inversions).

For each sample (a sketch of these commands follows this list):

  • Re-genotype each SV using SVTyper (note that insertion calls from Manta are taken from the per-sample genotypes and not processed with SVTyper)
  • Annotate the read-depth at each SV using CNVnator
  • Generate a .ped file of sample names and sexes
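
A per-sample sketch of this step, with placeholder file names (the CNVnator read-depth annotation performed by the WDL tasks is omitted here, and older SVTyper releases may require BAM rather than CRAM input):

    # Re-genotype one sample at the cohort-level sites with SVTyper
    svtyper -i merged.vcf -B sample1.cram -T ref.fa > sample1.gt.vcf

    # Minimal .ped line: family, sample, father, mother, sex (1=male, 2=female), phenotype
    printf 'fam1\tsample1\t0\t0\t1\t-9\n' > sample1.ped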

For the cohort (a sketch follows this list):

  • Combine the re-genotyped VCFs into a single cohort-level VCF
  • Prune overlapping SVs
  • Classify SV type based on the concordance between variant genotypes and read-depths
  • Sort and index the VCF
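
A cohort-level sketch following the SVTools workflow; the subcommand names are real svtools subcommands, but the flags and file names shown are illustrative placeholders (the actual WDL tasks also use the sex .ped file and repeat annotations during classification), so check each subcommand's -h before use.

    # Paste the re-genotyped per-sample columns into one cohort VCF
    svtools vcfpaste -f regenotyped_vcf_list.txt > cohort.vcf

    # Prune overlapping SV calls
    svtools prune -i cohort.vcf > cohort.pruned.vcf

    # Reclassify SV types from genotype / read-depth concordance
    svtools classify -i cohort.pruned.vcf -g samples.ped > cohort.classified.vcf

    # Sort, compress, and index the final VCF
    svtools vcfsort cohort.classified.vcf | bgzip -c > cohort.final.vcf.gz
    tabix -p vcf cohort.final.vcf.gz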

Docker images

  • Docker images for this pipeline are available at https://hub.docker.com/u/halllab (a pull example follows this list).
  • Dockerfiles for these containers are available in the docker directory.
  • WDL test scripts for each of these Docker containers are available in the test directory.
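
To pull one of these containers directly, use an explicit tag; the image name and tag below are placeholders, since (as noted in the issues below) not every repository publishes a latest tag.

    # Pull a specific image by explicit tag (tags are listed on the Docker Hub page)
    docker pull halllab/<image>:<tag>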

sv-pipeline's People

Contributors

apregier, cc2qe, ernfrid


sv-pipeline's Issues

Error while pulling sv-pipeline docker image

Hi,

I have hundreds of CRAM files aligned to hg38 and am interested in calling structural variants. I tried to use Docker since it supports the hg38 reference, but I get an error while pulling the SV Docker image on a CentOS 7 machine.

    # docker pull halllab/cadd-b38-v1-6
    Using default tag: latest
    Error response from daemon: manifest for halllab/cadd-b38-v1-6:latest not found: manifest unknown: manifest unknown

Next try:

Because of this issue, I tried to call SVs with the WDL scripts instead. The README says to use Cromwell version 25 or later ("These scripts can be launched using Cromwell (version 25 or later)") to execute the WDL scripts. Version 25 seems quite old, given that the latest release is 53.1. Which stable, newer version of Cromwell can be used to run the sv-pipeline WDL scripts for SV calling on CRAM files aligned to GRCh38?

I am new to this. Could anyone help me call SVs from CRAM files? I tried to install SVTools separately, but I ran into Python 2 compatibility issues. Please suggest which approach (WDL, Docker, or SVTools) is best for identifying SVs.

Thanks in advance.
Nitha

hexdump is missing from lumpy container

Seeing errors like:

/opt/lumpy-sv/bin/lumpyexpress: line 15: -n: command not found
/opt/lumpy-sv/bin/lumpyexpress: line 16: [: ==: unary operator expected

in my log files.

This happens because hexdump is not installed in the container, so the environment variable it populates is left undefined and these errors follow.

Note that I believe these errors are inconsequential, so I may not fix this immediately since I'd prefer not to change pipeline versions midstream.

Dockstore import error

Hi,

The script Pre_Merge_SV_Single.wdl fails to import to Dockstore. This error carries over to the Terra workspace (mgi-hall-anvil-terra/svtools_1000Genomes), so I haven't been able to get the example workspace to run.

    warning /scripts/Pre_Merge_SV_Single.wdl: Failed to import 'https://raw.githubusercontent.com/hall-lab/sv-pipeline/terra-compatible-hja/scripts/Pre_Merge_SV_per_sample.wdl' (reason 1 of 1):
    ERROR: Unexpected symbol (line 49, col 5) when parsing '_gen19'. Expected rbrace, got "output_vcf_basename".
        output_vcf_basename = basename + '.filtered'
        ^
    $e = $e <=> :dot :identifier -> MemberAccess( value=$0, member=$2 )

Link to Dockstore: https://dockstore.org/workflows/github.com/hall-lab/sv-pipeline/pre_merge_sv_single:terra-compatible-hja?tab=files

Thank you,
Laura

cromwell 35 errors out on workflow

Hi!

I was directed to your workflow by my colleague Devin Locke, and I was excited to finally find a well-written WDL workflow that uses runtime blocks everywhere rather than some assumed local state. I was also very happy to find that you provide an easily accessible, small test data set and input.json! I really appreciate the ease with which I could assemble a run for your workflow!

Unfortunately, when I try to run the workflow on Cromwell 35, it errors out.

ERROR: Two or more calls or values in the workflow have the same name:

Declaration statement here (line 24, column 12):
    String basename = sub(sub(aligned_cram, "^.*/", ""), aligned_cram_suffix + "$", "")
           ^
      
Declaration statement here (line 96, column 12):
    String basename = sub(sub(aligned_cram, "^.*/", ""), aligned_cram_suffix + "$", "")
           ^

Thanks!
-Kaushik
