
tutorial_finnland's People

Contributors

silask


tutorial_finnland's Issues

PROJAPPL environment variable not defined.

I ran

 module load bioconda

and got:


Lmod is automatically replacing "intel/19.0.4" with "gcc/7.4.0".

PROJAPPL environment variable not defined.

To define this variable, please run command:
 export PROJAPPL=/projappl/your_project_name

To see the available projects, Run command:
   csc-workspaces

Inactive Modules:
  1) hpcx-mpi/2.4.0     2) intel-mkl/2019.0.4

What should I select?
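
The message itself contains the fix: point PROJAPPL at your project's /projappl directory. A minimal sketch, assuming the course project project_2004930 that appears elsewhere in these logs (substitute whichever project csc-workspaces lists for you):

    # List your projects, export the matching path, then reload the module.
    csc-workspaces
    export PROJAPPL=/projappl/project_2004930   # substitute your own project name
    module load bioconda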

Error

snakemake: error: unrecognized arguments: --threads=8 --mem=60 --large_mem=250 --large_threads=8 --assembly_threads=8 --assembly_memory=250 --tmpdir=/local_scratch/student198 --database_dir=/scratch/project_2004930/databases --data_type=metagenome --interleaved_fastqs=False --deduplicate=True --duplicates_only_optical=False --duplicates_allow_substitutions=2 --preprocess_adapters=/scratch/project_2004930/databases/adapters.fa --preprocess_minimum_base_quality=10 --preprocess_minimum_passing_read_length=51 --preprocess_minimum_base_frequency=0.05 --preprocess_adapter_min_k=8 --preprocess_allowable_kmer_mismatches=1 --preprocess_reference_kmer_match_length=27 --error_correction_overlapping_pairs=True --contaminant_max_indel=20 --contaminant_min_ratio=0.65 --contaminant_kmer_length=13 --contaminant_minimum_hits=1 --contaminant_ambiguous=best --error_correction_before_assembly=True --merge_pairs_before_assembly=True --merging_k=62 --merging_extend2=40 --merging_flags=ecct iterations=5 --assembler=spades --megahit_min_count=2 --megahit_k_min=21 --megahit_k_max=121 --megahit_k_step=20 --megahit_merge_level=20,0.98 --megahit_prune_level=2 --megahit_low_local_ratio=0.2 --megahit_preset=default --spades_skip_BayesHammer=True --spades_use_scaffolds=True --spades_k=auto --spades_preset=meta --spades_extra= --longread_type=none --filter_contigs=True --prefilter_minimum_contig_length=300 --contig_trim_bp=0 --minimum_average_coverage=1 --minimum_percent_covered_bases=20 --minimum_mapped_reads=0 --minimum_contig_length=500 --contig_min_id=0.9 --contig_map_paired_only=True --contig_max_distance_between_pairs=1000 --maximum_counted_map_sites=10 --final_binner=DASTool --binner=metabat --binner=maxbin --metabat={'sensitivity': 'sensitive', 'min_contig_length': 1500} --maxbin={'max_iteration': 50, 'prob_threshold': 0.9, 'min_contig_length': 1000} --DASTool={'search_engine': 'diamond', 'score_threshold': 0.5} --genome_dereplication={'ANI': 0.95, 'overlap': 0.6, 'opt_parameters': '', 'filter': {'noFilter': False, 'length': 5000, 'completeness': 50, 'contamination': 10}, 'score': {'completeness': 1, 'contamination': 5, 'N50': 0.5, 'length': 0}} --rename_mags_contigs=True --annotations=gtdb_tree --annotations=gtdb_taxonomy --annotations=genes --genecatalog={'source': 'contigs', 'clustermethod': 'linclust', 'minlength_nt': 100, 'minid': 0.95, 'coverage': 0.9, 'extra': '', 'SubsetSize': 500000} --eggNOG_use_virtual_disk=False --virtual_disk=/dev/shm assembly
[2021-09-27 14:20 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/atlas/Snakefile --directory /users/student198/First_Run --jobs 40 --rerun-incomplete --configfile '/users/student198/First_Run/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy assembly ' returned non-zero exit status 2.
(atlasenv) [student198@puhti-login1 First_Run]$
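
Every one of the "unrecognized arguments" is a config key, which suggests the installed atlas and Snakemake versions inside atlasenv disagree about how the configuration is forwarded on the command line; a version mismatch is one plausible cause, not a confirmed diagnosis. A quick check:

    # Compare the installed versions against the ones the tutorial expects.
    conda activate atlasenv
    conda list | grep -E 'snakemake|metagenome-atlas'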

cluster_config

The ~/.config/snakemake/cluster/cluster_config.yaml file should look like this:

## This is a YAML file, defining options for specific rules or by default.
## The '#' marks a comment.
## The two spaces at the beginning of the rows below a rule name are important.
## For more information see https://snakemake.readthedocs.io/en/stable/executing/cluster-cloud.html#cluster-execution

# Overwrite/define arguments for all rules
__default__:
  account: project_2004930

rename_contigs:
  threads: 2
#   queue: normal

# You can overwrite values for specific rules
rulename:
  queue: long
  account: ""
  time_min:  # min
  threads:
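
Since whitespace mistakes in this file only surface at run time, it can help to parse it before rerunning; a minimal sketch, assuming PyYAML is available (it ships with Snakemake environments):

    # A traceback here means the YAML itself is malformed.
    python -c "import yaml, pprint; pprint.pprint(yaml.safe_load(open('$HOME/.config/snakemake/cluster/cluster_config.yaml')))"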

Error on test data: error in rule dereplication

Here's the log:


..:: dRep dereplicate Step 1. Filter ::..

Will filter the genome list
5 genomes were input to dRep
Calculating genome info of genomes
100.00% of genomes passed length filtering
0.00% of genomes passed checkM filtering


..:: dRep dereplicate Step 2. Cluster ::..

Running primary clustering
Running pair-wise MASH clustering
Final step: comparing between all groups
Traceback (most recent call last):
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/bin/dRep", line 32, in <module>
    Controller().parseArguments(args)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/controller.py", line 100, in parseArguments
    self.dereplicate_operation(**vars(args))
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/controller.py", line 48, in dereplicate_operation
    drep.d_workflows.dereplicate_wrapper(kwargs['work_directory'],**kwargs)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_workflows.py", line 37, in dereplicate_wrapper
    drep.d_cluster.controller.d_cluster_wrapper(wd, **kwargs)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_cluster/controller.py", line 179, in d_cluster_wrapper
    GenomeClusterController(workDirectory, **kwargs).main()
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_cluster/controller.py", line 32, in main
    self.run_primary_clustering()
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_cluster/controller.py", line 100, in run_primary_clustering
    Mdb, Cdb, cluster_ret = drep.d_cluster.compare_utils.all_vs_all_MASH(self.Bdb, self.wd.get_dir('MASH'), **self.kwargs)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_cluster/compare_utils.py", line 120, in all_vs_all_MASH
    return run_second_round_clustering(Bdb, genome_chunks, data_folder, verbose=True, **kwargs)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/drep/d_cluster/compare_utils.py", line 223, in run_second_round_clustering
    Cdb = pd.concat(dbs)
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 284, in concat
    sort=sort,
  File "/scratch/project_2004930/databases/conda_envs/8809f67c9f62ae27fd6598d8ded7a608/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 331, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate

What is this error about?
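
A reading based on the log itself: step 1 reports that 0.00% of genomes passed checkM filtering, so the clustering step received an empty genome list and pd.concat had nothing to concatenate. To confirm that the quality filter, not the clustering, is what empties the list, one hedged check is to rerun dRep without it; the flag below is dRep's own, but the work directory and genome paths are placeholders:

    # Hypothetical rerun that skips the checkM-based quality filter, only to
    # verify the filter is what removes all five genomes.
    dRep dereplicate <work_directory> -g <genome_dir>/*.fasta --ignoreGenomeQuality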

What is this error about?

snakemake.exceptions.WorkflowError: Config file is not valid JSON or YAML. In case of YAML, make sure to not mix whitespace and tab indentation.
[2021-09-27 15:12 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/atlas/Snakefile --directory /users/student215/First_Run --jobs 40 --rerun-incomplete --configfile '/users/student215/First_Run/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy assembly ' returned non-zero exit status 1.

Originally posted by @preckrasna in #8 (comment)

FileNotFoundError cluster_config.yaml

I configured that, but while running atlas I got the error below. The dry run completes, but the real run never does. Thanks for helping.

FileNotFoundError: [Errno 2] No such file or directory: '/users/student231/.config/snakemake/sewunet/cluster_config.yaml'
[2021-09-28 12:28 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/atlas/atlas/Snakefile --directory /users/student231/ProjectFolder --jobs 10 --rerun-incomplete --configfile '/users/student231/ProjectFolder/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy all --keep-going ' returned non-zero exit status 1.

Originally posted by @Sewunet-Abera in #10 (comment)
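
Note the path in the traceback: the profile looks for the file under ~/.config/snakemake/sewunet/, while the cluster_config instructions above put it under ~/.config/snakemake/cluster/. A minimal sketch of a fix, assuming the file described above already exists and only the location the profile points at differs:

    # Give the profile the file at the path it actually asks for.
    mkdir -p ~/.config/snakemake/sewunet
    cp ~/.config/snakemake/cluster/cluster_config.yaml ~/.config/snakemake/sewunet/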

Error during assembly

Traceback (most recent call last):
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/__init__.py", line 699, in snakemake
    success = workflow.execute(
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/workflow.py", line 1056, in execute
    success = scheduler.schedule()
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/scheduler.py", line 501, in schedule
    self.run(runjobs)
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/scheduler.py", line 518, in run
    executor.run_jobs(
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 149, in run_jobs
    self.run(
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 978, in run
    jobscript = self.get_jobscript(job)
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 752, in get_jobscript
    f = job.format_wildcards(self.jobname, cluster=self.cluster_wildcards(job))
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 859, in cluster_wildcards
    return Wildcards(fromdict=self.cluster_params(job))
  File "/scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 835, in cluster_params
    cluster.update(self.cluster_config.get(job.name, dict()))
ValueError: need more than 1 value to unpack
[2021-09-27 14:27 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/atlas/Snakefile --directory /users/student196/first_run --jobs 40 --rerun-incomplete --configfile '/users/student196/first_run/config.yaml' --nolock  --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs   --scheduler greedy  assembly   ' returned non-zero exit status 1.
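
One plausible cause, not confirmed by this log alone: the failing call is cluster.update(self.cluster_config.get(job.name, dict())), which expects every rule entry in cluster_config.yaml to be a mapping, so a bare scalar or a mis-indented entry makes the update fail. A quick way to eyeball an entry:

    # The value under a rule name must be an indented key/value mapping, e.g.
    #   rename_contigs:
    #     threads: 2
    # not a bare scalar like
    #   rename_contigs: 2
    grep -B1 -A3 'rename_contigs' ~/.config/snakemake/cluster/cluster_config.yaml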

Error in run_spades and deduplicate_reads (groundwater)

Error in rule run_spades:
jobid: 42
output: S9/assembly/contigs.fasta, S9/assembly/scaffolds.fasta
log: S9/logs/assembly/spades.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/be76c22fee4f2ed5b53dc8f36aeb87ed
shell:
rm -f S9/assembly/pipeline_state/stage_*_copy_files 2> S9/logs/assembly/spades.log ; spades.py --threads 8 --memory 350 -o S9/assembly -k auto --restart-from last >> S9/logs/assembly/spades.log 2>&1
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8251622

Error executing rule run_spades on cluster (jobid: 42, external: 8251622, jobscript: /users/student225/groundwater/.snakemake/tmp.v5zvn_o_/snakejob.run_spades.42.sh). For error details see the cluster log and the log files of the involved rule(s).

Error in rule deduplicate_reads:
jobid: 11
output: S2/sequence_quality_control/S2_deduplicated_R1.fastq.gz, S2/sequence_quality_control/S2_deduplicated_R2.fastq.gz
log: S2/logs/QC/deduplicate.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/d01975eec998d193985b8b6d0a2fb2a4
shell:

clumpify.sh in1=S2/sequence_quality_control/S2_raw_R1.fastq.gz in2=S2/sequence_quality_control/S2_raw_R2.fastq.gz out1=S2/sequence_quality_control/S2_deduplicated_R1.fastq.gz out2=S2/sequence_quality_control/S2_deduplicated_R2.fastq.gz overwrite=true dedupe=t dupesubs=2 optical=f threads=12 pigz=t unpigz=t -Xmx102G 2> S2/logs/QC/deduplicate.log

    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8251623

Error executing rule deduplicate_reads on cluster (jobid: 11, external: 8251623, jobscript: /users/student225/groundwater/.snakemake/tmp.v5zvn_o_/snakejob.deduplicate_reads.11.sh). For error details see the cluster log and the log files of the involved rule(s).

Error in rule deduplicate_reads:
jobid: 64
output: S10/sequence_quality_control/S10_deduplicated_R1.fastq.gz, S10/sequence_quality_control/S10_deduplicated_R2.fastq.gz
log: S10/logs/QC/deduplicate.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/d01975eec998d193985b8b6d0a2fb2a4
shell:

clumpify.sh in1=S10/sequence_quality_control/S10_raw_R1.fastq.gz in2=S10/sequence_quality_control/S10_raw_R2.fastq.gz out1=S10/sequence_quality_control/S10_deduplicated_R1.fastq.gz out2=S10/sequence_quality_control/S10_deduplicated_R2.fastq.gz overwrite=true dedupe=t dupesubs=2 optical=f threads=12 pigz=t unpigz=t -Xmx102G 2> S10/logs/QC/deduplicate.log

    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8251624

Error executing rule deduplicate_reads on cluster (jobid: 64, external: 8251624, jobscript: /users/student225/groundwater/.snakemake/tmp.v5zvn_o_/snakejob.deduplicate_reads.64.sh). For error details see the cluster log and the log files of the involved rule(s).
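
Both failures only point at the rule logs, so start by reading them; the paths come straight from the rules above (clumpify.sh with -Xmx102G can fail for memory reasons, among others):

    # The actual clumpify.sh error message lands in the rule's log file.
    tail -n 30 S2/logs/QC/deduplicate.log
    tail -n 30 S10/logs/QC/deduplicate.log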

error in 'atlas run binning --profile cluster --jobs 10 --keep-going' for Groundwater_metagenomes

CLUSTER: 2021-09-30 20:54:52 Choose queue: hugemem
CLUSTER: 2021-09-30 20:54:52 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --account=project_2004930 --job-name=run_spades --cpus-per-task=8 -n1 --time=2880 --mem=350000m --partition=hugemem /users/student218/Groundwater_metagenomes/.snakemake/tmp.xnjf16u0/snakejob.run_spades.20.sh
Submitted job 20 with external jobid '8241133'.
[Thu Sep 30 20:56:58 2021]
Error in rule run_spades:
jobid: 20
output: S2/assembly/contigs.fasta, S2/assembly/scaffolds.fasta
log: S2/logs/assembly/spades.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/be76c22fee4f2ed5b53dc8f36aeb87ed
shell:
rm -f S2/assembly/pipeline_state/stage_*_copy_files 2> S2/logs/assembly/spades.log ; spades.py --threads 8 --memory 350 -o S2/assembly -k auto --meta --pe1-1 S2/assembly/reads/QC.errorcorr.merged_R1.fastq.gz --pe1-2 S2/assembly/reads/QC.errorcorr.merged_R2.fastq.gz --pe1-m S2/assembly/reads/QC.errorcorr.merged_me.fastq.gz --only-assembler >> S2/logs/assembly/spades.log 2>&1
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8241133

Error executing rule run_spades on cluster (jobid: 20, external: 8241133, jobscript: /users/student218/Groundwater_metagenomes/.snakemake/tmp.xnjf16u0/snakejob.run_spades.20.sh). For error details see the cluster log and the log files of the involved rule(s).
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
Note the path to the log file for debugging.
Documentation is available at: https://metagenome-atlas.readthedocs.io
Issues can be raised at: https://github.com/metagenome-atlas/atlas/issues
Complete log: /users/student218/Groundwater_metagenomes/.snakemake/log/2021-09-30T194825.885753.snakemake.log
[2021-09-30 20:57 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/atlas/atlas/Snakefile --directory /users/student218/Groundwater_metagenomes --jobs 10 --rerun-incomplete --configfile '/users/student218/Groundwater_metagenomes/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy binning --keep-going ' returned non-zero exit status 1.
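
As with the other failures, the rule log named above is where SPAdes writes its own diagnostics, and the scheduler-side output for the external job lands in cluster_log/ per the sbatch --output pattern in this log:

    # SPAdes' own error message:
    tail -n 50 S2/logs/assembly/spades.log
    # Scheduler-side output for external jobid 8241133:
    tail -n 50 cluster_log/8241133.out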

sbatch: error: Batch job submission failed: Job violates accounting/QOS policy

Hi,

While following the instructions for 'analysing real samples (groundwater)', I ran into this error:

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)
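
AssocMaxSubmitJobLimit means the account already has as many jobs queued as the scheduler allows, so new submissions are rejected outright. A hedged way out is to let the queue drain and resubmit with fewer parallel jobs (2 below is an arbitrary smaller value):

    # Check what is already queued under your user, then resubmit more conservatively.
    squeue -u $USER
    atlas run binning --profile cluster --jobs 2 --keep-going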

error atlas run binning

[2021-09-27 16:14 INFO] Executing: snakemake --snakefile /scratch/project_2004930/atlas/atlas/Snakefile --directory /users/student223/Human --jobs 10 --rerun-incomplete --configfile '/users/student223/Human/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy binning --keep-going
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cluster nodes: 10
Singularity containers: ignored
Job stats:
job count min threads max threads


align_reads_to_final_contigs 8 8 8
apply_quality_filter 8 8 8
assembly 1 1 1
assembly_one_sample 8 1 1
bam_2_sam_binning 8 8 8
bam_2_sam_contigs 8 4 4
binning 1 1 1
build_assembly_report 1 1 1
build_bin_report 1 1 1
build_decontamination_db 1 8 8
build_qc_report 1 1 1
calculate_contigs_stats 16 1 1
calculate_insert_size 8 4 4
combine_bin_stats 1 1 1
combine_contig_stats 1 1 1
combine_insert_stats 1 1 1
combine_read_counts 1 1 1
combine_read_length_stats 1 1 1
convert_sam_to_bam 8 4 4
deduplicate_reads 8 8 8
do_not_filter_contigs 8 1 1
error_correction 8 8 8
finalize_contigs 8 1 1
finalize_sample_qc 8 1 1
get_bins 8 1 1
get_contig_coverage_from_bb 8 1 1
get_contigs_from_gene_names 8 1 1
get_maxbin_cluster_attribution 8 1 1
get_metabat_depth_file 8 8 8
get_read_stats 40 4 4
get_unique_bin_ids 16 1 1
get_unique_cluster_attribution 16 1 1
init_pre_assembly_processing 8 1 1
initialize_qc 8 4 4
maxbin 8 8 8
merge_checkm 1 1 1
merge_pairs 8 8 8
metabat 8 8 8
pileup 8 8 8
pileup_for_binning 8 8 8
predict_genes 8 1 1
qc 1 1 1
qcreads 8 1 1
rename_contigs 8 4 4
rename_spades_output 8 1 1
run_checkm_lineage_wf 8 8 8
run_checkm_tree_qa 8 1 1
run_das_tool 8 8 8
run_decontamination 8 8 8
run_spades 8 8 8
write_read_counts 8 1 1
total 373 1 8

[Mon Sep 27 16:14:45 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958164_1.fastq.gz, /users/student223/shared/Human/SAMEA103958164_2.fastq.gz
output: SAMEA103958164/sequence_quality_control/SAMEA103958164_raw_R1.fastq.gz, SAMEA103958164/sequence_quality_control/SAMEA103958164_raw_R2.fastq.gz
log: SAMEA103958164/logs/QC/init.log
jobid: 155
wildcards: sample=SAMEA103958164
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:45 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.155.sh
CLUSTER: 2021-09-27 16:14:45 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:45 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958163_1.fastq.gz, /users/student223/shared/Human/SAMEA103958163_2.fastq.gz
output: SAMEA103958163/sequence_quality_control/SAMEA103958163_raw_R1.fastq.gz, SAMEA103958163/sequence_quality_control/SAMEA103958163_raw_R2.fastq.gz
log: SAMEA103958163/logs/QC/init.log
jobid: 211
wildcards: sample=SAMEA103958163
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:46 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.211.sh
CLUSTER: 2021-09-27 16:14:46 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:46 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958167_1.fastq.gz, /users/student223/shared/Human/SAMEA103958167_2.fastq.gz
output: SAMEA103958167/sequence_quality_control/SAMEA103958167_raw_R1.fastq.gz, SAMEA103958167/sequence_quality_control/SAMEA103958167_raw_R2.fastq.gz
log: SAMEA103958167/logs/QC/init.log
jobid: 99
wildcards: sample=SAMEA103958167
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:46 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.99.sh
CLUSTER: 2021-09-27 16:14:46 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:46 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958160_1.fastq.gz, /users/student223/shared/Human/SAMEA103958160_2.fastq.gz
output: SAMEA103958160/sequence_quality_control/SAMEA103958160_raw_R1.fastq.gz, SAMEA103958160/sequence_quality_control/SAMEA103958160_raw_R2.fastq.gz
log: SAMEA103958160/logs/QC/init.log
jobid: 71
wildcards: sample=SAMEA103958160
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:46 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.71.sh
CLUSTER: 2021-09-27 16:14:46 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:46 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958165_1.fastq.gz, /users/student223/shared/Human/SAMEA103958165_2.fastq.gz
output: SAMEA103958165/sequence_quality_control/SAMEA103958165_raw_R1.fastq.gz, SAMEA103958165/sequence_quality_control/SAMEA103958165_raw_R2.fastq.gz
log: SAMEA103958165/logs/QC/init.log
jobid: 12
wildcards: sample=SAMEA103958165
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:47 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.12.sh
CLUSTER: 2021-09-27 16:14:47 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:47 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958162_1.fastq.gz, /users/student223/shared/Human/SAMEA103958162_2.fastq.gz
output: SAMEA103958162/sequence_quality_control/SAMEA103958162_raw_R1.fastq.gz, SAMEA103958162/sequence_quality_control/SAMEA103958162_raw_R2.fastq.gz
log: SAMEA103958162/logs/QC/init.log
jobid: 43
wildcards: sample=SAMEA103958162
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:47 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.43.sh
CLUSTER: 2021-09-27 16:14:47 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:47 2021]
rule build_decontamination_db:
input: /scratch/project_2004930/databases/phiX174_virus.fa
output: ref/genome/1/summary.txt
log: logs/QC/build_decontamination_db.log
jobid: 14
threads: 8
resources: tmpdir=/local_scratch/student223, mem=90, java_mem=76, time=5, mem_mb=90000, time_min=300

CLUSTER: 2021-09-27 16:14:47 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=build_decontamination_db --cpus-per-task=8 -n1 --time=300 --mem=90000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.build_decontamination_db.14.sh
CLUSTER: 2021-09-27 16:14:47 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:47 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958166_1.fastq.gz, /users/student223/shared/Human/SAMEA103958166_2.fastq.gz
output: SAMEA103958166/sequence_quality_control/SAMEA103958166_raw_R1.fastq.gz, SAMEA103958166/sequence_quality_control/SAMEA103958166_raw_R2.fastq.gz
log: SAMEA103958166/logs/QC/init.log
jobid: 127
wildcards: sample=SAMEA103958166
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:47 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.127.sh
CLUSTER: 2021-09-27 16:14:48 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.

[Mon Sep 27 16:14:48 2021]
rule initialize_qc:
input: /users/student223/shared/Human/SAMEA103958161_1.fastq.gz, /users/student223/shared/Human/SAMEA103958161_2.fastq.gz
output: SAMEA103958161/sequence_quality_control/SAMEA103958161_raw_R1.fastq.gz, SAMEA103958161/sequence_quality_control/SAMEA103958161_raw_R2.fastq.gz
log: SAMEA103958161/logs/QC/init.log
jobid: 183
wildcards: sample=SAMEA103958161
priority: 80
threads: 4
resources: tmpdir=/local_scratch/student223, mem=10, java_mem=8, time=1, mem_mb=10000, time_min=60

CLUSTER: 2021-09-27 16:14:48 submit command: sbatch --parsable --output=cluster_log/%j.out --error=cluster_log/%j.out --job-name=initialize_qc --cpus-per-task=4 -n1 --time=60 --mem=10000m /users/student223/Human/.snakemake/tmp.80uen2e6/snakejob.initialize_qc.183.sh
CLUSTER: 2021-09-27 16:14:48 Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
Note the path to the log file for debugging.
Documentation is available at: https://metagenome-atlas.readthedocs.io
Issues can be raised at: https://github.com/metagenome-atlas/atlas/issues
Complete log: /users/student223/Human/.snakemake/log/2021-09-27T161441.941750.snakemake.log
[2021-09-27 16:14 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/atlas/atlas/Snakefile --directory /users/student223/Human --jobs 10 --rerun-incomplete --configfile '/users/student223/Human/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy binning --keep-going ' returned non-zero exit status 1.

Rerun with errors

(base) [student231@puhti-login1 First_Run]$ atlas run --dryrun
Traceback (most recent call last):
  File "/scratch/project_2004930/miniconda3/bin/atlas", line 33, in <module>
    sys.exit(load_entry_point('metagenome-atlas', 'console_scripts', 'atlas')())
  File "/scratch/project_2004930/miniconda3/bin/atlas", line 25, in importlib_load_entry_point
    return next(matches).load()
StopIteration

(base) [student231@puhti-login1 First_Run]$ less /scratch/project_2004930/miniconda3/bin/atlas
#!/scratch/project_2004930/miniconda3/bin/python3.8
# EASY-INSTALL-ENTRY-SCRIPT: 'metagenome-atlas','console_scripts','atlas'
import re
import sys

# for compatibility with easy_install; see #2198
requires = 'metagenome-atlas'

try:
    from importlib.metadata import distribution
except ImportError:
    try:
        from importlib_metadata import distribution
    except ImportError:
        from pkg_resources import load_entry_point


def importlib_load_entry_point(spec, group, name):
    dist_name, _, _ = spec.partition('==')
    matches = (
        entry_point
        for entry_point in distribution(dist_name).entry_points
        if entry_point.group == group and entry_point.name == name
    )
    return next(matches).load()


globals().setdefault('load_entry_point', importlib_load_entry_point)

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
    sys.exit(load_entry_point('metagenome-atlas', 'console_scripts', 'atlas')())
(base) [student231@puhti-login1 First_Run]$
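
One detail stands out in the prompts: this run happened in (base), while the working runs elsewhere in these issues show (atlasenv). StopIteration from the entry-point loader means no matching console script was found in the active environment, so a plausible fix (assuming atlas is installed in atlasenv as in the other reports) is simply:

    # Activate the environment that actually provides the atlas entry point, then retry.
    conda activate atlasenv
    atlas run --dryrun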

Error in rule initialize_qc

I have been working with the peat dataset and have repeated the assembly part of the tutorial three times, using this command: atlas run assembly --profile cluster --jobs 10 --keep-going

Every time, I get the same errors (see below): multiple failures associated with the QC step (I guess it is the same issue others have raised here, but I was not able to post in their issue).

Any ideas on how to fix this?

Two examples of the error messages I get:

Example 1

Error in rule initialize_qc:
jobid: 39
output: DNA8-1/sequence_quality_control/DNA8-1_raw_R1.fastq.gz, DNA8-1/sequence_quality_control/DNA8-1_raw_R2.fastq.gz
log: DNA8-1/logs/QC/init.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/d01975eec998d193985b8b6d0a2fb2a4
shell:

    reformat.sh in1=/users/student224/shared/Peat_metagenomes/DNA8-1_R1.fastq.gz in2=/users/student224/shared/Peat_metagenomes/DNA8-1_R2.fastq.gz interleaved=f out1=DNA8-1/sequence_quality_control/DNA8-1_raw_R1.fastq.gz out2=DNA8-1/sequence_quality_control/DNA8-1_raw_R2.fastq.gz iupacToN=t touppercase=t qout=33 addslash=t trimreaddescription=t overwrite=true verifypaired=t threads=4 -Xmx8G 2> DNA8-1/logs/QC/init.log

    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8253162

Error executing rule initialize_qc on cluster (jobid: 39, external: 8253162, jobscript: /users/student224/peat/.snakemake/tmp.tm8wosfs/snakejob.initialize_qc.39.sh). For error details see the cluster log and the log files of the involved rule(s).

Example 2

Error in rule initialize_qc:
jobid: 61
output: DNA9-2/sequence_quality_control/DNA9-2_raw_R1.fastq.gz, DNA9-2/sequence_quality_control/DNA9-2_raw_R2.fastq.gz
log: DNA9-2/logs/QC/init.log (check log file(s) for error message)
conda-env: /scratch/project_2004930/databases/conda_envs/d01975eec998d193985b8b6d0a2fb2a4
shell:

    reformat.sh in1=/users/student224/shared/Peat_metagenomes/DNA9-2_R1.fastq.gz in2=/users/student224/shared/Peat_metagenomes/DNA9-2_R2.fastq.gz             interleaved=f             out1=DNA9-2/sequence_quality_control/DNA9-2_raw_R1.fastq.gz out2=DNA9-2/sequence_quality_control/DNA9-2_raw_R2.fastq.gz             iupacToN=t touppercase=t qout=33 addslash=t trimreaddescription=t             overwrite=true             verifypaired=t             threads=4             -Xmx8G 2> DNA9-2/logs/QC/init.log
    
    (one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: 8253167

Error executing rule initialize_qc on cluster (jobid: 61, external: 8253167, jobscript: /users/student224/peat/.snakemake/tmp.tm8wosfs/snakejob.initialize_qc.61.sh). For error details see the cluster log and the log files of the involved rule(s).
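
Both failures point to the per-sample init.log, and since the command runs with verifypaired=t, a broken R1/R2 pairing is one common cause (a guess, not confirmed by this output). The first step either way is to read those logs:

    # reformat.sh writes its actual error here; with verifypaired=t it aborts
    # when the two input files are not properly paired.
    cat DNA8-1/logs/QC/init.log
    cat DNA9-2/logs/QC/init.log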

Errors when running assembly

I get this error when running atlas run assembly --profile cluster (with or without -j 10):
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Error submitting jobscript (exit code 1):
Job can't be submitted
sbatch: error: AssocMaxSubmitJobLimit
sbatch: error: Batch job submission failed: Job violates accounting/QOS policy (job submit limit, user's size and/or time limits)

Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Note the path to the log file for debugging.
Documentation is available at: https://metagenome-atlas.readthedocs.io
Issues can be raised at: https://github.com/metagenome-atlas/atlas/issues
Complete log: /users/student221/First_Run/.snakemake/log/2021-09-27T150916.466418.snakemake.log
[2021-09-27 15:09 CRITICAL] Command 'snakemake --snakefile /scratch/project_2004930/miniconda3/envs/atlasenv/lib/python3.8/site-packages/atlas/Snakefile --directory /users/student221/First_Run --jobs 10 --rerun-incomplete --configfile '/users/student221/First_Run/config.yaml' --nolock --profile cluster --use-conda --conda-prefix /scratch/project_2004930/databases/conda_envs --scheduler greedy assembly ' returned non-zero exit status 1.
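
This is the same submit-limit rejection as in the QOS issue above: the project account can only have a limited number of jobs queued at once. A hedged workaround is to check what is queued under the account and resubmit with fewer simultaneous jobs (2 is an arbitrary smaller value):

    # Jobs queued under the course account (project_2004930, per the cluster_config above).
    squeue -A project_2004930
    atlas run assembly --profile cluster --jobs 2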

ssh unreachable

I tried to set up SSH but got the following message:
ssh: connect to host puhti.csc.fi port 22: Network is unreachable
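
"Network is unreachable" happens before any SSH authentication, so the client has no route to the host at all (VPN, firewall, or local network issue). Two hedged first checks; substitute your CSC username:

    # Confirm the hostname resolves and is reachable; -v makes ssh show where it fails.
    ping -c 3 puhti.csc.fi
    ssh -v yourusername@puhti.csc.fi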
