The manual can be found here.
If you need further help, you can submit questions about usage to the dadi-user Google Group.
If you've found an apparent bug or want to suggest a feature, please submit an Issue so we can address it.
Automated and distributed population genetic model inference from allele frequency spectra
Home Page: https://dadi-cli.readthedocs.io/en/latest/
License: Apache License 2.0
I recently discovered that InferDM using Work Queue was broken by the new bestfit_p0 functionality. (Fixed here: #38) We need robust tests to ensure this does not happen in the future.
I see there are some Work Queue tests in the test suite, but they are skipped with @pytest.mark.skip(). Why are these tests being skipped? If run length is an issue, they can be made to run very quickly by setting --maxeval very small.
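One possibility, sketched below under the assumption that the skip is about the ndcctools dependency (the test name and body here are hypothetical), is to gate the tests with skipif rather than skip them unconditionally:

```python
# Hypothetical sketch: gate Work Queue tests on dependency availability
# instead of unconditionally skipping them. Test name and body are made up.
import importlib.util
import pytest

work_queue_missing = importlib.util.find_spec("ndcctools") is None

@pytest.mark.skipif(work_queue_missing,
                    reason="ndcctools (Work Queue) is not installed")
def test_infer_dm_work_queue():
    # Run an InferDM optimization through Work Queue with a tiny --maxeval
    # so the test finishes quickly; details depend on the test fixtures.
    pass
```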
The _sel and _sel_single_gamma suffixes aren't very informative. For models with a single gamma, _one_s might suffice. But for models with two gammas it's more complex, because one could incorporate those gammas into the models in multiple ways. Currently, we only have the case where s is equal in pop 1 and the ancestral pop, and different in pop 2. Any suggestions for what that should be referred to as?
It would be ideal to support GPU acceleration of optimization through Work Queue. But it will be challenging, and there are multiple obstacles.
We could use t.specify_gpus(1) to indicate tasks for GPU execution. But we don't know a priori the proper number of GPU vs CPU tasks for efficient use of all resources. So to be efficient we would need to dynamically create tasks to fill the queue as existing tasks finished, specifying them as GPU or CPU as necessary. This would require a significant rework of the Work Queue dadi-cli implementation. And it would require specifying the available resources in the Work Queue pool ahead of time, which isn't necessary now and seems contrary to the Work Queue philosophy.
Note that having each task try dadi.cuda_enabled(True) seems likely to lead to competition for limited GPUs, potentially slowing overall performance. Note also that PythonTasks don't preserve state, so we can't simply run dadi.cuda_enabled(True) ahead of time.
I think we could add an --interactive option to the Plot command, allowing users to choose whether to display figures in the terminal. This would enhance the integration of dadi-cli with snakemake.
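A minimal sketch of the wiring, assuming argparse is used for the subcommand (the flag name comes from this suggestion; everything else is illustrative):

```python
# Illustrative sketch: an --interactive flag for the Plot subcommand.
import argparse

parser = argparse.ArgumentParser(prog="dadi-cli Plot")
parser.add_argument("--output", required=True)
parser.add_argument("--interactive", action="store_true",
                    help="display the figure interactively instead of only "
                         "writing it to --output")

args = parser.parse_args(["--output", "test.pdf", "--interactive"])
# downstream: call plt.show() when args.interactive, fig.savefig otherwise
```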
Users should be able to use custom models with GenerateCache. This will be particularly important for joint DFE analyses.
Currently we seem to be requiring exact versions of dill and ndcctools in the *-env.yml files. Is there some reason for this? If not, we should remove those requirements.
Also, gfortran should not be necessary for build-env.yml
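For reference, a sketch of what an unpinned build-env.yml might look like (the package list here is an assumption; only the unpinned style is the point):

```yaml
# Hypothetical sketch of a relaxed build-env.yml; packages other than
# dill and ndcctools are assumptions, and gfortran is dropped.
name: dadi-cli-build
channels:
  - conda-forge
dependencies:
  - python >=3.9
  - dill        # no exact version pin
  - ndcctools   # no exact version pin
```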
I used the simulated data from https://github.com/popsim-consortium/analysis2 to fit a gamma or lognormal DFE with dadi. However, the proportions of mutations with different selection coefficients are different from those of the inferred gamma and lognormal DFEs.
The inferred gamma DFE for pop0 from the Constant demographic model without annotation is shape=0.187013 and scale=0.05817376022120527. The inferred lognormal DFE from the same data is mu=-1.40769 and sigma=4.12304.
To calculate the proportions of mutations with different selection coefficients, I used the following code:
import numpy as np
from scipy.stats import lognorm, gamma

def lognormal_mut_prop(mu, sigma):
    ps = lognorm.cdf([1e-5, 1e-4, 1e-3, 1e-2], s=sigma, scale=np.exp(mu))
    props = [ps[0], ps[1]-ps[0], ps[2]-ps[1], ps[3]-ps[2], 1-ps[3]]
    return props

def gamma_mut_prop(shape, scale):
    ps = gamma.cdf([1e-5, 1e-4, 1e-3, 1e-2], a=shape, scale=scale)
    props = [ps[0], ps[1]-ps[0], ps[2]-ps[1], ps[3]-ps[2], 1-ps[3]]
    return props
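As a self-contained check (using only the inferred gamma parameters quoted above), the same binning can be reproduced directly:

```python
# Self-contained check of the gamma-DFE proportions, using the inferred
# shape and scale from above.
from scipy.stats import gamma

ps = gamma.cdf([1e-5, 1e-4, 1e-3, 1e-2], a=0.187013, scale=0.05817376022120527)
props = [ps[0], ps[1]-ps[0], ps[2]-ps[1], ps[3]-ps[2], 1-ps[3]]
print(props)  # first bin is roughly 0.214
```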
Then the proportions from the gamma DFE are [0.21445487739385305, 0.11533939611182581, 0.17626500815128215, 0.2542754681968391, 0.23966525014619988], and the proportions from the lognormal DFE are [0.00712460844025545, 0.022090862661779957, 0.06188924246452784, 0.1279129303758305, 0.7809823560576062].
If I use the parameters from Huber et al. 2017, then the proportions are similar. For the gamma DFE, with shape=0.19 and scale=0.074, the proportions are [0.19981642594537893, 0.10960250183554734, 0.16888719378656925, 0.24865042521127445, 0.27304345322123]. For the lognormal DFE, with mu=-6.86 and sigma=4.89, the proportions are [0.17067061591167176, 0.14471479180194546, 0.18071862151578205, 0.18153626737811707, 0.32235970339248365].
There seems to be a Numpy version compatibility issue when using Travis-CI/GitHub to do automatic unit testing:
https://app.travis-ci.com/github/xin-huang/dadi-cli/jobs/566546856
The functionality for using custom model files (--model_file) should be documented, and there should be tests of that functionality in the test suite.
Hello,
Thank you for your continued work on dadi-cli!
I'm still experiencing an NLopt error as raised in Issue #53.
Command
srun dadi-cli InferDM --fs <input>.fs --model growth --lbounds 1e-3 1e-5 --ubounds 100 1 --output <output>.growth --grids 40 50 60 --optimizations 9999 --nomisid
Error
Process Process-20:
Traceback (most recent call last):
File "/opt/anaconda_dadi-cli/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/anaconda_dadi-cli/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/__main__.py", line 32, in _worker_func
results = func(*args)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/InferDM.py", line 74, in infer_demography
popt, _ = dadi.Inference.opt(
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi/NLopt_mod.py", line 134, in opt
xopt = opt.optimize(p0)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/nlopt.py", line 328, in optimize
return _nlopt.opt_optimize(self, *args)
nlopt.RoundoffLimited: NLopt roundoff-limited
I'd be most grateful for any advice.
Hi dadi-cli developers,
Thank you for making this tool; so far it's proving very useful and making access to dadi much more straightforward.
I wanted to ask about a warning I have encountered when running InferDM with the --nomisid flag.
The error is:
Process Process-8:
Traceback (most recent call last):
File "/opt/anaconda_dadi-cli/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/anaconda_dadi-cli/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/__main__.py", line 32, in _worker_func
results = func(*args)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/InferDM.py", line 74, in infer_demography
popt, _ = dadi.Inference.opt(
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi/NLopt_mod.py", line 134, in opt
xopt = opt.optimize(p0)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/nlopt.py", line 328, in optimize
return _nlopt.opt_optimize(self, *args)
nlopt.RoundoffLimited: NLopt roundoff-limited
This appears multiple times for different processes, and the number specified by --optimizations is never reached (e.g., only 80/100 ever run and the job doesn't end). I have tried inflating the number of optimizations but can't seem to get to higher numbers.
I am running the process on Slurm with the script below:
#!/bin/bash -e
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 16
#SBATCH -p <medium length queue>
#SBATCH --mem 4G
#SBATCH -t 7-0:00
#SBATCH -o %x_%A_%a_%N.out
#SBATCH -e %x_%A_%a_%N.err
#SBATCH -J bh_dadi_trial_run
#SBATCH --mail-type=END,FAIL
cd <dadi-cli dir>
srun dadi-cli InferDM --fs 1D_test.fs --model growth --lbounds 1e-3 0 --ubounds 100 1 --output 1D_test.growth.demo.params --optimizations 100 --nomisid
I would be most grateful for any input you might have on this!
Thanks again
I am getting an error with dadi-cli Plot. A sample command
dadi-cli Plot --fs NOR.CEN.22.neutral_regions.folded.final.fs \
--demo-popt NOR.CEN.sym_mig.neutral.demo.params.InferDM.bestfits \
--output ./test.pdf --model sym_mig
returns the error:
Traceback (most recent call last):
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/bin/dadi-cli", line 10, in <module>
sys.exit(main())
^^^^^^
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi_cli/__main__.py", line 1869, in main
args.runner(args)
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi_cli/__main__.py", line 994, in run_plot
plot_fitted_demography(
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi_cli/Plot.py", line 98, in plot_fitted_demography
model = func_ex(popt, ns, pts_l)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi/Numerics.py", line 374, in extrap_func
result_l = list(map(partial_func, pts_l))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi/Numerics.py", line 122, in misid_func
fs = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/CEM/smacklab/libraries/python/.conda/envs/dadi-cli-gpu/lib/python3.11/site-packages/dadi/PortikModels/portik_models_2d.py", line 33, in sym_mig
nu1, nu2, m, T = params
^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)
Based on the traceback, I edited the script Numerics.py (this version). On line 121, I changed args[0] = all_params[:-1] to args[0] = all_params, and this appeared to resolve the issue and produce the expected output. This is obviously non-ideal; I'm unsure what else I could be breaking through this fix!
I believe that without this fix, a similar error crops up with other functions (e.g., the bootstrap workflow) for our group in dadi-cli.
Hello @tjstruck @sdavey @RyanGutenkunst
The repository currently has 23 branches.
Could you please review them and remove any that are no longer in use?
Thank you.
In a separate project, I experienced deadlocks when using multiprocessing.Queue. I found a solution by switching to Manager.Queue, as recommended in the Python documentation:
Warning
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.
This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
Note that a queue created using a manager does not have this issue. See Programming guidelines.
Although I have not encountered deadlocks in dadi-cli, I suggest switching from multiprocessing.Queue to Manager.Queue to avoid potential deadlock issues.
Line 344 in 708667f
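A minimal sketch of the switch (the worker function and payload are illustrative, not dadi-cli's actual code):

```python
# Illustrative sketch: a manager-backed queue avoids the feeder-thread
# deadlock that multiprocessing.Queue can hit when joining a child that
# still has unflushed items on the queue.
import multiprocessing

def worker(q):
    q.put("result")

if __name__ == "__main__":
    with multiprocessing.Manager() as manager:
        q = manager.Queue()  # proxy object served by the manager process
        p = multiprocessing.Process(target=worker, args=(q,))
        p.start()
        p.join()  # safe: no per-process feeder thread to drain first
        print(q.get())
```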
Hi Xin,
Thanks for your nice work. I am confused about the following two parameters of the InferDM command; I think they both mean that optimization runs stop upon convergence.
--check-convergence CHECK_CONVERGENCE
Start checking for convergence after a chosen number of optimizations. Stop optimization runs when convergence criteria are reached. BestFit results file
will be call <output_prefix>.InferDM.bestfits. Convergence not checked by default.
--force-convergence FORCE_CONVERGENCE
Start checking for convergence after a chosen number of optimizations. Only stop optimization once convergence criteria is reached. BestFit results file
will be call <output_prefix>.InferDM.bestfits. Convergence not checked by default.
I also read the user guide but still don't know the difference between them. Could you please give me more details or an example?
Best,
Xiaobo
Hello,
Is it possible to use frequency spectra generated with ANGSD & realSFS in dadi-cli tools such as InferDM?
Thanks!
Because the param name "misid" should be logged in the output file from InferDM and InferDFE, any subcommands utilizing those output files should be able to parse out that "misid" is a parameter and automatically determine that a model should be wrapped with dadi.Numerics.make_anc_state_misid_func().
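A sketch of the parsing side, based on a bestfits header of the form "# Log(likelihood) nu T misid theta" (the helper name is hypothetical):

```python
# Hypothetical helper: detect whether "misid" was fit, from the header line
# of an InferDM/InferDFE bestfits file, e.g. "# Log(likelihood) nu T misid theta".
def bestfits_has_misid(header_line):
    cols = header_line.lstrip("#").split()
    return "misid" in cols

# If True, the model function would be wrapped with
# dadi.Numerics.make_anc_state_misid_func() before evaluation.
```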
Hi @xin-huang ,
I have a bestfit file for a single population with 66 samples. Part of it is shown below.
# /data/homezvol2/jenyuw/.conda/envs/dadi-cli/bin/dadi-cli BestFit --input-prefix gr.InferDM --lbounds 0.00001 0 --ubounds 100000 10000
...
...
#
# Converged results
# Log(likelihood) nu T misid theta
-154913.88339936367 0.08581037096179181 0.00738071925340792 46237.7356253171
-154913.88339936367 0.08581033593010697 0.007380720014850223 46237.737912280245
-154913.88339936372 0.08581034516624365 0.0073807224439230166 46237.738289533794
-154913.8833993638 0.0858103579387303 0.007380721501960894 46237.73720809068
-154913.88339936384 0.08581038150530884 0.007380724215024537 46237.736871764966
Then, I used this command to generate the cache file: dadi-cli GenerateCache --model growth_sel --demo-popt "/dfs7/jje/jenyuw/SV-project-temp/result/fit_dadi/EU_sfs/dadi/gr.InferDM.bestfits" --sample-size 66 --grids 20 40 60 --gamma-pts 10 --gamma-bounds 0.0001 200 --output gr.bpkl
However, it always returns an error, even though I tried changing the gamma bounds, grids, and gamma-pts. May I know how to resolve it? Thank you!
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 119, in _worker_sfs
sfs = popn_func_ex(tuple(params)+(gamma,), ns, pts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/Numerics.py", line 374, in extrap_func
result_l = list(map(partial_func, pts_l))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/DemogSelModels.py", line 446, in growth_sel
nu,T,gamma = params
^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 119, in _worker_sfs
sfs = popn_func_ex(tuple(params)+(gamma,), ns, pts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/Numerics.py", line 374, in extrap_func
result_l = list(map(partial_func, pts_l))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/DemogSelModels.py", line 446, in growth_sel
nu,T,gamma = params
^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 119, in _worker_sfs
sfs = popn_func_ex(tuple(params)+(gamma,), ns, pts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/Numerics.py", line 374, in extrap_func
result_l = list(map(partial_func, pts_l))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/DemogSelModels.py", line 446, in growth_sel
nu,T,gamma = params
^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 119, in _worker_sfs
sfs = popn_func_ex(tuple(params)+(gamma,), ns, pts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/Numerics.py", line 374, in extrap_func
result_l = list(map(partial_func, pts_l))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/DemogSelModels.py", line 446, in growth_sel
nu,T,gamma = params
^^^^^^^^^^
Traceback (most recent call last):
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/bin/dadi-cli", line 10, in <module>
sys.exit(main())
^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi_cli/__main__.py", line 1890, in main
args.runner(args)
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi_cli/__main__.py", line 83, in run_generate_cache
generate_cache(
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi_cli/GenerateCache.py", line 49, in generate_cache
spectra = DFE.Cache1D(
^^^^^^^^^^^^
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 59, in __init__
self._multiple_processes(cpus, gpus, verbose, demo_sel_func)
File "/data/homezvol2/jenyuw/.conda/envs/dadi-cli/lib/python3.12/site-packages/dadi/DFE/Cache1D_mod.py", line 103, in _multiple_processes
for ii, sfs in results:
^^^^^^^
TypeError: cannot unpack non-iterable ValueError object
I think we should verify whether the input VCF contains the AA INFO field when the --polarized argument is set.
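A sketch of such a check (the function name and the header-only scan are assumptions):

```python
# Hypothetical check: does the VCF header declare the AA (ancestral allele)
# INFO field? Scans only the header lines, so it is cheap even for large files.
import gzip

def vcf_declares_aa(path):
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as fh:
        for line in fh:
            if not line.startswith("#"):
                break  # past the header without finding AA
            if line.startswith("##INFO=<ID=AA,"):
                return True
    return False
```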
hey @xin-huang -
I was updating everything on my stdpopsim analysis2 pipeline and dadi-cli started to throw errors. It looks like there is a __param_names__ property associated with your models that isn't being set. Here is the error I'm getting:
$ dadi-cli Model --names two_epoch
Traceback (most recent call last):
File "/home/adkern/popSim/analysis2/ext/dadi-cli/dadi_cli/Models.py", line 71, in get_model
params = func.__param_names__
AttributeError: 'function' object has no attribute '__param_names__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/adkern/miniconda3/envs/analysis2/bin/dadi-cli", line 33, in <module>
sys.exit(load_entry_point('dadi-cli', 'console_scripts', 'dadi-cli')())
File "/home/adkern/popSim/analysis2/ext/dadi-cli/dadi_cli/__main__.py", line 1636, in main
args.runner(args)
File "/home/adkern/popSim/analysis2/ext/dadi-cli/dadi_cli/__main__.py", line 800, in run_model
print_built_in_model_details(args.names)
File "/home/adkern/popSim/analysis2/ext/dadi-cli/dadi_cli/Models.py", line 108, in print_built_in_model_details
func, params = get_model(model_name)
File "/home/adkern/popSim/analysis2/ext/dadi-cli/dadi_cli/Models.py", line 73, in get_model
raise ValueError(
ValueError: Demographic model needs a .__param_names__ attribute!
Add one by adding the line two_epoch.__param_name__ = [LIST_OF_PARAMS]
Replacing LIST_OF_PARAMS with the names of the parameters as strings.
cheers
The current Stat analysis in the README is incorrect. Doing the Godambe analysis for the DFE is tricky, because the assumed theta needs to differ between bootstraps. Incorporating that will require
These complications suggest that maybe we should just split the Stat function into StatDM and StatDFE. What do you think @xin-huang?
Hello @tjstruck
It appears that some filenames have been updated, e.g.,
./examples/data/1KG.YRI.CEU.biallelic.synonymous.snps.withanc.strict.vcf.gz
in the user guide.
Could you please review and update the user guide accordingly? Thank you.
Thanks @nschulmeister for bringing this to our attention.
dadi-cli GenerateFs --vcf ../file.vcf --pop-info popmap.txt --projections 10 10 10 --output file.fs --pop-ids pop1 pop2 pop3
dadi-cli Plot --fs ./folded_fs.fs --output ./output.pdf
results in a blank, 2kb pdf file.
The .fs file looks fine, and no errors are produced at any point.
Using Python 3.11.1 and dadi-cli 0.9.3.
I found that dadi.DFE.DemogSelModels.three_epoch is not in the latest dadi package from conda-forge. But it is in the Bitbucket source code. Could you please update it? Thank you.
Currently bounds are required for the BestFit command, and only used for the "near boundary" warning message. I think they should be optional, and if provided used more.
Hi developers,
I found that --maxtime doesn't impose any limit on --force-convergence. I think --maxtime should take priority over --force-convergence, so the process can be killed when convergence cannot be achieved for a very long time. I don't know whether what I say is correct or not; you could think about it, or ignore it...
Best wishes,
Xiaobo
Hello,
Thanks again for your work on dadi-cli!
I have encountered a circular issue with the InferDM module: if I want to infer the standard neutral model in 1D (snm_1d), upper and lower bounds are required, but I believe there are no such parameters required for this model.
Error with no --lbounds or --ubounds
dadi-cli InferDM --fs <input>.fs --model snm_1d --output <output>.snm_1d.demo.params --optimizations 1 --nomisid
usage: dadi-cli InferDM [-h] --fs FS [--p0 P0 [P0 ...]] --output-prefix OUTPUT_PREFIX [--optimizations OPTIMIZATIONS] [--check-convergence] [--force-convergence] [--work-queue WORK_QUEUE WORK_QUEUE] [--port PORT] [--debug-wq] [--maxeval MAXEVAL]
[--maxtime MAXTIME] [--cpus CPUS] [--gpus GPUS] [--bestfit-p0-file BESTFIT_P0] [--delta-ll DELTA_LL] --model MODEL [--model-file MODEL_FILE] [--grids GRIDS GRIDS GRIDS] [--nomisid] [--constants CONSTANTS [CONSTANTS ...]]
--lbounds LBOUNDS [LBOUNDS ...] --ubounds UBOUNDS [UBOUNDS ...] [--global-optimization] [--seed SEED]
dadi-cli InferDM: error: the following arguments are required: --lbounds, --ubounds
Error with --lbounds and --ubounds set
dadi-cli InferDM --fs <input>.fs --model snm_1d --lbounds 0 --ubounds 1 --output <output>.snm_1d.demo.params --optimizations 1 --nomisid
Traceback (most recent call last):
File "/opt/anaconda_dadi-cli/bin/dadi-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/__main__.py", line 1603, in main
args.runner(args)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/__main__.py", line 133, in run_infer_dm
args.lbounds = _check_params(args.lbounds, args.model, "--lbounds", args.misid)
File "/opt/anaconda_dadi-cli/lib/python3.10/site-packages/dadi_cli/__main__.py", line 1482, in _check_params
raise Exception(
Exception:
Found 1 demographic parameters from the option --lbounds; however, 0 demographic parameters are required from the snm_1d model
You might be using the wrong model or need to add --nomisid if you did not use ancestral allele information to polarize the fs.
Error with --lbounds and --ubounds empty
dadi-cli InferDM --fs <input>.fs --model snm_1d --lbounds --ubounds --output <output>.snm_1d.demo.params --optimizations 1 --nomisid
usage: dadi-cli InferDM [-h] --fs FS [--p0 P0 [P0 ...]] --output-prefix OUTPUT_PREFIX [--optimizations OPTIMIZATIONS] [--check-convergence] [--force-convergence] [--work-queue WORK_QUEUE WORK_QUEUE] [--port PORT] [--debug-wq] [--maxeval MAXEVAL]
[--maxtime MAXTIME] [--cpus CPUS] [--gpus GPUS] [--bestfit-p0-file BESTFIT_P0] [--delta-ll DELTA_LL] --model MODEL [--model-file MODEL_FILE] [--grids GRIDS GRIDS GRIDS] [--nomisid] [--constants CONSTANTS [CONSTANTS ...]]
--lbounds LBOUNDS [LBOUNDS ...] --ubounds UBOUNDS [UBOUNDS ...] [--global-optimization] [--seed SEED]
dadi-cli InferDM: error: argument --lbounds: expected at least one argument
I'm not sure if I am using this model correctly, but I would be most grateful for any advice you can offer on this issue.
Many thanks
Dear teacher,
I have used polyDFE and dadi-cli; however, I didn't get my ideal result. Could you teach me how to set up a suitable model to run polyDFE?
Thank you in advance!
Sincerely,
TAO
The current setting of [sample_sizes[0]+10, sample_sizes[0]+20, sample_sizes[0]+30] is likely to be insufficient when sampled gamma values are large. It's not clear to me how we should create a sensible set of defaults. We either need to test some possibilities, or we just remove the default and let users fumble around.
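One direction, purely as a sketch for discussion (the scaling rule below is an untested assumption, not a recommendation):

```python
# Untested sketch: inflate the default extrapolation grids when the largest
# sampled |gamma| is big, since strong selection needs finer grids.
import math

def default_pts(sample_size, gamma_max=None):
    base = [sample_size + 10, sample_size + 20, sample_size + 30]  # current default
    if gamma_max is None:
        return base
    # assumed heuristic: grow grids with the square root of |gamma| beyond ~10
    factor = max(1, int(math.sqrt(abs(gamma_max) / 10)))
    return [pts * factor for pts in base]

print(default_pts(20))        # [30, 40, 50]
print(default_pts(20, 1000))  # [300, 400, 500]
```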
Is this helpful for building Docker images on AWS machines? https://github.com/jupyterhub/repo2docker
As with InferDM, we should be able to pick a reasonable p0 for the user, by using the middle of the specified ranges.
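A sketch of picking p0 from the bounds (the geometric midpoint for strictly positive parameters is an assumption; dadi-cli may prefer another rule):

```python
# Hypothetical default-p0 chooser: midpoint of each [lbound, ubound] range,
# geometric for strictly positive ranges so it behaves well across magnitudes.
import math

def default_p0(lbounds, ubounds):
    p0 = []
    for lb, ub in zip(lbounds, ubounds):
        if lb > 0 and ub > 0:
            p0.append(math.sqrt(lb * ub))   # geometric midpoint
        else:
            p0.append((lb + ub) / 2)        # arithmetic fallback
    return p0

print(default_p0([1e-3, 0], [100, 1]))  # roughly [0.316, 0.5]
```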