isaacovercast / gimmesad

Joint modelling of abundance and genetic diversity. An integrated model of population genetics and community ecology.
Either cyclical (Pleistocene sky-island style) or just one large growth-and-decay cycle (GDM-style island emergence and contraction).
Every so often you'll see this message; I'll wager it only happens for small values of k.
Generating abundance distributions through time
Traceback (most recent call last):
File "./gimmeSAD.py", line 850, in <module>
plot_rank_abundance_through_time(args.outdir, sp_through_time, equilibria, verbose=args.verbose)
File "./gimmeSAD.py", line 193, in plot_rank_abundance_through_time
max_n_species, max_abundance, max_octave, max_class_count, max_n_bins, octave_bin_labels = prep_normalized_plots(sp_through_time)
File "./gimmeSAD.py", line 272, in prep_normalized_plots
max_abundance = max([max([y.abundance for y in sp]) for sp in sp_through_time.values()])
Per "Testing species abundance models: a new bootstrap approach applied to Indo-Pacific coral reefs", higher colonization parameters (> 0.05) are appropriate for certain communities.
This would make it run faster, and I think it would also make it easier to model larger communities: instead of adding and removing individuals from a list you'd be incrementing and decrementing a counter.
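A minimal sketch of the counter idea, using hypothetical species IDs and counts (not the repo's actual data structures):

```python
from collections import Counter

# Hypothetical local community stored as per-species counts instead of a
# flat list of individuals: births and deaths become O(1) counter updates.
community = Counter({"sp1": 5, "sp2": 3})

def death(community, species):
    """Decrement a species' count; remove the key entirely on local extinction."""
    community[species] -= 1
    if community[species] == 0:
        del community[species]

def birth(community, species):
    """Increment a species' count (new colonists just appear as new keys)."""
    community[species] += 1

death(community, "sp1")
birth(community, "sp2")
```

Memory then scales with the number of species rather than the number of individuals, which is what makes larger communities tractable.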
This is essentially the per-species probability of patch occupancy. There must be a better way to model this forward in time.
Right now each run is independent and there's a ton of stochastic variation in the abundance curves and also the heatplots. Would be cool to smooth this out a bit.
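One way to smooth this out would be averaging across replicate runs. A sketch with made-up numbers (rows are hypothetical replicate runs, columns are abundance ranks):

```python
import numpy as np

# Hypothetical rank-abundance curves from three independent replicate runs.
# Averaging across replicates smooths the run-to-run stochastic variation.
runs = np.array([
    [50, 20, 10, 5],
    [60, 18,  8, 4],
    [55, 22, 12, 6],
])

mean_curve = runs.mean(axis=0)   # smoothed curve to plot
sd_curve = runs.std(axis=0)      # spread, e.g. for a shaded envelope
```

The same trick works for the heatplots by averaging the binned count matrices cell-wise across runs.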
Right now theta isn't being managed in a smart way; it depends more on how long a species happens to stick around than on any meaningful limit. It might be smart to scale theta to some maximum value given the observed data.
Right now we have one colonization rate for all metacommunity species. This implicitly models heterogeneous migration rates, because metacommunity species colonize proportional to their frequency, but it doesn't track Ne properly. If we wanted to actually test heterogeneous migration we'd need to implement some real mechanism: maybe each species would get its own migration rate, and then you could average over all of them to get proportional migration rates. This is a V2 feature.
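A sketch of that V2 mechanism with hypothetical frequencies and rates (none of these names exist in the codebase):

```python
# Hypothetical per-species migration mechanism: each metacommunity species
# gets its own migration rate, and its colonization weight is its
# metacommunity frequency times that rate.
abundances = {"sp1": 0.7, "sp2": 0.2, "sp3": 0.1}    # relative frequencies
mig_rates = {"sp1": 0.01, "sp2": 0.05, "sp3": 0.02}  # per-species rates

weights = {sp: abundances[sp] * mig_rates[sp] for sp in abundances}
total = sum(weights.values())

# Probability that the next colonist belongs to each species:
colonize_probs = {sp: w / total for sp, w in weights.items()}

# Averaging over species recovers a single community-wide rate, so the
# current one-rate model is the special case of equal mig_rates.
mean_rate = total
```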
James suggested a couple of optimizations for the death step in the trait-based part. One idea: rather than recalculating death probabilities over and over again at every timestep, it might be more efficient to uniformly sample the individual to die and then accept or reject that death based on some relationship between trait and environment.
This might not work well in the competition model.
The other idea is to fudge it a little and only recalculate the death probabilities every once in a while.
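The first idea is essentially rejection sampling. A minimal sketch, where the acceptance function and the `baseline` floor are hypothetical choices rather than anything in the codebase:

```python
import math
import random

def sample_death(individuals, trait_of, env_optimum, width=1.0, baseline=0.05):
    """Rejection-sample the individual that dies: draw uniformly, then
    accept with a probability that increases with trait-environment
    mismatch, avoiding a full recomputation of death probabilities
    every timestep."""
    while True:
        ind = random.choice(individuals)
        mismatch = abs(trait_of[ind] - env_optimum)
        # The baseline keeps the loop terminating even when everyone is
        # perfectly adapted; the exponential form is just one choice of
        # monotone map from mismatch into [0, 1].
        p_accept = baseline + (1 - baseline) * (1 - math.exp(-mismatch / width))
        if random.random() < p_accept:
            return ind
```

The expected number of rejections stays small as long as acceptance probabilities aren't uniformly tiny, which is why this tends to beat recomputing a full probability vector per timestep.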
There must be a better way to choose the SGD size than just using a fixed 10x10.
Because the stock neutral model is Moran-style, generations should be measured in units of K timesteps, not in single timesteps.
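The conversion is trivial but worth pinning down (K and the timestep count here are made-up values):

```python
# In a Moran-style model each timestep replaces a single individual, so a
# full "generation" is K birth-death events, where K is community size.
K = 1000               # hypothetical local community size
timesteps = 250_000    # single-individual replacement events simulated

generations = timesteps / K   # elapsed time in Moran generations
```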
We have recently learned that there was an error in the description of the Gutenkunst et al model provided as an example in the msprime tutorial. It appears that you are using a copy of the incorrect model in this repo, and so I am opening this issue to alert you.
Please see here for details on what the error is, and what actions you can take to fix it.
We have also written a short note analysing this and another related error, detailing the likely effects on downstream analysis. Thankfully, the differences between the misspecified model from msprime's documentation and the intended model are slight.
I apologise for this error and I sincerely hope that it has not affected your research.
Right now we're binning based on equal pi/dxy value widths, but we know the values aren't evenly distributed.
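Quantile-based bin edges are one fix: each class then holds roughly the same number of values regardless of skew. A sketch with simulated right-skewed values (the distribution and scale are placeholders, not real pi/dxy output):

```python
import numpy as np

# Hypothetical right-skewed pi/dxy values: with equal-width bins most
# classes end up nearly empty, so place bin edges at quantiles instead.
values = np.random.default_rng(42).exponential(scale=0.005, size=1000)

# Edges at the deciles put roughly the same number of values in each bin.
quantile_edges = np.quantile(values, np.linspace(0, 1, 11))
counts, _ = np.histogram(values, bins=quantile_edges)
```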
I like the idea of sampling from the infinite logseries metacommunity.
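One hedged way to sketch this: approximate the infinite metacommunity with a large finite species pool whose abundances are logseries draws, then sample colonists proportional to abundance. The parameter value and pool size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Approximate the infinite logseries metacommunity with a large finite
# species pool whose abundances are logseries draws (p near 1 gives the
# long tail of rare species typical of large metacommunities).
n_species = 10_000
abundances = rng.logseries(0.999, size=n_species)
probs = abundances / abundances.sum()

# Each colonist is drawn proportional to metacommunity abundance.
colonists = rng.choice(n_species, size=100, p=probs)
```

A truly infinite metacommunity would instead draw new species lazily, but the finite-pool approximation is usually close enough when n_species is large.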
Instead of just using the final Ne for each species, it would be possible to track expansions and contractions and their timing, and then model this backwards in time. Mike suggested you could also do this in "epochs" by chunking and averaging Ne over each chunk, but I don't think that would be any easier than the first way.
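The "epochs" version is easy to sketch, at least: chunk a recorded Ne trajectory and average within each chunk. The trajectory below is invented for illustration.

```python
import numpy as np

# Hypothetical per-timestep Ne trajectory recorded forward in time.
ne_trajectory = np.array([100, 120, 150, 200, 180, 160, 400, 500, 450, 480])

# Chunk the trajectory and average Ne within each chunk, giving a stepwise
# demographic history (reverse the order before handing it to a
# backwards-in-time coalescent simulator).
n_epochs = 2
epoch_ne = ne_trajectory.reshape(n_epochs, -1).mean(axis=1)
```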
Vertebrate communities will tend to have smaller numbers of species, so test how low you can go in terms of species before it breaks.