fishr-core-team / fsa

FSA (Fisheries Stock Assessment) package provides R functions to conduct typical introductory fisheries analyses.

Home Page: https://fishr-core-team.github.io/FSA/

License: GNU General Public License v2.0

Languages: R 100.00%

Topics: fisheries, fisheries-stock-assessment, stock-assessment, fish, population-dynamics, fisheries-management, r

fsa's Introduction

 

FSA (Fisheries Stock Assessment)

The FSA package provides R functions to conduct typical introductory fisheries analyses. Example analyses that use FSA can be found in the Introductory Fisheries Analyses with R book (see note below) and on the fishR website. Please cite FSA if you use it in a publication.

 

Installation

The most recent stable version from CRAN may be installed with

install.packages("FSA")

The development version may be installed from GitHub with

if (!require('remotes')) install.packages('remotes'); require('remotes')
remotes::install_github('fishR-Core-Team/FSA')

You may need R Tools installed on your system to install the development version from GitHub. See the instructions for R Tools for Windows or R Tools for Mac OS X.

 

Questions / Comments / Problems or Contributions

Report questions, comments, or bugs on the issues page.

We are always looking for others to contribute to FSA. Please feel free to make a pull request via GitHub or to contact the maintainers.

Please adhere to the Code of Conduct.

 

Note about FSA and the Introductory Fisheries Analyses with R book

Versions of FSA beginning with v0.9.0 may no longer work as shown in the IFAR book. Many functions are unchanged from when the book was published, but some are not. Thus, to be assured that functions work as described in the IFAR book, you will need to install a version of FSA from before v0.9.0; one way to do that is sketched below.
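A minimal sketch using remotes::install_version() (the version number shown is an assumption; check the CRAN archive for the last pre-0.9.0 release):

# install an archived pre-0.9.0 FSA from CRAN (version number assumed; verify in the CRAN archive)
remotes::install_version("FSA", version = "0.8.32")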

 

Project Status: Active - The project has reached a stable, usable state and is being actively developed.


fsa's Issues

functions in FSA package

I had been working successfully with the FSA package as recently as Monday 11/26. Today, however, none of the functions will run, even with the example datasets and code. Is there a bug, or was there an update to the FSA package?

NAs in mrClosed

NAs cause problems with mrClosed(). Need to make a catch and send the user a warning.

Here is the issue. In a Schnabel method the first sample will have no marked fish (and thus m[1]=0) and it is irrelevant how many marked fish are returned to the population after the last sample (thus, either R[length(R)]=0 or R[length(R)]=n[length(n)]). However, some users may enter NA values for these positions, which will cause an error as shown below.

> WAL <- mrClosed(n =c(321,412,178,415,367),
+ m = c(NA,45,55,93,113),
+ R = c(321,412,178,415,NA),
+ method ="Schnabel")
> cbind(summary(WAL),confint(WAL,verbose=TRUE))
 Error in if (object$sum.m < 50) type <- "Poisson" else type <- "normal" : 
  missing value where TRUE/FALSE needed  

A better result would be to catch these values, replace them with zeroes, and send the user a warning, as sketched below.
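A minimal sketch of that catch (variable names follow the example above; this is not the actual FSA source):

if (is.na(m[1])) {
  m[1] <- 0  # no marked fish exist before the first sample
  warning("First value of m was NA; replaced with 0.")
}
if (is.na(R[length(R)])) {
  R[length(R)] <- 0  # marked returns after the last sample are irrelevant
  warning("Last value of R was NA; replaced with 0.")
}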

Change dimensions of result from coef.catchCurve()

The result from coef.catchCurve() is a 1x2 matrix, which means that it cannot easily be cbind()ed with the result from confint.catchCurve(). Change the result to a named vector, like the generic coef(), to fix this (see the illustration below).

Examine other coef.XXX functions for the same problem.
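An illustration of the proposed change (values are hypothetical):

# a 1x2 matrix, as coef.catchCurve() currently returns
res <- matrix(c(10.2, -0.45), nrow = 1,
              dimnames = list(NULL, c("(Intercept)", "Z")))
drop(res)  # a named vector, like the generic coef(), which cbind()s cleanly with confint() output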

vonB curve comparison error?

I receive this error when plotting separate data and curves. What does this mean?

Error in plot.window(...) : need finite 'xlim' values
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf
3: In min(x) : no non-missing arguments to min; returning Inf
4: In max(x) : no non-missing arguments to max; returning -Inf

psdPlot() bars

Make the bars for the psdPlot touch the x-axis (i.e., don't hover above the x-axis).
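For reference, the base-graphics mechanism that would accomplish this (an illustration, not psdPlot() internals; assumes the plot is histogram-based):

op <- par(yaxs = "i")        # remove the default 4% y-axis expansion
hist(rnorm(100), main = "")  # bars now sit on the x-axis
par(op)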

capHistConvert() tests

Need to add more tests for capHistConvert(): more tests of warnings and errors, and more tests that conversions do what they are supposed to do. Also make tests for cols2use= and cols2ignore= here and in capHistSum(). A sketch of one such test is below.
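One such test might look like this (a sketch in testthat syntax; the bad-input value and the in.type= argument name are assumptions):

library(testthat)
library(FSA)
test_that("capHistConvert() errors on an unknown in.type=", {
  ch <- data.frame(fish = 1:3, yr1 = c(1, 0, 1), yr2 = c(0, 1, 1))
  expect_error(capHistConvert(ch, in.type = "not-a-real-type"))
})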

Error in loadNamespace(name) : there is no package called ‘dunn.test’

Hello FSA makers,

I just installed FSA to run a dunnTest() on my data after a Kruskal-Wallis analysis. When I run the function dunnTest() I get a message saying Error in loadNamespace(name) : there is no package called 'dunn.test'

Can you help or guide me on how I can solve this?

Here is what I have installed:

install.packages("FSA")
if (!require('devtools')) install.packages('devtools'); require('devtools')
devtools::install_github('droglenc/FSA')

Here I try to run the test:
DunnT_d10 <- dunnTest(mtdt_d10$richness ~ mtdt_d10$Treatment, method = "bh")

where mtdt_d10 is my data frame with richness as a column; the column Treatment is the type of grouping I want. My data are below:

> mtdt_d10
             Treatment   CDW Chlorophyll   EPS Day richness  shannon   Invsimp  evenness
Day10F1      Untreated 2.700      34.714 0.571  10       31 3.160187 18.441811 0.9202675
Day10F2      Untreated 3.300      37.773 0.591  10       31 3.157187 18.468610 0.9193939
Day10F3      Untreated 3.133      40.540 0.597  10       24 2.852886 13.433176 0.8976834
Day10F4          Metro 1.333       4.670 0.166  10       34 3.253389 20.226557 0.9225910
Day10F5          Metro 1.333       4.670 0.171  10       35 3.245158 19.383861 0.9127539
Day10F6          Metro 1.367       4.934 0.179  10       34 3.226011 19.162289 0.9148274
Day10F7            Pen 3.300      52.546 0.482  10       26 2.919732 14.283367 0.8961466
Day10F8            Pen 3.533      49.280 0.464  10       25 2.893219 13.953305 0.8988290
Day10F9            Pen 3.367      54.492 0.488  10       20 2.585349 10.411644 0.8630107
Day10F10           Rif 1.333      14.394 0.197  10       25 2.950655 15.477977 0.9166726
Day10F11           Rif 1.400      15.214 0.207  10       21 2.813376 13.619792 0.9240778
Day10F12           Rif 1.333      14.672 0.220  10       25 2.951721 15.177262 0.9170038
Day10F13           Gen 0.400       0.820 0.074  10       27 2.981389 14.483322 0.9045925
Day10F14           Gen 0.600       0.820 0.073  10       26 2.995672 15.338943 0.9194546
Day10F16        Chlora 0.533       0.820 0.073  10       29 3.132111 18.800758 0.9301561
Day10F17        Chlora 0.733       1.098 0.093  10       30 3.119546 17.148702 0.9171904
Day10F20           Lin 1.367       0.820 0.069  10       29 3.130148 18.116496 0.9295731
Day10F22     Metro+Pen 1.433       8.242 0.229  10       26 2.900594 13.909013 0.8902725
Day10F23     Metro+Pen 1.633       6.881 0.213  10       23 2.730518 11.564800 0.8708413
Day10F24     Metro+Pen 1.433       4.934 0.204  10       23 2.748981 11.742467 0.8767297
Day10F25       Pen+Rif 1.400      13.353 0.241  10       17 2.454804  8.557118 0.8664380
Day10F26       Pen+Rif 1.300      14.466 0.232  10       16 2.454250  9.535190 0.8851836
Day10F27       Pen+Rif 1.367      12.797 0.233  10       17 2.586652 11.267516 0.9129746
Day10F28     Metro+Rif 1.033       6.545 0.177  10       21 2.767428 12.717616 0.9089859
Day10F29     Metro+Rif 0.967       6.823 0.152  10       25 2.909141 14.820081 0.9037754
Day10F30     Metro+Rif 0.933       6.003 0.160  10       20 2.730795 12.673225 0.9115619
Day10F31 Metro+Rif+Pen 0.900       6.003 0.177  10       17 2.347868  7.744762 0.8286942
Day10F32 Metro+Rif+Pen 1.067       5.447 0.152  10       18 2.609276 10.947822 0.9027475
Day10F33 Metro+Rif+Pen 1.033       5.054 0.160  10       16 2.452130  9.094436 0.8844188

Thank you
Epigouveia
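A likely resolution (an inference from the error message, not a reply recorded in this thread): dunnTest() in FSA wraps dunn.test() from the dunn.test package, so that package must also be installed:

install.packages("dunn.test")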

xlim problems in ageKeyPlot

Here is my question/problem, with a reproducible example (end of e-mail). How do I set limits on the x- and y-axes using alkPlot()? In my example, I was purposefully trying to over-restrict the x-axis to look at a specific area of the entire plot (sort of a quick subset), just so I could further my understanding of how I can use this plotting function. I used pages 70 and 101 of your book as an example for coding xlim=. The documentation for alkPlot() says xlim = “A numeric of length 2 that provide the limits for the x-axis or y-axis.” There were no examples in the help for alkPlot() that demonstrated manipulating either the x-axis or the y-axis.

#Reproducible example

library(FSA) #version 0.8.5
Age3<-data.frame(Age =3,FL=round(rnorm(1000,580,45)))
Age4<-data.frame(Age =4,FL=round(rnorm(1000,750,45)))
Age5<-data.frame(Age =5,FL=round(rnorm(1000,865,45)))
df<-rbind(Age3,Age4,Age5)
df<-lencat(~FL,data=df,startcat=300,w=20,vname="Bin20")#20 mm bins

test<-xtabs(~Bin20+Age,data=df)
ALK.obs<-prop.table(test,margin=1)
ALK.obs
alkPlot(ALK.obs,showLegend=TRUE,xlim=c(500,800))

See if `NA`s for species can be dealt with in `addZeroCatch()`

Some samples will contain no fish and thus may have an NA for the species variable (as would generally happen if the sample data are left_join()ed with the fish data). The current addZeroCatch() cannot deal with this problem (the NAs cause an error).

ALKPlot bar charts

Consider changing the bar charts in alkPlot() to be more like a stacked "histogram" so that the axes can be controlled more easily. As a bar chart, the x-axis is indexed by the category number rather than by the value that the category represents, as illustrated below.
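An illustration of the underlying behavior (hypothetical data):

# barplot() places bars at index-based midpoints (0.7, 1.9, 3.1, ... by default),
# so xlim= refers to bar indices, not to the length values in the labels
bp <- barplot(table(cut(rnorm(100, 580, 45), breaks = seq(400, 760, 40))))
bp  # the actual x-positions of the bars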

drop.levels removal?

Can you remove the dependency on drop.levels() from gdata (and use droplevels() from base)?
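For reference, the base replacement behaves as needed (illustration):

f <- factor(c("a", "b", "c"))[1:2]  # the unused level "c" remains
levels(droplevels(f))               # "a" "b" -- base R, no gdata needed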

r-devel compatibility re: if(x) with length(x) > 1

Browsing through some checks performed for the upcoming change to if() (see https://twitter.com/groundwalkergmb/status/842434055425556480), I noticed that there are some internal improvements that could be made; e.g., the best practice for code like

if(class(x) == "name") { }

is

if (inherits(x, "name")) { }

since x may inherit from multiple classes. An is.XXX <- function(x) inherits(x, "XXX") helper may be useful for these cases.

There are potentially a few places in FSA where if() is used with length > 1 vectors (hence the suppressWarnings(), I suppose). I have an interest in this package and would like to contribute - would you mind if I have a go and PR something with tests against r-devel and this new if() scenario?
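For context, a minimal illustration of the behavior change (the environment variable shown is how the new check is enabled in r-devel):

x <- c(TRUE, FALSE)
if (x) "yes"
# historically: a warning that only the first element is used
# under r-devel with _R_CHECK_LENGTH_1_CONDITION_=true: an error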

Once I'm familiar, I may be able to contribute to the fisheries code also.

Add catch in mrClosed() for when a data.frame is given as the first argument and other vectors are also provided

For example,

lmb.dat <- data.frame(sample=1:4,
                      Total=   c(18,18,19,35),  # total number of fish (n)
                      Recaps=  c( 0, 9, 5,12),  # num recaptured fish (m)
                      Unmarked=c(18, 9,14,23),  # num fish not recaptured 
                      At_Large=c( 0,18,27,41),  # num marked fish prior to sample (M)
                      Rsubi=   c(18,18,19, 0))  # num marked fish returned to pop (R)
lmb.dat

library(FSA)

lmb.num1 <- mrClosed(lmb.dat, n=lmb.dat$Total, m=lmb.dat$Recaps, M=lmb.dat$At_Large, method="Schnabel")
summary(lmb.num1)

lmb.num2 <- mrClosed(lmb.dat, n=lmb.dat$Total, m=lmb.dat$Recaps, R=lmb.dat$Rsubi, method="Schnabel")
summary(lmb.num2)
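A minimal sketch of the proposed catch (that M is the first formal and the others default to NULL are assumptions about mrClosed() internals):

# early in mrClosed(): refuse a data.frame first argument mixed with separate vectors
if (is.data.frame(M) && (!is.null(n) || !is.null(m) || !is.null(R)))
  stop("When the first argument is a data.frame, do not also supply separate n=, m=, M=, or R= vectors.",
       call. = FALSE)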

Testing to do (II)

Still have these remaining from #23

  • mrOpen_VALIDATE ... Manly results

Will need to do a deeper dig after these are done.

Recode 1:length() type structures

Test changing 1:length() and 1:nrow() structures to seq_len() or seq_along() (see the illustration after this list). These structures are in:

  • addZeroCatch.R
  • ageComparisons.R
  • bcFuns.R
  • bootstrap.R
  • capHistConvert.R
  • capHistSum.R
  • comparisonsDVR.R
  • extraTests.R
  • FSA-internals.R
  • FSAUtils.R
  • knitUtil.R
  • lwCompPreds.R
  • mrClosed.R
  • psdAdd.R
  • residPlot.R
  • Summarize.R
  • wrAdd.R
  • test_capHst_OUT.R
  • test_dunnTest_OUT.R
  • test_FSAUtils_OUT.R
  • test_MRClosed_OUT.R
  • test_removal_VALIDATE.R
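An illustration of why the change matters (the zero-length edge case):

x <- numeric(0)
1:length(x)   # 1 0 -- loops twice over an 'empty' vector
seq_along(x)  # integer(0) -- loops zero times, as intended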

Add more precision metrics to `agePrecision()`

Add mean absolute deviation (MAD) to the underlying details data.frame returned from agePrecision() (see the illustration below). Also report the average MAD and average standard deviation from summary() of an agePrecision() object.
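For clarity, the per-fish computation being requested (an illustration; ages holds one fish's repeated age reads):

ages <- c(3, 4, 3)
mean(abs(ages - mean(ages)))  # mean absolute deviation (MAD) for this fish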

Problem with iaxs= in hist.formula()

A histogram that follows a histogram made with iaxs=TRUE does not seem to refresh the plot. Try this (from the examples in hist.formula()).

hist(~Sepal.Length,data=iris,xlab="Sepal Length (cm)",iaxs=FALSE)
hist(~Sepal.Length,data=iris,xlab="Sepal Length (cm)")

But then running this is OK:

hist(~Sepal.Length,data=iris,xlab="Sepal Length (cm)")

Remove dependency on relax

The relax package is used in the dynamic plots of vbStarts() and srStarts(). However, relax relies on tcltk, which causes all kinds of problems with some installations (Mac OS and Travis-CI). In addition, the dynamic plotting would be better done with manipulate() from the manipulate package. Finally, the dynamic plots can essentially be created with multiple uses of curve(), as demonstrated in the IFAR book.

Move the dynamic plotting portions of vbStarts() and srStarts() to the FSAsims package. Leave a note in vbStarts() and srStarts() in FSA that directs the user to alternatives (and to FSAsims).
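A sketch of what a manipulate()-based replacement could look like (parameter ranges are placeholders; manipulate runs only inside RStudio):

library(manipulate)
# interactively explore von Bertalanffy starting values with sliders
manipulate(
  curve(Linf * (1 - exp(-K * (x - t0))), from = 0, to = 15,
        xlab = "Age", ylab = "Length"),
  Linf = slider(300, 700, initial = 500),
  K    = slider(0.05, 0.80, initial = 0.30),
  t0   = slider(-2, 2, initial = 0)
)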

nlsTracePlot failure example

Add an example where nls() fails to converge and the user needs to use try() and capture.output() along with trace=TRUE to capture an object that can be used in nlsTracePlot(). This methodology is described in the documentation, but it would be clearer if the documentation had an example. A bad stock-recruitment data set would be the easiest to use for the illustration.
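A rough sketch of that methodology (the model, data, and starting values are placeholders; see the nlsTracePlot() documentation for the authoritative details):

# capture the trace output even though nls() fails to converge
trc <- capture.output(
  fit <- try(nls(recruits ~ a * stock * exp(-b * stock), data = sr.dat,
                 start = list(a = 2, b = 0.01), trace = TRUE),
             silent = TRUE)
)
# trc now holds the per-iteration trace lines that nlsTracePlot() can work from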

NAs in agePrecision

Here is the super brief summary of my confusion:

  1. When what = “difference” or “absolute difference” the N used to calculate the % appears to exclude the NA values. This makes sense to me.
  2. When what = “precision”, the calculation appears to count NA values as “agreement”.

The long winded version of my confusion follows:

First, a simple example using the formula notation in two different ways to achieve the same result (i.e., simply compare agerA1 and agerA2):

Example 1) ap.A<-agePrecision(~agerA1+agerA2,data=shad)
summary(ap.A,what="precision")

Example 2) ap.Ax<-agePrecision(agerA1~agerA2,data=shad)
summary(ap.Ax,what="precision")

How does agePrecision() handle “NA” values? Did I miss something in the documentation? Anyway, in the shad data, agerA1 codes the values for two of the samples as “NA” and agerA2 codes the same two scales as NA on the second read, leaving 51 samples with an estimated age. There is no difference between agerA1 and agerA2 for 26 scales. 26/51 = 51.0% (NA excluded). However, to achieve the results calculated by agePrecision() it is necessary to include the two NA values and consider them as “agreement”: ((26+2)/53) = 52.83%. I suppose that as long as the reader is consistent in assigning the scales as “NA” it makes sense to include the NA values in the percent agreement calculation, because NA is functionally serving as a value, and in this example, reader A agreed with both reads on which scales were NA.

What if I try comparing agerA1 and agerB1?

summary(ap.AB,what="difference")
-2 -1 0 1 2
18.182 15.152 45.455 12.121 9.091
To calculate 45.455: 15 agreed out of 33. Looks like the 20 NAs were excluded. Sounds reasonable.

summary(ap.AB,what="precision")
n R ACV APE PercAgree
53 2 12.17 8.608 66.04
Now I am confused. To get 66.04% it’s necessary to count the 20 “NA’s” as agreed and divide by 53: (20 NAs + 15 agreed)/53 * 100 = 66.04%.

How do “NA” values work with >2 reads? At the bottom of page 84.

ap.ABC<-agePrecision(~agerA1 + agerB1 + agerC1, data = shad)
summary(ap.ABC, what = “difference”)
summary(ap.ABC,what= “precision”)

Footnote 7 says “The sample size is much smaller ...because Ager C did not estimate an age for several fish.” Should it be Ager “B” who did not estimate ages? Either way, there are 33 samples where all 3 agers give an age. The summary results for what = “difference” are easy to calculate, and the row totals clearly show NA values are excluded. For example, ap.ABC$absdiff shows row totals of 33 (A vs B), 51 (A vs C), and 33 (B vs C). With these row totals, I can duplicate the results in summary(ap.ABC, what=”difference”). If what = “difference” or “absolute difference”, NA values appear to be excluded from the N.

However, I am again confused by the inclusion of the “NA” values when calculating the PercAgree when what = “precision”. With reader B there are 20 instances where the scale age is NA and only 2 instances where all three readers agree. Therefore, PercAgree is calculated as (20 of the NA values + 2 where everyone agrees)/53 * 100 = 41.51%. Why is the NA being counted as an agreement when reader B essentially said “I can’t read the scale”? Should percent agreement be calculated on all 53 samples or only the 33 samples where everyone provided an age? If we only use the 33 samples with an age, then 2/33 = 6.06%.

Basically I am confused because it seems like NA values are not being treated the same when what is changed from “difference” to “precision”.

psdPlot() legend problem

psdPlot() produces an error when placing the PSD values in a legend when PSD or PSD-P does not exist. See the example of PKS Post in the Goedded-Coble case study.

work on srStarts()

Need to create catches for when srStarts() gives really bad starting values (e.g., negatives for B-H).

Need to add the fixed= argument methodology as in vbStarts().

Labels in dunnTest()

Check to make sure that the labels on the results from dunnTest() are correct in a situation where not all levels of the factor variable exist in the data (as would happen if one used filter() instead of filterD()). See the relative weight example from Jeremy Risler as an example.
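For context, an illustration of how an empty factor level arises (treating filterD() as filter() plus droplevels() is an assumption based on its use here):

d <- data.frame(g = factor(c("a", "b", "c")), y = 1:3)
d2 <- subset(d, g != "c")   # like filter(): the unused level "c" survives
levels(d2$g)                # "a" "b" "c" -- the empty level can misalign dunnTest() labels
levels(droplevels(d2$g))    # "a" "b" -- what filterD() would leave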

mode() v. class() issues

Consider changing the class()es returned by iHndlFormula() to mode()s. An example problem occurs with the following, where hist.formula() does not recognize the result as numeric because it checks the class (i.e., array) and not the mode (i.e., numeric).

res <- combn(2:9,2,mean)
hist(~res,w=0.5)

Testing To Do

I recently updated the structure for testing and found the following testing needs:

  • capHistConvert() ... More tests that conversions do what they are supposed to. Was Issue #1.
  • expandCounts_OUT
  • expandLenFreqs_OUT
  • metaM_OUT
  • mrClosed_OUT
  • mrClosed_XXX ... force some different types of CIs to be computed.
  • capHist_OUT
  • capHistConvert() ... More tests of warnings and errors. Was Issue #1.
  • capHistConvert() ... Make tests for cols2use= and cols2ignore= here. Was Issue #1.
  • ciDists_VALIDATE
  • ciDists_OUT ... hyperCI() and poiCI()
  • ciDists_VALIDATE ... hyperCI() and poiCI()
  • dunnTest_OUT
  • ksTest_OUT
  • sumTable_VALIDATE

Add a `w` argument to `hist.formula()`

A w= argument to hist(), as in lencat(), would allow the user to control the bins with one argument rather than having to use seq() in breaks=. This would be useful because most often the user only wants to control the width of the bins, but using seq() requires hardwiring the minimum and maximum values of the bins as well. A sketch is below.
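A sketch of the proposed equivalence (w= here is the proposed argument, not an existing one):

# today: a bin width of 0.5 requires hardwiring the range
hist(~Sepal.Length, data = iris, breaks = seq(4, 8, 0.5))
# proposed: hist(~Sepal.Length, data = iris, w = 0.5)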

Problems with Zippin method

Hi, droglenc,

I am working on performing some power simulations using the FSA package. I am running into problems with the Zippin method after a number of simulations. Coded as:

zippy1<-apply(y,MARGIN=c(1,2),FUN=removal,method="Zippin",just.ests=TRUE)

However, when I substitute the "CarleStrub" method I do not have these issues (i.e., the simulations run to completion).

I have attached the input data file (zipest2.xlsx) and copied and pasted my R code below. Do you have any insight into what may be going on? If you need more information from me, please do not hesitate to ask.

I truly appreciate any time you are willing to give to my problem.

Sincerely,
Daniel Hanks

zipEst2.xlsx

#simulation for multiple sites####
library(FSAdata)
library(FSA)
library(arm)
library(lme4)
zipEst2=read.csv(file="zipEst2.csv")
#below is simple model with no added covariates
#lme2=glmer(zipEst2$No~1+(1|zipEst2$Year)+(1|zipEst2$Site),family=poisson)
#summary(lme2)

# abundance model

lme2=glmer(No~(1|Year)+(1|Site), data=zipEst2, # kanno: removed covariates
family=poisson, offset=log(Length/200)) # most sites sampled for 200 m
summary(lme2)
#not sure but I may need
#to add offset as
# kanno: add it as a separate 'offset' statement with a log transformation
# see page 112 of Gelman & Hill (2007) - Kasey has a copy

# detection model

zipEst2$p.logit=log(zipEst2$p/(1-zipEst2$p))
zipEst2$WidthStd <- scale(zipEst2$Width)
zipEst2$ElevationStd <- scale(zipEst2$Elevation)
zipEst2$DaysStd <- scale(zipEst2$Days)

# models with 3 covariates

model.p1=lm(p.logit ~ WidthStd + ElevationStd + DaysStd, data=zipEst2)
summary(model.p1)

# elevation is not important: remove it

model.p2=lm(p.logit ~ WidthStd + DaysStd, data=zipEst2)
summary(model.p2)

n.sites=20
n.years=5
n.passes=3
r=runif(n.sites, -0.01, 0) # kanno: draw from a range for multiple sites
trend=log(1+r)
p.logit=log(zipEst2$p/(1-zipEst2$p))#convert to logit scale
p.mu=mean(p.logit)#mean detection on logit scale
p.sd.site=sd(p.logit)#sd of detection on logit scale

#beginning to run simulations

n.sims=1000
library(arm); library(FSA)
simsN=sim(lme2,n.sims) # abundance
simsP=sim(model.p2, n.sims) # detection
p.vals=array(NA,c(n.sims,2)) # create an empty array to store p-values of trend
for(s in 1:n.sims){ # changed from k to s because k is used in the new function below
print(s)
mu=simsN@fixef[s,1]
site.sd=lme2@theta[1]
year.sd=lme2@theta[2]

#stochastic factors affecting abundance
year.ran=rnorm(n.years,0,year.sd)
site.ran=rnorm(n.sites,0,site.sd)

#factors affecting detection
beta.intercept=simsP@coef[s,1]
beta.width=simsP@coef[s,2]
beta.days=simsP@coef[s,3]

# kanno: remove the following lines: we are using fixef for covariates, thus no need for random draw

#width.ran=rnorm(n.sites,0,abs(width.sd))
#elev.ran=rnorm(n.sites,0,elev.sd)
#days.ran=(rnorm(n.sites,0,abs(days.sd)))
width.sd=array(rnorm(n.sites*n.years, 0,1), dim=c(n.sites,n.years))
days.sd=array(rnorm(n.sites*n.years, 0,1), dim=c(n.sites,n.years))

# variation in p unexplained by covariates (noise)

p.resids.sd=sd(model.p2$residuals)
p.sample.ran=array(rnorm(n.sites*n.years,0,p.resids.sd),dim=c(n.sites,n.years))

N<-p<-lambda<-zippy<- array(NA,dim=c(n.sites,n.years),
dimnames=list(paste("site",1:n.sites),
paste("year",1:n.years)))
y=array(NA,dim=c(n.sites,n.years,n.passes),
dimnames=list(paste("site",1:n.sites),
paste("year",1:n.years),
paste("pass",1:n.passes)))

for(i in 1:n.sites){
for(j in 1:n.years){

#abundance
lambda[i,j]<-exp(mu+((trend[i])*(j-1))+
site.ran[i]+year.ran[j])
N[i,j]<-rpois(1,lambda[i,j])

#observed count
p[i,j]<-plogis(p.mu+
beta.width*width.sd[i,j] +
beta.days*days.sd[i,j] +
p.sample.ran[i,j]) #DH: Do width.ran, etc. go here and
#does this capture the overdispersion?

# kanno: these are used as coef

y[i,j,1]<-rbinom(1,N[i,j],p[i,j])
y[i,j,2]<-rbinom(1,N[i,j]-y[i,j,1],p[i,j])
y[i,j,3]<-rbinom(1,N[i,j]-y[i,j,1]-y[i,j,2],p[i,j])
}
}
print("Finished up to line 208")

####################################################################################################################

# The following code removes bad data that spit out the error message "Catch data results in Zippin model failure."
# See https://github.com/droglenc/FSA/blob/c377e5a246f620e84860a4b35f7fc8c835105599/R/removal.R

iRemovalKTX <- function(catch) { # line 230 of the above github site
  # number of samples
  k <- length(catch)
  # total catch
  T <- sum(catch)
  # cumulative catch prior to ith sample
  # same as sum(cumsum(catch)[-k]) as used in Schnute (1983)
  i <- 1:k
  X <- sum((k-i)*catch)
  # return named vector
  return(c(k=k,T=T,X=X))
}

for(i in 1:n.sites){
for(j in 1:n.years){
print("Running iRemovalKTX")
int <- iRemovalKTX(y[i,j,]) # line 433 of the above github site
k <- int[["k"]]
T <- int[["T"]]
X <- int[["X"]]
print("Ran iRemovalKTX")

# This condition is from equation 6 in Carle & Strub (1978)
if (X <=(((T-1)*(k-1))/2)-1) {
print("Trying to get a new X")
repeat {
y[i,j,1]<-rbinom(1,N[i,j],p[i,j])
y[i,j,2]<-rbinom(1,N[i,j]-y[i,j,1],p[i,j])
y[i,j,3]<-rbinom(1,N[i,j]-y[i,j,1]-y[i,j,2],p[i,j])

  int <- iRemovalKTX(y[i,j,])    # line 433 of the above github site 
  k <- int[["k"]]
  T <- int[["T"]]
  X <- int[["X"]]
  
  if (X > (((T-1)*(k-1))/2)-1)      break
  #dh: this was X<= so I changed to > 
  #so that it would break the loop if
  #the condition was met.  However, this
  #hasn't seemed to make much difference
  #in the production of actual results 
  #p.vals.
}
print("Got new X")

}

}
}

print("Ran up to line 265")
####################################################################################################

# Now run Zippin method

zippy1<-apply(y,MARGIN=c(1,2),FUN=removal,method="Zippin",just.ests=TRUE)

library(reshape2)
print("Finished apply on line 266")
zippy2 <- melt(zippy1[1,,], value.name="ZipEst", varnames=c('Site','Year'))
firstPass <- melt(y[,,1], value.name="firstPcount", varnames=c('Site','Year'))
result <- merge(zippy2, firstPass)

print("Finished merge on line 271")

result$YearNum <- as.numeric(substring(result$Year,5))
library(dplyr)
result <- arrange(result, Site, YearNum)

print("Running glmer:")
lm.sim.single=glmer(firstPcount ~ YearNum + (1|Site) + (1|Year), data=result, family=poisson)
lm.sim.zip=glmer(ZipEst ~ YearNum + (1|Site) + (1|Year), data=result, family=poisson)
print("Finished running glmers")
p.vals[s,1]=coef(summary(lm.sim.single))[2,4]
p.vals[s,2]=coef(summary(lm.sim.zip))[2,4]
}
p.vals
(length(p.vals[,1][p.vals[,1]<0.05])/n.sims)
(length(p.vals[,2][p.vals[,2]<0.05])/n.sims)

Slopes and Intercept Comparisons

Bring compSlopes() and compIntercepts() from NCStats into FSA. Need to make the code consistent with other FSA functions, update the help pages, and add tests.

Add ability to use non-standard species to `psdCalc()`

Per request from Nick Heredia ... add ability to use psdCalc() with a species that does not exist in psdLit.

This should be a fairly quick addition ... just bypass the call to psdVals() early in psdCalc() and make some catches if the user does not send the full complement of lengths (i.e., one for each length category).
