
mrIML's Introduction

mrIML: Multivariate (multi-response) interpretable machine learning


This package aims to enable users to build and interpret multivariate machine learning models harnessing the tidyverse (tidymodels syntax in particular). It builds on ideas from gradient forests (Ellis et al. 2012), ecological genomic approaches (Fitzpatrick and Keller 2015), and multi-response stacking algorithms (Xing et al. 2020).

This package can be used for any multi-response machine learning problem, but it was designed to handle data common in community ecology (site-by-species data) and ecological genomics (individual- or population-by-SNP-locus data).

Recent mrIML publications

  1. Fountain-Jones, N. M., Kozakiewicz, C. P., Forester, B. R., Landguth, E. L., Carver, S., Charleston, M., Gagne, R. B., Greenwell, B., Kraberger, S., Trumbo, D. R., Mayer, M., Clark, N. J., & Machado, G. (2021). MrIML: Multi-response interpretable machine learning to model genomic landscapes. Molecular Ecology Resources, 21, 2766–2781. https://doi.org/10.1111/1755-0998.13495

  2. Sykes, A. L., Silva, G. S., Holtkamp, D. J., Mauch, B. W., Osemeke, O., Linhares, D. C. L., & Machado, G. (2021). Interpretable machine learning applied to on-farm biosecurity and porcine reproductive and respiratory syndrome virus. Transboundary and Emerging Diseases, 00, 1–15. https://doi.org/10.1111/tbed.14369

Installation

Install the package from GitHub and load it:

#install.packages("devtools")
devtools::install_github('nfj1380/mrIML')
library(mrIML)

Quick start

mrIML is designed to be used either with a single function call or in an ad hoc fashion via individual function calls. The following section gives an overview of the simple use case; for more on each function, see the function documentation. The core functions for both regression and classification are mrIMLpredicts, mrIMLperformance, and mrInteractions; for plotting and visualization there are mrVip, mrFlashlight, and plot_vi. Estimating the interactions alone can be substantially computationally demanding, depending on the number of outcomes you want to test. The first step to using the package is to load it, as follows.
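
A minimal setup might look like the following (a sketch; it assumes the packages are already installed, and loads the companion packages used in the examples below):

library(mrIML)       #core multi-response ML workflow
library(tidymodels)  #model specification (rand_forest, tune, etc.)
library(flashlight)  #model-agnostic interpretation (profiles, effects)
library(future)      #parallel backend used via plan()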

Model component

Now that all the data are loaded and ready to go, we can formulate the model using tidymodels syntax. In this case we have binary data (SNP presence/absence at each locus), but the data could also be counts or continuous (the set_model argument would then be “regression” instead of “classification”). The user can specify any model from the tidymodels universe as ‘model 1’ (see https://www.tidymodels.org/find/ for details); however, we have done most of our testing on random forests (rf) and generalized linear models (glms). Here we specify a random forest classification model to be applied to each response.

model1 <- 
  rand_forest(trees = 10, #10 trees are set for brevity; use more in practice
              mode = "classification",
              mtry = tune(),
              min_n = tune()) %>%
  set_engine("randomForest")
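
For continuous or count responses, an analogous regression specification might look like this (a sketch; model_reg is a hypothetical name, and other parsnip engines could be substituted):

model_reg <- 
  rand_forest(trees = 100, #more trees for a fuller analysis
              mode = "regression",
              mtry = tune(),
              min_n = tune()) %>%
  set_engine("randomForest")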

This function represents the core functionality of the package and includes results reporting, plotting, and optional saving. It requires a data frame of response data Y (the SNP data, for example) and a data frame of features X (the covariates or predictors).

Load the example data from {mrIML} and filter out rare and common loci:

fData <- filterRareCommon(Responsedata,
                          lower = 0.4,
                          higher = 0.7) #filter loci by prevalence
data <- fData[1:20] #keep the first 20 loci
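
A quick sanity check can be useful at this point (a sketch; it assumes the remaining columns are 0/1 presence/absence, so column means give per-locus prevalence):

dim(fData) #how many loci survived the filter?
colMeans(fData) #per-locus prevalence should fall between 0.4 and 0.7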

Parallel processing

mrIML uses the flexible {future.apply} functionality to set up multi-core processing. In the example below, we set up a cluster using 4 cores. If you don’t set up a cluster, the default settings are used and the analysis runs sequentially.

#parallel::detectCores() #check how many cores you have available; we suggest keeping one core free for other tasks

cl <- parallel::makeCluster(4)

plan(cluster,
     workers = cl)
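#When the analysis has finished, the workers can be released again
#(a sketch using the standard {future} and {parallel} calls):
#plan(sequential) #return to sequential execution
#parallel::stopCluster(cl) #shut down the worker processes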
#Define the response data (Y): SNP loci filtered by prevalence
fData <- filterRareCommon(Responsedata,
                          lower = 0.4,
                          higher = 0.7) 
Y <- fData #for simplicity when comparing
#Define the features/predictors (X)
str(Features) 
## 'data.frame':    20 obs. of  19 variables:
##  $ Grassland       : num  0.07 0.0677 0.1845 0.0981 0.1578 ...
##  $ Shrub.Scrub     : num  0.557 0.767 0.524 0.786 0.842 ...
##  $ Forest          : num  0.01072 0.030588 0.008615 0.000662 0.000616 ...
##  $ HighlyDev       : num  0 0 0.00225 0 0 ...
##  $ Urban           : num  0 0 0.00159 0 0 ...
##  $ Suburban        : num  0.00357 0.13268 0.01325 0.00119 0 ...
##  $ Exurban         : num  0.00622 0.03019 0 0.01906 0 ...
##  $ Altered         : num  0.441 0.182 0.114 0.12 0 ...
##  $ Distance        : num  1.321 0.492 3.231 5.629 4.739 ...
##  $ Latitude        : num  33.8 33.8 33.8 33.8 33.8 ...
##  $ Longitude       : num  -118 -118 -118 -118 -118 ...
##  $ Age             : int  3 0 3 2 3 3 2 3 3 3 ...
##  $ Sex             : int  1 1 1 1 0 0 0 1 1 1 ...
##  $ Relatedness.PCO1: num  -0.1194 -0.0389 -0.1618 -0.1811 -0.1564 ...
##  $ Relatedness.PCO2: num  -0.1947 -0.0525 -0.321 -0.0827 0.1 ...
##  $ Relatedness.PCO3: num  -0.191 -0.0874 0.0541 -0.0627 -0.0111 ...
##  $ Relatedness.PCO4: num  0.1117 0.2422 0.0974 0.2129 0.2259 ...
##  $ Relatedness.PCO5: num  0.06405 0.0706 0.03514 -0.00084 0.0894 ...
##  $ Relatedness.PCO6: num  -0.0432 0.0683 -0.0805 0.2247 -0.055 ...
#Remove NAs from the feature/predictor data
FeaturesnoNA <- Features[complete.cases(Features), ]
X <- FeaturesnoNA #for simplicity
#For more efficient testing of interactions (more variables means more interacting pairs)
X <- FeaturesnoNA[c(1:3)] #three features only


yhats <- mrIMLpredicts(X = X, #features/predictors
                       Y = Y, #response data
                       Model = model1, #specify your model
                       balance_data = 'no', #choose how to balance your data
                       k = 5, #number of cross-validation folds
                       racing = FALSE,
                       mode = 'classification', #classification versus regression
                       seed = 120) #set seed for reproducibility
ModelPerf <- mrIMLperformance(yhats = yhats,
                              Model = model1,
                              Y = Y,
                              mode = 'classification')
ModelPerf[[1]] #predictive performance for individual responses
##    response  model_name           roc_AUC                mcc       sensitivity
## 1   env_131 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 2   env_163 rand_forest 0.714285714285714  0.218217890235992 0.666666666666667
## 3   env_164 rand_forest 0.833333333333333  0.612372435695795                 1
## 4   env_167 rand_forest 0.395833333333333              -0.25               0.5
## 5   env_169 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 6   env_212 rand_forest            0.6875  0.102062072615966               0.5
## 7    env_23 rand_forest              0.34 -0.333333333333333                 0
## 8    env_24 rand_forest              0.52 -0.408248290463863               0.4
## 9    env_41 rand_forest 0.583333333333333                  0               0.5
## 10   env_47 rand_forest 0.895833333333333  0.816496580927726                 1
## 11   env_59 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 12    env_8 rand_forest               0.3 -0.408248290463863               0.2
## 13   env_84 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 14   env_85 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 15   env_86 rand_forest 0.395833333333333              -0.25               0.5
## 16  pol_105 rand_forest  0.80952380952381  0.218217890235992 0.142857142857143
## 17  pol_108 rand_forest 0.476190476190476  0.218217890235992 0.142857142857143
## 18  pol_111 rand_forest 0.833333333333333  0.612372435695795                 1
## 19  pol_117 rand_forest               0.4 -0.218217890235992               0.6
## 20  pol_132 rand_forest 0.895833333333333  0.816496580927726                 1
## 21  pol_159 rand_forest             0.375 -0.583333333333333              0.25
## 22  pol_258 rand_forest               0.3 -0.408248290463863               0.2
## 23   pol_30 rand_forest 0.895833333333333  0.816496580927726                 1
## 24  pol_340 rand_forest              0.24               -0.5                 0
## 25  pol_353 rand_forest 0.395833333333333              -0.25               0.5
## 26  pol_366 rand_forest 0.904761904761905  0.654653670707977 0.714285714285714
## 27   pol_87 rand_forest 0.895833333333333  0.816496580927726                 1
## 28   pol_88 rand_forest 0.895833333333333  0.816496580927726                 1
## 29   pol_89 rand_forest 0.895833333333333  0.816496580927726                 1
##                  ppv       specificity        prevalence
## 1                  1                 1 0.421052631578947
## 2  0.571428571428571               0.4 0.631578947368421
## 3                0.5              0.75 0.421052631578947
## 4               0.25               0.5 0.421052631578947
## 5                  1                 1 0.421052631578947
## 6              0.625              0.25 0.684210526315789
## 7                0.8                 0 0.631578947368421
## 8                0.2 0.333333333333333 0.421052631578947
## 9                0.5               0.4 0.473684210526316
## 10 0.833333333333333               0.8 0.473684210526316
## 11                 1                 1 0.421052631578947
## 12               0.4              0.25 0.473684210526316
## 13                 1                 1 0.421052631578947
## 14                 1                 1 0.421052631578947
## 15              0.25               0.5 0.421052631578947
## 16                 1                 1 0.473684210526316
## 17                 1                 1 0.421052631578947
## 18               0.5              0.75 0.421052631578947
## 19               0.2 0.428571428571429 0.473684210526316
## 20 0.833333333333333               0.8 0.473684210526316
## 21 0.166666666666667 0.166666666666667 0.473684210526316
## 22               0.4              0.25 0.473684210526316
## 23 0.833333333333333               0.8 0.473684210526316
## 24               0.6                 0 0.421052631578947
## 25              0.25               0.5 0.421052631578947
## 26                 1                 1 0.421052631578947
## 27 0.833333333333333               0.8 0.473684210526316
## 28 0.833333333333333               0.8 0.473684210526316
## 29 0.833333333333333               0.8 0.473684210526316
ModelPerf[[2]] #overall predictive performance: R2 for regression, MCC for classification
## [1] 0.6690887
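
The per-response table can also be summarised directly (a sketch; the as.character()/as.numeric() coercion is included in case the metric columns are not stored as numeric):

perf <- ModelPerf[[1]]
mean(as.numeric(as.character(perf$roc_AUC))) #mean AUC across responses
mean(as.numeric(as.character(perf$mcc))) #compare with ModelPerf[[2]]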

Plotting

VI <- mrVip(yhats, X = X, Y = Y) #variable importance for each response

VI_plot <- interpret_Mrvi(VI = VI,
                          X = X,
                          Y = Y,
                          modelPerf = ModelPerf,
                          cutoff = 0.0,
                          mode = 'classification')

VI_plot[[4]] #plot

VI_plot[[1]] #list of outlier responses
## $PC1
## named integer(0)
## 
## $PC2
## env_167 pol_105 
##       4      16 
## 
## $PC3
## named integer(0)

Effect of a feature on genetic change

We also wrap some flashlight functionality to visualize the marginal (i.e., partial dependence) or conditional (accumulated local effects) effect of a feature on genetic change. Partial dependencies take longer to calculate and are more sensitive to correlated features.

flashlightObj <- mrFlashlight(yhats,
                              X = X,
                              Y = Y,
                              response = "single",
                              index = 1, #which response to inspect
                              mode = 'classification')

#plot prediction scatter for all responses; gets busy with many responses
plot(light_scatter(flashlightObj,
                   v = "Forest",
                   type = "predicted"))

#plot everything on one plot (partial dependence, ALE, scatter)
plot(light_effects(flashlightObj,
                   v = "Grassland"),
     use = "all")

#profileData_pd <- light_profile(flashlightObj, v = "Grassland") #partial dependence

#mrProfileplot(profileData_pd, sdthresh = 0.05) #sdthresh removes responses from the first plot that do not vary with the feature

profileData_ale <- light_profile(flashlightObj,
                                 v = "Grassland",
                                 type = "ale") #accumulated local effects

mrProfileplot(profileData_ale,
              sdthresh = 0.01)

##  Press [enter] to continue to the global summary plot

#the second plot is the cumulative turnover function

Interacting predictors or features

Finally, we can assess how features interact overall to shape genetic change. Be warned: this is memory-intensive. Future updates to this package will enable users to visualize these interactions and explore them in more detail, for example using 2D ALE plots.

#interactions <- mrInteractions(yhats, X, Y, mod = 'classification') #computationally intensive, so multiple cores are needed; if stopped prematurely, objects have to be reloaded
#mrPlot_interactions(interactions, X, Y, top_ranking = 2, top_response = 2)
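#In the meantime, a specific feature pair can be explored with flashlight's
#light_profile2d() (a sketch; assumes your installed flashlight version
#provides this function, reusing the single-response flashlightObj above):
#pd2d <- light_profile2d(flashlightObj, v = c("Grassland", "Forest"))
#plot(pd2d)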

References

Xing, L., Lesperance, M. L., & Zhang, X. (2020). Simultaneous prediction of multiple outcomes using revised stacking algorithms. Bioinformatics, 36, 65–72. doi:10.1093/bioinformatics/btz531

Fitzpatrick, M. C., & Keller, S. R. (2015). Ecological genomics meets community-level modelling of biodiversity: mapping the genomic landscape of current and future environmental adaptation. Ecology Letters, 18, 1–16. doi:10.1111/ele.12376

Ellis, N., Smith, S. J., & Pitcher, C. R. (2012). Gradient forests: calculating importance gradients on physical predictors. Ecology, 93, 156–168. doi:10.1890/11-0252.1

mrIML's People

Contributors

nfountainjones, nfj1380, gustavo-etal, cpkoza, nicholasjclark


mrIML's Issues

cross validation in mrIMLpredicts for regression

Hello,

I am uncertain about how the k parameter works. Your documentation states:
k | A numeric sets the number of folds in the 10-fold cross-validation. 10 is the default.

Does that mean the default is 10 repetitions of 10-fold cross-validation (100 total test runs), or one repetition of 10-fold cross-validation?

Error in data.frame(sp, mod_name, rmse, rsq): arguments imply differing number of rows: 0, 1

Hello
I tried to run mrIML using this script:

cl <- parallel::makeCluster(20)
future::plan(cluster, workers = cl)
X <- as.data.frame(gfdf[, 2:13])
Y <- gfdf[, seq(14, 61822, 100)]
model_rf <- rand_forest(trees = 10, mode = "regression", mtry = tune(), min_n = tune()) %>% set_engine("randomForest")
yhats_rf <- mrIMLpredicts(X = X, Y = Y, Model = model_rf, balance_data = 'no', mode = 'regression', tune_grid_size = 5, seed = sample.int(1e8, 1))

I get a lot of warnings during the run:

! Fold05: preprocessor 1/1, model 1/5: The response has five or fewer unique value
! Fold04: internal: A correlation computation is required, but truth is constant

When I run the following code:

ModelPerf <- mrIMLperformance(yhats_rf, Model=model_rf, Y, mode='regression')
I get this error:

Error in data.frame(sp, mod_name, rmse, rsq): arguments imply differing number of rows: 0, 1
Traceback:

  1. mrIMLperformance(yhats_rf, Model = model_rf, Y, mode = "regression")
  2. data.frame(sp, mod_name, rmse, rsq)
  3. stop(gettextf("arguments imply differing number of rows: %s",
    . paste(unique(nrows), collapse = ", ")), domain = NA)

Thank you

Top variable importance (VI) and partial dependence are not always the same

Hello,
I was following your vignette "Regression working example" with my own data. Looking at one of my partial dependence (PD) plots, I wanted to find the order of importance of the selected SNPs. However, looking at the VI object, some SNPs that are among the top importance values are not represented in the PD plot, while other SNPs with somewhat lower importance are represented. I wonder if this is expected. Which of the two estimates better points to SNPs involved in adaptation?
Thank you
Hanan

mrVip error

As discussed here, executing mrVip after a successful model construction results in an error:

VI <- mrVip(models, X=X)
Error in mrIML::mrVip(models, X = X) : 
  argument "Y" is missing, with no default

Here are both the feature data and the constructed model:
model_data.zip
I am using R 4.3.2 and mrIML 2.0.0.

Could it be that you updated the function without updating its documentation?

Thanks for your assistance.

Vignette model error

Hello,

First, thank you for providing mrIML to the scientific community. I have been trying to complete the vignette for quite some time now. All packages were installed successfully. However, line 96 of the vignette

yhats_lm <- mrIMLpredicts(X=X,Y=Y, Model=model_lm, balance_data='no', mode='regression', seed = sample.int(1e8, 1))

fails with the prompt

Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : 
  contrasts can be applied only to factors with 2 or more levels

I assume this is not intentional. Could there be a masking problem with my packages?

The order in which I load them is:

library(BiocManager)
library(mrIML); library(here); library(devtools); library(vip); library(tidymodels); 
library(randomForest); library(gbm); library(tidyverse); library(parallel); 
library(doParallel); library(themis); library(viridis); library(janitor); 
library(hrbrthemes); library(vegan); library(flashlight); library(iml); 
library(ggrepel); library(ranger); library(future.apply); library(LEA); library(plyr)

Also, to exclude the data as a potential source of error, I have tried the step with my own data. Still, it fails.
I also tried to create a minimal example by just loading the packages mrIML, tidymodels, randomForest, tidyverse, future.apply, plyr and here with no success.

Thank you for providing assistance.

Added StackPredictions; need a function to generate yhats list?

Hi all, I've added a StackPredictions function that will take a list of model objects (one for each outcome variable) and stack the predictions together using the multivariate stacking algorithm from Xing et al. (https://academic.oup.com/bioinformatics/article-abstract/36/1/65/5526872?redirectedFrom=fulltext). The example in the help file shows how to generate the necessary input list, but we could write a function to do this separately. I believe Nick FJ is working on a function to do this in tidymodels syntax.

Test dataset shows an issue with the number of rows

Getting the below error:

#Remove NAs from the feature/predictor data.
FeaturesnoNA<-Features[complete.cases(Features), ]
X <- FeaturesnoNA #For simplicity
#For more efficient testing for interactions (more variables more interacting pairs)
X <- FeaturesnoNA[c(1:3)] #Three features only
yhats <- mrIMLpredicts(X = X, #Features/predictors
                       Y = Y, #Response data
                       Model = model1, #Specify your model
                       balance_data = 'no', #Choose how to balance your data
                       mode = 'classification', #Classification versus regression
                       seed = 120) #Set seed

in vfold_cv(data_train, v = k) :
The number of rows is less than v = 10

Unit tests

Hi,

I was reviewer 3 on your paper submission. I've now been sent your resubmission for re-review. So first let me say congratulations on all the great work you've done since.

I thought I'd message you directly here about unit tests rather than via the editors. I hope that's ok. Everything except the unit test comment looks great, so I'm hoping I'll be able to just say "the authors have responded to all my queries, recommend accept" in my re-review.

However, I can't currently see any unit tests in this repo. I can see that you've set up CI and run R CMD check on the package (which is great). And there's possibly temp_pipeline.R which runs functions but doesn't test them (and it doesn't look like that script gets run by the github actions).

However, it may well just be that I've missed something somewhere or something like that. testthat is in the sessionInfo() in the README for example. The tests folder is stated in .gitignore as a folder to be ignored. So maybe there are tests and they just haven't been committed by accident.

So, if I have missed the tests could you possibly point me to where they are? To be completely explicit, by unit tests I mean code that runs functions and compares the output with prespecified expected outputs. As described here for example https://r-pkgs.org/tests.html

Again, I hope you're ok with me approaching you outside the review. It's just that everything goes so slowly through editors that it'd be a total shame if I said "there are still no tests" and a month later got an email back from you, via the editors, saying "yes, they're just in some other folder".

Tim
