
:deciduous_tree: :dart: Cross Validated Decision Trees with Targeted Maximum Likelihood Estimation

License: MIT License

machine-learning targeted-learning statistics variable-importance causal-inference decision-trees robust-statistics causal-effects exposure-mixtures

cvtreemle's Introduction

CVtreeMLE


Discovery of Critical Thresholds in Mixed Exposures and Estimation of Policy Intervention Effects using Targeted Learning

Author: David McCoy


What is CVtreeMLE?

This package operationalizes the methodology presented here:

https://arxiv.org/abs/2302.07976

People often encounter multiple simultaneous exposures (e.g., several drugs or pollutants). Policymakers are interested in setting safe limits, interdictions, or recommended dosage combinations based on a combination of thresholds, one per exposure. Setting these thresholds is difficult because all relevant interactions between exposures must be accounted for. Previous statistical methods have used parametric estimators which do not directly address the question of safe exposure limits, rely on unrealistic assumptions, and do not produce a threshold-based statistical quantity that is directly relevant to policy regulators.

Here we present an estimator that (a) identifies thresholds that minimize (or maximize) the expected outcome, controlling for covariates and the other exposures, and (b) efficiently estimates the effect of a policy intervention that compares the expected outcome if everyone were forced to these safe levels against the expected outcome under the observed exposure distribution.

This is done using cross-validation: in the training folds of the data, a custom tree-based g-computation search algorithm finds the minimizing region, and the held-out estimation sample is used to estimate the policy intervention effect using targeted maximum likelihood estimation.
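The fold logic, in miniature, looks something like the following sketch (illustrative only; CVtreeMLE performs these steps internally, and data here is a hypothetical data frame):

# Minimal sketch of the cross-validated scheme described above
V <- 10
folds <- sample(rep(seq_len(V), length.out = nrow(data)))

for (v in seq_len(V)) {
  # parameter-generating sample: the tree-based g-computation search finds the region
  training <- data[folds != v, ]
  # estimation sample: TMLE estimates the policy effect for that region
  estimation <- data[folds == v, ]
}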

Inputs and Outputs

This package takes in a mixed exposure, covariates, an outcome, Super Learner stacks of learners (defaults are used if none are provided), the number of folds, the minimum number of observations in a region, whether the desired region is a minimizer or a maximizer, and parallelization parameters.

The outputs are k-fold-specific results for the region found in each fold with valid inference, a pooled estimate of the overall oracle parameter across all folds, and pooled exposure sets if the region is somewhat inconsistent across the folds.


Installation

Note: Because CVtreeMLE (currently) depends on sl3, which provides the ensemble machine learning used for nuisance parameter estimation, and sl3 is not on CRAN, the CVtreeMLE package is likewise not available on CRAN and must be installed from GitHub.

There are many dependencies for CVtreeMLE, so it's easier to break the installation into steps to ensure each package installs properly.

CVtreeMLE uses the sl3 package to build ensemble machine learners for each nuisance parameter.

Install sl3 from the devel branch:

remotes::install_github("tlverse/sl3@devel")

Make sure sl3 installs correctly, then install CVtreeMLE:

remotes::install_github("blind-contours/CVtreeMLE@main")

Example

First, load the package and the other packages needed:

library(CVtreeMLE)
library(sl3)
library(kableExtra)
library(ggplot2)
seed <- 98484
set.seed(seed)

To illustrate how CVtreeMLE may be used to find and estimate a region that, if intervened on, would lead to the biggest reduction in an outcome, we use synthetic data from the National Institute of Environmental Health Sciences (NIEHS):

National Institute of Environmental Health Sciences Data

The 2015 NIEHS Mixtures Workshop was developed to determine whether new mixture methods detect the ground-truth interactions built into the simulated data. In this way we can simultaneously show CVtreeMLE's output, interpretation, and validity.

For detailed information on this simulated data, please see:

https://github.com/niehs-prime/2015-NIEHS-MIxtures-Workshop

niehs_data <- NIEHS_data_1

head(niehs_data) %>%
  kableExtra::kbl(caption = "NIEHS Data") %>%
  kableExtra::kable_classic(full_width = FALSE, html_font = "Cambria")
NIEHS Data

| obs | Y | X1 | X2 | X3 | X4 | X5 | X6 | X7 | Z |
|-----|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|---|
| 1 | 7.534686 | 0.4157066 | 0.5308077 | 0.2223965 | 1.1592634 | 2.4577556 | 0.9438601 | 1.8714406 | 0 |
| 2 | 19.611934 | 0.5293572 | 0.9339570 | 1.1210595 | 1.3350074 | 0.3096883 | 0.5190970 | 0.2418065 | 0 |
| 3 | 12.664050 | 0.4849759 | 0.7210988 | 0.4629027 | 1.0334138 | 0.9492810 | 0.3664090 | 0.3502445 | 0 |
| 4 | 15.600288 | 0.8275456 | 1.0457137 | 0.9699040 | 0.9045099 | 0.9107914 | 0.4299847 | 1.0007901 | 0 |
| 5 | 18.606498 | 0.5190363 | 0.7802400 | 0.6142188 | 0.3729743 | 0.5038126 | 0.3575472 | 0.5906156 | 0 |
| 6 | 18.525890 | 0.4009491 | 0.8639886 | 0.5501847 | 0.9011016 | 1.2907615 | 0.7990418 | 1.5097039 | 0 |

Briefly, this synthetic data can be considered the result of a prospective cohort epidemiologic study: the outcome cannot cause the exposures (as might occur in a cross-sectional study). Correlations between exposure variables can be thought of as caused by common sources or modes of exposure. The nuisance variable Z can be assumed to be a potential confounder and not a collider. There are seven exposures with a complicated dependency structure; $X_3$ and $X_6$ do not have an impact on the outcome.

One issue is that many machine learning algorithms will fail when only one variable is passed as a feature, so let's add some other covariates:

niehs_data$Z2 <- rbinom(nrow(niehs_data),
  size = 1,
  prob = 0.3
)

niehs_data$Z3 <- rbinom(nrow(niehs_data),
  size = 1,
  prob = 0.1
)

Run CVtreeMLE

ptm <- proc.time()

niehs_results <- CVtreeMLE(
  data = as.data.frame(niehs_data),
  w = c("Z", "Z2", "Z3"),
  a = c(paste("X", seq(7), sep = "")),
  y = "Y",
  n_folds = 10,
  seed = seed,
  parallel_cv = TRUE,
  parallel = TRUE,
  family = "continuous",
  num_cores = 8,
  min_max = "min",
  min_obs = 25
)
proc.time() - ptm
#>    user  system elapsed 
#>  16.809   1.084 465.339

Mixture Results

First, let's look at the k-fold-specific estimates:

k_fold_results <- niehs_results$`V-Specific Mix Results`

k_fold_results %>%
  kableExtra::kbl(caption = "K-fold Results") %>%
  kableExtra::kable_classic(full_width = FALSE, html_font = "Cambria")
K-fold Results

| are | se | lower_ci | upper_ci | p_val | p_val_adj | rmse | mix_rule | fold | variables |
|--------|--------|---------|--------|----------|---|-------|------------|----|----|
| -0.036 | 11.477 | -22.530 | 22.458 | 0.997515 | 1 | 2.689 | X2 <= 0.42 | 1  | X2 |
| -0.173 | 7.816  | -15.492 | 15.146 | 0.982327 | 1 | 2.759 | X2 <= 0.41 | 2  | X2 |
| 0.486  | 15.208 | -29.322 | 30.293 | 0.974526 | 1 | 2.906 | X2 <= 0.41 | 3  | X2 |
| 0.969  | 15.328 | -29.074 | 31.012 | 0.949580 | 1 | 3.469 | X2 <= 0.39 | 4  | X2 |
| 0.543  | 13.659 | -26.229 | 27.314 | 0.968304 | 1 | 3.358 | X2 <= 0.41 | 5  | X2 |
| 0.537  | 15.434 | -29.713 | 30.787 | 0.972228 | 1 | 3.110 | X2 <= 0.42 | 6  | X2 |
| 0.129  | 15.465 | -30.182 | 30.439 | 0.993363 | 1 | 3.191 | X2 <= 0.39 | 7  | X2 |
| -0.029 | 13.987 | -27.443 | 27.386 | 0.998359 | 1 | 2.529 | X2 <= 0.39 | 8  | X2 |
| 0.661  | 12.529 | -23.895 | 25.217 | 0.957948 | 1 | 2.647 | X2 <= 0.39 | 9  | X2 |
| 1.384  | 16.027 | -30.029 | 32.797 | 0.931196 | 1 | 3.422 | X2 <= 0.39 | 10 | X2 |

This indicates that in every fold the exposure X2 was found to have the most minimizing impact on the outcome when all individuals were forced to exposure levels below approximately 0.41. This resembles a policy where everyone is still exposed to the other exposures, but a regulation restricts individuals to X2 exposure of less than 0.41.

The pooled estimate, which leverages all the folds to estimate our oracle target parameter, looks like:

pooled_mixture_results <- niehs_results$`Oracle Region Results`

pooled_mixture_results %>%
  kableExtra::kbl(caption = "Oracle Mixture Results") %>%
  kableExtra::kable_classic(full_width = FALSE, html_font = "Cambria")
Oracle Mixture Results

| Region | ARE | Standard Error | Lower CI | Upper CI | P-value |
|--------|-------|-------|-------|-------|----------|
|        | 0.248 | 4.351 | -8.28 | 8.777 | 0.954462 |

Additional details for this and other features are given in the vignette.


Issues

If you encounter any bugs or have any specific feature requests, please file an issue. Further details on filing issues are provided in our contribution guidelines.


Contributions

Contributions are very welcome. Interested contributors should consult our contribution guidelines prior to submitting a pull request.


Citation

After using the CVtreeMLE R package, please cite the following:

@article{McCoy2023,
  doi = {10.21105/joss.04181},
  url = {https://doi.org/10.21105/joss.04181},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {82},
  pages = {4181},
  author = {David McCoy and Alan Hubbard and Mark van der Laan},
  title = {CVtreeMLE: Efficient Estimation of Mixed Exposures using Data Adaptive Decision Trees and Cross-Validated Targeted Maximum Likelihood Estimation in R},
  journal = {Journal of Open Source Software}
}

Related

  • R/sl3 - An R package providing an implementation of the Super Learner ensemble machine learning algorithm.

Funding

The development of this software was supported in part through grants from the NIH-funded Biomedical Big Data Training Program at UC Berkeley, where I was a biomedical big data fellow.


License

© 2017-2024 David B. McCoy

The contents of this repository are distributed under the MIT license. See below for details:

MIT License
Copyright (c) 2017-2024 David B. McCoy
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


cvtreemle's Issues

Reduce number of dependencies

Eyeing a potential CRAN submission (or at least a welcome bloat reduction), would you consider reducing the number of dependencies in your package (20+ is a lot, IMO)?

Comments on README file

  • Simulation plots
    In the plot that explains the simulation settings, please explain what the plot on the left (with the orange cube) represents (covariates? then why three dimensions?), what the relationship is between A, W, and M1, M2, M3, and what the difference is between M1, M2, M3 in the left plot and those in the right plot.

  • number of folds in cross-validation
    It makes sense to use a higher value of n_folds instead of 2 to see how the results look, since in cross-validation a higher number of folds (e.g., n_folds = 4, 8, 10, etc.) is typically used.

Reference to Zheng and van der Laan paper

This is an editorial comment on your JOSS submission.

When clicking the link to the Zheng and van der Laan reference, I find this note:

Comments
This material is published in: W. Zheng, M.J. van der Laan (2011). "Cross-Validated Targeted Minimum-Loss-Based Estimation." In M.J. van der Laan and S. Rose, Targeted Learning: Causal Inference for Observational and Experimental Data, Chapter 27. New York, Springer. This work is supported by NIH Targeted Empirical Super Learning in AIDS & Epidemiology grant # 5R01AI74345-5

Could you please update the citation so it cites this published book chapter rather than the preprint?
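For reference, the published chapter could be cited with an entry along these lines (a sketch assembled from the note quoted above; fields not given there, such as page numbers, are omitted):

@incollection{Zheng2011,
  author = {Zheng, W. and van der Laan, M. J.},
  title = {Cross-Validated Targeted Minimum-Loss-Based Estimation},
  booktitle = {Targeted Learning: Causal Inference for Observational and Experimental Data},
  editor = {van der Laan, M. J. and Rose, S.},
  chapter = {27},
  publisher = {Springer},
  address = {New York},
  year = {2011}
}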

Issue with package manual: incomplete section Value

A large number of functions are missing the description for the section Value. This section simply contains the following sentence:

#' @return Rules object. TODO: add more detail here.

The functions having the above issue include

calc_marginal_ate
calc_mixtures_ate
calc_v_fold_marginal_ate
calc_v_fold_mixtures_ate
common_mixture_rules
est_comb_exposure
est_marg_nuisance_params
evaluate_marginal_rules
evaluate_mixture_rules
find_common_marginal_rules
fit_mix_rule_backfitting
meta_mix_results
simulate_mixture_cube

Please add complete descriptions in this section so that users know the details of each function's return value.

Installation of dev version

The README lists a devel branch for the development version of the package, but no such branch exists, so installation via that method fails.

Is this sentence missing something?

Another editorial comment on your JOSS submission.

On lines 84-87 you write

In the case of mixtures, it is necessary to map a set of continuous mixture components into a lower dimensional representation of exposure using a pre-determined algorithm then estimate a target parameter on this more interpretable exposure.

This is hard to read. I suggest adding ", and" between "algorithm" and "then", so it becomes

In the case of mixtures, it is necessary to map a set of continuous mixture components into a lower dimensional representation of exposure using a pre-determined algorithm, and then estimate a target parameter on this more interpretable exposure.

Test coverage

The test coverage reported by covr is ~71%, but the coverage is more accurately ~4%. tests/testthat/test_CVtreeMLE_breastcancer.R runs an example but doesn't include any assertions and doesn't inspect the output of the run (and in fact emits a warning on my system). Due to how the test files are structured, the only function that seems to have substantial coverage right now is simulate_mixture_cube.

With the above, I find it very challenging to evaluate the functionality of the software. Please expand the test coverage as necessary to ensure the majority of the functions are not just running end to end, but behaving as expected when run.
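For instance, the breast cancer test file could wrap its existing run in assertions along these lines (a sketch; result stands for the output of the CVtreeMLE() call already in that file, and the column names follow the README output):

library(testthat)

test_that("CVtreeMLE returns fold-specific results with finite estimates", {
  v_results <- result$`V-Specific Mix Results`
  expect_s3_class(v_results, "data.frame")
  expect_true(all(is.finite(v_results$are)))
  expect_true(all(v_results$lower_ci <= v_results$upper_ci))
})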

Issue with NAMESPACE: extra space

Leave one space (instead of multiple spaces) between #' and @export when using the roxygen2 package to label a function to be exported; otherwise, the function will not appear in the NAMESPACE file. I checked the file bound.R, in which I see two spaces between #' and @export for all functions in the file: bound_precision(), bound_propensity(), scale_to_unit(), and scale_to_original(). For example,

#' @return A \code{numeric} vector of the same length as \code{vals}, where
#'  the returned values are bounded to machine precision. This is intended to
#'  avoid numerical instability issues.
#'  @export

You can see those functions are not actually exported in the NAMESPACE file. Please check that all functions you intend to export are correctly generated in the NAMESPACE file after running roxygen2::roxygenise().
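For example, the corrected roxygen block keeps @export as its own tag, preceded by a single space:

#' @return A \code{numeric} vector of the same length as \code{vals}, where
#'  the returned values are bounded to machine precision. This is intended to
#'  avoid numerical instability issues.
#' @export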

Introducing packages in the order of their installation dependency in README

I encountered the following error when following the instructions in the README to run remotes::install_github("blind-contours/CVtreeMLE@main"):

ERROR: package installation failed
Error: Failed to install 'CVtreeMLE' from GitHub:
  Failed to install 'sl3' from GitHub:

Please check the dependencies among the following packages introduced in the README, and describe them in installation order: if installing A will fail because B hasn't been installed, introduce B before A. (A reordered sketch follows the command list below.)

remotes::install_github("blind-contours/CVtreeMLE@main")
remotes::install_github("tlverse/sl3@devel")
install.packages("partykit")
install.packages("pre")
install.packages(c("kableExtra", "ranger", "arm", "xgboost", "nnls", "hrbrthemes", "viridis"))

Why two licenses

I noticed there are two license files for the package. Besides the MIT license, can you explain what the license in the plain LICENSE file is, and what the purpose is of having two licenses for the package?

Comments on the paper

About V-fold cross-validation
lines 24-25: My understanding of cross-validation is that we partition the whole data into V folds, and each time use the data in V-1 folds to build a model (the parameter-generating sample) and apply the built model to the fold that is left out for scoring/validation (the estimation sample). Is this how the package works? If so, I think the following sentence needs to be rephrased, since it reads as though, after we partition the data into K folds, we continue to partition the data within each fold into two parts.

CVtreeMLE uses V-fold cross-validation and partitions the full data
in each fold into a parameter-generating sample and an estimation sample

Here fold means group or partition. This page has a good explanation of cross-validation: https://machinelearningmastery.com/k-fold-cross-validation/

The background section
lines 58-59: Since the author explains that in many cases researchers are interested in studying an a priori specified treatment or exposure, it would be better to include a few typical research examples/papers so readers can understand the context better.

line 61: More explanation is needed of what high-dimensionality and sparsity specifically refer to. Is it the huge number of exposures compared to the number of data points collected that makes the data high-dimensional? What is the sparsity about?

line 62-62: References are needed to support the following statement, so readers can understand why and how a target parameter is ill-defined, and therefore understand more about what this package wants to improve.

Even if this approach were possible, a target parameter that can inform public policy 
is still ill-defined

Remove R code that is no longer needed

To improve code readability, it is better to remove R code that is no longer needed, instead of commenting it out and retaining each line. The historical code can easily be tracked in the commit history on GitHub.

For example,

In CVtreeMLE.R line 408-411,

      # mixed_comb_results <- est_comb_mixture_rules(At, Av, W, Y, rules, no_mix_rules, Q1_stack)

      # fold_results_mix_combo_data[[fold_k]] <- mixed_comb_results$data
      # fold_results_mix_Sls[[fold_k]] <- mixed_comb_results$learner

      results_list <- list(
        fold_results_mix_rules,
        mix_fold_data,
        mix_fold_directions,
        fold_results_marg_rules,
        fold_results_marg_directions,
        fold_results_marginal_data,
        # fold_results_marginal_additive_data,
        fold_results_marginal_combo_data,
        fold_results_marginal_Sls
        # fold_results_mix_combo_data,
        # fold_results_mix_Sls
      )

can be reduced to

      results_list <- list(
        fold_results_mix_rules,
        mix_fold_data,
        mix_fold_directions,
        fold_results_marg_rules,
        fold_results_marg_directions,
        fold_results_marginal_data,
        fold_results_marginal_combo_data,
        fold_results_marginal_Sls
      )

The same applies to other parts of the function and other functions.

Lightweight editing of the JOSS paper

line 10: glm --> GLM
line 30: Thus --> Therefore
line 39: glm's --> GLMs
line 48, line 50: semiparametric --> semi-parametric
line 58: unbold "a priori" and change it to italic format
line 69: average treatment effect --> average treatment effect (ATE). The reason is that when using an acronym, you should define it where it first appears in the paper and use only the acronym throughout the rest of the paper.
line 84: average treatment effect (ATE) --> ATE
line 87-88: Remove 'language and environment for statistical computing'
line 91: CVtreeMLE --> It
line 92: sl3package --> sl3 package
line 95-96: the hyperlink to CRAN is invalid as the package hasn't been published there.
line 102-103: To make sure the title of the reference matches with the title in DOI, sl3: Modern pipelines for machine learning and Super Learning --> sl3: Modern super learning with pipelines
line 105-106: Statistical Inference for Data Adaptive Target Parameters --> Statistical inference for data adaptive target parameters. This is to make all the references look consistent.

Missing dependencies: improve installation instructions?

Installation dependencies

Installation was straightforward from my development machine, but starting from a clean R session (using a Rocker image), running remotes::install_github("blind-contours/CVtreeMLE@main") took me down a rabbit hole of dependency-chasing, which I gave up on after a couple of days of trying.

It's hard to say how many users would struggle with installing this package, but I would not assume they have the full-blown machines software developers use. Given how long the installation takes (issue #23), I would try to make sure the user has all the dependencies set up before trying to install CVtreeMLE, or at the very least warn them. Either that, or explicitly restrict the OSes your package supports (a desperate measure, not really recommended).

Conversely, since your CI checks test on Ubuntu, I guess installation on a clean Linux machine is still fine. I do, however, see some hard-coded dependency handling there that might hint at missing dependencies in the README file.

Example execution dependencies

Even on my development PC, I couldn't run the example straight off, because I had to install the following packages:

  • kableExtra
  • ranger
  • xgboost
  • nnls (this one was particularly annoying since it made the CVtreeMLE() call fail only after almost 20 minutes of computing time)

I wish these extra dependencies had also been mentioned somewhere (kableExtra is a suggested package, but it should still be explicitly mentioned for running the examples).

Editorial comment on JOSS submission

This is an editorial comment on the JOSS submission. I'll post my comments as separate issues while reading through the manuscript. This is the first one.

On lines 17-18, you write

more flexible methods [@Bobb2014] lack statistical inference

Could you please clarify what you mean by this? I briefly looked at the paper by Bobb et al., and it seems to me that their method yields posterior distributions which can be used for statistical inference. Are you thinking particularly about frequentist inference, or are there other issues which I'm not catching here?

Runtime performance guidelines

ML techniques are understandably pretty heavy to run, and even these test examples put some strain on the underpowered computer I used for package evaluation. I think it would be really helpful for the user to have a general understanding of the expected runtime and resource requirements for this tool with realistic dataset sizes. Does the user need this installed on an academic HPC or a cloud EC2 instance to get it to run? How big a dataset and how complex a model space can it handle? These are the kinds of practical questions that I think could really help a potential user.

Multiple issues detected when running `devtools::check()`

Issues when running devtools::check():

  • Fix the duplicate chunk label issue in the vignette described in the following error message when running devtools::check(). This should be easy to fix (a fix sketch follows after this list).
E  creating vignettes (10.2s)
   --- re-building 'intro_CVtreeMLE.Rmd' using rmarkdown
   Error: processing vignette 'intro_CVtreeMLE.Rmd' failed with diagnostics:
   Duplicate chunk label 'plot sim_mixture_results', which has been used for the chunk:
   mixture_plots <- plot_mixture_results(
     v_intxn_results = niehs_results$`V-Specific Mix Results`, 
     hjust = 0.8)
   mixture_plots$X5X7
   Error: tangling vignette 'intro_CVtreeMLE.Rmd' failed with diagnostics:
   Duplicate chunk label 'plot sim_mixture_results', which has been used for the chunk:
   mixture_plots <- plot_mixture_results(
     v_intxn_results = niehs_results$`V-Specific Mix Results`, 
     hjust = 0.8)
   mixture_plots$X5X7
   --- failed re-building 'intro_CVtreeMLE.Rmd'
  • Check and fix the following issue when running devtools::check(), which is also related to the vignette:
E  creating vignettes (34.4s)
   --- re-building 'intro_CVtreeMLE.Rmd' using rmarkdown
   Quitting from lines 114-127 (intro_CVtreeMLE.Rmd) 
   Error: processing vignette 'intro_CVtreeMLE.Rmd' failed with diagnostics:
   could not find function "make_sl3_Task"
   --- failed re-building 'intro_CVtreeMLE.Rmd'
   SUMMARY: processing the following file failed: 'intro_CVtreeMLE.Rmd'
   Error: Vignette re-building failed.
   Execution halted
Error in get_parentpid() : attempt to apply non-function
Calls: <Anonymous> ... rstudio_stdout -> rstudio_detect -> detect_new -> get_parentpid
Execution halted
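As flagged in the first bullet above, the duplicate-label fix is simply to give the second chunk a unique name (the label below is hypothetical):

```{r plot-sim-mixture-results-x5x7}
mixture_plots <- plot_mixture_results(
  v_intxn_results = niehs_results$`V-Specific Mix Results`,
  hjust = 0.8)
mixture_plots$X5X7
```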

Linting

This isn't absolutely required, but it would be really nice to have this package pass full linting. I know it can be a bit of a pain, but it can really help encourage best practices. As an example, I use precommit with a config as follows:

repos:
  ## R support
- repo: https://github.com/lorenzwalthert/precommit
  rev: v0.2.2
  hooks:
  - id: style-files
  - id: parsable-R
  - id: no-browser-statement
  - id: lintr
    verbose: true
  - id: roxygenize
  - id: deps-in-desc
  - id: use-tidy-description

Simplify unit tests

Running devtools::test() required installing the mlbench package. Also, I had to manually tweak the tests to use more CPUs to bring the computing time to an acceptable level.

This is something that would only bother developers, possibly scaring away contributors, so it's not a big deal, but it would be nice for the unit tests to be faster (perhaps by using smaller datasets, fewer iterations, or more CPUs) and more thorough (most of them just check the output class or triggered errors).

Editorial comment on JOSS submission

Editorial comment on JOSS submission:

You write in Statement of Need

Current software tools for mixtures rarely report performance tests using data that reflect the complexities of real-world exposures.

Could you please provide references to these existing tools?

Long installation time

Not necessarily a deal-breaker for using this package, but an annoyance, especially when one needs to install the package multiple times (for testing on a machine or for use on multiple ones): the first installation on my 8th-gen i5 CPU took no less than 30 minutes.

I'm glad to see a CRAN release on the roadmap, hopefully this will reduce the amount of dependencies and compiled code involved.

Is there anything more short-term that could be done, though?

Adding tests

I think some tests would be good that compare the ATE parameter generated within a fold to an out-of-the-box TMLE estimate on the same data.
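A sketch of such a test, assuming a hypothetical data frame data with the README's NIEHS variables, the fold-1 rule from the README example, and the tmle package as the out-of-the-box reference (the tolerance is arbitrary):

library(testthat)
library(tmle)

test_that("fold ARE roughly matches an out-of-the-box TMLE on the same data", {
  # hypothetical: binary indicator for the region rule found in fold 1
  in_region <- as.numeric(data$X2 <= 0.41)
  ref <- tmle(Y = data$Y, A = in_region,
              W = data[, c("Z", "Z2", "Z3")], family = "gaussian")
  expect_equal(k_fold_results$are[1], ref$estimates$ATE$psi, tolerance = 1)
})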

Improve function documentation

The documentation for the functions (Rd files) needs to be improved. The issues I can identify include the following (a sketch of the requested structure follows this list):

  • \title{}: Be consistent in title capitalization by capitalizing either only the first word or all words in the title. As an example, the current title for the function bound_propensity is "Bound Generalized Propensity Score", while the title for the function calc_additive_ate is "Evaluate mixture rules found during the rpart decision tree process"

  • \description{}: Do not simply repeat the title, but provide a short description of what the function does (one paragraph, a few lines only)

  • \value{}: Finish TODOs

  • \examples{}: Add one or more examples to demonstrate how to use each function. This section is also used for function testing. Please refer to https://cran.r-project.org/doc/manuals/R-exts.html#Rd-format for more details
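A sketch of the requested structure for one of the functions named above, written as roxygen2 comments (the description, parameter, and body here are illustrative, not the package's actual implementation):

#' Bound Generalized Propensity Score
#'
#' Bounds estimated propensity scores away from zero so that
#' inverse-probability weights remain numerically stable.
#'
#' @param gn A numeric vector of estimated propensity scores.
#' @param bound The lower bound applied to the scores.
#' @return A numeric vector of the same length as \code{gn}, with values
#'   truncated below at \code{bound}.
#' @examples
#' bound_propensity(c(0.001, 0.5, 0.999))
#' @export
bound_propensity <- function(gn, bound = 0.025) {
  pmax(gn, bound)
}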

No CRAN with SL3

CVtreeMLE relies on default sl3 super learner objects to estimate nuisance parameters. Using the super learner ensures the CVtreeMLE estimator is asymptotically efficient. However, in situations where users have domain knowledge, or are doing exploratory analysis where computational capacity is low, it may be more efficient to use one flexible learner. In this case, it may be best to use earth, i.e., multivariate adaptive regression splines (MARS), as the g and Q models. This would also allow CVtreeMLE to be submitted to CRAN.
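A sketch of that idea, assuming generic W (covariates), A_bin (a binary region indicator), and Y (outcome) objects; the earth calls are standard usage of the earth package, not CVtreeMLE's actual API:

library(earth)

# Q model: E[Y | A, W] via a single flexible MARS fit
Q_fit <- earth(Y ~ ., data = data.frame(W, A_bin, Y = Y), degree = 2)

# g model: P(A_bin = 1 | W), fit as a MARS-GLM with a binomial family
g_fit <- earth(A_bin ~ ., data = data.frame(W, A_bin = A_bin),
               degree = 2, glm = list(family = binomial))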

Issue with unit test

Please check the unit test functions, as running devtools::test() returns the following error and warnings:

> devtools::test()
ℹ Testing CVtreeMLE
| F W S  OK | Context
| 1 3    0 | CVtreeMLE_breastcancer [7.0s]
───────────────────────────────────────────────────────────────────────────────────────
Warning (test_CVtreeMLE_breastcancer.R:10:3): (code run outside of `test_that()`)
Strategy 'multiprocess' is deprecated in future (>= 1.20.0) [2020-10-30]. Instead, explicitly specify either 'multisession' (recommended) or 'multicore'. In the current R session, 'multiprocess' equals 'multisession'.
Backtrace:
 1. future::plan("multiprocess", workers = 2)
      at test_CVtreeMLE_breastcancer.R:10:2
 2. future (local) plan_set(newStack, skip = .skip, cleanup = .cleanup, init = .init)
 3. future (local) warn_about_multiprocess(newStack)
 4. future (local) warn_about_deprecated(...)
 5. base (local) dfcn(msg = msg, package = .packageName)

Warning (test_CVtreeMLE_breastcancer.R:10:3): (code run outside of `test_that()`)
[ONE-TIME WARNING] Forked processing ('multicore') is not supported when running R from RStudio because it is considered unstable. For more details, how to control forked processing or not, and how to silence this warning in future R sessions, see ?parallelly::supportsMulticore
Backtrace:
 1. future::plan("multiprocess", workers = 2)
      at test_CVtreeMLE_breastcancer.R:10:2
 2. future (local) plan_set(newStack, skip = .skip, cleanup = .cleanup, init = .init)
 3. future (local) plan_init()
 4. future (local) evaluator(NA, label = "future-plan-test", globals = FALSE, lazy = FALSE)
 5. future (local) strategy(..., workers = workers, envir = envir)
 6. parallelly::supportsMulticore(warn = TRUE)
 7. parallelly:::supportsMulticoreAndRStudio(...)

Warning (test_CVtreeMLE_breastcancer.R:30:1): (code run outside of `test_that()`)
NAs introduced by coercion
Backtrace:
 1. base::data.frame(lapply(data, function(x) as.numeric(as.character(x))))
      at test_CVtreeMLE_breastcancer.R:30:0
 2. base::lapply(data, function(x) as.numeric(as.character(x)))
 3. CVtreeMLE (local) FUN(X[[i]], ...)

Error (test_CVtreeMLE_breastcancer.R:39:1): (code run outside of `test_that()`)
Error in `make_sl3_Task(data = at, covariates = w, outcome = "y_scaled", 
    outcome_type = "continuous")`: could not find function "make_sl3_Task"
Backtrace:
  1. CVtreeMLE::CVtreeMLE(...)
       at test_CVtreeMLE_breastcancer.R:39:0
  2. furrr::future_map_dfr(...)
       at CVtreeMLE-main-2/R/CVtreeMLE.R:305:2
  3. furrr::future_map(...)
  4. furrr:::furrr_map_template(...)
  5. furrr:::furrr_template(...)
  7. future:::value.list(futures)
  9. future:::resolve.list(...)
 10. future (local) signalConditionsASAP(obj, resignal = FALSE, pos = ii)
 11. future:::signalConditions(...)
───────────────────────────────────────────────────────────────────────────────────────
✖ | 1       0 | CVtreeMLE_inputs                                                       
───────────────────────────────────────────────────────────────────────────────────────
Error (test_CVtreeMLE_inputs.R:69:1): (code run outside of `test_that()`)
Error in `eval(code, test_env)`: object 'na' not found
───────────────────────────────────────────────────────────────────────────────────────
✖ | 1       0 | marginal_thresholds_exclusion [500.5s]                                 
───────────────────────────────────────────────────────────────────────────────────────
Error (test_marginal_thresholds_exclusion.R:34:1): (code run outside of `test_that()`)
Error in `example_output[example_output$target_m == "M4", ]`: incorrect number of dimensions
Backtrace:
 1. testthat::expect_true(...)
      at test_marginal_thresholds_exclusion.R:34:0
 2. testthat::quasi_label(enquo(object), label, arg = "object")
 3. rlang::eval_bare(expr, quo_get_env(quo))
───────────────────────────────────────────────────────────────────────────────────────
✖ | 1       0 | mixture_thresholds_eight_beta [73.4s]                                  
───────────────────────────────────────────────────────────────────────────────────────
Failure (test_mixture_thresholds_eight_beta.R:30:1): (code run outside of `test_that()`)
... == "M3 > 2.50969709970333 & M1 > 0.947419263385584 & M2 > 1.99541667300285" is not TRUE

`actual`:   FALSE
`expected`: TRUE 
───────────────────────────────────────────────────────────────────────────────────────
✔ |         1 | mixture_thresholds_one_beta [81.7s]                                    
✔ |         2 | simulations [0.1s]                                                     

══ Results ════════════════════════════════════════════════════════════════════════════
Duration: 663.0 s

[ FAIL 4 | WARN 3 | SKIP 0 | PASS 3 ]

Editorial comment on JOSS submission

Editorial comment on JOSS submission.

On lines 75-76 you write

In most research scenarios, the analyst is interested in causal inference for an a priori specified treatment or exposure.

This seems to me a pretty strong statement. Could you perhaps write "In many research scenarios ..." instead?

Vignette

The vignette has two major issues:

  1. The first half of the document is basically an entirely separate white paper presenting the statistical methodology of the package. However, it's a bit messy (markdown formatting off, variables and functions not always clearly defined, etc.), and it seems like a strange place to put so much exposition about the package. It's not really a proof either, so it's a strange fit. If it's a vignette, it should really be integrated with the example content, which would, for example, make it easier for the reader to see exactly where in the package rule coverage is computed. Standing alone, I'm not sure it has its intended effect.

  2. In the second half, the worked examples, things are challenging in a different way. I think there is some desync between the example code and the surrounding text. Obviously, as mentioned in #9, the numbers change, but there are also some more straightforward mismatches: e.g., the text says to use 2-fold CV but the function call says n_folds = 5. I also get a ton of warnings on my system when rendering the vignette; that may just be something off about my setup, but without it prerendered on CRAN, that's all I have to work with. The warning text was, for example, Warning in private$.train(processed_task, trained_sublearners): Lrnr_gam_NULL_NULL_GCV.Cp failed with message: Error in private$.train(processed_task): Specified outcome type is unsupported by Lrnr_gam. It will be removed from the stack, for the section Run CVtreeMLE.

I'll add additional specific comments to the issue.

Non reproducible results

When setting the same seed, for example set.seed(429153), I get different results from those in the README file, and different runs with the same seed value give different results. Is this expected behavior of the function CVtreeMLE? Can you explain why this happens, given that we set the same seed value? (A note on parallel RNG seeding follows the example below.)

res1 = get_results(429153)
res2 = get_results(429153)
res3 = get_results(429153)

> res1$RMSE_results
# A tibble: 4 × 2
  `Var(s)`   RMSE
  <chr>     <dbl>
1 M1       0.0629
2 M2       0.0635
3 M3       0.0623
4 M1M2M3   0.0332
> res2$RMSE_results
# A tibble: 4 × 2
  `Var(s)`   RMSE
  <chr>     <dbl>
1 M1       0.0659
2 M2       0.0624
3 M3       0.0620
4 M1M2M3   0.0385
> res3$RMSE_results
# A tibble: 4 × 2
  `Var(s)`   RMSE
  <chr>     <dbl>
1 M1       0.0609
2 M2       0.0624
3 M3       0.0623
4 M1M2M3   0.0426

The function get_results is defined below:

get_results <- function(seed) {
  if (missing(seed)) seed <- sample(100000, 1)
  print(seed)

  set.seed(seed)

  n_obs <- 500
  # split points for each mixture
  splits <- c(0.99, 2.0, 2.5)
  # minimum values for each mixture
  mins <- c(0, 0, 0)
  # maximum values for each mixture
  maxs <- c(3, 4, 5)
  # mu for each mixture
  mu <- c(0, 0, 0)
  # variance/covariance of mixture variables
  sigma <- matrix(c(1, 0.5, 0.8, 0.5, 1, 0.7, 0.8, 0.7, 1), nrow = 3, ncol = 3)
  # subspace probability relationship with covariate W1
  w1_betas <- c(0.0, 0.01, 0.03, 0.06, 0.1, 0.05, 0.2, 0.04)
  # subspace probability relationship with covariate W2
  w2_betas <- c(0.0, 0.04, 0.01, 0.07, 0.15, 0.1, 0.1, 0.04)
  # probability of mixture subspace (for multinomial outcome generation)
  mix_subspace_betas <- c(0.00, 0.08, 0.05, 0.01, 0.05, 0.033, 0.07, 0.09)
  # mixture subspace impact on outcome Y; here the subspace where M1 is lower
  # and M2 and M3 are higher, based on values in splits
  subspace_assoc_strength_betas <- c(0, 0, 0, 0, 0, 0, 6, 0)
  # marginal impact of each mixture component on Y
  marginal_impact_betas <- c(0, 0, 0)
  # random error
  eps_sd <- 0.01
  # if outcome is binary
  binary <- FALSE

  sim_data <- simulate_mixture_cube(
    n_obs = n_obs,
    splits = splits,
    mins = mins,
    maxs = maxs,
    mu = mu,
    sigma = sigma,
    w1_betas = w1_betas,
    w2_betas = w2_betas,
    mix_subspace_betas = mix_subspace_betas,
    subspace_assoc_strength_betas = subspace_assoc_strength_betas,
    marginal_impact_betas = marginal_impact_betas,
    eps_sd = eps_sd,
    binary = binary
  )

  lrnr_glm <- Lrnr_glm$new()
  lrnr_bayesglm <- Lrnr_bayesglm$new()
  lrnr_gam <- Lrnr_gam$new()
  lrnr_lasso <- Lrnr_glmnet$new(alpha = 1)
  lrnr_earth <- Lrnr_earth$new()
  lrnr_ranger <- Lrnr_ranger$new()
  # put all the learners together (this is just one way to do it)
  learners <- c(lrnr_glm, lrnr_bayesglm, lrnr_gam, lrnr_ranger)

  Q1_stack <- make_learner(Stack, learners)

  lrnr_glmtree_001 <- Lrnr_glmtree$new(alpha = 0.5, maxdepth = 3)
  lrnr_glmtree_002 <- Lrnr_glmtree$new(alpha = 0.6, maxdepth = 4)
  lrnr_glmtree_003 <- Lrnr_glmtree$new(alpha = 0.7, maxdepth = 2)
  lrnr_glmtree_004 <- Lrnr_glmtree$new(alpha = 0.8, maxdepth = 1)

  learners <- c(lrnr_glmtree_001, lrnr_glmtree_002, lrnr_glmtree_003, lrnr_glmtree_004)
  discrete_sl_metalrn <- Lrnr_cv_selector$new()

  tree_stack <- make_learner(Stack, learners)

  discrete_tree_sl <- Lrnr_sl$new(
    learners = tree_stack,
    metalearner = discrete_sl_metalrn
  )

  ptm1 <- proc.time()

  sim_results <- CVtreeMLE(
    data = sim_data,
    W = c("W", "W2"),
    Y = "y",
    A = c(paste("M", seq(3), sep = "")),
    back_iter_SL = Q1_stack,
    tree_SL = discrete_tree_sl,
    n_folds = 2,
    family = "gaussian"
  )

  ptm2 <- proc.time()

  RMSE_results <- sim_results$`Model RMSEs`
  mixture_results <- sim_results$`Pooled TMLE Mixture Results`
  mixture_v_results <- sim_results$`V-Specific Mix Results`

  return(list(
    seed = seed,
    RMSE_results = RMSE_results,
    mixture_results = mixture_results,
    mixture_v_results = mixture_v_results,
    time = ptm2 - ptm1
  ))
}
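One thing worth checking (an assumption on my part, not a confirmed diagnosis): set.seed() alone does not control the RNG streams on parallel workers. With furrr/future, which CVtreeMLE uses internally (see the traceback in the unit-test issue), reproducibility typically requires forwarding seeds explicitly, e.g.:

library(furrr)
future::plan(future::multisession, workers = 2)

# seed = TRUE gives each worker a parallel-safe, reproducible RNG stream
res <- future_map(1:4, ~ rnorm(1), .options = furrr_options(seed = TRUE))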

Add Windows to build CI workflow

I noticed the build CI file contains Linux and macOS builds. It would be nice to also include a Windows check in there (even if that's not the platform you're developing on), so that the tests are more comprehensive by including non-Unix OSes; it would also benefit contributors on Windows machines (there are dozens of us. Dozens!)
