molssi / cookiecutter-cms

Python-centric Cookiecutter for Molecular Computational Chemistry Packages

License: MIT License


cookiecutter-cms's Introduction

Cookiecutter for Computational Molecular Sciences (CMS) Python Packages


A cookiecutter template for those interested in developing computational molecular packages in Python. Skeletal starting repositories can be created from this template to create the file structure semi-autonomously, so you can focus on what's important: the science!

The skeletal structure is designed to help you get started, but do not feel limited by the features included here. To name a few things you can alter to suit your needs: change continuous integration options, remove deployment platforms, or test with a different suite.

Features

  • Python-centric skeletal structure with initial module files
  • Pre-configured pyproject.toml and setup.cfg for installation and packaging
  • Pre-configured Windows, Linux, and macOS continuous integration on GitHub Actions
  • Choice of dependency locations through conda-forge, default conda, or pip
  • Basic testing structure with PyTest
  • Automatic git initialization + tag
  • GitHub Hooks
  • Automatic package version control with Versioningit
  • Sample data inclusion with packaging instructions
  • Basic documentation structure powered by Sphinx
  • Automatic license file inclusion from several common Open Source licenses (optional)

Requirements

Usage

With cookiecutter installed, execute the following command inside the folder you want to create the skeletal repository.

cookiecutter gh:molssi/cookiecutter-cms

This fetches the repository from GitHub automatically and prompts you for some basic information such as package name, author(s), and license.

The cookiecutter in action

Supported Python Versions

The MolSSI Cookiecutter strives to support the current version of Python plus the two minor versions before it. This philosophy is in line with Conda-Forge's guidelines and gives projects ample time to adopt new features.

When to drop support for older Python versions?

Project developers can freely choose when to drop support for older versions of Python, or to support fewer versions altogether. The general rules we recommend are:

  • Support at least two Python versions: the most recent and the preceding minor version, e.g. 3.9 and 3.8.
  • Dropping a Python version should require a minor project version increment.
  • Support a new Python version for at least one minor revision before dropping older ones. E.g. Project X.Y supports Python 3.8 and 3.9; Project X.Y+1 supports Python 3.8, 3.9, and 3.10; Project X.Y+2 supports Python 3.9 and 3.10.
  • Add deprecation warnings if features will be removed.
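As a quick illustration (not part of the template, and the function name is made up), the rolling support window described above can be computed programmatically:

```python
def supported_pythons(current_minor, window=3):
    """Return the Python 3.x versions covered by a rolling support
    window: the current minor version plus the two before it."""
    return [f"3.{m}" for m in range(current_minor - window + 1, current_minor + 1)]

# If Python 3.10 is the current release, the project supports 3.8-3.10
print(supported_pythons(10))  # ['3.8', '3.9', '3.10']
```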

Where is setup.py?

For a long time, many Python projects relied on one of the libraries distutils or setuptools together with a metadata file usually called setup.py. This file required Python to run and, by its nature, limited how much configuration could be done. setup.py has since been superseded by pyproject.toml, a build-system-agnostic file that serves much of the same purpose but can be extended to any number of tools, many of which can be retrieved from the internet simply by declaring them in the pyproject.toml file. Most of the features that lived in setup.py have equivalent keys in pyproject.toml. By default, the cookiecutter still uses the setuptools backend, just with the modernized install specification.
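As a rough sketch of what such a file looks like (the field values here are placeholders, not the cookiecutter's exact output), a minimal pyproject.toml using the setuptools backend might read:

```toml
[build-system]
requires = ["setuptools>=61.0", "versioningit"]
build-backend = "setuptools.build_meta"

[project]
name = "my_package"            # placeholder name
description = "A short description of the package"
requires-python = ">=3.8"
dynamic = ["version"]          # version supplied by a plugin such as Versioningit
```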

Next steps and web integrations

The repository contains a number of "hooks" that integrate with a variety of web services. To fully integrate the project with these web services and to get started developing your project please proceed through the following directions.

Local installation

For development work it is often recommended to do a "local" Python install via pip install -e .. This command links your new project into your Python site-packages folder (an "editable" install) so that it can be imported from any directory on your computer, while edits to the source take effect immediately.
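Conceptually, an editable install leaves your files in place and adds a link to the source tree on the import path. The sketch below simulates that effect directly (the package name my_package is made up):

```python
import os
import sys
import tempfile

# Create a tiny source tree standing in for your project checkout
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "my_package"))
with open(os.path.join(src, "my_package", "__init__.py"), "w") as f:
    f.write("__version__ = '0.0.0'\n")

# `pip install -e .` effectively does this: the source directory goes onto
# sys.path (via a .pth entry) instead of files being copied into site-packages
sys.path.insert(0, src)

import my_package
print(my_package.__version__)
```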

Setting up with GitHub

Upon creation, this project will initialize the output as a git repository. However, this does not automatically register the repository with GitHub. To do this, follow the instructions for Adding an existing project to GitHub using the command line. Follow the first step to create the repository on GitHub, but ignore the warnings about the README, license, and .gitignore files, as this template already creates them. From there, you can skip the "first commit" instructions and proceed with adding the remote and pushing.

Testing

The Python testing framework was chosen to be pytest for this project. Other testing frameworks are available; however, the authors believe the combination of easy parametrization of tests, fixtures, and test marking make pytest particularly well suited for molecular software packages.

To get started, additional tests can be added to the project/tests/ folder. Any function whose name starts with test_ will automatically be collected by the testing framework. While tests can be added anywhere in your directory structure, it is highly recommended to keep them contained within the project/tests/ folder.

Tests can be run with the pytest -v command. There are a number of additional command line arguments to explore.
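For instance, a small test module (file, function, and helper names here are illustrative) could look like this; pytest collects the test_* functions and plain asserts are all that is needed:

```python
def fib(n):
    """Toy function standing in for your package's code under test."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# pytest collects any function named test_*; each assert is a checked condition
def test_fib_base_cases():
    assert fib(0) == 0
    assert fib(1) == 1

def test_fib_larger_value():
    assert fib(10) == 55
```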

Continuous Integration (GitHub Actions)

As of version 1.3, we provide preconfigured workflows for GitHub Actions, with support for Linux, macOS, and Windows. Conda support is possible thanks to @mamba-org's excellent provision-with-micromamba action. We encourage you to read its documentation for further details on GitHub Actions themselves.

The Cookiecutter's own GitHub Actions workflow does a number of things differently than the output Actions. We detail those differences below, but none of this is needed to understand the output GitHub Actions workflows, which are much simpler.

The Cookiecutter's ability to test the GitHub Actions it generates has some limitations, but the outputs are still properly tested. This repository has a multi-job GitHub Actions workflow that does a few things:

  • Run the Cookiecutter and generate outputs.
  • Compare the output CI's to references.
  • Run an approximate implementation of the generated CI files.

If the reference files need to be regenerated, there is a script to help with this.

Ideally, the Cookiecutter would run the generated output files directly. However, that is currently impossible with GitHub Actions (as of October 14, 2020). We Cookiecutter-CMS maintainers have also looked at reactive PRs that apply changes on different branches and open new PRs, and at setting up dummy repositories, pushing to them, and monitoring the test results. This was all determined to be overly complicated, although we welcome suggestions and ideas for improvements.

Discontinued CI Strategies: Travis & AppVeyor

We no longer recommend that projects use Travis-CI or AppVeyor for CI services. We found the AppVeyor service to be notoriously slow in practice, and Travis updated their billing model to charge for OSX testing and to further limit their Linux concurrency, even for fully open-source software. Given the rise of GitHub Actions, we felt it was appropriate to transition off these platforms as of the CMS Cookiecutter's 1.5 release.

For legacy purposes, the final version of the CMS-Cookiecutter with Travis and AppVeyor support can be found here: https://github.com/MolSSI/cookiecutter-cms/releases/tag/1.4

Pre-caching common build data

Some continuous integration platforms allow for caching of build data, which you may, or may not, find advantageous. The general purpose of a cache is to store and fetch files and folders that would otherwise take a long time to generate or download on every CI build, often because build (and developer) time is limited. However, if the cached data changes at any time during a build, the whole targeted cache is updated and uploaded. So, you should only cache things you do not expect to change.

You may be tempted to cache the Conda installer or the Python dependencies fetched from conda or pip; however, this is ill-advised for two main reasons:

  1. Your package's dependencies are constantly updating, so you want to catch things which break due to dependencies before your users do. Having CI trigger automatically when you make changes, and at scheduled intervals, helps catch these things as soon as possible.
    • Because you should expect dependencies to update, you would have to upload a new cache on each build anyway, somewhat invalidating one of the advantages of a cache.
  2. It is a good idea to make sure your testing emulates the most basic user of your code if possible. If your target users include people who will try to download your package and have it "just work" for their project, then your CI testing should try to do this as well, which includes fetching the newest installer and dependencies. One example may be industry, non-developer users who do not know all the nuances and inner workings of package dependencies or versions. It is not reasonable to expect them to know these nuances either; that's why you are the developer.

There may be times when the caching feature is helpful for you. One example: test data which is too large to store on GitHub but is hosted on a slow mirror. A cache will speed up the tests since you will not have to download from the slow mirror every time. If this sounds like a helpful feature, you can check out the links below. We do not implement caching in this Cookiecutter, but it can be added to your package as needed.
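For the GitHub Actions case, a cache step along these lines could be used (the paths and key are placeholders for your own data layout):

```yaml
# Illustrative workflow step: cache large test data, keyed on a manifest file
- name: Cache test data
  uses: actions/cache@v3
  with:
    path: tests/data/large
    key: test-data-${{ hashFiles('tests/data/manifest.txt') }}
```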

There are caching capabilities for the mamba-org/provision-with-micromamba action, if you are using it as well.

Documentation

Make a ReadTheDocs account and turn on the git hook. Although you can build the documentation yourself manually through Sphinx, you can also configure ReadTheDocs to automatically build and publish the documentation for you. The initial skeleton of the documentation can be found in the docs folder of your output.

Static Code Analysis

Make a LGTM account and add your project. If desired you can add code review integration by clicking the large green button!

Static code analysis dramatically enhances the quality of your code by finding a large number of common mistakes that both novice and advanced programmers make. There are many static analysis tools on the market, but we have found that LGTM strikes a good balance between verbosity and catching true errors.

Additional Python Settings in setup.cfg

This Cookiecutter generates the package, but there are several package-specific Python settings you can tune to your package's installation needs. These settings live in the setup.cfg file, which contains instructions for Python on how to install your package. Each option in the file is commented with what it does and when it should be used.
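As an example of the kind of settings that live there (the section names follow common tool conventions; the values are illustrative, not the cookiecutter's exact output):

```ini
[coverage:run]
# Leave test files and the generated version file out of coverage reports
omit =
    */tests/*
    */_version.py

[flake8]
max-line-length = 119
```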

Versioningit

Versioningit automatically provides a version string based on the git tag and commit hash, which is then exposed through a project.__version__ attribute in your project/__init__.py. For example, if you mint a tag (a release) for a project through git tag -a 0.1 -m "Release 0.1." (pushed to GitHub through git push origin 0.1), this tag will then be reflected in your project: project.__version__ == "0.1". Otherwise, a per-commit version is available which looks like 0.3.0+81.g332bfc1. This string shows that the current git (the "g") hash 332bfc1 is 81 commits beyond the 0.3.0 tag.
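To make the per-commit format concrete, here is a toy parser (not part of Versioningit; the function name is made up) that splits such a string into its pieces:

```python
import re

def split_dev_version(version):
    """Break a string like '0.3.0+81.g332bfc1' into
    (base version, commits since tag, abbreviated git hash)."""
    match = re.fullmatch(r"(?P<base>[0-9.]+)\+(?P<n>[0-9]+)\.g(?P<sha>[0-9a-f]+)", version)
    return match["base"], int(match["n"]), match["sha"]

print(split_dev_version("0.3.0+81.g332bfc1"))  # ('0.3.0', 81, '332bfc1')
```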

Conda and PyPI (pip)

Should you deploy and/or develop on Conda (with the conda-build tool) or PyPI (with the pip tool)? Good question; both have their own advantages and weaknesses, as they are designed to do very different things. Fortunately, many of the features you will need for this Cookiecutter overlap. We will not advocate here for one or the other, nor will we cover all the differences. We can, however, recommend some additional resources at the end of this section where you can read and find out more.

We will cover the major differences that you the developer will see between the two as they relate to this Cookiecutter.

For testing purposes, the PyPI tool, pip, is much faster at building your packages than the Conda tool, conda-build. Depending on the number of dependencies, conda-build may take 10-20 minutes to resolve, download, configure, and install all dependencies before your tests start, whereas pip would do the same in about 5 minutes. It is also important to note that neither pip nor conda-build is a testing tool in and of itself; they are deployment and dependency-resolution tools. For the testing itself, we include other packages like pytest.

From a deployment perspective, it is possible to deploy your package on both platforms, although doing so is beyond the scope of this Cookiecutter.

Lastly, these are optional features! You could choose not to rely on either Conda or PyPI, assuming your package has no dependencies. We do highly recommend you pick one of them for dependency resolution so you (and your potential users) do not have to manually find and install all the dependencies you may have. To put some historical perspective on this, NumPy and SciPy used to ask users to install the BLAS and LAPACK libraries on their own, and then also make sure they were linked correctly for use in Python. These hurdles are no longer necessary thanks to the package managers. Huzzah!

Additional reading for Conda and PyPI

Conda Build vs. Conda Environments

We recommend creating Conda environments rather than relying on conda build for testing purposes, assuming you have opted for Conda as a dependency manager. Earlier versions of this Cookiecutter would conduct testing by first bundling the package for distribution through Conda Build, and then installing the package locally to execute tests on. This had the advantage of ensuring your package could be bundled for distribution and that all of its dependencies resolved correctly. However, it had the disadvantage of being painfully slow and rather confusing to debug should things go wrong on the build, even before the testing.

The replacement option to this is to pre-create the conda environment and then install your package into it with no dependency resolution for testing. This helps separate out the concepts of testing and deployment which are separate actions, even though deployment should only come after testing, and you should be ready to do both. This should simplify and accelerate the testing process, but does mean maintaining two, albeit similar, files since a Conda Environment file has a different YAML syntax than a Conda Build meta.yaml file. We feel these benefits outweigh the costs and have adopted this model.
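A pre-created test environment file of the kind described above (the devtools/conda-envs/test_env.yaml in the output) might look roughly like this; the dependency list is illustrative:

```yaml
name: test
channels:
  - conda-forge
dependencies:
  - python
  - pip
  - pytest
  - pytest-cov
```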

Deploying your code

Simply testing your code is insufficient for good coding practices; you should be ready to deploy your code as well. Do not be afraid of deployment though; Python deployment over the last several years has been getting easier, especially when there are others to manage your deployment for you. There are several ways to handle this. We will cover a couple here, depending on the conditions which best suit your needs. The list below is neither exhaustive nor exclusive. There are times when you may want to build your packages yourself and upload them for developmental purposes, but we recommend letting others handle (and help you) with deployment. These are meant to serve as guides to help you get started.

Deployment should not get in the way of testing. You could configure the GitHub Action scripts to handle the build stage after the test stage, but this should only be done by advanced users or those looking to deploy themselves.

Deployment Method 1: Conda Forge

The Conda Forge community is great, and it is the recommended location to deploy your packages. The community is highly active, and many scientific developers have moved there to access not only Conda Forge's deployment tools but also all the other Python packages that have been deployed on the platform. Even though they provide the deployment architecture, you still need to test your program's ability to be packaged through conda-build. If you choose either Conda dependency option, additional tests which package through conda-build will be added to GitHub Actions.

This method relies on the conda meta.yaml file.

Deployment Method 2: Conda through someone else's manager

This option is identical to the Conda Forge method but relies on a different group's deployment platform, such as Bioconda or Omnia. Each platform has its own rules, which may include packaging your program yourself and uploading it. Check each platform's instructions and who else deploys there before choosing this option, to ensure it's right for you.

This method relies on the conda meta.yaml file.

Deployment Method 3: Upload package to PyPI

The Python Package Index (PyPI) is another place to manage your package and have dependencies resolved. This option typically relies on pip to build your packages, and dependencies must be specified in your pyproject.toml file to resolve correctly.

Deployment Method 4: Manually upload your package to some source

Sometimes, your package is niche enough, developmental enough, or proprietary enough to warrant manually packaging and uploading your program. This may also apply if you want regular developmental builds which you upload manually to test. In this case, you will want to change your CI scripts to include a build step, and an optional upload step, on completion of tests.

Output Skeleton

This is the skeleton made by this cookiecutter; the items marked in {{ }} will be replaced by your choices upon setup.

.
├── CODE_OF_CONDUCT.md              <- Code of Conduct for developers and users
├── LICENSE                         <- License file
├── MANIFEST.in                     <- Packaging information for pip
├── README.md                       <- Description of project which GitHub will render
├── {{repo_name}}                   <- Basic Python Package import file
│   ├── {{first_module_name}}.py    <- Starting package module
│   ├── __init__.py                 <- Basic Python Package import file
│   ├── _version.py                 <- File generated by Versioningit. Created on package install, not initialization.
│   ├── data                        <- Sample additional data (non-code) which can be packaged. Just an example, delete in production
│   │   ├── README.md
│   │   └── look_and_say.dat
│   ├── py.typed                    <- Marker file indicating PEP 561 type hinting.
│   └── tests                       <- Unit test directory with sample tests
│       ├── __init__.py
│       └── test_{{repo_name}}.py
├── devtools                        <- Deployment, packaging, and CI helpers directory
│   ├── README.md
│   ├── conda-envs                  <- Conda environments for testing
│   │   └── test_env.yaml
│   └── scripts
│       └── create_conda_env.py     <- OS-agnostic helper script to make conda environments based on simple flags
├── docs                            <- Documentation template folder with many settings already filled in
│   ├── Makefile
│   ├── README.md                   <- Instructions on how to build the docs
│   ├── _static
│   │   └── README.md
│   ├── _templates
│   │   └── README.md
│   ├── api.rst
│   ├── conf.py
│   ├── getting_started.rst
│   ├── index.rst
│   ├── make.bat
│   └── requirements.yaml           <- Documentation-building specific requirements. Usually a smaller set than the main program
├── pyproject.toml                  <- Generic Python build system configuration (PEP-517).
├── readthedocs.yml
├── setup.cfg                       <- Near-master config file to house INI-like settings for Coverage, Flake8, YAPF, etc.
├── .codecov.yml                    <- Codecov config to help reduce its verbosity to more reasonable levels
├── .github                         <- GitHub hooks for user contribution, pull request guides and GitHub Actions CI
│   ├── CONTRIBUTING.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   └── workflows
│       └── CI.yaml
├── .gitignore                      <- Stock helper file telling git what file name patterns to ignore when adding files
├── .gitattributes                  <- Stock helper file telling GitHub how to bundle files in the tarball, should not need to be touched most times
└── .lgtm.yml

Acknowledgments

This cookiecutter is developed by Levi N. Naden and Jessica A. Nash from the Molecular Sciences Software Institute (MolSSI); and Daniel G. A. Smith of ENTOS. Additional major development has been provided by M. Eric Irrgang. Copyright (c) 2022.

Directory structure template based on recommendations from the Chodera Lab's Software Development Guidelines.

Original hosting of repository owned by the Chodera Lab

Elements of this repository drawn from the cookiecutter-data-science by Driven Data and the MolSSI Python Template

cookiecutter-cms's People

Contributors

amjames, bennybp, berquist, btjanaka, dgasmith, eirrgang, ethanholz, fanwangm, j-wags, jaimergp, janash, lnaden, loriab, malramsay64, mattwthompson, mikemhenry, radifar, rmeli, sjayellis, testpersonal


cookiecutter-cms's Issues

Optional git repository creation

It is very common for people to want to use the cookiecutter to package existing code or scripts. However, the fact that it initializes a new repository becomes confusing for those who are not as well versed with git (if their project is already using version control) and causes a lot of problems through conflicting git histories, etc.

It seems to me that the Cookiecutter should init only when not already inside a repo, and otherwise just add and commit the created files.

Versioneer description

There is some confusion that versioneer is automatic based on releases; this can be cleared up with a small descriptive paragraph.

Change first module name to something other than package name

Suggestion by @bas-rustenburg

the package and first submodule would have the same name. I wonder if that could be a confusing start of a package.

Also a suggestion from @jchodera

I’d choose whimsical package and submodule names

Personally, I'd like to avoid just using whimsical names; the whole point of the cookiecutter is to have it fill things in. I could add another option on init to set the first module name, defaulting to the package name.

Sphinx Theme

Currently the Sphinx theme is Alabaster, which I have always found... difficult. Any objection to changing this to the RTD theme?

devtools/scripts/create_conda_env.py does not report which channels packages are installed from

We've been converting some of our legacy projects over to the CMS cookiecutter, but I've noticed that the new travis scheme can make debugging harder because the devtools/scripts/create_conda_env.py mechanism does not emit information on which channels each package is installed from. When tracking down issues where pulling a package from a different channel causes failures, this can be immensely valuable.

What would you think about one of these options?

  1. Having devtools/scripts/create_conda_env.py generate a list of which channels each package version is coming from (the same way conda install <packagename> does), or
  2. Adding a conda list to the .travis.yml?

Codecov defaults

I have found Codecov quite verbose and annoying without some modification. To tone down the noise I usually set up a .codecov.yml with the following options:

coverage:
  status:
    patch: false
    project:
      default:
        threshold: 50%
comment:
  layout: "header"
  require_changes: false
  branches: null
  behavior: default
  flags: null
  paths: null

Happy to discuss and/or modify them.

Using more providers: Azure Pipelines / GitHub Actions

I have been testing Azure Pipelines while trying to build openmm in conda-forge, and I have really liked the UI and the added consistency due to the fact that all OSes run on the same platform. Would you consider migrating from Travis/AppVeyor to a unified Azure CI?

Replacing Versioneer

The versioneer repository appears to be dead; however, this isn't necessarily a bad thing, since versioneer works for all use cases and examples that we can find. In addition, versioneer is static and installed with the package, so there are no dependency issues. However, this likely will not be the case forever, and watching for replacements like setuptools_scm is something we should continue to evaluate.

Add docs for what all the settings are

Suggestion from @bas-rustenburg

I think in general, this is looking good. It might need some documentation for people who havent ever used travis, appveyor, conda, pip, et cetera. Otherwise, you can install the cookie cutter, and the next question is "so now what?"

Good idea, may just need to be link-outs. And probably some guide on "what now?" which may just link to the software dev

Python auto-formatters

It might be good to also add support for YAPF or similar. This can be added in a .style.yapf file in the base folder. I typically use:

[style]
COLUMN_LIMIT = 119
INDENT_WIDTH = 4
USE_TABS = False

However, this might conflict with the use of pyflakes.

Duplicate Python versioning in Travis CI

Currently in Travis we set:

    - os: linux
      python: 3.7
      env: PYTHON_VER=3.7

This will both build a conda env and use a system 3.7 Python. There is no need to set the Python version, I believe; we may want to set the name= variable instead so that the display is correct.

We might also want to unpin the xenial container.

C/C++ project integrations.

This cookie cutter is great for pure Python projects, but a concern we will likely run into is how to integrate hybrid Python/C/C++ projects. Perhaps not recommended for this cookie cutter (we might consider another cookie cutter), but I think a reasonable place to open discussions.

Personally I try to adhere to the following rules:

  • If pure C, use Python ctypes and the wonderful numpy.ctypeslib.
  • If C++, use PyBind11. Note that mixing PyBind11 versions can be problematic.
  • If only called via Python try to use the native distutils/setuptools compilation options which work ok as long as nothing complex is used. This fails when binding complex C++/CMake ecosystems however (Psi4 as an example)
  • If called via Python and also linked at the shared object level write a setup.py that calls CMake. Example here.

We have a number of solutions depending on the requirements and things get quite messy depending if CMake is in the mix or not. I have tried to follow scikit-build without too much success. Opinions and discussion are most welcome.

Dependency source choices

During the setup users are greeted with the following option:

Select dependency_source:
1 - conda-forge
2 - conda
3 - pip
Choose from 1, 2, 3 [1]:

Users may become slightly confused if they need to pull from both conda and pip, for example (or if they don't know where they need to pull from). Could this either be reworded or become a multiple-choice selection?

Double check the Versioneer docs

We should consider making a note in the versioneer description that it is not really updated anymore, but does still just work. Also double-check the LGTM issue (#52) and Flake8 to ensure the Versioneer files are correctly ignored.

Possible alternate in the future: https://github.com/pypa/setuptools_scm/ (a la @dgasmith)

Notes of when Versioneer might fail in the future:

  • Python drops C-style string formatting (e.g. "%d" % 10). No noted plans to do so at the moment.
  • git changes the syntax which Versioneer reads. Very unlikely for the foreseeable future.
  • setuptools changes the way it accepts additional modules. No sense of the likelihood of this.

Environment builds but some packages have libstdc++.so.6 errors

I'm migrating one of my repositories to this cookie cutter and I've run into an error that has me stymied. I've been testing the cookiecutter branch of my repository on a few systems before merging into master; they all behave as expected, except in one case: on the head node of a large cluster. My general strategy is to clone my repository, build the conda environment, activate it, run pip install -e ., and then run the test suite, just like what happens on Travis. (I have made one change to the default conda environment YAML, which is to specify python=3.6 for pytraj.) When I do this on TSCC/SDSC, I have no problems building the environment or installing my module, but my tests fail with libstdc++.so.6 problems (see below). Does this suggest that something went wrong building the dynamically linked (?) C code in scipy and mdtraj? I don't modify any paths in my .bashrc. Any suggestions for how to debug things? I can manually import pymbar and mdtraj despite these errors.

(base) [davids4@tscc-login1 pAPRika]$ conda env create -f devtools/conda-envs/test_env.yaml 
[...]
#
# To activate this environment, use
#
#     $ conda activate paprika-debug-tleap-dummy
#
(base) [davids4@tscc-login1 pAPRika]$ conda activate paprika-debug-tleap-dummy

(paprika-debug-tleap-dummy) [davids4@tscc-login1 pAPRika]$ pip install -e .
Obtaining file:///home/davids4/paprika-debug-tleap-dummy/pAPRika
Installing collected packages: paprika
  Running setup.py develop for paprika
Successfully installed paprika

(paprika-debug-tleap-dummy) [davids4@tscc-login1 pAPRika]$ pytest -v paprika/tests/
=========================================================== test session starts ============================================================
platform linux -- Python 3.6.7, pytest-4.2.1, py-1.7.0, pluggy-0.8.1 -- /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/bin/python
cachedir: .pytest_cache
rootdir: /home/davids4/paprika-debug-tleap-dummy/pAPRika, inifile:
plugins: cov-2.6.1
collected 18 items / 2 errors / 16 selected                                                                                                

================================================================== ERRORS ==================================================================
_____________________________________________ ERROR collecting paprika/tests/test_analysis.py ______________________________________________
ImportError while importing test module '/home/davids4/paprika-debug-tleap-dummy/pAPRika/paprika/tests/test_analysis.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
paprika/tests/test_analysis.py:7: in <module>
    from paprika import analysis
paprika/analysis.py:6: in <module>
    import pymbar
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/pymbar/__init__.py:31: in <module>
    from pymbar import timeseries, testsystems, confidenceintervals, version
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/pymbar/confidenceintervals.py:25: in <module>
    import scipy.stats
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/__init__.py:367: in <module>
    from .stats import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/stats.py:173: in <module>
    from . import distributions
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/distributions.py:10: in <module>
    from ._distn_infrastructure import (entropy, rv_discrete, rv_continuous,
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/stats/_distn_infrastructure.py:16: in <module>
    from scipy.misc import doccer
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/misc/__init__.py:68: in <module>
    from scipy.interpolate._pade import pade as _pade
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/interpolate/__init__.py:175: in <module>
    from .interpolate import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/interpolate/interpolate.py:32: in <module>
    from .interpnd import _ndim_coords_from_arrays
interpnd.pyx:1: in init scipy.interpolate.interpnd
    ???
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/spatial/__init__.py:98: in <module>
    from .kdtree import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/spatial/kdtree.py:8: in <module>
    import scipy.sparse
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/__init__.py:231: in <module>
    from .csr import *
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/csr.py:15: in <module>
    from ._sparsetools import csr_tocsc, csr_tobsr, csr_count_blocks, \
E   ImportError: /opt/gnu/gcc/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/scipy/sparse/_sparsetools.cpython-36m-x86_64-linux-gnu.so)
______________________________________________ ERROR collecting paprika/tests/test_openmm.py _______________________________________________
ImportError while importing test module '/home/davids4/paprika-debug-tleap-dummy/pAPRika/paprika/tests/test_openmm.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
paprika/tests/test_openmm.py:7: in <module>
    from paprika.openmm_simulate import *
paprika/openmm_simulate.py:9: in <module>
    from mdtraj.reporters import NetCDFReporter
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/__init__.py:29: in <module>
    from .formats.registry import FormatRegistry
../../anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/formats/__init__.py:15: in <module>
    from .dtr import DTRTrajectoryFile
E   ImportError: /opt/gnu/gcc/lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/davids4/anaconda3/envs/paprika-debug-tleap-dummy/lib/python3.6/site-packages/mdtraj/formats/dtr.cpython-36m-x86_64-linux-gnu.so)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
========================================================= 2 error in 4.71 seconds ==========================================================
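The CXXABI_1.3.9 error usually means an older system libstdc++ (here the one under /opt/gnu, likely placed on the library path by a cluster environment module rather than .bashrc) is being resolved ahead of the newer copy the conda packages were built against. A diagnostic sketch, assuming the env path from the session above:

```shell
# Which libstdc++ copies does the dynamic loader know about?
ldconfig -p 2>/dev/null | grep 'libstdc++\.so\.6' || echo "ldconfig found no libstdc++"

# List the CXXABI versions a given copy provides (needs binutils' strings);
# the conda env typically ships a copy new enough for its own packages
ENV_LIB="$HOME/anaconda3/envs/paprika-debug-tleap-dummy/lib"
if command -v strings >/dev/null && [ -f "$ENV_LIB/libstdc++.so.6" ]; then
    strings "$ENV_LIB/libstdc++.so.6" | grep CXXABI
fi

# Common workaround: put the conda env's lib directory ahead of /opt/gnu
export LD_LIBRARY_PATH="$ENV_LIB:$LD_LIBRARY_PATH"
```

Also check whether a module file or login script prepends /opt/gnu/gcc/lib64 to LD_LIBRARY_PATH (`module list`, `echo $LD_LIBRARY_PATH`); unloading that module is often a cleaner fix than overriding the path.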

Appveyor Failure on Python 3.7

create_conda_env.py fails for an appveyor build on python 3.7.
https://ci.appveyor.com/project/Olllom/pyworkdir/builds/26263346/job/iw36fc15qq5kh5dt

Python 3.6 build is OK
https://ci.appveyor.com/project/Olllom/pyworkdir/builds/26263346/job/45wft6qedcwoao05

The issue is known
conda/conda-build#3220
and can be fixed easily
https://github.com/matplotlib/matplotlib/pull/14649/files

The AppVeyor error message is:

python devtools\\scripts\\create_conda_env.py -n=test -p=%PYTHON_VERSION% devtools\\conda-envs\\test_env.yaml
Traceback (most recent call last):
  File "C:\Miniconda37-x64\Scripts\conda-env-script.py", line 5, in <module>
    from conda_env.cli.main import main
  File "C:\Miniconda37-x64\lib\site-packages\conda_env\cli\main.py", line 39, in <module>
    from . import main_create
  File "C:\Miniconda37-x64\lib\site-packages\conda_env\cli\main_create.py", line 12, in <module>
    from conda.cli import install as cli_install
  File "C:\Miniconda37-x64\lib\site-packages\conda\cli\install.py", line 19, in <module>
    from ..core.index import calculate_channel_urls, get_index
  File "C:\Miniconda37-x64\lib\site-packages\conda\core\index.py", line 9, in <module>
    from .package_cache_data import PackageCacheData
  File "C:\Miniconda37-x64\lib\site-packages\conda\core\package_cache_data.py", line 15, in <module>
    from conda_package_handling.api import InvalidArchiveError
  File "C:\Miniconda37-x64\lib\site-packages\conda_package_handling\api.py", line 3, in <module>
    from libarchive.exception import ArchiveError as _LibarchiveArchiveError
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\__init__.py", line 1, in <module>
    from .entry import ArchiveEntry
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\entry.py", line 6, in <module>
    from . import ffi
  File "C:\Miniconda37-x64\lib\site-packages\libarchive\ffi.py", line 27, in <module>
    libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
  File "C:\Miniconda37-x64\lib\ctypes\__init__.py", line 434, in LoadLibrary
    return self._dlltype(name)
  File "C:\Miniconda37-x64\lib\ctypes\__init__.py", line 356, in __init__
    self._handle = _dlopen(self._name, mode)
TypeError: LoadLibrary() argument 1 must be str, not None
CONDA ENV NAME  test
PYTHON VERSION  3.7
CONDA FILE NAME devtools\\conda-envs\\test_env.yaml
CONDA PATH      C:\Miniconda37-x64\Scripts\conda.EXE
activate test
Could not find conda environment: test

Test CI builds

Find a way to render the cookiecutter for all dependency options and then test the resulting CI builds. Not sure how to chain CI builds, but it's something to ask about.

Simple method for keeping repos in sync with current cookiecutter scheme?

I wonder if there might be some way to help automate the update process for ensuring that repos created with the cookiecutter remain up to date with the latest best practices. Right now, the process is manual and somewhat tedious, with the potential for missing updates to some files.

Any thoughts on how we might be able to help make this process simpler, or automate the update step?

Cookiecutter failing if Windows Continuous Integration is declined

Hi there,

Just in the last few weeks using cookiecutter with the MolSSI template fails for me when I decline Windows Continuous Integration (last step). Specific error is below;

Traceback (most recent call last):
  File "/var/folders/2n/xtzsyspd32v6vglg_pd5gmw80000gn/T/tmpvfncakrz.py", line 52, in <module>
    remove_windows_ci()
  File "/var/folders/2n/xtzsyspd32v6vglg_pd5gmw80000gn/T/tmpvfncakrz.py", line 49, in remove_windows_ci
    os.remove(os.path.join("devtools", "conda-recipe", "bld.bat"))
FileNotFoundError: [Errno 2] No such file or directory: 'devtools/conda-recipe/bld.bat'
ERROR: Stopping generation because post_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)

I can provide more info if needed, but I believe this is reproducible. If I accept Windows continuous integration everything works fine.

Pip install fails with no README file

@dgasmith and I encountered this error with students at the summer school. The README in their directory had been accidentally deleted, leading to failures in building. It's very hard for students to troubleshoot.
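One way to make the failure mode gentler would be a guard in the generated setup.py; a minimal sketch, assuming the README is only needed for the long description:

```python
# Fall back to an empty long description instead of crashing with an
# opaque error when README.md has been deleted
from pathlib import Path

readme = Path("README.md")
long_description = readme.read_text() if readme.exists() else ""

# setup(..., long_description=long_description, ...)
```

A warning printed in the `else` branch would also make the root cause obvious to students.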

Release cycle and automated upgrades?

As the cookiecutter is continually being updated, it might make sense to start cutting releases so that projects using the cookiecutter have a clear cookiecutter version they are based on, and to think about how to provide a semi- or fully automated upgrade path to update repos that use the cookiecutter to stay current with the latest best practice.

Move away from CI with conda-build

This is something that we have discussed a bit internally, but it could use some additional discussion. In general, conda-build is fairly complicated and its errors can be quite opaque. With the rise of host, run, requires, import, build, etc., the meta.yaml files are becoming increasingly complex and not particularly sustainable for the average user, even though they make good sense for CD integrity.

In addition, there seems to be a continuous movement towards deploying with conda-forge rather than custom channels. This has the benefit that meta.yaml files can be templated off the continuously updated conda-forge templates, a large community of reviewers can examine them before merging, and conda-forge's bots can update out-of-date recipes automatically. On top of that, conda-forge's build, deployment, and auto-update-on-release technology provides a far easier experience than deploying on one's own.

I would propose using conda install for Travis/AppVeyor instead, where we can either have users list dependencies in .travis.yml or use environment files with a script similar to this. This has the benefit that builds are often much quicker (3-4x) due to less redundancy in the build cycles, and the overall cognitive overhead is lower since cookiecutter users can use canonical conda commands.

Using a straight conda install will slightly increase dependency duplication across setup.py, appveyor.yml, and .travis.yml. Environment files keep duplication the same and give developers and users a clean, reusable development/execution environment, at the cost of slightly increased complexity, I would think.

Thoughts?
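Concretely, the Travis configuration could look something like this fragment (it reuses the create_conda_env.py script shown elsewhere in these issues; the package name is illustrative):

```
# .travis.yml fragment
install:
  - python devtools/scripts/create_conda_env.py -n=test -p=$PYTHON_VER devtools/conda-envs/test_env.yaml
  - source activate test
  - pip install -e . --no-deps
script:
  - pytest -v --cov=mypackage mypackage/tests/
```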

Codecov may not be finding configuration file

Repost of https://github.com/openforcefield/openforcefield/pull/432/files#r331088966

I haven't verified this directly (by making a new repo using the cookiecutter), but will update this Issue when I do so.

Basically, I removed our old coverage version pin, and codecov dropped by 30%. This was due to the code lines in the tests themselves becoming part of the denominator for our coverage percentage (but oddly, not the numerator).

Per the docs here, this can be avoided using the omit option in a config file. This is present both in the OFFTK repo and in the cookiecutter, in setup.cfg. However, our pytest commands (and the cookiecutter's) don't specifically point to this file (via the --cov-config command-line argument).

I suspect that newer versions of coverage or pytest-cov have changed the way config files are found, such that setup.cfg is no longer picked up by default. Adding --cov-config=setup.cfg to the list of pytest args should fix that if it's also an issue in the cookiecutter.
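For reference, a sketch of the coverage configuration in question, with the explicit flag (section name per coverage.py's setup.cfg convention; paths are illustrative):

```
# setup.cfg
[coverage:run]
omit =
    */tests/*

# pytest invocation pointing at it explicitly:
#   pytest -v --cov=mypackage --cov-config=setup.cfg mypackage/tests/
```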

Default Linux Travis py3.7 build for cookiecutter-derived repo fails

Example of cookiecutter-derived package travis build failing: https://travis-ci.org/MSchauperl/resppol/jobs/477670412

Official announcement seems to say: "Move to xenial dist" https://travis-ci.community/t/unable-to-download-python-3-7-archive-on-travis-ci/639/2?u=kacperduras

Same issue was found and solved here: https://github.com/mediascopegroup/light-rest-client/issues/2

I was able to implement the above solution for openforcefield by adding the sudo: required and dist: xenial keywords. openforcefield/openff-toolkit@47f74e8#diff-354f30a63fb0907d4ad57269548329e3
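The relevant fragment of that fix, for reference:

```
# .travis.yml fragment
sudo: required
dist: xenial
python:
  - "3.7"
```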

Stamp output with CMS-Cookiecutter version

We should stamp the output directory somewhere with the version of the CMS cookiecutter that was used to make it; either via a formal version system or the git hash. This may take some additional engineering.

Based on suggestion from @loriab
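A minimal sketch of what the stamp could look like, assuming it is written from the post_gen_project hook (the file name and version source are illustrative, not an existing scheme):

```python
# Hypothetical post_gen_project step: record which template version
# generated this project, so repos can later be checked for staleness
import json
from pathlib import Path

TEMPLATE_VERSION = "1.0"  # could be a release tag or a git hash

def stamp(directory="."):
    record = {
        "template": "molssi/cookiecutter-cms",
        "template_version": TEMPLATE_VERSION,
    }
    path = Path(directory) / ".cookiecutter-version.json"
    path.write_text(json.dumps(record, indent=2))
    return path

stamped = stamp()
```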

Revisiting conda caching

@robertodr hit on a simple yet effective way to cache conda envs. I normally push back against caching because of the chance of issues, which can be substantial and come with confusing errors, but this approach uses conda's native caching to handle it, which should reduce the chance of problems. Setting the timeout to something reasonably short, as shown below, will help during peak access on a repository. This is something we should try out to make sure it is robust before deploying here.

before_cache:
  - conda deactivate
  - conda remove --name test --all
cache:
  timeout: 1000
  directories:
    - $HOME/miniconda

Badges!

Since the cookiecutter sets up .travis.yml and codecov, it would be cool if it added the corresponding badges to the top of README.md, which would also serve as a reminder to enable those services so the badges work.
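For example, the generated README.md could open with badge lines like these (USER/REPO are placeholders to be filled from the cookiecutter answers):

```
[![Travis Build Status](https://travis-ci.org/USER/REPO.svg?branch=master)](https://travis-ci.org/USER/REPO)
[![codecov](https://codecov.io/gh/USER/REPO/branch/master/graph/badge.svg)](https://codecov.io/gh/USER/REPO)
```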

Automating package and data discovery

Hello :)

Is there any reason the template recommends manually specifying packages and subpackages instead of using setuptools.find_packages()?

Also, some people recommend against using package_data, suggesting include_package_data=True and MANIFEST.in as a cleaner replacement.

Would you consider changing the current behavior?
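The suggestion amounts to something like the sketch below (the kwargs are shown as a dict so the snippet runs standalone; exclude patterns are illustrative):

```python
# Let setuptools discover packages instead of maintaining a manual
# list, and defer data files to MANIFEST.in via include_package_data
from setuptools import find_packages

setup_kwargs = {
    "packages": find_packages(exclude=["*.tests"]),
    "include_package_data": True,  # ship files listed in MANIFEST.in
}
print(setup_kwargs["packages"])
```

These would be passed straight through to `setup(**setup_kwargs)` in setup.py.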

Drop support for python 2.7

I see the .travis.yml includes support for testing with python 2.7.

No new projects should include support for python 2.7, and since this cookiecutter is intended to be used for new projects, we should drop this branch.

Conda Codecov

Apparently codecov is on conda-forge and I missed it. Looking at the time stamps I think I started using codecov over two years ago and never checked again!

It would be good, however, to leave pip stubs commented out in the conda env file for demonstration.
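The environment file change amounts to something like this fragment (illustrative):

```
# devtools/conda-envs/test_env.yaml fragment
dependencies:
  - codecov          # now pulled from conda, no pip needed
  # pip stub left commented out for demonstration:
  # - pip:
  #   - codecov
```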

Update repository docs from migration

With the recent migration from choderalab/cookiecutter-compchem to molssi/cookiecutter-cms, several links are likely to have broken and the docs will need to be updated.

I'll work through this

Improve deployment documentation

#56 introduced changes to the Cookiecutter to move away from having the repo handle deployment tasks (i.e. conda-build). I tried to make some changes to the documentation on deployment and what we recommend, but we still need a better "here is how we recommend deploying AND here is where you can go to get help."

Module name hyphenation error is raised, even though module name doesn't contain hyphens

I'm trying to make a new repo where the repo name has hyphens, but not the module name. This is raising the following error:

(openforcefield) jwagner@MBP-S$ cookiecutter gh:molssi/cookiecutter-cms
You've downloaded /Users/jwagner/.cookiecutters/cookiecutter-cms before. Is it okay to delete and re-download it? [yes]: 
project_name [ProjectName]: NistDataSelection
repo_name [nistdataselection]: nist-data-selection    
first_module_name [nist-data-selection]: NistDataSelection
author_name [Your name (or your organization/company/team)]: Open Force Field Consortium
author_email [Your email (or your organization/company/team)]: [email protected]
description [A short description of the project.]: Records the tools and decisions used to select NIST data for curation.
Select open_source_license:
1 - MIT
2 - BSD-3-Clause
3 - LGPLv3
4 - Not Open Source
Choose from 1, 2, 3, 4 (1, 2, 3, 4) [1]: 
Select dependency_source:
1 - Prefer conda-forge over the default anaconda channel with pip fallback
2 - Prefer default anaconda channel with pip fallback
3 - Dependencies from pip only (no conda)
Choose from 1, 2, 3 (1, 2, 3) [1]: 
Select Include_Windows_continuous_integration:
1 - y
2 - n
Choose from 1, 2 (1, 2) [1]: 
nist-data-selection None
ERROR: "nist-data-selection" is not a valid Python module name!
ERROR: Stopping generation because pre_gen_project hook script didn't exit successfully
Hook script failed (exit status: 1)
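For context, the hook's validation amounts to a Python-identifier check like the sketch below (the real hook may differ). Note that the error above is raised for the hyphenated repo name even though the module name actually entered contains no hyphens, which suggests the check is being run against the wrong variable (repo_name instead of first_module_name):

```python
import re

# Minimal sketch of a module-name check like the pre_gen hook performs
def is_valid_module_name(name):
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name) is not None

print(is_valid_module_name("nist-data-selection"))  # False: hyphens are not allowed
print(is_valid_module_name("NistDataSelection"))    # True
```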
