A logical, reasonably standardized, but flexible project structure for doing and sharing data science work.
Requirements to use the cookiecutter template:
- Python 2.7 or 3.5
- Cookiecutter Python package >= 1.4.0: this can be installed with pip or conda, depending on how you manage your Python packages:
```bash
pip install cookiecutter
```

or

```bash
conda config --add channels conda-forge
conda install cookiecutter
```
To start a new project, run:

```bash
cookiecutter https://github.com/Sysvale/cookiecutter-data-science.git
```
The directory structure of your new project looks like this:
```
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third-party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docker-compose.yml <- The docker-compose file to manage environments as services.
├── docker
│   ├── dev.Dockerfile     <- Dockerfile for the project development environment container.
│   ├── jupyter.Dockerfile <- Dockerfile for the project Jupyter environment container.
│   └── prod.Dockerfile    <- Dockerfile for the project production environment container.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details.
│
├── models             <- Trained and serialized models, model predictions, or model
│                         summaries.
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for
│                         ordering), the creator's initials, and a short `-` delimited
│                         description, e.g. `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory
│                         materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting.
│
├── requirements_dev.txt <- The requirements file for reproducing the analysis environment
│                           and running development routines such as tests.
├── requirements.txt   <- The requirements file for reproducing the analysis environment,
│                         e.g. generated with `pip freeze > requirements.txt`.
│
├── {{cookiecutter.repo_name}} <- Source code for use in this project.
│   ├── __init__.py    <- Makes {{cookiecutter.repo_name}} a Python package.
│   │
│   ├── data           <- Scripts to download or generate data.
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling.
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions.
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results-oriented
│       │                 visualizations.
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see
                          tox.readthedocs.io
```
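To make the layout concrete, here is a minimal sketch of what `{{cookiecutter.repo_name}}/data/make_dataset.py` from the tree above could contain. The template only provides the file stub; the CSV cleaning step and the hard-coded paths below are illustrative assumptions, not part of the template.

```python
# A minimal sketch of {{cookiecutter.repo_name}}/data/make_dataset.py.
# The raw/processed paths follow the tree above; the "cleaning" step
# itself is a placeholder assumption.
from pathlib import Path

RAW_DIR = Path("data/raw")
PROCESSED_DIR = Path("data/processed")


def make_dataset() -> None:
    """Read every raw CSV, drop empty lines, and write it to data/processed."""
    PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
    for raw_file in RAW_DIR.glob("*.csv"):
        lines = raw_file.read_text().splitlines()
        cleaned = [line for line in lines if line.strip()]
        out_file = PROCESSED_DIR / raw_file.name
        out_file.write_text("\n".join(cleaned) + "\n")


if __name__ == "__main__":
    make_dataset()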
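```

Similarly, `models/train_model.py` usually fits an estimator and serializes it into the top-level `models/` directory. The scikit-learn classifier, the synthetic data, and the `model.joblib` file name in this sketch are assumptions, not something the template prescribes:

```python
# A minimal sketch of {{cookiecutter.repo_name}}/models/train_model.py.
# scikit-learn and joblib are illustrative choices, not requirements
# imposed by the template; swap in whatever your project actually uses.
from pathlib import Path

import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

MODEL_DIR = Path("models")  # top-level models/ directory from the tree above


def train_model() -> Path:
    """Fit a toy classifier and serialize it under models/."""
    # Stand-in for features built from data/processed by build_features.py.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    MODEL_DIR.mkdir(exist_ok=True)
    model_path = MODEL_DIR / "model.joblib"
    joblib.dump(model, model_path)
    return model_path


if __name__ == "__main__":
    print(f"Model saved to {train_model()}")
```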
We welcome contributions! See the docs for guidelines.

To set up your environment for development, install the requirements:

```bash
pip install -r requirements.txt
```

Then run the tests:

```bash
py.test tests
```
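A `tests` directory is not shown in the tree above; if you add one, a minimal pytest module picked up by `py.test tests` could look like the sketch below. The helper under test is defined inline so the example stays self-contained, and is purely hypothetical.

```python
# A minimal sketch of a test module, e.g. tests/test_data.py.
# tmp_path is pytest's built-in per-test temporary-directory fixture.
from pathlib import Path


def drop_empty_lines(text: str) -> str:
    """Toy stand-in for a cleaning step in make_dataset.py."""
    return "\n".join(line for line in text.splitlines() if line.strip())


def test_drop_empty_lines(tmp_path: Path) -> None:
    raw = tmp_path / "raw.csv"
    raw.write_text("a,b\n\n1,2\n")
    assert drop_empty_lines(raw.read_text()) == "a,b\n1,2"
```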