
Adelta

Adelta is a domain-specific language for automatically differentiating programs with discontinuities. It is a research artifact for the SIGGRAPH 2022 paper "Aδ: Autodiff for Discontinuous Programs – Applied to Shaders".

Package dependencies

The source code is developed and tested under Python 3.6, Halide 10.0, TensorFlow 2.6.2, and PyTorch 1.10.2. A full list of the Python environment can be found in environment.yml.

Basic Introduction and Tutorial

If you would like a gentler introduction to automatic differentiation for discontinuous programs, these two Medium articles introduce the basic math and the DSL of the Adelta framework:

The tutorials below introduce how to author shader programs and differentiate/optimize them under the Adelta framework:

  • Part 1: Differentiating a Simple Shader Program
  • Part 2: Raymarching Primitive
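
To see why discontinuous shaders need special treatment in the first place, here is a minimal sketch in plain Python (not Adelta code; the "shader" and names are illustrative only). A program that renders a step edge is piecewise constant in its parameter, so the pointwise derivative that standard autodiff computes is zero almost everywhere, while a wide finite difference does detect the discontinuity:

```python
# Minimal illustration (plain Python, not Adelta code) of why standard
# autodiff fails on discontinuous programs: a 1D "shader" that renders a
# step edge at position theta.

def render(theta, n=100):
    # Pixel i is white (1.0) left of the edge, black (0.0) right of it.
    return [1.0 if (i + 0.5) / n < theta else 0.0 for i in range(n)]

def loss(theta, target):
    img = render(theta, len(target))
    return sum((a - b) ** 2 for a, b in zip(img, target))

target = render(0.5)          # ground-truth edge at 0.5
theta = 0.3

# Pointwise (autodiff-style) derivative: render() is piecewise constant
# in theta, so d(loss)/d(theta) is 0 almost everywhere -- no signal.
h = 1e-9
autodiff_like = (loss(theta + h, target) - loss(theta, target)) / h

# A finite difference with a large step does cross pixel boundaries and
# recovers a useful (if biased) gradient estimate.
H = 0.05
fd = (loss(theta + H, target) - loss(theta - H, target)) / (2 * H)

print(autodiff_like)  # 0.0 -- the discontinuity is invisible
print(fd)             # negative: moving theta toward 0.5 lowers the loss
```

Aδ's contribution is computing a correct gradient for such programs without resorting to the large, biased step size this sketch relies on.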

Reproducing figures and tables in the paper

Download the optimization results for our method and the baselines from the following Google Drive link:

https://drive.google.com/file/d/1Q8K2QhtlFheiyFfQN4Zh1Nk-R2AUkdNO/view?usp=sharing

In the Adelta directory, run

python generate_result.py <optimization_path> generate

Figures and tables will be generated at optimization_path/result.

Using our compiler

A note on backends

We provide three different backends for generating the gradient program: Halide, TensorFlow, and PyTorch.

Our default and most efficient backend is Halide. It also supports comparison with the finite difference (FD) and SPSA baselines. To use it, install Halide from here, then use the following commands to generate pre-compiled kernels.

mkdir hl_tmp
python script_compile_halide.py <halide_install_path> hl_tmp
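
The FD and SPSA baselines supported by this backend estimate gradients purely from function evaluations. As a rough illustration of the two schemes (plain Python with a toy quadratic loss; not the Adelta implementation, which evaluates the compiled shader):

```python
import random

# Sketch of the two derivative-free baselines: central finite differences
# (FD) perturb one parameter at a time; SPSA perturbs all parameters at
# once along a random +/-1 direction and averages over a few samples.
# The loss f and parameter vectors here are placeholders, not Adelta
# internals.

def fd_gradient(f, params, h=1e-4):
    grad = []
    for i in range(len(params)):
        up = params[:]; up[i] += h
        dn = params[:]; dn[i] -= h
        grad.append((f(up) - f(dn)) / (2 * h))
    return grad

def spsa_gradient(f, params, h=1e-4, samples=1, rng=random):
    grad = [0.0] * len(params)
    for _ in range(samples):
        delta = [rng.choice((-1.0, 1.0)) for _ in params]
        up = [p + h * d for p, d in zip(params, delta)]
        dn = [p - h * d for p, d in zip(params, delta)]
        df = (f(up) - f(dn)) / (2 * h)
        for i, d in enumerate(delta):
            grad[i] += df / d / samples
    return grad

# Toy loss: FD costs 2 * len(params) evaluations per gradient, SPSA
# costs 2 * samples regardless of the parameter count.
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
print(fd_gradient(f, [0.0, 0.0]))  # approximately [-2.0, 4.0]
```

This cost difference is why SPSA is attractive for shaders with many parameters, at the price of a noisier estimate.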

Both the TensorFlow and PyTorch backends are used for our gradient only (not for FD and SPSA). We additionally include the 1D and 3D examples for these two backends. Note that the TensorFlow backend allows eager execution under "visualize_gradient" mode, but resorts to the legacy 1.x-style compute-graph implementation under "optimization" mode.

Reproducing experiments in the paper

To reproduce our results, first set up the Halide backend. Because we compare with Teg and diffvg, these two libraries need to be installed as well. Run all the experiments using the following command.

python generate_result.py <optimization_path> collect

Exploring shader examples

apps/run_shader.py provides an interface for exploring our shader examples. It can be used to visualize the gradient of each shader and to run optimization tasks on these shaders in all three backends: Halide, TensorFlow, and PyTorch. For example, to visualize the gradient of a rectangle shader using the PyTorch backend:

cd apps
python run_shader.py render_test_finite_diff_rectangle_2step.py --dir <dir_rectangle> --backend torch --mode visualize_gradient

Because our compiler writes the compiled program to the save directory with a fixed naming scheme, different shaders should specify different --dir arguments. Argument details for run_shader.py are listed below.

usage: run_shader.py [-h] [--dir DIR] [--mode {visualize_gradient,optimization}] [--backend {hl,tf,torch}]
                     [--gradient_method {ours,fd,spsa}] [--finite_diff_h FINITE_DIFF_H] [--spsa_samples SPSA_SAMPLES]
                     [--use_random] [--use_autoscheduler]
                     shader_file

run shader

positional arguments:
  shader_file           shader file to run

optional arguments:
  --backend {hl,tf,torch}
                        specifies backend
  --dir DIR             directory to save result
  --finite_diff_h FINITE_DIFF_H
                        step size for finite diff
  --gradient_method {ours,fd,spsa}
                        specifies what gradient approximation to use
  --mode {visualize_gradient,optimization}
                        what mode to run the shader, empty means execute all the command written at the beginning of the
                        shader file
  --spsa_samples SPSA_SAMPLES
                        number of samples for spsa
  --use_autoscheduler   if in hl backend, use autoscheduler instead of naive schedule
  --use_random          if running optimization cmd, apply random variables to parameters
  -h, --help            show this help message and exit
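
For readers who want to script against or extend this interface, the usage text above corresponds roughly to the following argparse setup (a sketch for reference only; the real run_shader.py defines its own parser, and any defaults are assumptions):

```python
import argparse

# Parser reconstructed from the usage text above; not the actual source.
parser = argparse.ArgumentParser(prog='run_shader.py', description='run shader')
parser.add_argument('shader_file', help='shader file to run')
parser.add_argument('--dir', help='directory to save result')
parser.add_argument('--mode', choices=['visualize_gradient', 'optimization'],
                    help='what mode to run the shader')
parser.add_argument('--backend', choices=['hl', 'tf', 'torch'],
                    help='specifies backend')
parser.add_argument('--gradient_method', choices=['ours', 'fd', 'spsa'],
                    help='specifies what gradient approximation to use')
parser.add_argument('--finite_diff_h', type=float,
                    help='step size for finite diff')
parser.add_argument('--spsa_samples', type=int,
                    help='number of samples for spsa')
parser.add_argument('--use_random', action='store_true',
                    help='if running optimization cmd, apply random variables')
parser.add_argument('--use_autoscheduler', action='store_true',
                    help='if in hl backend, use autoscheduler')

# Example invocation mirroring the rectangle command earlier in this README.
args = parser.parse_args(['render_test_finite_diff_rectangle_2step.py',
                          '--dir', 'out', '--backend', 'torch',
                          '--mode', 'visualize_gradient'])
print(args.backend, args.mode)  # torch visualize_gradient
```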

1D and 3D examples

We additionally provide toy examples for a 1D pulse and a 3D sphere. The 1D and 3D shaders can only be compiled to the TensorFlow and PyTorch backends. To run a simple optimization task on these examples, run

cd apps
python run_shader.py render_test_finite_diff_1D_pulse.py --dir <dir_1D_pulse> --backend torch --mode optimization
python run_shader.py render_test_finite_diff_3D_sphere.py --dir <dir_3D_sphere> --backend torch --mode optimization
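
The flavor of the 1D pulse task can be sketched in plain Python as fitting the two edges of a step pulse to a target by gradient descent. This sketch uses a wide-stencil finite difference so that plain descent works at all; it is not the Adelta pipeline, which instead compiles a correct gradient for the discontinuous program:

```python
# Toy 1D pulse fitting (plain Python; not the Adelta/TensorFlow/PyTorch
# pipeline): optimize the left and right edge of a step pulse.

def pulse(left, right, n=200):
    return [1.0 if left < (i + 0.5) / n < right else 0.0 for i in range(n)]

def loss(params, target):
    img = pulse(params[0], params[1], len(target))
    return sum((a - b) ** 2 for a, b in zip(img, target))

target = pulse(0.4, 0.7)      # ground-truth pulse on (0.4, 0.7)
params = [0.2, 0.9]           # initial guess
h, lr = 0.02, 0.0005          # wide FD stencil, learning rate
for _ in range(50):
    grad = []
    for i in range(2):
        up = params[:]; up[i] += h
        dn = params[:]; dn[i] -= h
        grad.append((loss(up, target) - loss(dn, target)) / (2 * h))
    params = [p - lr * g for p, g in zip(params, grad)]

print([round(p, 2) for p in params])  # approaches the target [0.4, 0.7]
```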

License

The research materials presented in this repository are licensed for non-commercial use under the Adobe Research License Terms for Redistributable Adobe Materials. Please see the license for the full legal text.

Citation and Bibtex

Yuting Yang, Connelly Barnes, Andrew Adams, and Adam Finkelstein. "Aδ: Autodiff for Discontinuous Programs – Applied to Shaders." ACM SIGGRAPH, to appear, August 2022.

@inproceedings{Yang:2022:AAF,
  author    = "Yuting Yang and Connelly Barnes and Andrew Adams and Adam Finkelstein",
  title     = "A$\delta$: Autodiff for Discontinuous Programs -- Applied to Shaders",
  booktitle = "ACM SIGGRAPH, to appear",
  year      = "2022",
  month     = aug
}
