tensorflow / addons

Useful extra functionality for TensorFlow 2.x maintained by SIG-addons

License: Apache License 2.0

Python 84.36% Shell 0.65% C++ 8.94% Smarty 4.62% Starlark 1.02% Dockerfile 0.40%
deep-learning machine-learning neural-network python tensorflow tensorflow-addons

addons's Introduction


Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.

TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward compatible API for other languages.

Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, to enable GPU support, use a Docker container, and build from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

Other devices (DirectX and MacOS-metal) are supported using Device plugins.

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Patching guidelines

Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities:

  • Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch r2.8 for version 2.8.
  • Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
  • Run TensorFlow tests and ensure they pass.
  • Build the TensorFlow pip package from source.

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type | Status | Artifacts
Linux CPU | Status | PyPI
Linux GPU | Status | PyPI
Linux XLA | Status | TBA
macOS | Status | PyPI
Windows CPU | Status | PyPI
Windows GPU | Status | PyPI
Android | Status | Download
Raspberry Pi 0 and 1 | Status | Py3
Raspberry Pi 2 and 3 | Status | Py3
Libtensorflow MacOS CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.

Courses

License

Apache License 2.0

addons's People

Contributors

aakashkumarnain, aaronmondal, abhichou4, armando-fandango, ashutosh1919, autoih, bhack, chenmoneygithub, facaiy, failure-to-thrive, fsx950223, gabrieldemarmiesse, guillaumekln, howl-anderson, hyang0129, lc0, lgeiger, markdaoust, marload, mels630, nicolaspi, pkan2, qlzh727, rushabh-v, seanpmorgan, shun-lin, squadrick, ssaishruthi, susmit-a, windqaq


addons's Issues

Implement LazyAdamOptimizer

Per the RFC, we need to move LazyAdamOptimizer from contrib to addons:

This will involve inheriting from base Keras optimizer, modifying the code to match those APIs, and modifying test cases to run in TF2.
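For orientation, here is a hedged sketch of the shape this port could take: subclass the TF2 Keras Adam optimizer and override the sparse update so that only the rows touched by the gradient are updated. The slot names and the optimizer_v2 helpers (_decayed_lr, _get_hyper) reflect the Keras internals available at the time, and bias correction is omitted for brevity; this is illustrative, not the final addons code.

import tensorflow as tf


class LazyAdam(tf.keras.optimizers.Adam):
    """Sketch: Adam variant that updates moment slots only for the sparse
    indices present in the gradient (bias correction omitted for brevity)."""

    def _resource_apply_sparse(self, grad, var, indices):
        lr_t = tf.cast(self._decayed_lr(var.dtype), var.dtype)
        beta_1 = self._get_hyper("beta_1", var.dtype)
        beta_2 = self._get_hyper("beta_2", var.dtype)
        epsilon = tf.cast(self.epsilon, var.dtype)

        m = self.get_slot(var, "m")
        v = self.get_slot(var, "v")

        # Only gather/update the rows that appear in this sparse gradient.
        m_slice = beta_1 * tf.gather(m, indices) + (1 - beta_1) * grad
        v_slice = beta_2 * tf.gather(v, indices) + (1 - beta_2) * tf.square(grad)
        m.scatter_update(tf.IndexedSlices(m_slice, indices))
        v.scatter_update(tf.IndexedSlices(v_slice, indices))

        update = lr_t * m_slice / (tf.sqrt(v_slice) + epsilon)
        return var.scatter_sub(tf.IndexedSlices(update, indices))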

Request for function that randomly augments sets of images

Several operations for image data augmentation have been implemented in TensorFlow; however, most of these work on single examples.

As such, to apply the same rotation to an input image and a set of different-sized masks (as in a feature pyramid network for region proposal, for example), you have to go through a process of conditionals (through tf.cond).

I have tried to do this to no avail (see https://stackoverflow.com/questions/51951039/different-results-with-image-flipping-in-tensorflow) and was wondering whether any solution had been implemented for this; if not, this is a request for a function for data augmentation on sets of images.
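For illustration, a minimal sketch of the workaround being asked about, using only core TF ops (flips and 90-degree rotations; arbitrary-angle rotation would additionally need the contrib/addons rotate op): draw each random parameter once and apply the same transform to the image and to every mask.

import tensorflow as tf


def augment_together(image, masks):
    """Sketch: image is [h, w, c]; masks is a list of [h_i, w_i, c_i] tensors.
    The same random flip/rotation is applied to all of them."""
    do_flip = tf.random.uniform([]) > 0.5               # one draw shared by every tensor
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)     # number of 90-degree rotations

    def transform(t):
        t = tf.cond(do_flip, lambda: tf.image.flip_left_right(t), lambda: t)
        return tf.image.rot90(t, k=k)

    return transform(image), [transform(m) for m in masks]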

ImageProjectiveTransform should be named ImageProjectiveTransformV2

Describe the bug

  • It looks like ImageProjectiveTransform is based on ImageProjectiveTransformV2 present in tf.contrib in TF-1.x.

Describe the expected behavior

  • Op names should not collide between 1.x and 2.x environments or saved models won't be shareable across them.

Implement DenseToSparse

Per the RFC, we need to move dense_to_sparse from contrib to addons:

This will involve restructuring in an OO format by inheriting from base Keras Layer, modifying the code to match those APIs, and modifying test cases to run in TF2.

Discussion: Does it make more sense to consider this a low level nn block instead of a Layer? Or just an array op?
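Whichever form it ends up taking, a minimal hedged sketch of the Layer variant might look like this (the class name is an assumption, and the ignore_value handling simply mirrors the contrib function):

import tensorflow as tf


class DenseToSparse(tf.keras.layers.Layer):
    """Sketch: converts a dense tensor into a tf.SparseTensor, treating
    ignore_value entries as implicit zeros."""

    def __init__(self, ignore_value=0.0, **kwargs):
        super().__init__(**kwargs)
        self.ignore_value = ignore_value

    def call(self, inputs):
        indices = tf.where(tf.not_equal(inputs, self.ignore_value))
        values = tf.gather_nd(inputs, indices)
        return tf.SparseTensor(indices, values, tf.shape(inputs, out_type=tf.int64))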

Feature request: add parametric ELU (PELU) activation function

Proposal

The exponential linear unit (ELU) is already in TensorFlow as tf.nn.elu, which is great. The new parametric version (called PELU) shows very promising experimental results, so I wonder if it could be added to TensorFlow too in order to encourage more widespread experimentation with it by the deep learning community. One problem with it, though, is that it's stateful (it introduces trainable tf.Variables), meaning it's not clear to me where in TensorFlow it fits.

Implementation

Here's an implementation of the PELU that I've been using lately (I'm assuming batch_size is the first dimension in x):

import tensorflow as tf  # note: uses TF1-style variable_scope / get_variable APIs

def pelu(x):
  """Parametric Exponential Linear Unit (https://arxiv.org/abs/1605.09332v1)."""
  with tf.variable_scope(x.op.name + '_activation', initializer=tf.constant_initializer(1.0)):
    shape = x.get_shape().as_list()[1:]  # one alpha/beta per feature, shared across the batch
    alpha = tf.get_variable('alpha', shape)
    beta = tf.get_variable('beta', shape)
    positive = tf.nn.relu(x) * alpha / (beta + 1e-9)
    negative = alpha * (tf.exp((-tf.nn.relu(-x)) / (beta + 1e-9)) - 1)
    return negative + positive
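For the TF2/addons setting, a hedged sketch of the same activation as a Keras layer (the class name and per-feature parameter shape are assumptions, not a proposed API; it assumes the feature dimensions are statically known):

import tensorflow as tf


class PELU(tf.keras.layers.Layer):
    """Sketch: f(x) = alpha/beta * x for x >= 0, alpha * (exp(x/beta) - 1) for x < 0."""

    def build(self, input_shape):
        param_shape = input_shape[1:]  # one alpha/beta per feature, shared across the batch
        self.alpha = self.add_weight("alpha", shape=param_shape,
                                     initializer=tf.keras.initializers.Constant(1.0))
        self.beta = self.add_weight("beta", shape=param_shape,
                                    initializer=tf.keras.initializers.Constant(1.0))
        super().build(input_shape)

    def call(self, x):
        eps = 1e-9
        positive = tf.nn.relu(x) * self.alpha / (self.beta + eps)
        negative = self.alpha * (tf.exp(-tf.nn.relu(-x) / (self.beta + eps)) - 1.0)
        return positive + negative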

Reference

https://arxiv.org/abs/1605.09332v1

Resize image for nd

Is there functionality to resize images for n-d where n > 2? It seems TensorFlow currently only supports resizing 2D images (4D tensors). Could you add this functionality?
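Until such an op exists, one workaround is to compose 2D resizes axis-pair by axis-pair. A rough sketch for a 5D [batch, depth, height, width, channels] volume (not an official TF API, and only bilinear-within-plane rather than true trilinear interpolation):

import tensorflow as tf


def resize_volume(volume, target_depth, target_height, target_width):
    """Sketch: resize a [b, d, h, w, c] volume by composing two 2D resizes."""
    shape = tf.shape(volume)
    b, d, c = shape[0], shape[1], shape[4]

    # 1) Resize height/width: fold depth into the batch dimension.
    x = tf.reshape(volume, [b * d, shape[2], shape[3], c])
    x = tf.image.resize(x, [target_height, target_width])
    x = tf.reshape(x, [b, d, target_height, target_width, c])

    # 2) Resize depth: move it next to width and resize again (width is unchanged).
    x = tf.transpose(x, [0, 2, 1, 3, 4])                  # [b, h', d, w', c]
    x = tf.reshape(x, [b * target_height, d, target_width, c])
    x = tf.image.resize(x, [target_depth, target_width])
    x = tf.reshape(x, [b, target_height, target_depth, target_width, c])
    return tf.transpose(x, [0, 2, 1, 3, 4])               # [b, d', h', w', c]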

[Bug] skip_gram_sample assert rank is 1

While converting the tests for skip_gram_ops, I noticed a test that was failing in TF2 eager execution, but I'm not sure why.

This test:

with self.assertRaises(ValueError):
    invalid_tensor = constant_op.constant([[b"the"], [b"quick"], [b"brown"]])
    text.skip_gram_sample(invalid_tensor)

It should raise an error because of this check in the op definition. When decorating with @test_util.run_deprecated_v1 the test correctly passes (the error is raised), but otherwise it fails to raise an error.

Is this related to inspecting the rank of EagerTensors? Or is something else at play?
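One hedged guess: the shape check only runs during graph construction, so an eager call may surface a different error type (or none at op-build time). A python-level guard like the sketch below raises ValueError in both modes, regardless of where the op-level check fires:

import tensorflow as tf


def _check_rank_1(input_tensor):
    """Sketch: explicit rank check that behaves the same in eager and graph mode."""
    rank = input_tensor.shape.ndims  # static rank; available for EagerTensors too
    if rank is not None and rank != 1:
        raise ValueError("input_tensor must have rank 1, got rank %d" % rank)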

Automate Build Process

Currently we have no automated process for building Addons across python versions and operating systems. Going forward we'll want this process to be automated, but it may be challenging for us to start builds without access to the Google internal tooling.

We could conceivably use Travis... but if we can keep consistent CI that would be ideal.

Feature Request : PathNorm and PathSGD

PathSGD was introduced in this paper. Is there existing support for this? If not, this is a feature request for:

  • PathNorm computation (Equation 5 in the paper)
  • PathSGD using the PathNorm

For the first part, the interface can be to provide a function path_norm(a, b, p=2) where a and b are tensors, p is a scalar. The function returns the p-PathNorm for the "path" between the tensors a and b (assuming that b depends on a and some weights. If not, there can be an exception or simply return 0).

Package Addons for python3.7

Using the current docker image, we're unable to install / build TensorFlow in python3.7 with the following error:

ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /root/miniconda2/envs/py37/lib/python3.7/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

It seems like we need to compile using Ubuntu 16.04 libs, which will require a bump in the custom-op docker image. @yifeif would it be possible to bump the base image in the container? If so we can rebuild our release as 0.1.1 and include py37.

Alternatively, we could create a new docker image for builds.

[Discussion] When to utilize tf.function

Are there any best practices for when to decorate functions with tf.function? The TF2 documentation has a nice example where a Keras Model's call method is decorated.

@karmel @martinwicke Should we use this extensively in our repo anytime there is a method without python side-effects? Is it safer to only utilize it when there are clear performance gains from exploiting parallelism or other optimizations? The latter seems like a vague criterion without specifying what patterns to look for.
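For reference, a minimal sketch of the pattern mentioned above (a toy model; whether and where to add the decorator is exactly what's being discussed):

import tensorflow as tf


class MyModel(tf.keras.Model):
    """Toy model illustrating the tf.function-on-call pattern."""

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)

    @tf.function  # traces call() into a graph; drop it if eager debugging matters more
    def call(self, inputs):
        return self.dense(inputs)


model = MyModel()
print(model(tf.random.normal([4, 8])).shape)  # (4, 10)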

Packaged Addons don't run on tf-nightly-2.0

While packaging 0.2.0 release of addons, I realized that if the pip package was built against tensorflow==2.0.0-alpha0 the installed package would not correctly work on the tf2-nightly install.

The error is:

NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_addons/custom_ops/image/python/_distort_image_ops.so: undefined symbol: _ZN10tensorflow12OpDefBuilder10SetShapeFnEPFNS_6StatusEPNS_15shape_inference16InferenceContextEE

Digging in further I thought that something may have gone awry in the build for this version, but when I checked a previous version (0.1.1) I noticed it also failed to load on the current nightly. Typically I've seen this type of error when using gcc>=5, but the suggested solution is to set -D_GLIBCXX_USE_CXX11_ABI=0, which we already do.

Some things to note in the investigation:

  • 0.1.1 and 0.1.0 were built against the tf2-nightly at the time of packaging and did import fine with that package at the time (it also works on tf2-alpha).
  • If I build 0.2.0 against the current nightly then it does successfully import against tf2-nightly (this is why our nightly builds succeed). However, it'll then fail to import against tf2-alpha.

Has there been a recent change in how tf2-nightly is packaged compared to tf2-alpha? Going forward we can mention that addons should be used with tf2-alpha, and just use nightly for testing... but there seems to be an underlying issue that needs to be fixed.

Colab Notebooks:

Addons-0.1.1 + Alpha
Addons-0.1.1 + Nightly

Addons-0.2.0 + Alpha
Addons-0.2.0 + Nightly

cc @gunan @yifeif

Feature Request : Stochastic Depth

Stochastic Depth (aka layer dropout) has been shown to speed up and improve training in ResNets, as well as overall accuracy on test sets. Essentially, at every training step a random subset of residual layers is entirely removed from the network, and training proceeds on the remaining layers. Direct connections are made in place of the missing layers.

It is described in this paper: https://arxiv.org/pdf/1603.09382.pdf. (Deep Networks with Stochastic Depth by Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger)

I can't think of a way to implement this with the python API without reconstructing the model every training iteration, and I'm not familiar enough with the C++ API / cuDNN to try to write the op myself.

Of course I'm willing to try any python-only suggestions.
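For example, one rough python-only direction I could imagine (a sketch assuming Keras-style residual blocks, gating the residual branch rather than physically removing layers):

import tensorflow as tf


class StochasticDepthResidual(tf.keras.layers.Layer):
    """Sketch: randomly skips the residual branch during training."""

    def __init__(self, branch, survival_prob=0.8, **kwargs):
        super().__init__(**kwargs)
        self.branch = branch              # any layer/model mapping x -> f(x), same output shape as x
        self.survival_prob = survival_prob

    def call(self, x, training=False):
        if not training:
            # At test time, scale the branch by its survival probability (as in the paper).
            return x + self.survival_prob * self.branch(x)
        survive = tf.random.uniform([]) < self.survival_prob
        return tf.cond(survive, lambda: x + self.branch(x), lambda: x)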

Thanks in advance,

Alex

Remove numpy implementation in test case?

I find that in most test cases of Addons / tf.contrib, we tend to write a duplicate numpy function to verify the correctness of our implementations. It's straightforward, but the drawbacks are:

  1. both the tf and numpy implementations could be incorrect;
  2. the numpy version might have a bug, or one might be introduced in the future;
  3. the test cases are difficult to maintain and may be too complex to understand.

Hence I propose to remove all numpy implementations and replace them with explicit numerical values in test cases (adding comments where necessary to describe how the values were derived, e.g. intermediate results of the calculation).

# No
def np_add(a, b):
    return a + b

self.assertEqual(np_add(1, 2), tf_add(1, 2)) 

# Yes
# Add some comments if necessary:
# c = a + b = 1 + 2 = 3
self.assertEqual(3, tf_add(1, 2))

cc @seanpmorgan @karmel

Implement External Optimizers

Per the RFC, we need to move external optimizers from contrib to addons:

This will involve inheriting from base Keras optimizer, modifying the code to match those APIs, and modifying test cases to run in TF2.

  • ExternalOptimizerInterface
  • ScipyOptimizerInterface

Add contribution guideline for moving from tf.contrib

@dynamicwebpaige wisely suggested in #38 that we put together a guideline on moving from tf.contrib to addons. A majority of our code is currently ported from tf.contrib and there are many open issues for bringing over the rest of our addons RFC.

Some things to include in the guideline:

  • TF2.0 symbols mapping
  • Guidelines on when to use tf.function
  • Common code changes (remove variable_scope, etc.)
  • Suggestions on adding additional test cases

I imagine we would hold this document in the repo for a substantial amount of time, but ultimately we will want to remove it as we grow away from our original codebase of ported tf.contrib code.

Implement Image functions

Per the RFC, we will be moving several of the image modules from contrib to addons:

  • dense_image_warp
  • random_hsv_in_yiq
  • adjust_hsv_in_yiq
  • rotate
  • translate
  • angle_to_projective_transforms
  • translations_to_projective_transforms
  • transform
  • compose_transforms
  • flat_transforms_to_matrices
  • connected_components
  • sparse_image_warp

This will involve moving the code base and modifying test cases to run in TF2.
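A hedged usage sketch of what the migrated API could look like under a tensorflow_addons.image namespace (module path and signatures are assumptions until the port lands):

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa  # assumed package/import name

images = tf.random.uniform([1, 64, 64, 3])             # [batch, height, width, channels]
rotated = tfa.image.rotate(images, angles=np.pi / 8)   # angle in radians
translated = tfa.image.translate(images, translations=[4, -2])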

Create Issue Templates

It would be beneficial to the repo's maintainability if we had some templates for common issues. Possible templates:

  • Issue to request new addition to addons (option for migration from tf.contrib)
  • Bug report

Do you accept tf.keras.callbacks?

Seeing as Keras is going to become a first-class citizen inside of TensorFlow, are you also accepting submissions for callbacks? Are there guidelines for this (writing tests/docs)?

Build out contributing guidelines

In an effort to encourage involvement and standardize contributions, we should make a concerted effort to build thorough docs on how to contribute for each sub-package (layers, crf, etc.).

For example:
Building from the #19 discussion, we should create standards for adding custom Keras Layers. Some things to include:

Rename LazyAdamOptimizer to LazyAdam

All core optimizers in TensorFlow 2.0 dropped the "Optimizer" suffix: "Adam", "Adadelta", etc. (see the module overview). For consistency, optimizers defined in tensorflow/addons should probably follow the same naming convention.

So I propose to rename LazyAdamOptimizer to LazyAdam.

Entity could not be transformed

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.0.0-dev20190317 (gpu nightly)
  • TensorFlow Addons installed from (source, PyPi): source
  • TensorFlow Addons version: master
  • Python version and type (e.g. Anaconda Python, stock Python as on Mac, or Homebrew-installed Python, etc.): anaconda (conda-forge) python 3.7.1
  • Bazel version (if compiling from source): 0.20.0
  • GCC/Compiler version (if compiling from source): 7.3.0
  • Is GPU used? (yes/no): yes
  • GPU model (if used): RTX 2070

I'm seeing the following warning appear when trying to use the transform op. The simplest way to reproduce is to simply run the tests via python:

python ~/src/addons/tensorflow_addons/image/transform_test.py
[...]
W0317 18:03:00.753756 139906711668544 tf_logging.py:161] Entity <function image_projective_transform_v2 at 0x7f3e73a05a60> could not be transformed and will be staged without change. Error details can be found in the logs when running with the env variable AUTOGRAPH_VERBOSITY >= 1. Please report this to the AutoGraph team. Cause: Unexpected error transforming <function image_projective_transform_v2 at 0x7f3e73a05a60>. If you believe this is due to a bug, please set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output when filing the bug report. Caused by: Unable to locate the source code of <function image_projective_transform_v2 at 0x7f3e73a05a60>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code

export AUTOGRAPH_VERBOSITY=10
python ~/src/addons/tensorflow_addons/image/transform_test.py
[...]
I0317 18:11:38.742995 140338201401152 ag_logging.py:132] Error transforming <function image_projective_transform_v2 at 0x7fa2ea6b2a60>
Traceback (most recent call last):
  File "~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/autograph/pyct/parser.py", line 51, in parse_entity
    source = tf_inspect.getsource_no_unwrap(entity)
  File "~/miniconda3/envs/tf2/lib/python3.7/site-packages/tensorflow/python/util/tf_inspect.py", line 408, in getsource_no_unwrap
    lines, lnum = _inspect.findsource(obj)
  File "~/miniconda3/envs/tf2/lib/python3.7/inspect.py", line 786, in findsource
    raise OSError('could not get source code')
OSError: could not get source code

Is this expected? I expect this is likely an artifact of a broken source build on my end, but I imagine others may experience similar issues until the CI/CD system is building nightly versions for all platforms and python versions.

Release first version of tensorflow-addons

As we discussed in the February monthly meeting... we are targeting our first release by early next week. I wanted to get this issue up so we can discuss versioning and outline what else needs to get done.

Versioning

I think there are two workable methods here:

  1. Release on our own cadence and mention in the release which TF version it was tested against.
    • This is how it's done for tensorflow_probability
    • We could monitor our nightly tests and catch things like #37 which require us to rebuild
  2. Release in step with core TensorFlow
    • This is how it's done for tensorboard.
    • This is more intuitive for the end user IMO
    • Does require more of a commitment from the SIG

Remaining blockers

  • Some examples (use tensorflow/examples as template)
  • Decide on version naming

Build out examples with working colab notebooks

Would be nice to mirror the TF2 doc style of colab notebooks being first-class tutorials.

I think it's also a good idea to require colab notebooks for new contributions in order to showcase their benefits and best practices. Certainly won't be possible/necessary for everything.

Add new API type: Keras Callback

Per #58 and our monthly discussion in March, Keras Callbacks seem like they are within scope for TF Addons. This new addition will require:

  • Adding a new directory in tensorflow_addons which will hold keras callbacks
  • Some initial contribution guidelines for adding new callbacks
    • e.g. sub-class the abstract base class
  • Addition to the project's central BUILD file
  • Updating the README/CONTRIBUTING docs

This will ideally include a new callback as an example, but not necessarily.
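For orientation, a minimal sketch of what a contributed callback could look like once the directory exists (the class here is purely illustrative, not a proposed addition):

import time

import tensorflow as tf


class EpochTimer(tf.keras.callbacks.Callback):
    """Illustrative callback: logs how long each epoch takes."""

    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        print("epoch %d took %.2fs" % (epoch, time.time() - self._start))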

Implement Weight Decay Optimizers

Per the RFC, we need to move Weight Decay Optimizers from contrib to addons:

  • DecoupledWeightDecayExtension
  • AdamWOptimizer
  • MomentumWOptimizer

This will involve inheriting from base Keras optimizer, modifying the code to match those APIs, and modifying test cases to run in TF2.
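For context, a hedged sketch of the decoupled weight decay idea these wrappers implement (the decay is applied to the weights directly, separate from the gradient of the loss); this is illustrative hand-rolled code, not the addons API:

import tensorflow as tf

weight_decay = 1e-4
optimizer = tf.keras.optimizers.Adam(1e-3)


def train_step(model, x, y, loss_fn):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    # Decoupled decay: shrink the weights after the gradient step, independent of the loss.
    for var in model.trainable_variables:
        var.assign_sub(weight_decay * var)
    return loss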

Remove use of internal tensorflow API

The current implementations use the internal tensorflow.python interface. I know this was how it was done in tf.contrib. However, since addons is decoupled from the main tensorflow module rather than an integrated part of it, using that internal API seems highly wrong.

Implement Metric Losses

Per the RFC, we need to move metric_losses from contrib to addons:

  • pairwise_distance
  • contrastive_loss
  • triplet_semihard_loss
  • npairs_loss
  • npairs_loss_multilabel
  • lifted_struct_loss

This will involve inheriting from base Keras Loss, modifying the code to match those APIs, and modifying test cases to run in TF2.
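A hedged usage sketch of how one of these might look once wrapped as a Keras loss (the tfa.losses name is an assumption pending the migration):

import tensorflow as tf
import tensorflow_addons as tfa  # assumed package/import name

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(16),                                      # embedding output
    tf.keras.layers.Lambda(lambda e: tf.math.l2_normalize(e, axis=1)),
])
model.compile(optimizer="adam", loss=tfa.losses.TripletSemiHardLoss())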
