unifyai / ivy

Convert ML Code Between Frameworks

Home Page: https://ivy.dev

License: Other

python machine-learning deep-learning neural-network gpu autograd ivy abstraction template tensorflow pytorch mxnet numpy jax

ivy's Introduction




Convert ML Models Between Frameworks

Ivy is an open-source machine learning framework that enables you to:

  • Use your ML models and/or functions in any framework by converting code from one framework to another using ivy.transpile
  • Convert entire ML models and libraries between frameworks by generating identical source code in any framework using ivy.source_to_source (currently in private beta)

Installing ivy

The easiest way to set up Ivy is to install it using pip:

pip install ivy
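
To verify the installation, a quick sanity check from the command line (assuming pip installed the package into your active environment) is:

python -c "import ivy; print(ivy.__version__)"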
Docker Images

Maintaining installations of multiple frameworks in a single environment can be challenging, so if you want to test Ivy with several frameworks at once, you can use our Docker images for a seamless experience. You can pull the images with:

docker pull transpileai/ivy:latest      # CPU
docker pull transpileai/ivy:latest-gpu  # GPU
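
As a minimal sketch of how you might use the image (the volume mount and the /ivy path are illustrative choices, not something the image requires), you can mount your project and open an interactive Python session inside the container:

docker run -it --rm -v "$(pwd)":/ivy transpileai/ivy:latest python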
From Source

You can also install Ivy from source if you want to take advantage of the latest changes, but we can't ensure everything will work as expected 😅

git clone https://github.com/ivy-llc/ivy.git
cd ivy
pip install --user -e .

If you want to set up testing and the various frameworks, it's best to check out the Setting Up page, which provides OS-specific and IDE-specific instructions along with video tutorials!


Getting started

Ivy's transpiler allows you to convert code between different ML frameworks. Have a look at our Quickstart notebook to get a brief idea of the features!

Beyond that, based on the frameworks you want to convert code between, there are a few more examples further down this page 👇 which contain a number of models and libraries transpiled between PyTorch, JAX, TensorFlow and NumPy.


Using ivy

After installing ivy, you can start using it straight away, for example:

Transpiling any code from one framework to another
import ivy
import torch
import jax

def jax_fn(x):
    a = jax.numpy.dot(x, x)
    b = jax.numpy.mean(x)
    return x * a + b

jax_x = jax.numpy.array([1., 2., 3.])
torch_x = torch.tensor([1., 2., 3.])
torch_fn = ivy.transpile(jax_fn, source="jax", to="torch", args=(jax_x,))
ret = torch_fn(torch_x)
Running your code with any backend
import ivy
import torch
import jax

ivy.set_backend("jax")

x = jax.numpy.array([1, 2, 3])
y = jax.numpy.array([3, 2, 1])
z = ivy.add(x, y)

ivy.set_backend("torch")

x = torch.tensor([1, 2, 3])
y = torch.tensor([3, 2, 1])
z = ivy.add(x, y)


The Examples page features a wide range of demos and tutorials showcasing the functionalities of Ivy along with multiple use cases, but feel free to check out some shorter framework-specific examples here ⬇️

I'm using PyTorch 
You can use Ivy to get PyTorch code from:
Any model
From TensorFlow
import ivy
import torch
import tensorflow as tf

# Get a pretrained keras model
eff_encoder = tf.keras.applications.efficientnet_v2.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)

# Transpile it into a torch.nn.Module with the corresponding parameters
noise = tf.random.normal(shape=(1, 224, 224, 3))
torch_eff_encoder = ivy.transpile(eff_encoder, source="tensorflow", to="torch", args=(noise,))

# Build a classifier using the transpiled encoder
class Classifier(torch.nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.encoder = torch_eff_encoder
        self.fc = torch.nn.Linear(1280, num_classes)

    def forward(self, x):
        x = self.encoder(x)
        return self.fc(x)

# Initialize a trainable, customizable, torch.nn.Module
classifier = Classifier()
ret = classifier(torch.rand((1, 224, 224, 3)))
From JAX
import ivy
import jax
import torch

# Get a pretrained haiku model
# https://github.com/unifyai/demos/blob/15c235f/scripts/deepmind_perceiver_io.py
from deepmind_perceiver_io import key, perceiver_backbone

# Transpile it into a torch.nn.Module with the corresponding parameters
dummy_input = jax.random.uniform(key, shape=(1, 3, 224, 224))
params = perceiver_backbone.init(rng=key, images=dummy_input)
ivy.set_backend("jax")
backbone = ivy.transpile(
    perceiver_backbone, source="jax", to="torch", params_v=params, kwargs={"images": dummy_input}
)

# Build a classifier using the transpiled backbone
class PerceiverIOClassifier(torch.nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        self.backbone = backbone
        self.max_pool = torch.nn.MaxPool2d((512, 1))
        self.flatten = torch.nn.Flatten()
        self.fc = torch.nn.Linear(1024, num_classes)

    def forward(self, x):
        x = self.backbone(images=x)
        x = self.flatten(self.max_pool(x))
        return self.fc(x)

# Initialize a trainable, customizable, torch.nn.Module
classifier = PerceiverIOClassifier()
ret = classifier(torch.rand((1, 3, 224, 224)))
Any library
From TensorFlow
import ivy
import torch
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"
import segmentation_models as sm

# transpile sm from tensorflow to torch
torch_sm = ivy.transpile(sm, source="tensorflow", to="torch")

# get some image-like arrays
output = torch.rand((1, 3, 512, 512))
target = torch.rand((1, 3, 512, 512))

# and use the transpiled version of any function from the library!
out = torch_sm.metrics.iou_score(output, target)
From JAX
import ivy
import rax
import torch

# transpile rax from jax to torch
torch_rax = ivy.transpile(rax, source="jax", to="torch")

# get some arrays
scores = torch.tensor([2.2, 1.3, 5.4])
labels = torch.tensor([1.0, 0.0, 0.0])

# and use the transpiled version of any function from the library!
out = torch_rax.poly1_softmax_loss(scores, labels)
From NumPy
import ivy
import torch
import madmom

# transpile madmom from numpy to torch
torch_madmom = ivy.transpile(madmom, source="numpy", to="torch")

# get some arrays
freqs = torch.arange(20) * 10

# and use the transpiled version of any function from the library!
out = torch_madmom.audio.filters.hz2midi(freqs)
Any function
From TensorFlow
import ivy
import tensorflow as tf
import torch

def loss(predictions, targets):
    return tf.sqrt(tf.reduce_mean(tf.square(predictions - targets)))

# transpile any function from tf to torch
torch_loss = ivy.transpile(loss, source="tensorflow", to="torch")

# get some arrays
p = torch.tensor([3.0, 2.0, 1.0])
t = torch.tensor([0.0, 0.0, 0.0])

# and use the transpiled version!
out = torch_loss(p, t)
From JAX
import ivy
import jax.numpy as jnp
import torch

def loss(predictions, targets):
    return jnp.sqrt(jnp.mean((predictions - targets) ** 2))

# transpile any function from jax to torch
torch_loss = ivy.transpile(loss, source="jax", to="torch")

# get some arrays
p = torch.tensor([3.0, 2.0, 1.0])
t = torch.tensor([0.0, 0.0, 0.0])

# and use the transpiled version!
out = torch_loss(p, t)
From NumPy
import ivy
import numpy as np
import torch

def loss(predictions, targets):
    return np.sqrt(np.mean((predictions - targets) ** 2))

# transpile any function from numpy to torch
torch_loss = ivy.transpile(loss, source="numpy", to="torch")

# get some arrays
p = torch.tensor([3.0, 2.0, 1.0])
t = torch.tensor([0.0, 0.0, 0.0])

# and use the transpiled version!
out = torch_loss(p, t)
I'm using TensorFlow 
You can use Ivy to get TensorFlow code from:
Any model
From PyTorch
import ivy
import torch
import timm
import tensorflow as tf

# Get a pretrained pytorch model
mlp_encoder = timm.create_model("mixer_b16_224", pretrained=True, num_classes=0)

# Transpile it into a keras.Model with the corresponding parameters
noise = torch.randn(1, 3, 224, 224)
mlp_encoder = ivy.transpile(mlp_encoder, to="tensorflow", args=(noise,))

# Build a classifier using the transpiled encoder
class Classifier(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = mlp_encoder
        self.output_dense = tf.keras.layers.Dense(units=1000, activation="softmax")

    def call(self, x):
        x = self.encoder(x)
        return self.output_dense(x)

# Transform the classifier and use it as a standard keras.Model
x = tf.random.normal(shape=(1, 3, 224, 224))
model = Classifier()
ret = model(x)
From JAX
import ivy
import jax
import tensorflow as tf

# Get a pretrained haiku model
# https://ivy.dev/demos/scripts/deepmind_perceiver_io.py
from deepmind_perceiver_io import key, perceiver_backbone

# Transpile it into a tf.keras.Model with the corresponding parameters
dummy_input = jax.random.uniform(key, shape=(1, 3, 224, 224))
params = perceiver_backbone.init(rng=key, images=dummy_input)
backbone = ivy.transpile(
    perceiver_backbone, to="tensorflow", params_v=params, args=(dummy_input,)
)

# Build a classifier using the transpiled backbone
class PerceiverIOClassifier(tf.keras.Model):
    def __init__(self, num_classes=20):
        super().__init__()
        self.backbone = backbone
        self.max_pool = tf.keras.layers.MaxPooling1D(pool_size=512)
        self.flatten = tf.keras.layers.Flatten()
        self.fc = tf.keras.layers.Dense(num_classes)

    def call(self, x):
        x = self.backbone(x)
        x = self.flatten(self.max_pool(x))
        return self.fc(x)

# Initialize a trainable, customizable, tf.keras.Model
x = tf.random.normal(shape=(1, 3, 224, 224))
classifier = PerceiverIOClassifier()
ret = classifier(x)
Any library
From PyTorch
import ivy
import kornia
import requests
import numpy as np
import tensorflow as tf
from PIL import Image

# transpile kornia from torch to tensorflow
tf_kornia = ivy.transpile(kornia, source="torch", to="tensorflow")

# get an image
url = "http://images.cocodataset.org/train2017/000000000034.jpg"
raw_img = Image.open(requests.get(url, stream=True).raw)

# convert it to the format expected by kornia
img = np.array(raw_img)
img = tf.transpose(tf.constant(img), (2, 0, 1))
img = tf.expand_dims(img, 0) / 255

# and use the transpiled version of any function from the library!
out = tf_kornia.enhance.sharpness(img, 5)
From JAX
import ivy
import rax
import tensorflow as tf

# transpile rax from jax to tensorflow
tf_rax = ivy.transpile(rax, source="jax", to="tensorflow")

# get some arrays
scores = tf.constant([2.2, 1.3, 5.4])
labels = tf.constant([1.0, 0.0, 0.0])

# and use the transpiled version of any function from the library!
out = tf_rax.poly1_softmax_loss(scores, labels)
From NumPy
import ivy
import madmom
import tensorflow as tf

# transpile madmom from numpy to tensorflow
tf_madmom = ivy.transpile(madmom, source="numpy", to="tensorflow")

# get some arrays
freqs = tf.range(20) * 10

# and use the transpiled version of any function from the library!
out = tf_madmom.audio.filters.hz2midi(freqs)
Any function
From PyTorch
import ivy
import torch
import tensorflow as tf

def loss(predictions, targets):
    return torch.sqrt(torch.mean((predictions - targets) ** 2))

# transpile any function from torch to tensorflow
tf_loss = ivy.transpile(loss, source="torch", to="tensorflow")

# get some arrays
p = tf.constant([3.0, 2.0, 1.0])
t = tf.constant([0.0, 0.0, 0.0])

# and use the transpiled version!
out = tf_loss(p, t)
From JAX
import ivy
import jax.numpy as jnp
import tensorflow as tf

def loss(predictions, targets):
    return jnp.sqrt(jnp.mean((predictions - targets) ** 2))

# transpile any function from jax to tensorflow
tf_loss = ivy.transpile(loss, source="jax", to="tensorflow")

# get some arrays
p = tf.constant([3.0, 2.0, 1.0])
t = tf.constant([0.0, 0.0, 0.0])

# and use the transpiled version!
out = tf_loss(p, t)
From NumPy
import ivy
import numpy as np
import tensorflow as tf

def loss(predictions, targets):
    return np.sqrt(np.mean((predictions - targets) ** 2))

# transpile any function from numpy to tensorflow
tf_loss = ivy.transpile(loss, source="numpy", to="tensorflow")

# get some arrays
p = tf.constant([3.0, 2.0, 1.0])
t = tf.constant([0.0, 0.0, 0.0])

# and use the transpiled version!
out = tf_loss(p, t)
I'm using JAX 
You can use Ivy to get JAX code from:
Any model
From PyTorch
import ivy
import timm
import torch
import jax
import haiku as hk

# Get a pretrained pytorch model
mlp_encoder = timm.create_model("mixer_b16_224", pretrained=True, num_classes=0)

# Transpile it into a hk.Module with the corresponding parameters
noise = torch.randn(1, 3, 224, 224)
mlp_encoder = ivy.transpile(mlp_encoder, source="torch", to="haiku", args=(noise,))

# Build a classifier using the transpiled encoder
class Classifier(hk.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.encoder = mlp_encoder()
        self.fc = hk.Linear(output_size=num_classes, with_bias=True)

    def __call__(self, x):
        x = self.encoder(x)
        x = self.fc(x)
        return x

def _forward_classifier(x):
    module = Classifier()
    return module(x)

# Transform the classifier and use it as a standard hk.Module
rng_key = jax.random.PRNGKey(42)
x = jax.random.uniform(key=rng_key, shape=(1, 3, 224, 224), dtype=jax.numpy.float32)
forward_classifier = hk.transform(_forward_classifier)
params = forward_classifier.init(rng=rng_key, x=x)

ret = forward_classifier.apply(params, None, x)
From TensorFlow
import ivy
import jax
import haiku as hk
import tensorflow as tf
jax.config.update("jax_enable_x64", True)

# Get a pretrained keras model
eff_encoder = tf.keras.applications.efficientnet_v2.EfficientNetV2B0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)

# Transpile it into a hk.Module with the corresponding parameters
noise = tf.random.normal(shape=(1, 224, 224, 3))
hk_eff_encoder = ivy.transpile(eff_encoder, source="tensorflow", to="haiku", args=(noise,))

# Build a classifier using the transpiled encoder
class Classifier(hk.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.encoder = hk_eff_encoder()
        self.fc = hk.Linear(output_size=num_classes, with_bias=True)

    def __call__(self, x):
        x = self.encoder(x)
        x = self.fc(x)
        return x

def _forward_classifier(x):
    module = Classifier()
    return module(x)

# Transform the classifier and use it as a standard hk.Module
rng_key = jax.random.PRNGKey(42)
dummy_x = jax.random.uniform(key=rng_key, shape=(1, 224, 224, 3))
forward_classifier = hk.transform(_forward_classifier)
params = forward_classifier.init(rng=rng_key, x=dummy_x)

ret = forward_classifier.apply(params, None, dummy_x)
Any library
From PyTorch
import ivy
import jax
import kornia
import requests
import jax.numpy as jnp
from PIL import Image
jax.config.update("jax_enable_x64", True)

# transpile kornia from torch to jax
jax_kornia = ivy.transpile(kornia, source="torch", to="jax")

# get an image
url = "http://images.cocodataset.org/train2017/000000000034.jpg"
raw_img = Image.open(requests.get(url, stream=True).raw)

# convert it to the format expected by kornia
img = jnp.transpose(jnp.array(raw_img), (2, 0, 1))
img = jnp.expand_dims(img, 0) / 255

# and use the transpiled version of any function from the library!
out = jax_kornia.enhance.sharpness(img, 5)
From TensorFlow
import ivy
import jax
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"
import segmentation_models as sm

# transpile sm from tensorflow to jax
jax_sm = ivy.transpile(sm, source="tensorflow", to="jax")

# get some image-like arrays
key = jax.random.PRNGKey(23)
key1, key2 = jax.random.split(key)
output = jax.random.uniform(key1, (1, 3, 512, 512))
target = jax.random.uniform(key2, (1, 3, 512, 512))

# and use the transpiled version of any function from the library!
out = jax_sm.metrics.iou_score(output, target)
From NumPy
import ivy
import madmom
import jax.numpy as jnp

# transpile madmom from numpy to jax
jax_madmom = ivy.transpile(madmom, source="numpy", to="jax")

# get some arrays
freqs = jnp.arange(20) * 10

# and use the transpiled version of any function from the library!
out = jax_madmom.audio.filters.hz2midi(freqs)
Any function
From PyTorch
import ivy
import torch
import jax.numpy as jnp

def loss(predictions, targets):
    return torch.sqrt(torch.mean((predictions - targets) ** 2))

# transpile any function from torch to jax
jax_loss = ivy.transpile(loss, source="torch", to="jax")

# get some arrays
p = jnp.array([3.0, 2.0, 1.0])
t = jnp.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = jax_loss(p, t)
From TensorFlow
import ivy
import tensorflow as tf
import jax.numpy as jnp

def loss(predictions, targets):
    return tf.sqrt(tf.reduce_mean(tf.square(predictions - targets)))

# transpile any function from tf to jax
jax_loss = ivy.transpile(loss, source="tensorflow", to="jax")

# get some arrays
p = jnp.array([3.0, 2.0, 1.0])
t = jnp.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = jax_loss(p, t)
From NumPy
import ivy
import numpy as np
import jax
import jax.numpy as jnp
jax.config.update('jax_enable_x64', True)

def loss(predictions, targets):
    return np.sqrt(np.mean((predictions - targets) ** 2))

# transpile any function from numpy to jax
jax_loss = ivy.transpile(loss, source="numpy", to="jax")

# get some arrays
p = jnp.array([3.0, 2.0, 1.0])
t = jnp.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = jax_loss(p, t)
I'm using NumPy 
You can use Ivy to get NumPy code from:
Any library
From PyTorch
import ivy
import kornia
import requests
import numpy as np
from PIL import Image

# transpile kornia from torch to np
np_kornia = ivy.transpile(kornia, source="torch", to="numpy")

# get an image
url = "http://images.cocodataset.org/train2017/000000000034.jpg"
raw_img = Image.open(requests.get(url, stream=True).raw)

# convert it to the format expected by kornia
img = np.transpose(np.array(raw_img), (2, 0, 1))
img = np.expand_dims(img, 0) / 255

# and use the transpiled version of any function from the library!
out = np_kornia.enhance.sharpness(img, 5)
From TensorFlow
import ivy
import numpy as np
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"
import segmentation_models as sm

# transpile sm from tensorflow to numpy
np_sm = ivy.transpile(sm, source="tensorflow", to="numpy")

# get some image-like arrays
output = np.random.rand(1, 3, 512, 512).astype(dtype=np.float32)
target = np.random.rand(1, 3, 512, 512).astype(dtype=np.float32)

# and use the transpiled version of any function from the library!
out = np_sm.metrics.iou_score(output, target)
From JAX
import ivy
import rax
import numpy as np

# transpile rax from jax to numpy
np_rax = ivy.transpile(rax, source="jax", to="numpy")

# get some arrays
scores = np.array([2.2, 1.3, 5.4])
labels = np.array([1.0, 0.0, 0.0])

# and use the transpiled version of any function from the library!
out = np_rax.poly1_softmax_loss(scores, labels)
Any function
From PyTorch
import ivy
import torch
import numpy as np

def loss(predictions, targets):
    return torch.sqrt(torch.mean((predictions - targets) ** 2))

# transpile any function from torch to numpy
np_loss = ivy.transpile(loss, source="torch", to="numpy")

# get some arrays
p = np.array([3.0, 2.0, 1.0])
t = np.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = np_loss(p, t)
From TensorFlow
import ivy
import tensorflow as tf
import numpy as np

def loss(predictions, targets):
    return tf.sqrt(tf.reduce_mean(tf.square(predictions - targets)))

# transpile any function from tf to numpy
np_loss = ivy.transpile(loss, source="tensorflow", to="numpy")

# get some arrays
p = np.array([3.0, 2.0, 1.0])
t = np.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = np_loss(p, t)
From JAX
import ivy
import jax.numpy as jnp
import numpy as np

def loss(predictions, targets):
    return jnp.sqrt(jnp.mean((predictions - targets) ** 2))

# transpile any function from jax to numpy
np_loss = ivy.transpile(loss, source="jax", to="numpy")

# get some arrays
p = np.array([3.0, 2.0, 1.0])
t = np.array([0.0, 0.0, 0.0])

# and use the transpiled version!
out = np_loss(p, t)


For a more comprehensive overview, head over to the Demos section, which covers the basics, a few guides, and a wide-ranging set of examples that demonstrate the transpilation of various popular models. We continue to expand on that list; let us know what demos you'd like us to add next 🎯


How does ivy work?

Let's take a look at how Ivy works as a transpiler in more detail to get an idea of why and where to use it.

When is Ivy's transpiler useful?

If you want to use building blocks published in other frameworks (neural networks, layers, array computing libraries, training pipelines...), integrate code developed in various frameworks, or simply migrate code from one framework to another (or even between versions of the same framework), the transpiler is definitely the tool for the job! You can use the converted code just as if it were code originally developed in that framework, applying framework-specific optimizations or tools, and instantly exposing your project to all of the unique perks of a different framework.


Ivy's transpiler allows you to use code from any other framework (or from any other version of the same framework!) in your own code, by just adding one line of code. Under the hood, Ivy traces a computational graph and leverages the frontends and backends to link one version of one framework to another version of another framework.

This way, Ivy makes all ML-related projects available to you, independent of the framework you want to use to research, develop, or deploy systems. Feel free to head over to the docs for the full API reference, but the functions you'd most likely want to use are:

# Traces an efficient fully-functional graph from a function, removing all wrapping and redundant code. See usage in the documentation
ivy.trace_graph()

# Converts framework-specific code to a target framework of choice. See usage in the documentation
ivy.transpile()

# Converts framework-specific code to Ivy's framework-agnostic API. See usage in the documentation
ivy.unify()

These functions can be used eagerly or lazily. If you pass the necessary arguments for function tracing, the graph tracing/transpilation step will happen instantly (eagerly). Otherwise, the graph tracing/transpilation will happen only when the returned function is first invoked.

import ivy
import jax
ivy.set_backend("jax")

# Simple JAX function to transpile
def test_fn(x):
    return jax.numpy.sum(x)

x1 = ivy.array([1., 2.])
# Arguments are available -> transpilation happens eagerly
eager_graph = ivy.transpile(test_fn, source="jax", to="torch", args=(x1,))

# eager_graph is now torch code and runs efficiently
ret = eager_graph(x1)
# Arguments are not available -> transpilation happens lazily
lazy_graph = ivy.transpile(test_fn, source="jax", to="torch")

# The graph was initialized lazily; transpilation happens on this first call
ret = lazy_graph(x1)

# lazy_graph is now torch code and runs efficiently
ret = lazy_graph(x1)
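
ivy.trace_graph follows the same eager/lazy pattern. As a minimal sketch (assuming torch is installed, and that trace_graph accepts args for eager tracing just like transpile above):

import ivy
import torch

def fn(x):
    # some torch computation to trace
    return torch.mean(torch.nn.functional.relu(x))

x = torch.rand(10)

# arguments are available -> the efficient graph is traced eagerly
graph = ivy.trace_graph(fn, args=(x,))

# graph now runs without the wrapping and tracing overhead
ret = graph(x)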

If you want to learn more, you can find more information in the Ivy as a transpiler section of the docs!


Documentation

You can find Ivy's documentation on the Docs page, which includes:

  • Motivation: Contextualizes the problem Ivy is trying to solve.
  • Related Work: Paints a picture of the role Ivy plays in the ML stack, comparing it to other existing solutions in terms of functionality and abstraction level.
  • Design: A user-focused guide about the design decisions behind the architecture and the main building blocks of Ivy.
  • Deep Dive: Delves deeper into the implementation details of Ivy; oriented towards potential contributors to the code base.

Contributing

We believe that everyone can contribute and make a difference. Whether it's writing code, fixing bugs, or simply sharing feedback, your contributions are definitely welcome and appreciated 🙌

Check out all of our Open Tasks, and find out more info in our Contributing guide in the docs! Or to immediately dive into a useful task, look for any failing tests on our Test Dashboard!


Community

Join our growing community on a mission to make conversions between frameworks simple and accessible to all! Whether you are a seasoned developer or just starting out, you'll find a place here! Join the Ivy community on our Discord 👾 server, which is the perfect place to ask questions, share ideas, and get help from both fellow developers and the Ivy Team directly.

See you there!


Citation

If you use Ivy for your work, please don't forget to give proper credit by including the accompanying paper 📄 in your references. It's a small way to show appreciation and helps us continue to support this and other open source projects 🙌

@article{lenton2021ivy,
  title={Ivy: Templated deep learning for inter-framework portability},
  author={Lenton, Daniel and Pardo, Fabio and Falck, Fabian and James, Stephen and Clark, Ronald},
  journal={arXiv preprint arXiv:2102.02886},
  year={2021}
}


ivy's Issues

Add Pooling Layers

It would be good to be able to add pooling layers, such as MaxPool2D.

Add Miscellaneous Operations to PyTorch Frontend

Add miscellaneous operations to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/miscellaneous_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_miscellaneous_ops.py

Add Functions to core API

Add new Ivy functions:

general

  • is_floating_point
  • is_nonzero
  • set_default_dtype
  • get_default_dtype
  • numel
  • full
  • full_like

random

  • bernoulli
  • poisson
  • randperm

math

  • copysign
  • deg2rad
  • rad2deg

Add General Functions to Ivy Frontend

Add General Functions to Ivy frontend:

  • get_referrers_recursive
  • array
  • is_array
  • copy_array
  • array_equal
  • arrays_equal
  • equal
  • to_numpy
  • to_scalar
  • to_list
  • shape
  • get_num_dims
  • minimum
  • maximum
  • clip
  • clip_vector_norm
  • clip_matrix_norm
  • round
  • floormod
  • floor
  • ceil
  • abs
  • argmax
  • argmin
  • argsort
  • cast
  • arange
  • linspace
  • logspace
  • concatenate
  • flip
  • stack
  • unstack
  • split
  • repeat
  • tile
  • constant_pad
  • zero_pad
  • fourier_encode
  • swapaxes
  • transpose
  • expand_dims
  • where
  • indices_where
  • isnan
  • value_is_nan
  • has_nans
  • reshape
  • broadcast_to
  • squeeze
  • zeros
  • zeros_like
  • ones
  • ones_like
  • one_hot
  • cross
  • matmul
  • cumsum
  • cumprod
  • identity
  • meshgrid
  • scatter_flat
  • scatter_nd
  • gather
  • gather_nd
  • linear_resample
  • exists
  • default
  • try_else_none
  • arg_names
  • match_kwargs
  • dtype
  • dtype_to_str
  • dtype_str
  • cache_fn
  • current_framework_str
  • einops_rearrange
  • einops_reduce
  • einops_repeat
  • get_min_denominator
  • set_min_denominator
  • stable_divide
  • get_min_base
  • set_min_base
  • stable_pow
  • multiprocessing
  • set_queue_timeout
  • queue_timeout
  • tmp_dir
  • set_tmp_dir
  • get_all_arrays_in_memory
  • num_arrays_in_memory
  • print_all_arrays_in_memory
  • container_types

Add Pooling functions to PyTorch Frontend

Add Pooling functions to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/nn/functional/pooling_functions.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_pooling_functions.py

Add Loss functions to PyTorch Frontend

Add Loss functions to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/loss_functions.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_loss_functions.py
  • ivy/functional/frontends/torch/nn/functional/loss_functions.py

Add Ivy Container Instance Methods

Add Ivy Container Instance Methods:

  • update_config
  • inplace_update
  • set_framework
  • all_true
  • all_false
  • reduce_sum
  • reduce_prod
  • reduce_mean
  • reduce_var
  • reduce_std
  • reduce_min
  • reduce_max
  • minimum
  • maximum
  • clip
  • clip_vector_norm
  • einsum
  • vector_norm
  • matrix_norm
  • flip
  • shuffle
  • slice_via_key
  • as_ones
  • as_zeros
  • as_bools
  • as_random_uniform
  • to_native
  • to_ivy
  • expand_dims
  • dev_clone
  • dev_dist
  • to_multi_dev
  • unstack
  • split
  • gather
  • gather_nd
  • repeat
  • swapaxes
  • reshape
  • einops_rearrange
  • einops_reduce
  • einops_repeat
  • to_dev
  • stop_gradients
  • as_variables
  • as_arrays
  • num_arrays
  • size_ordered_arrays
  • to_numpy
  • from_numpy
  • arrays_as_lists
  • to_disk_as_hdf5
  • to_disk_as_pickled
  • to_jsonable
  • to_disk_as_json
  • to_list
  • to_raw
  • to_dict
  • to_iterator
  • to_iterator_values
  • to_iterator_keys
  • to_flat_list
  • from_flat_list
  • has_key
  • has_key_chain
  • find_sub_container
  • contains_sub_container
  • assert_contains_sub_container
  • find_sub_structure
  • contains_sub_structure
  • assert_contains_sub_structure
  • has_nans
  • at_keys
  • at_key_chain
  • at_key_chains
  • all_key_chains
  • key_chains_containing
  • set_at_keys
  • set_at_key_chain
  • overwrite_at_key_chain
  • set_at_key_chains
  • overwrite_at_key_chains
  • prune_keys
  • prune_key_chain
  • prune_key_chains
  • format_key_chains
  • sort_by_key
  • prune_empty
  • prune_key_from_key_chains
  • prune_keys_from_key_chains
  • restructure_key_chains
  • restructure
  • flatten_key_chains
  • copy
  • deep_copy
  • map
  • map_conts
  • dtype
  • with_entries_as_lists
  • reshape_like
  • create_if_absent
  • if_exists
  • try_kc
  • cutoff_at_depth
  • cutoff_at_height
  • slice_keys
  • with_print_limit
  • remove_print_limit
  • with_key_length_limit
  • remove_key_length_limit
  • with_print_indent
  • with_print_line_spacing
  • with_default_key_color
  • with_ivy_backend
  • set_ivy_backend
  • show
  • show_sub_container

Add Gradient Functions + Classes to Ivy Frontend

Add Gradient Functions + Classes to Ivy frontend:

  • GradientTracking

Gradient Mode

  • with_grads
  • set_with_grads
  • unset_with_grads

Variables

  • variable
  • is_variable
  • variable_data
  • inplace_update
  • inplace_decrement
  • inplace_increment
  • stop_gradient

AutoGrad

  • execute_with_gradients

Optimizer Steps

  • adam_step

Optimizer Updates

  • optimizer_update
  • gradient_descent_update
  • lars_update
  • adam_update
  • lamb_update

ivy.torch.use does not exist

I have worked with ivy for some time and find the idea of integrating multiple frameworks very interesting and useful. However, I have noticed the following error on my machine. The following code does not work properly:

import ivy
import torch
import numpy as np

with ivy.numpy.use:
    x = np.array([0.])
    y = ivy.cos(x) 

with ivy.torch.use:
    x = torch.tensor([0.])
    y = ivy.cos(x) 

The above code emits the error: module 'ivy' has no attribute 'numpy'. This error is only resolved when I set the framework to numpy. Moreover, it seems that if I set the framework to torch, there is still no variable named use in ivy.torch. I have looked at the implementation and found this problem a bit weird, since the use variable seems to be declared in each module.

Thank you for viewing this request.

Add Indexing, Slicing, Joining, Mutating Ops to PyTorch Frontend

Add indexing, slicing, joining, mutating ops to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/indexing_slicing_joining_mutating_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_indexing_slicing_joining_mutating_ops.py
  • ivy/array/experimental/manipulation.py
  • ivy/container/experimental/manipulation.py
  • ivy/functional/backends/jax/experimental/manipulation.py
  • ivy/functional/backends/numpy/experimental/manipulation.py
  • ivy/functional/backends/tensorflow/experimental/manipulation.py
  • ivy/functional/backends/torch/experimental/manipulation.py
  • ivy/functional/ivy/experimental/manipulation.py

"Quick Start" issue

Switching frameworks should require no changes other than ivy.set_framework(). In the Quick Start example, if I want to run it in tensorflow, besides changing 'torch' to 'tensorflow' in ivy.set_framework(), I also have to change the shape of x_in from [3] to [1, 3].

Add Ivy Container Static Methods

Add Ivy Container Static Methods:

  • list_join
  • list_stack
  • unify
  • concat
  • stack
  • combine
  • diff
  • structural_diff
  • multi_map
  • common_key_chains
  • identical
  • assert_identical
  • identical_structure
  • assert_identical_structure
  • identical_configs
  • identical_array_shapes
  • from_disk_as_hdf5
  • from_disk_as_pickled
  • from_disk_as_json
  • h5_file_size
  • shuffle_h5_file
  • reduce
  • flatten_key_chain
  • trim_key

Add Nest Functions to Ivy Frontend

Add Nest Functions to Ivy frontend:

  • index_nest
  • set_nest_at_index
  • map_nest_at_index
  • multi_index_nest
  • set_nest_at_indices
  • map_nest_at_indices
  • nested_indices_where
  • all_nested_indices
  • map
  • nested_map
  • copy_nest

Add Vision functions to PyTorch Frontend

Add Vision functions to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/vision_functions.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_vision_functions.py
  • ivy/functional/frontends/torch/nn/functional/vision_functions.py

Add Pointwise ops to PyTorch Frontend

Add Pointwise ops to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/pointwise_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_pointwise_ops.py
  • ivy/functional/frontends/torch/__init__.py
  • ivy/functional/frontends/torch/non_linear_activation_functions.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_non_linear_activation_functions.py
  • ivy_tests/test_ivy/test_stateful/test_activations.py

Add Non-linear activation functions to PyTorch Frontend

Add Non-linear activation functions to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

Add Reduction ops to PyTorch Frontend

Add Reduction ops to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/reduction_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_reduction_ops.py

Add Layer Functions to Ivy Frontend

Add Layer Functions to Ivy frontend:

Linear

  • linear

Dropout

  • dropout

Attention

  • scaled_dot_product_attention
  • multi_head_attention

Convolutions

  • conv1d
  • conv1d_transpose
  • conv2d
  • conv2d_transpose
  • depthwise_conv2d
  • conv3d
  • conv3d_transpose

LSTM

  • lstm_update

Add Creation Ops to PyTorch Frontend

Add creation ops to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/creation_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py

Add BLAS and LAPACK Operations to PyTorch Frontend

Add BLAS and LAPACK Operations to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

Note: If the function to be implemented has identical behavior to another PyTorch function, you should simply keep an alias in the blas_and_lapack_ops.py file rather than creating a duplicate implementation.
For example:
torch.det is defined as an alias of torch.linalg.det in the official docs, and so it is defined as shown below
https://github.com/unifyai/ivy/blob/7c28666a4ff161117e7b9e4104f08be3bd7cad26/ivy/functional/frontends/torch/blas_and_lapack_ops.py#L93

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/blas_and_lapack_ops.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_blas_and_lapack_ops.py

Add Multi-Device Functions + Classes to Ivy Frontend

Add Multi-Device Functions + Classes to Ivy frontend:

Multi-Device

  • MultiDev
  • MultiDevItem
  • MultiDevIter
  • MultiDevNest

Device Distribution

  • DevDistItem
  • DevDistIter
  • DevDistNest
  • dev_dist_array
  • dev_dist
  • dev_dist_iter
  • dev_dist_nest

Device Cloning

  • DevClonedItem
  • DevClonedIter
  • DevClonedNest
  • dev_clone_array
  • dev_clone
  • dev_clone_iter
  • dev_clone_nest

Device Unification

  • dev_unify_array
  • dev_unify
  • dev_unify_iter
  • dev_unify_nest

Device Mappers

  • DevMapper
  • DevMapperMultiProc

Device Manager

  • DevManager

Profiler

  • Profiler

Add Device Functions to Ivy Frontend

Add Device Functions to Ivy frontend:

Device Queries

Array Printing:

  • get_all_arrays_on_dev
  • num_arrays_on_dev
  • print_all_arrays_on_dev

Retrieval:

  • dev
  • dev_str

Conversion:

  • dev_to_str
  • str_to_dev

Memory:

  • clear_mem_on_dev
  • total_mem_on_dev
  • used_mem_on_dev
  • percent_used_mem_on_dev

Utilization:

  • dev_util

Availability:

  • gpu_is_available
  • num_cpu_cores
  • num_gpus
  • tpu_is_available

Default Device

  • default_device
  • set_default_device
  • unset_default_device

Device Allocation

  • to_dev

Function Splitting

  • split_factor
  • set_split_factor
  • split_func_call

Add Ivy Container Built-in Methods

Add Ivy Container Built-in Methods:

  • repr
  • dir
  • getattr
  • setattr
  • getitem
  • setitem
  • contains
  • pos
  • neg
  • pow
  • rpow
  • add
  • radd
  • sub
  • rsub
  • mul
  • rmul
  • truediv
  • rtruediv
  • floordiv
  • rfloordiv
  • abs
  • lt
  • le
  • eq
  • ne
  • gt
  • ge
  • and
  • rand
  • or
  • ror
  • invert
  • xor
  • rxor
  • getstate
  • setstate

Add Dropout functions to PyTorch Frontend

Add Dropout functions to PyTorch frontend:

Please keep in mind that the proper way to link an issue to this list is to comment "- [ ] #issue_number" while the issue's title only includes the name of the function you've chosen.

The main file paths where these functions are likely to be added are:

  • ivy/functional/frontends/torch/nn/functional/dropout_functions.py
  • ivy_tests/test_ivy/test_frontends/test_torch/test_dropout_functions.py
