
redisai-py's Introduction

redisai-py

https://readthedocs.org/projects/redisai-py/badge/?version=latest https://img.shields.io/badge/Forum-RedisAI-blue https://img.shields.io/discord/697882427875393627?style=flat-square https://snyk.io/test/github/RedisAI/redisai-py/badge.svg?targetFile=pyproject.toml

redisai-py is the Python client for RedisAI. Check out the documentation for API details and examples.

Installation

  1. Install Redis 5.0 or above
  2. Install RedisAI
  3. Install the Python client:
$ pip install redisai
  4. Install the serialization/deserialization utility (optional):
$ pip install ml2rt
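
A minimal usage sketch, assuming a local Redis server with the RedisAI module loaded (see the documentation for the full API):

import numpy as np
from redisai import Client

con = Client(host='localhost', port=6379)

# Store a tensor and read it back as a numpy array.
con.tensorset('x', np.array([2.0, 3.0], dtype=np.float32))
print(con.tensorget('x'))  # -> array([2., 3.], dtype=float32)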

Development

  1. Assuming you have virtualenv installed, create a virtualenv to manage your python dependencies, and activate it. `virtualenv -v venv; source venv/bin/activate`
  2. Install [poetry](https://python-poetry.org/) to manage your dependencies. `pip install poetry`
  3. Install dependencies. `poetry install --no-root`

[tox](https://tox.readthedocs.io/en/latest/) runs all tests as its default target. Running tox by itself will run the unit tests. Ensure you have a running Redis server with the RedisAI module loaded.
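
For example, one way to get such a server is the official Docker image (the same image referenced in the Contributing section below):

$ docker run -p 6379:6379 redislabs/redisai:latest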

Contributing

Prior to submitting a pull request, please ensure you've built and installed poetry as above. Then:

  1. Run the linter: `tox -e linters`
  2. Run the unit tests: `tox -e tests`. This assumes you have a redis server running, with the [RedisAI module](https://redisai.io) already loaded. If you don't, you may want to install a [docker build](https://hub.docker.com/r/redislabs/redisai/tags).

The RedisAI example repo shows a few examples built with redisai-py under the python_client folder. Also, check out ml2rt for convenience functions that can help with converting models (SparkML, sklearn, XGBoost to ONNX), serializing models to disk, loading them back into redisai-py, etc.

redisai-py's People

Contributors

alonre24, avitalfineredis, boat-builder, chayim, dagdelenmustafa, dvirdukhan, filipecosta90, gkorland, guyav46, itamarhaber, lantiga, mnunberg, silv-eran


redisai-py's Issues

Standardize & clean up the API

There are a few things that are not consistent across the package:

  • Decoding the return string from RedisAI
  • Return schema/datatype from different APIs: some are namedtuples and some are dicts
  • The use of enums might not be the best for UX
  • utils.py is a mess

How to use redisai client in redis cluster environment?

Hi, I'm Jaehyeong.
I've been using RedisAI in a Redis cluster environment (3 nodes) and I have an issue with the redisai client.

First, this is my 3-node Redis cluster:

CONTAINER ID   IMAGE                      COMMAND                  CREATED          STATUS          PORTS                                                        NAMES
c1e92b73241e   redislabs/redisai:latest   "docker-entrypoint.s…"   4 hours ago      Up 37 minutes   0.0.0.0:7001->7001/tcp, 6379/tcp, 0.0.0.0:17001->17001/tcp   redis-cluster-test_redis-cluster-1_1
e02becc34028   redislabs/redisai:latest   "docker-entrypoint.s…"   4 hours ago      Up 37 minutes   0.0.0.0:7002->7002/tcp, 6379/tcp, 0.0.0.0:17002->17002/tcp   redis-cluster-test_redis-cluster-2_1
4e50e5f16b06   redislabs/redisai:latest   "docker-entrypoint.s…"   4 hours ago      Up 37 minutes   0.0.0.0:7003->7003/tcp, 6379/tcp, 0.0.0.0:17003->17003/tcp   redis-cluster-test_redis-cluster-3_1

I had checked that the Redis cluster works fine using redis-cli.
(screenshot: redis-cli output confirming the cluster is up, 2022-06-23)

Then I tried to get data through the redisai client:

from redisai import Client
client = Client(host='172.21.0.4', port=7001)

client.tensorget('tA')

But I got this error:
(screenshot: Python error traceback, 2022-06-23)

I think the error occurred because the redisai client is a single-connection client. So, how can I control a Redis cluster using the redisai client?

Thanks
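
A possible workaround (a sketch, not from the original thread): redisai-py's Client is a single-node redis-py client, but redis-py's cluster client can route raw RedisAI commands to the node that owns a key's hash slot. This bypasses redisai-py's reply parsing, so the raw RESP reply comes back as-is:

from redis.cluster import RedisCluster

# Connect to any node; redis-py discovers the rest of the cluster.
rc = RedisCluster(host='172.21.0.4', port=7001)

# Issue the raw RedisAI command; the cluster client routes it by hash slot.
reply = rc.execute_command('AI.TENSORGET', 'tA', 'META', 'VALUES')
print(reply)

Note that multi-key commands (e.g. AI.MODELRUN with inputs and outputs in different slots) would still fail with a CROSSSLOT error unless the keys are forced into one slot with hash tags.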

ValueError

An error occurs because a byte-type response is returned:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-120-dc676108d415> in <module>
      1 client.tensorset('input', Tensor(DType.float, [1, 11], [1.47574819401444, 160.8588351431392, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 0.0]))
      2 client.modelrun('model', ['input'], ['output'])
----> 3 output = client.tensorget('output')

/opt/conda/lib/python3.7/site-packages/redisai/client.py in tensorget(self, key, as_type, meta_only)
    242             return as_type(dtype, shape, [])
    243         else:
--> 244             return as_type.from_resp(dtype, shape, res[2])
    245 
    246     def scriptset(self, name, device, script):

/opt/conda/lib/python3.7/site-packages/redisai/client.py in from_resp(cls, dtype, shape, value)
     98         # recurse value, replacing each element of b'' with the
     99         # appropriate element
--> 100         _convert_to_num(dtype, value)
    101         return cls(dtype, shape, value)
    102 

/opt/conda/lib/python3.7/site-packages/redisai/client.py in _convert_to_num(dt, arr)
     58                 arr[ix] = float(obj)
     59             else:
---> 60                 arr[ix] = int(obj)
     61 
     62 

ValueError: invalid literal for int() with base 10: b'0.88191938400268555'
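
For reference, the failing conversion is reproducible standalone: the client's _convert_to_num applies int() to the returned bytes for this tensor's dtype, but the value is a float literal:

val = b'0.88191938400268555'
# int(val) raises the ValueError shown above; parsing as float works:
print(float(val))  # 0.8819193840026855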

model get method returns incorrect backend type

I am working with a YOLO model. I was able to set and run the model with:

import redisai as rai
from redisai import Client

con = Client()
with open('yolo.pb', 'rb') as f:
    model = f.read()
con.modelset(
    'yolomodel', rai.Backend.tf, rai.Device.cpu, model,
    input=['input_1', 'input_image_shape'],
    output=['concat_11', 'concat_12', 'concat_13'])
con.modelrun(
    'yolomodel',
    input=['normalized_image', 'input_shape'],
    output=['boxes', 'scores', 'classes'])

After this, when I try to simply get the model with the following command, it returns an error:

con = Client()
model = con.modelget('yolomodel')
con.modelrun(
     'yolomodel',
     input=['normalized_image', 'input_shape'],
     output=['boxes', 'scores', 'classes'])

Error is as follows:

ValueError: 0 is not a valid Backend

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File ".\redis_client.py", line 19, in <module>
    model = con.modelget('yolomodel')
  File "C:\Users\212767749\redisai_example\redisai_example\lib\site-packages\redisai\client.py", line 197, in modelget
    'backend': Backend(rv[0]),
  File "C:\Users\212767749\redisai_example\redisai_example\lib\enum.py", line 310, in __call__
    return cls.__new__(cls, value)
  File "C:\Users\212767749\redisai_example\redisai_example\lib\enum.py", line 564, in __new__
    raise exc
  File "C:\Users\212767749\redisai_example\redisai_example\lib\enum.py", line 548, in __new__
    result = cls._missing_(value)
  File "C:\Users\212767749\redisai_example\redisai_example\lib\enum.py", line 577, in _missing_
    raise ValueError("%r is not a valid %s" % (value, cls.__name__))
ValueError: 0 is not a valid Backend

However, from redis-cli I can see that the model exists in Redis (EXISTS yolomodel), and I can retrieve the model on the CLI using AI.MODELGET yolomodel.

How can I resolve this error?
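
One workaround worth trying (a sketch): since AI.MODELGET works from redis-cli, the raw command can be issued from Python as well, bypassing the client's Backend enum parsing:

from redisai import Client

con = Client()
# The raw reply is returned as-is (a list of bytes fields); no enum conversion is applied.
raw = con.execute_command('AI.MODELGET', 'yolomodel')
print(raw)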

Smoothing the tensor API

This issue is the outcome of a discussion with @lantiga about smoothing the API to make it more intuitive. Here are my thoughts on the design changes after going through the discussion points.

  • tensorset should accept a numpy array, scalar values, or a Python list as input. Having users wrap them in Tensor seems like an avoidable step. We implicitly convert to a scalar tensor or BlobTensor, but the user doesn't have to know about it at all.
data = [1, 2, 3]
con.tensorset('a', np.array(data))
con.tensorset('b', data, shape=(1, 1, 3), dtype=np.float)
# dtype can be a string perhaps??
con.tensorset('c', data, shape=(1, 1, 3), dtype='float')
  • tensorget could take a keyword argument that decides whether it's a META call, a VALUE call, or a BLOB call. So the user would never get a redisai Tensor back:
meta = con.tensorget('name', meta_only=True)
data = con.tensorget('name')  # data -> np array
data = con.tensorget('name', as_type='VALUE')  # data -> python list

Stacking tensors would cause unexpected behaviour

@mnunberg Do you think we need to expect more than one tensor in the BlobTensor __init__ and stack them together while setting up the tensor?
One problem I can see is with the else case: if I pass a two-dimensional tensor and my model expects the same, my model will crash, because size=1 in the snippet below will add another dimension to my input while setting the tensor.

if len(blobs) > 1:
    blobarr = bytearray()
    for b in blobs:
        if isinstance(b, BlobTensor):
            b = b.value[0]
        blobarr += b
    size = len(blobs)
    blobs = bytes(blobarr)
else:
    blobs = bytes(blobs[0])
    size = 1

Is there a proper way to retrieve a tensor from the cache?

I'm trying to make federated averaging work inside RedisAI itself. The biggest problem seems to be that none of the simplest operations required to perform such an algorithm - i.e. creating an optimizer, making it load a precise state, and doing a step for each round - seems to be working, due to some pathetic TorchScript compilation issue which I clearly do not properly understand. Isn't there a way to avoid passing a TorchScript to the cache from a string? Also, where can I find more documentation on what TorchScript can or cannot do? The official PyTorch documentation would be more useful if it were nonexistent. Also, is there any documentation related to the redis.execute() API? I'm opening this issue after 2 weeks of pure frustration. Sorry about the thousand questions.
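
On the "passing a TorchScript from a string" question, a sketch: the scriptset(name, device, script) signature quoted elsewhere on this page takes the script source as a string, so the TorchScript can live in its own file and be read at load time. The file name here is hypothetical, and the device value may need to be the client's Device enum depending on your version:

from redisai import Client

con = Client()

# Keep the TorchScript source in a separate file instead of an inline string.
with open('myscript.py') as f:
    script_source = f.read()

con.scriptset('myscript', 'cpu', script_source)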

DAGRUN is not working with RedisAI 1.2.1

Hi,

Could you give me a hand with solving this puzzle?

The following shell command executes successfully:

AI.DAGRUN LOAD 1 "model_2_2021-03-02_input" PERSIST 1 predictions |> AI.MODELRUN "model_2_2021-03-02" INPUTS "model_2_2021-03-02_input" OUTPUTS predictions |> AI.TENSORGET predictions VALUES

But if I execute it the redisai-py way:

dag = redisclient.dag(load=['model_2_2021-03-02_input'], persist=['predictions'])
dag.modelrun('model_2_2021-03-02', inputs=['model_2_2021-03-02_input'], outputs=['predictions'])
predictions = dag.tensorget('predictions').run()

It fails with the result:

ResponseError: Invalid DAGRUN command

Failing to store models in container

Hi everyone, I'm testing RedisAI within a simple test environment and I'm just trying to do very basic stuff for now.

import torch
import redisai

# network: an existing torch.nn.Module
scripted_network = torch.jit.script(network)
scripted_network.save("my_model.pt")
binary_model = open("./my_model.pt", "rb").read()

client = redisai.Client(host="0.0.0.0", port=6379)
client.modelstore(
    key="new_model",
    backend="torch",
    device="cpu",
    data=binary_model,
    tag="1.0:latest"
)
print(client.modelscan())

But whenever I run this simple script, I get:

Traceback (most recent call last):
  File "/home/webbelle/test/redis_test.py", line 15, in <module>
    client.modelstore(
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redisai/client.py", line 247, in modelstore
    chunk_size = self.config('MODEL_CHUNK_SIZE')
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redisai/client.py", line 183, in config
    res = self.execute_command(args)
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/client.py", line 1269, in execute_command
    return conn.retry.call_with_retry(
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/retry.py", line 46, in call_with_retry
    return do()
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/client.py", line 1270, in <lambda>
    lambda: self._send_command_parse_response(
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/client.py", line 1246, in _send_command_parse_response
    return self.parse_response(conn, command_name, **options)
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/client.py", line 1286, in parse_response
    response = connection.read_response()
  File "/home/webbelle/test/testenv/lib/python3.10/site-packages/redis/connection.py", line 905, in read_response
    raise response
redis.exceptions.ResponseError: unsupported subcommand

and, de facto, the model is not there when I query the keys from the container's CLI.
I'm using Python 3.10.12 and running the latest RedisAI image through Docker.
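
A possible explanation (not confirmed in the thread): modelstore first queries AI.CONFIG for MODEL_CHUNK_SIZE (visible in the traceback), so the "unsupported subcommand" error comes from that call rather than from storing the model itself; note also that the traceback shows self.execute_command(args) being passed a list instead of unpacked arguments. A hedged workaround is to bypass the client's chunking logic and issue the raw command (AI.MODELSTORE syntax per the RedisAI command reference):

# Hedged workaround: store the model with the raw command, skipping the
# client's MODEL_CHUNK_SIZE lookup that fails above.
client.execute_command(
    'AI.MODELSTORE', 'new_model', 'TORCH', 'CPU',
    'TAG', '1.0:latest', 'BLOB', binary_model,
)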

Response error for PyTorch

import redisai as rai
import torch
import torchvision
import ml2rt

conn = rai.Client(host='redis', port=6379)
device = 'cpu'  # `device` was undefined in the original snippet

vgg16 = torchvision.models.vgg16(pretrained=True)
vgg16_no_top = torch.nn.Sequential(*(list(vgg16.children())[:-3]))
torch.save(vgg16_no_top, 'vgg16_no_top.pt')
vgg16_no_top = ml2rt.load_model('vgg16_no_top.pt')
conn.modelset(
    'vgg16',
    'torch',
    device,
    vgg16_no_top
)

throws

vgg16_no_top
  File "/usr/local/lib/python3.7/site-packages/redisai/client.py", line 228, in modelset
    return self.execute_command(*args).decode()
  File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 878, in execute_command
    return self.parse_response(conn, command_name, **options)
  File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 892, in parse_response
    response = connection.read_response()
  File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 752, in read_response
    raise response
redis.exceptions.ResponseError: [enforce fail at inline_container.cc:208] . file not found: archive/constants.pkl
frame #0: c10::ThrowEnforceNotMet(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*) + 0x67 (0x7fa9d4010787 in /usr/lib/redis/modules/backends/redisai_torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::getRecordID(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xd6 (0x7fa9c6f84376 in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #2: caffe2::serialize::PyTorchStreamReader::getRecord(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x38 (0x7fa9c6f85018 in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #3: torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&) + 0xda (0x7fa9c80083aa in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x2f3bc9d (0x7fa9c8008c9d in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x2f3e26f (0x7fa9c800b26f in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #6: torch::jit::load(std::unique_ptr<caffe2::serialize::ReadAdapterInterface, std::default_delete<caffe2::serialize::ReadAdapterInterface> >, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x179 (0x7fa9c800bbf9 in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #7: torch::jit::load(std::istream&, c10::optional<c10::Device>, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >&) + 0x75 (0x7fa9c800c3f5 in /usr/lib/redis/modules/backends/redisai_torch/lib/libtorch_cpu.so)
frame #8: torchLoadModel + 0x215 (0x7fa9d444b1e5 in /usr/lib/redis/modules/backends/redisai_torch/redisai_torch.so)
frame #9: RAI_ModelCreateTorch + 0x8a (0x7fa9d444408a in /usr/lib/redis/modules/backends/redisai_torch/redisai_torch.so)
frame #10: RAI_ModelCreate + 0x16d (0x7fa9dcbebd2d in /usr/lib/redis/modules/redisai.so)
frame #11: RedisAI_ModelSet_RedisCommand + 0x6ea (0x7fa9dcbe452a in /usr/lib/redis/modules/redisai.so)
frame #12: RedisModuleCommandDispatcher + 0x54 (0x561d6f034ca4 in redis-server *:6379)
frame #13: call + 0x9d (0x561d6efc0f0d in redis-server *:6379)
frame #14: processCommand + 0x327 (0x561d6efc1687 in redis-server *:6379)
frame #15: processCommandAndResetClient + 0x10 (0x561d6efcf280 in redis-server *:6379)
frame #16: processInputBuffer + 0x18f (0x561d6efd37cf in redis-server *:6379)
frame #17: <unknown function> + 0xd4b4c (0x561d6f050b4c in redis-server *:6379)
frame #18: aeProcessEvents + 0x111 (0x561d6efbaa21 in redis-server *:6379)
frame #19: aeMain + 0x2b (0x561d6efbaeab in redis-server *:6379)
frame #20: main + 0x4db (0x561d6efb77eb in redis-server *:6379)
frame #21: __libc_start_main + 0xeb (0x7fa9dd44609b in /lib/x86_64-linux-gnu/libc.so.6)
frame #22: _start + 0x2a (0x561d6efb7a7a in redis-server *:6379)
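
A likely cause, consistent with the "archive/constants.pkl" enforce failure: torch.save writes an eager-mode pickle, while the stack trace shows RedisAI loading the blob with torch::jit::load, which expects a TorchScript archive. A sketch of exporting via TorchScript instead (the module selection here is illustrative):

import torch
import torchvision

# For illustration, export the convolutional part of VGG16; pick whatever
# sub-network you actually want to serve.
model = torchvision.models.vgg16(pretrained=True).features.eval()
example = torch.rand(1, 3, 224, 224)

# Trace to TorchScript and save with torch.jit.save, not torch.save.
traced = torch.jit.trace(model, example)
torch.jit.save(traced, 'vgg16_no_top.pt')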

Client Authentication Timing Out

The Client connection times out when trying to perform a tensorset/tensorget after connecting to a remote server that requires a password. It returns a connection object and does not complain about a password being set.

The Client has host, port, and password all passed as arguments but still cannot perform any actions.

Client<ConnectionPool<Connection<host=XXX.XXX.XXX.XXX,port=6370,db=0>>>

DAG.run() throws if response error by any element

DAG.run relies on a pre-created list for post-processing. The post-processing functions don't account for a possible error response from the server, and end up applying the post-processing to the error object.

Bulk get tensors

According to the documentation, currently (with the latest release) there is no way to bulk-get tensor objects, for instance by a given keyspace. If one wants to fetch, let's say, n tensors, what is the current best solution for doing so efficiently (besides the "usual" one of looping over all the tensor keys and fetching each one individually)? Is there a way to achieve better performance here?

As a reference, I would like to point out the official RedisGears documentation and the option to use the redisAI module there. One can see that the functional API of the redisAI module contains (among many others) the following method: mgetTensorsFromKeyspace(tensors: List[str]) -> List[PyTensor]. Is there a way to achieve something like this with redisai-py? If not, what are the alternatives (besides the one mentioned above)?

Thanks in advance!
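
One option worth trying in the meantime (a sketch, not an official bulk API): queue several TENSORGETs in a single DAG so they travel in one round trip. Whether run() returns one entry per queued command should be verified against your client version:

from redisai import Client

con = Client()

# Batch several reads into one server round trip.
dag = con.dag(load=['t1', 't2', 't3'])
dag.tensorget('t1')
dag.tensorget('t2')
dag.tensorget('t3')
results = dag.run()  # assumed: one result per queued command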

Crucial dependencies are missing - can't use redisai module

Hi everyone

It looks like numpy and six are missing as dependencies in setup.py. If I pip install redisai in a completely new virtualenv, it is not possible to use redisai in my Python projects. The following error occurs:

Python 3.7.5 (default, Oct 17 2019, 12:21:00)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import redisai
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/__init__.py", line 2, in <module>
    from .client import Client
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/client.py", line 10, in <module>
    from .utils import str_or_strsequence, to_string
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/utils.py", line 1, in <module>
    import six
ModuleNotFoundError: No module named 'six'
>>>

In the utils.py module, six is imported on line 1.

Running pip install six does not solve the issue on its own:

Collecting six
Downloading
Installing collected packages: six
Successfully installed six-1.13.0

Python 3.7.5 (default, Oct 17 2019, 12:21:00)
[GCC 8.3.1 20190223 (Red Hat 8.3.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import redisai
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/__init__.py", line 2, in <module>
    from .client import Client
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/client.py", line 11, in <module>
    from .tensor import Tensor, BlobTensor
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/tensor.py", line 62, in <module>
    class BlobTensor(Tensor):
  File "/home/user/python_project/test_env/lib/python3.7/site-packages/redisai/tensor.py", line 102, in BlobTensor
    def to_numpy(self) -> np.array:
AttributeError: 'NoneType' object has no attribute 'array'

It is necessary to install numpy as well for it to work.

The root cause lies in the tensor.py module. On line 102 you have the following method:

def to_numpy(self) -> np.array:
    a = np.frombuffer(self.value[0], dtype=self._to_numpy_type(self.type))
    return a.reshape(self.shape)

If numpy has not been installed, line 8 of the tensor.py module sets np to None:

try:
    import numpy as np
except (ImportError, ModuleNotFoundError):
    np = None

This cannot work if you annotate the return type of the to_numpy method as np.array.

To fix this issue, I would recommend adding six and numpy to the dependencies in setup.py on line 24:

    install_requires=['redis', 'hiredis', 'rmtest', 'six', 'numpy']

Looking forward to helping you fix this issue 😊

Best regards,
Sandro

Issues with FLOAT tensors

import numpy as np

# images: an existing list or array of image data; rai: a redisai Client instance
images = np.array(images).astype(np.float)
print(images.dtype)

rai.tensorset('images', images)
print(rai.tensorget('images', meta_only=True))

returns

float64
{'dtype': 'DOUBLE', 'shape': [1, 224, 224, 3]}
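
This looks consistent with numpy semantics rather than a client bug: np.float is an alias for Python's float, i.e. float64, which maps to RedisAI's DOUBLE. Casting to float32 should yield a FLOAT tensor (a sketch):

images = np.array(images).astype(np.float32)  # float32 maps to RedisAI 'FLOAT'
rai.tensorset('images', images)
print(rai.tensorget('images', meta_only=True))  # expect {'dtype': 'FLOAT', ...}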

Dimension mismatch after tensorset

Hi,

I see a dimension mismatch when retrieving the image back using tensorget immediately after tensorset.

Steps to reproduce.

import numpy as np
import redisai as rai
from PIL import Image

con = rai.Client()

new_shape = 416
image_path = './demo_data/dog.jpg'
pil_image = Image.open(image_path)
numpy_img = np.array(pil_image, dtype='float32')
print('raw image shape and dtype before pre-processing', numpy_img.shape, numpy_img.dtype)

# letter_box: the author's helper; reshapes and expands the dimension on axis 0
image = letter_box(numpy_img, new_shape)
print('shape and dtype before creating Blobtensor ', image.shape, image.dtype)

image = rai.BlobTensor.from_numpy(image)
print('shape and dtype after creating Blobtensor :', image.shape)

con.tensorset('image', image)
img = con.tensorget('image', as_type=rai.BlobTensor).to_numpy()
print('shape and dtype when retrieving using tensorget  :', img.shape, img.dtype)

Output

raw image shape and dtype before pre-processing (576, 768, 3) float32
shape and dtype before creating Blobtensor  (1, 416, 416, 3) float32
shape and dtype after creating Blobtensor : (1, 416, 416, 3)
shape and dtype when retrieving using tensorget  : (1, 1, 416, 416, 3) float32

There is a dimension mismatch when retrieving the image using tensorget immediately after tensorset.

Thanks,
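
A hedged workaround: the extra leading dimension appears to come from BlobTensor's stacking behaviour (see the "Stacking tensors would cause unexpected behaviour" issue above). With client versions that accept numpy arrays directly, skipping the BlobTensor wrapper avoids the re-wrap:

# Pass the numpy array straight to tensorset (supported by newer redisai-py
# versions), avoiding BlobTensor's size-based reshaping.
con.tensorset('image', image)   # image: (1, 416, 416, 3) float32 numpy array
img = con.tensorget('image')    # returned as a numpy array
print(img.shape)                # expect (1, 416, 416, 3)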

Model exporting through python client

Utilities for exporting TF and PyTorch models.

  • For exporting TF models:
    • The utility function should accept a session object and output variable names as a list of strings.
    • Fetching the output tensor name is difficult for non-TF users. Our function should fetch the last couple of tensors and give suggestions. We should have a utility for fetching placeholder names.
  • For exporting PyTorch models (a sketch follows this list):
    • Exporting with tracing is easily achievable, but we should throw proper error messages.
    • Exporting through scripting is on the user, since we won't have control over it.
    • So the utility function should accept a traced model, a traceable model, or a scripted model, plus an output file path.

@lantiga What do you think?
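
A minimal sketch of the PyTorch side (the function name and exact behaviour are hypothetical, not a shipped utility):

import torch

def export_torch_model(model, path, sample_input=None):
    # Accept a scripted/traced model as-is; trace an eager model if a
    # sample input is provided; otherwise fail with a clear message.
    if not isinstance(model, torch.jit.ScriptModule):
        if sample_input is None:
            raise ValueError('eager model given: pass sample_input so it can be traced')
        model = torch.jit.trace(model, sample_input)
    torch.jit.save(model, path)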

sklearn onnx - expected 2 outputs but got 1

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

from skl2onnx import convert_sklearn, to_onnx
from skl2onnx.common.data_types import FloatTensorType
import ml2rt, redisai
import numpy as np

rai = redisai.Client(host='127.0.0.1', port=9736)
device = 'CPU'

def make_model():
    iris = load_iris()
    X, y = iris.data, iris.target
    X_train, X_test, y_train, y_test = train_test_split(X, y)
    clf = RandomForestClassifier()
    clf.fit(X_train, y_train)
    return clf, X_train[:1]

def to_onnx_save(clf, sample):
    initial_type = [('float_input', FloatTensorType([None, 4]))]
    onx = convert_sklearn(clf, initial_types=initial_type)
    # onx = to_onnx(clf, sample)
    with open("rf_iris.onnx", "wb") as f:
        f.write(onx.SerializeToString())

def load_to_redisai():
    clf= ml2rt.load_model("rf_iris.onnx")
    rai.modelset(
        'rf_iris',
        'ONNX',
        device,
        clf
    )

if __name__ == "__main__":
    clf, sample = make_model()

    # returns [one_output]
    print(clf.predict(sample))
    to_onnx_save(clf, sample)
    load_to_redisai()
    rai.tensorset("input", sample.astype(np.float32), dtype='float32', shape=sample.shape)

    # but requires two outputs here, otherwise throws expected 2 outputs but got 1
    rai.modelrun("rf_iris", ["input"], ["output1", "output2"])
    outtensor = rai.tensorget("output1")
    print(outtensor)

    # output2 key is empty but required in modelrun
    outtensor = rai.tensorget("output2")
    print(outtensor)
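
For context: skl2onnx converts classifiers with two outputs by default, a label tensor and a ZipMap of class probabilities, which is why AI.MODELRUN expects two output keys. If a plain probability tensor is preferred, the ZipMap wrapper can be disabled at conversion time:

# Disable the ZipMap output so probabilities come back as a plain tensor;
# the model still has two outputs (label + probabilities), but both are
# now ordinary tensors usable with RedisAI.
onx = convert_sklearn(
    clf,
    initial_types=initial_type,
    options={id(clf): {'zipmap': False}},
)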
