Comments (7)
Can you add torch._check(otf.numel() != 0)
before the roll call? Or can it potentially be zero?
from pytorch.
I added it, and otf is non-empty. The problem is really in the fft function now.
Here is a simple example to reproduce easily. Using this module:
import torch

class Test(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        x = torch.fft.fftn(x, dim=(-2, -1))
        return x
And exporting it this way:
batch_size = 1  # any batch size reproduces the error
x = torch.randn(batch_size, 3, 224, 224)
test = Test()
t_out = test(x)
onnx_program_test = torch.onnx.dynamo_export(test, x, export_options=torch.onnx.ExportOptions(dynamic_shapes=True))
It throws the error:
C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py:136: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] fake tensor raised TypeError
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] Traceback (most recent call last):
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 896, in __torch_dispatch__
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return self.dispatch(func, types, args, kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1241, in dispatch
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return self._cached_dispatch_impl(func, types, args, kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 974, in _cached_dispatch_impl
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] output = self._dispatch_impl(func, types, args, kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1458, in _dispatch_impl
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] r = func(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return self_._op(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] result = fn(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 285, in meta_fft_c2c
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] output = _exec_fft(output, self, out_sizes, sorted_dims, forward)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 243, in _exec_fft
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] input = self.permute(dim_permute)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return fn(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 587, in __torch_dispatch__
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return func(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] return self_._op(*args, **kwargs)
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] TypeError: Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented:
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898]
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] - tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'>
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898]
E0611 12:07:16.032000 52872 torch\_subclasses\fake_tensor.py:898] For more information, try re-running with TORCH_LOGS=not_implemented
Traceback (most recent call last):
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1428, in dynamo_export
).export()
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1171, in export
graph_module = self.options.fx_tracer.generate_fx(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 232, in generate_fx
return self.pre_export_passes(options, model, graph_module, updated_model_args) # type: ignore[return-value]
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 242, in pre_export_passes
return exporter.common_pre_export_passes(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1467, in common_pre_export_passes
module = passes.Functionalize(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\decorator.py", line 151, in wrapper
ctx.log_and_raise_if_error(diag)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\context.py", line 366, in log_and_raise_if_error
raise diagnostic.source_exception
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\decorator.py", line 135, in wrapper
return_values = fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\_pass.py", line 275, in run
module = self._run(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\functionalization.py", line 123, in _run
graph_module = proxy_tensor.make_fx(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 1081, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\external_utils.py", line 36, in inner
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 541, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\external_utils.py", line 36, in inner
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\_symbolic_trace.py", line 793, in trace
(self.create_arg(fn(*args)),),
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 559, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\functionalization.py", line 86, in wrapped
out = function(*inputs_functional)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\_utils.py", line 30, in wrapped
return torch.fx.Interpreter(graph_module).run(*args)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 145, in run
self.env[node] = self.run_node(node)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 202, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 274, in call_function
return target(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 638, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 673, in inner_torch_dispatch
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 413, in proxy_call
out = func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 896, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1241, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 974, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1458, in _dispatch_impl
r = func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
result = fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 285, in meta_fft_c2c
output = _exec_fft(output, self, out_sizes, sorted_dims, forward)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 243, in _exec_fft
input = self.permute(dim_permute)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 587, in __torch_dispatch__
return func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
TypeError: Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented:
- tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'>
For more information, try re-running with TORCH_LOGS=not_implemented
While executing %_fft_c2c : [num_users=1] = call_function[target=torch.ops.aten._fft_c2c.default](args = (%_to_copy, [2, 3], 0, True), kwargs = {})
Original traceback:
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\export_usrnet_onnx.py", line 28, in forward
x = torch.fft.fftn(x, dim=(-2,-1))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\export_usrnet_onnx.py", line 120, in <module>
main()
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\export_usrnet_onnx.py", line 71, in main
onnx_program_test = torch.onnx.dynamo_export(test, x, export_options=torch.onnx.ExportOptions(dynamic_shapes=True))
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1439, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
If I don't use dynamic shapes then there is no problem.
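For reference, a minimal static-shape sketch of the same module; the export line is left commented out because dynamo_export needs the onnxscript package at runtime:

```python
import torch

class Test(torch.nn.Module):
    def forward(self, x):
        return torch.fft.fftn(x, dim=(-2, -1))

x = torch.randn(1, 3, 224, 224)
out = Test()(x)  # eager FFT works; output is complex with the same shape
assert out.shape == (1, 3, 224, 224)
assert out.dtype == torch.complex64

# With static shapes (dynamic_shapes defaults to False) the export succeeds:
# onnx_program = torch.onnx.dynamo_export(Test(), x)
```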
Also, when using static shapes, the USRNet module throws other exceptions like this one:
torch.onnx._internal.diagnostics.infra.context.RuntimeErrorWithDiagnostic: Unsupported FX nodes: {'call_function': ['aten._conj.default']}.
which is associated with a torch.conj(x) call. However, the supported-operators list for TorchScript says aten::_conj has been supported since opset 9 (link). This can be easily fixed by using x.imag.mul_(-1) instead; I don't know why it throws an error when using torch.conj().
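The workaround relies on conjugation being just a sign flip on the imaginary part. An out-of-place sketch (the helper name is mine, not from the codebase):

```python
import torch

def conj_workaround(x: torch.Tensor) -> torch.Tensor:
    # Rebuild the complex tensor with the imaginary part negated;
    # this is mathematically identical to torch.conj(x).
    return torch.complex(x.real, -x.imag)

x = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j])
assert torch.equal(conj_workaround(x), torch.conj(x))
```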
For the FFT problem, could you run with TORCH_LOGS=not_implemented and post the full logs?
For the onnx conj problem, I don't know, perhaps @xadupre can look into it.
Here are the full logs with TORCH_LOGS=not_implemented:
C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py:136: UserWarning: torch.onnx.dynamo_export only implements opset version 18 for now. If you need to use a different opset version, please register them with register_custom_op.
warnings.warn(
V0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:579] [__not_implemented] FakeTensor mode already active: <torch._subclasses.fake_tensor.FakeTensorMode object at 0x000001A6ABCFC310> in <torch._subclasses.fake_tensor.FakeTensorMode object at 0x000001A6ABCFC310>
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] fake tensor raised TypeError
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] Traceback (most recent call last):
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 896, in __torch_dispatch__
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return self.dispatch(func, types, args, kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1241, in dispatch
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return self._cached_dispatch_impl(func, types, args, kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 974, in _cached_dispatch_impl
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] output = self._dispatch_impl(func, types, args, kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1458, in _dispatch_impl
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] r = func(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return self_._op(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] result = fn(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 285, in meta_fft_c2c
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] output = _exec_fft(output, self, out_sizes, sorted_dims, forward)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 243, in _exec_fft
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] input = self.permute(dim_permute)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return fn(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 587, in __torch_dispatch__
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return func(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] return self_._op(*args, **kwargs)
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] TypeError: Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented:
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898]
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] - tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'>
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898]
E0612 12:00:59.308000 10588 torch\_subclasses\fake_tensor.py:898] For more information, try re-running with TORCH_LOGS=not_implemented
Traceback (most recent call last):
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1428, in dynamo_export
).export()
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1171, in export
graph_module = self.options.fx_tracer.generate_fx(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 232, in generate_fx
return self.pre_export_passes(options, model, graph_module, updated_model_args) # type: ignore[return-value]
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\dynamo_graph_extractor.py", line 242, in pre_export_passes
return exporter.common_pre_export_passes(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1467, in common_pre_export_passes
module = passes.Functionalize(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\decorator.py", line 151, in wrapper
ctx.log_and_raise_if_error(diag)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\context.py", line 366, in log_and_raise_if_error
raise diagnostic.source_exception
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\diagnostics\infra\decorator.py", line 135, in wrapper
return_values = fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\_pass.py", line 275, in run
module = self._run(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\functionalization.py", line 123, in _run
graph_module = proxy_tensor.make_fx(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 1081, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer, pre_dispatch), tracer=fx_tracer, concrete_args=tuple(phs))
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_compile.py", line 24, in inner
return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\external_utils.py", line 36, in inner
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 541, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_dynamo\external_utils.py", line 36, in inner
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\_symbolic_trace.py", line 793, in trace
(self.create_arg(fn(*args)),),
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 559, in wrapped
out = f(*tensors)
File "<string>", line 1, in <lambda>
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\functionalization.py", line 86, in wrapped
out = function(*inputs_functional)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\fx\passes\_utils.py", line 30, in wrapped
return torch.fx.Interpreter(graph_module).run(*args)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 145, in run
self.env[node] = self.run_node(node)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 202, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\interpreter.py", line 274, in call_function
return target(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 638, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 673, in inner_torch_dispatch
return proxy_call(self, func, self.pre_dispatch, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\fx\experimental\proxy_tensor.py", line 413, in proxy_call
out = func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 896, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1241, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 974, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 1458, in _dispatch_impl
r = func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_prims_common\wrappers.py", line 252, in _fn
result = fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 285, in meta_fft_c2c
output = _exec_fft(output, self, out_sizes, sorted_dims, forward)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_meta_registrations.py", line 243, in _exec_fft
input = self.permute(dim_permute)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\utils\_stats.py", line 20, in wrapper
return fn(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_subclasses\fake_tensor.py", line 587, in __torch_dispatch__
return func(*args, **kwargs)
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\_ops.py", line 594, in __call__
return self_._op(*args, **kwargs)
TypeError: Multiple dispatch failed for 'torch.ops.aten.size'; all __torch_dispatch__ handlers returned NotImplemented:
- tensor subclass <class 'torch._subclasses.fake_tensor.FakeTensor'>
For more information, try re-running with TORCH_LOGS=not_implemented
While executing %_fft_c2c : [num_users=9] = call_function[target=torch.ops.aten._fft_c2c.default](args = (%_to_copy, [2, 3], 0, True), kwargs = {})
Original traceback:
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\network_usrnet_v2.py", line 195, in forward
FB = torch.fft.fftn(otf, dim=(-2,-1))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\export_usrnet_onnx.py", line 99, in <module>
main()
File "C:\Users\joseperezcano\Desktop\Project2\Deblurring\mvga\ocr_improvement\export_usrnet_onnx.py", line 62, in main
onnx_program = torch.onnx.dynamo_export(
File "C:\Users\joseperezcano\miniconda3\envs\deblur\lib\site-packages\torch\onnx\_internal\exporter.py", line 1439, in dynamo_export
raise OnnxExporterError(
torch.onnx.OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools. SARIF logs can be loaded in VS Code SARIF viewer extension, or SARIF web viewer (https://microsoft.github.io/sarif-web-component/). Please report a bug on PyTorch Github: https://github.com/pytorch/pytorch/issues
Also, the conj problem likewise affects the .repeat() and .clone() calls and many more. There is already an issue about other operators failing: #126972, maybe you can give it a look.
ONNX supports complex types, but onnxruntime does not implement them. As a result, the converter chose to convert rfft and irfft by using a real tensor with an extra dimension, but PyTorch and ONNX disagree on the result type for every operator that manipulates complex values. We'll need more time to address that issue.
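The real-tensor-with-an-extra-dimension representation described above can be sketched with PyTorch's own view_as_real/view_as_complex, which store (real, imag) pairs in a trailing dimension of size 2:

```python
import torch

x = torch.randn(4, dtype=torch.complex64)
r = torch.view_as_real(x)  # shape (4, 2): last dim holds (real, imag)
assert r.shape == (4, 2)
assert r.dtype == torch.float32
# The round trip recovers the original complex tensor
assert torch.equal(torch.view_as_complex(r), x)
```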