Thank you for creating this crate.

I hit a STATUS_ACCESS_VIOLATION crash when running inference with the TensorRT execution provider. It reproduces both in my own code and in the example code; logs from both runs are below.
2023-04-15T15:47:24.013752Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Flush-to-zero and denormal-as-zero are off
2023-04-15T15:47:24.014425Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Creating and using per session threadpools since use_per_session_threads_ is true
2023-04-15T15:47:24.015015Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Dynamic block base set to 0
2023-04-15T15:47:24.202936Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Initializing session.
2023-04-15T15:47:24.203637Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Creating BFCArena for Cuda with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2023-04-15T15:47:24.204585Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Creating BFCArena for CudaPinned with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2023-04-15T15:47:24.205679Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Creating BFCArena for CUDA_CPU with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2023-04-15T15:47:24.206431Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Allocator already registered for OrtMemoryInfo:[name:Cuda id:0 OrtMemType:0 OrtAllocatorType:1 Device:[DeviceType:1 MemoryType:0 DeviceId:0]]. Ignoring allocator from CUDAExecutionProvider
2023-04-15T15:47:24.207074Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Allocator already registered for OrtMemoryInfo:[name:CudaPinned id:0 OrtMemType:-1 OrtAllocatorType:1 Device:[DeviceType:0 MemoryType:1 DeviceId:0]]. Ignoring allocator from CUDAExecutionProvider
2023-04-15T15:47:24.207709Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Allocator already registered for OrtMemoryInfo:[name:CUDA_CPU id:0 OrtMemType:-2 OrtAllocatorType:1 Device:[DeviceType:0 MemoryType:0 DeviceId:0]]. Ignoring allocator from CUDAExecutionProvider
2023-04-15T15:47:24.211005Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Total shared scalar initializer count: 8
2023-04-15T15:47:24.215380Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Total fused reshape node count: 0
2023-04-15T15:47:24.217219Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Total shared scalar initializer count: 0
2023-04-15T15:47:24.218852Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: Total fused reshape node count: 0
2023-04-15T15:47:24.219866Z DEBUG run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: [TensorRT EP] Model name is yolov5s_half.onnx
2023-04-15T15:47:25.570221Z INFO run{args=Args { model_name: ".\\yolov5s_half.onnx", img_name: ".\\bus.jpg", device: AUTO, opt_level: 1, half: true, conf_thresh: 0.2, score_thresh: 0.2, nms_thresh: 0.45, benchmark: false } warm_up=false}:init{model_file=".\\yolov5s_half.onnx" device=AUTO opt_level=1}: ort: [2023-04-15 15:47:25 WARNING] hDebInfo\_deps\onnx_tensorrt-src\onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
error: process didn't exit successfully: `target\debug\yolov5_onnx.exe -m .\yolov5s_half.onnx -i .\bus.jpg --half` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)
The same crash also occurs with the crate's GPT-2 example:
2023-04-15T16:28:10.434694Z DEBUG ort::environment: Environment not yet initialized, creating a new one
2023-04-15T16:28:10.457540Z DEBUG ort::environment: Environment created env_ptr="0x22ad8786b70"
2023-04-15T16:28:10.458777Z INFO download_to{self=SessionBuilder { env: "GPT-2", allocator: Device, memory_type: Default } url="https://github.com/onnx/models/raw/main/text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx" download_dir="I:\\ort_test\\ort"}: ort::session: Model already exists, skipping download model_filepath="I:\\ort_test\\ort\\gpt2-lm-head-10.onnx"
2023-04-15T16:28:10.459654Z INFO apply_execution_providers: ort::execution_providers: TensorRT execution provider registered successfully
2023-04-15T16:28:10.652673Z INFO apply_execution_providers: ort::execution_providers: TensorRT execution provider registered successfully
2023-04-15T16:28:10.653144Z INFO apply_execution_providers: ort::execution_providers: TensorRT execution provider registered successfully
2023-04-15T16:28:17.079889Z INFO ort: [2023-04-15 16:28:17 WARNING] hDebInfo\_deps\onnx_tensorrt-src\onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
error: process didn't exit successfully: `target\debug\examples\gpt.exe` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)