aiinfer's Issues
Segmentation fault (core dumped)
Hi, I'd like to ask: what causes this error at the end of FP16 segmentation inference?
Engine generation and everything before it works fine ... the output dimensions were transposed as well.
./infer_seg -f weights/yolov8n-seg.engine -i res/bus.jpg -o cuda_res
***** Display run Config: start *****
model path set to: weights/yolov8n-seg.engine
image path set to: res/bus.jpg
batch size set to: 1
score threshold set to: 0.4
device id set to: 0
loop count set to: 10
num of warmup runs set to: 2
output directory set to: cuda_res
***** Display run Config: end *****
[trt_infer.cpp:147]: Infer 0x5605cfcd2a90 [StaticShape]
[trt_infer.cpp:160]: Inputs: 1
[trt_infer.cpp:165]: 0.images : shape {1x3x640x640}
[trt_infer.cpp:168]: Outputs: 2
[trt_infer.cpp:173]: 0.output1 : shape {1x32x160x160}
[trt_infer.cpp:173]: 1.output0 : shape {1x8400x116}
[Batch=1, iters=10,run infer mean time:]: 1.80176 ms
Segmentation fault (core dumped)
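The crash comes after the timed inference loop, which points at postprocessing rather than the engine itself. One common pitfall with YOLOv8-seg engines is the detection-head layout: this engine reports `output0` as {1x8400x116} (anchors first), while many decode routines assume {1x116x8400}, and reading with the wrong layout walks far out of bounds. A minimal NumPy sketch of normalizing the layout before decoding, assuming the 116 channels are 4 box coordinates + 80 class scores + 32 mask coefficients (the standard split for an 80-class yolov8-seg model):

```python
import numpy as np

def split_seg_output(output0: np.ndarray):
    """Normalize a YOLOv8-seg detection head to (channels, anchors) and split it.

    Accepts either {1, 8400, 116} or {1, 116, 8400}; decoding with the wrong
    layout indexes out of bounds, a classic cause of segfaults in postprocess.
    """
    pred = output0[0]
    if pred.shape[0] == 8400:   # anchors-first layout -> transpose
        pred = pred.T           # now (116, 8400)
    boxes = pred[:4]            # cx, cy, w, h
    scores = pred[4:84]         # 80 class scores
    mask_coef = pred[84:]       # 32 mask coefficients
    return boxes, scores, mask_coef

# works for both layouts
dummy = np.zeros((1, 8400, 116), dtype=np.float32)
boxes, scores, mask_coef = split_seg_output(dummy)
```

If the project's decode kernel hard-codes one layout, checking the binding shape once at startup (as above) is cheaper than debugging the resulting out-of-bounds read.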
RT-DETR dynamic-input ONNX: TensorRT inference fails (Python)
TensorRT inference with the ONNX exported with dynamic inputs for RT-DETR fails (Python version) with the following error: [TRT] [E] 3: [executionContext.cpp::resolveSlots::2791] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::resolveSlots::2791, condition: allInputDimensionsSpecified(routine)). The engine built from the statically-shaped ONNX runs fine.
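That error means the execution context was run before every dynamic dimension was bound to a concrete value. With a dynamic-shape engine, each input's shape must be set on the context before inference (e.g. `context.set_input_shape("images", (1, 3, 640, 640))` in the TensorRT 8.5+ Python API, or `set_binding_shape` in older releases); a static ONNX works because its dims are fixed at build time. A small sketch of the check TensorRT is making, assuming shapes are plain tuples where -1 marks a still-dynamic dimension:

```python
def all_dims_specified(shape):
    """TensorRT refuses to run while any binding dimension is still -1
    (the 'allInputDimensionsSpecified' condition in the error above)."""
    return all(d >= 0 for d in shape)

# a dynamic engine reports -1 for unbound dims until the context sets them
print(all_dims_specified((-1, 3, -1, -1)))   # not yet runnable
print(all_dims_specified((1, 3, 640, 640)))  # runnable
```

Before calling `execute_v2`/`execute_async_v2`, loop over the input bindings and set a concrete shape for each one from the actual input tensor.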
Problems encountered when building with CMakeLists.txt
cmake --version
cmake version 3.26.0
cmake -S . -B build
success
cmake --build build
fatal error: Eigen/Core: No such file or directory
sudo apt-get install libeigen3-dev
Then modify CMakeLists.txt:
```
include_directories(
    # eigen3
    "/usr/include/eigen3"
    ${OpenCV_INCLUDE_DIRS}
    ${CUDA_INCLUDE_DIRS}
    ${EIGEN3_INCLUDE_DIRS}  # needed for tracking
    # tensorrt
    ${TensorRT_ROOT}/include
    ${TensorRT_ROOT}/samples/common  # included mainly so the logger import works across TRT versions [v7.x, v8.x]
    # project headers
    ${PROJECT_SOURCE_DIR}/utils
    ${PROJECT_SOURCE_DIR}/application
)
```
build success
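Hard-coding `/usr/include/eigen3` works after installing `libeigen3-dev`, but a more portable variant is to let CMake locate Eigen itself; a sketch, assuming the distro package ships Eigen's CMake config files (it does for `libeigen3-dev` on Debian/Ubuntu):

```cmake
# Resolves EIGEN3_INCLUDE_DIRS without a hard-coded path
find_package(Eigen3 REQUIRED)
include_directories(${EIGEN3_INCLUDE_DIRS})
```

This keeps the build working on systems where Eigen is installed under a different prefix.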
There is a problem in the yolov8-pose code
C++ compilation problem
-- Configuring done
-- Generating done
-- Build files have been written to: /home/uisee/disk/dl_model_deploy/dl_model_infer/build
[ 14%] Building NVCC (Device) object CMakeFiles/utils_cu_cpp.dir/utils/preprocess/utils_cu_cpp_generated_pre_process.cu.o
/home/uisee/disk/dl_model_deploy/dl_model_infer/utils/preprocess/pre_process.cu(30): error: more than one instance of overloaded function "ai::preprocess::rint" matches the argument list:
function "rint(float)"
function "std::rint(float)"
argument types are: (float)
/home/uisee/disk/dl_model_deploy/dl_model_infer/utils/preprocess/pre_process.cu(31): error: more than one instance of overloaded function "ai::preprocess::rint" matches the argument list:
function "rint(float)"
function "std::rint(float)"
argument types are: (float)
2 errors detected in the compilation of "/home/uisee/disk/dl_model_deploy/dl_model_infer/utils/preprocess/pre_process.cu".
CMake Error at utils_cu_cpp_generated_pre_process.cu.o.Debug.cmake:280 (message):
Error generating file
/home/uisee/disk/dl_model_deploy/dl_model_infer/build/CMakeFiles/utils_cu_cpp.dir/utils/preprocess/./utils_cu_cpp_generated_pre_process.cu.o
CMakeFiles/utils_cu_cpp.dir/build.make:537: recipe for target 'CMakeFiles/utils_cu_cpp.dir/utils/preprocess/utils_cu_cpp_generated_pre_process.cu.o' failed
make[2]: *** [CMakeFiles/utils_cu_cpp.dir/utils/preprocess/utils_cu_cpp_generated_pre_process.cu.o] Error 1
CMakeFiles/Makefile2:84: recipe for target 'CMakeFiles/utils_cu_cpp.dir/all' failed
make[1]: *** [CMakeFiles/utils_cu_cpp.dir/all] Error 2
Makefile:135: recipe for target 'all' failed
make: *** [all] Error 2
CUDA compilation error
Hello. I followed the steps: start + git clone -> cuda build, with the project code essentially unchanged. After configuring the local CUDA + TRT properties (I am using CUDA 11.7 and TensorRT 8.4.3), compiling the .cu files always fails with:
identifier "w1" is undefined
...
identifier "v4" is undefined
this declaration has no storage class or type specifier
expression must have class type but it has type "double (*)(int, const double *)"
...
MSB3721 The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\nvcc.exe" -gencode=arch=compute_52,code="sm_52,compute_52" --use-local-env -ccbin "D:\program\VS2019\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64" -x cu -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\include" --keep-dir x64\Release -maxrregcount=0 --machine 64 --compile -cudart static -DNDEBUG -D_CONSOLE -D_MBCS -Xcompiler "/EHsc /W3 /nologo /O2 /Fdx64\Release\vc142.pdb /FS /MD " -o F:\VS2019\TRT_deploy\test_trtv8seg\test_src\x64\Release\cuda_function.cu.obj "F:\VS2019\TRT_deploy\test_trtv8seg\test_src\include\cuda_function.cu"" exited with return code 1. test_src D:\program\VS2019\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 11.7.targets 790
-----> Could you help take a look at how to solve this? Thank you.
RT-DETR Python inference crashes with a segmentation fault (core dumped)
The crash was traced to this code:
def load_engine(engine_path):
    print("ssss")
    TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
    runtime = trt.Runtime(TRT_LOGGER)
    print("tttt")
    with open(engine_path, 'rb') as f:
        print('zzz')
        return runtime.deserialize_cuda_engine(f.read())
specifically at `runtime = trt.Runtime(TRT_LOGGER)`. The engine file was exported with TensorRT 8.6.0.12 and runs fine in the C++ version; the Python tensorrt package is version 8.6.0. Inference crashes with a segmentation fault (core dumped). What could be the cause? Thanks.
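A segfault this early, before the engine file is even read, usually points at the process environment rather than the engine: for example the Python `tensorrt` wheel resolving a different `libnvinfer` than the one the engine and the C++ build use, or no CUDA context existing yet (some setups need `import pycuda.autoinit` before creating the runtime). One thing worth ruling out first is a build/runtime version mismatch; a small sketch of that check, assuming dotted version strings like `8.6.0.12`:

```python
def trt_versions_compatible(build_version: str, runtime_version: str) -> bool:
    """Engines are only guaranteed to deserialize when the major.minor.patch
    of the TensorRT that built them matches the TensorRT that loads them."""
    return build_version.split(".")[:3] == runtime_version.split(".")[:3]

# builder 8.6.0.12 vs the 8.6.0 Python wheel: same 8.6.0, so compatible
print(trt_versions_compatible("8.6.0.12", "8.6.0"))
print(trt_versions_compatible("8.6.0.12", "8.5.3"))
```

If the versions match, verify with `python -c "import tensorrt; print(tensorrt.__version__)"` (and `ldd` on the wheel's shared objects on Linux) that the intended `libnvinfer` is actually the one being loaded.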