
mtcnn_facenet_cpp_tensorrt's People

Contributors

nwesem

mtcnn_facenet_cpp_tensorrt's Issues

Performance on Jetson Nano

In the readme you've mentioned that you got 15 FPS on the Jetson Nano; I wanted to know what the video source and video quality were.

How to train the classifier on subfolders of different people?

Hi, I would like to use per-person subfolders inside the imgs folder so that the classifier can train on multiple images of the same person (person A) and better distinguish person A from a display attack (e.g. a picture of person A shown on a phone display).

How should I modify the code in main.cpp that computes the embeddings of known faces, and also save new faces of the same person into that person's subfolder in the imgs folder?
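One possible direction (not part of the current main.cpp; all names below are illustrative) is to treat each subfolder of imgs as one label and push every image found in it through MTCNN and FaceNet, storing all resulting embeddings under that label. A minimal C++17 sketch of the directory traversal:

#include <filesystem>
#include <string>
#include <vector>
#include <opencv2/imgcodecs.hpp>

namespace fs = std::filesystem;

// Hypothetical container: one entry per enrollment image, labeled by folder name.
struct KnownFace { std::string label; cv::Mat image; };

std::vector<KnownFace> loadKnownFaces(const std::string& imgsDir) {
    std::vector<KnownFace> faces;
    for (const auto& personDir : fs::directory_iterator(imgsDir)) {
        if (!personDir.is_directory()) continue;
        std::string label = personDir.path().filename().string();   // e.g. "personA"
        for (const auto& img : fs::directory_iterator(personDir.path())) {
            cv::Mat m = cv::imread(img.path().string());
            if (!m.empty()) faces.push_back({label, m});
        }
    }
    // Each entry would then go through MTCNN + FaceNet, keeping every embedding per label.
    return faces;
}

New faces captured at runtime could be written back with cv::imwrite into the subfolder matching the recognized (or newly entered) label.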

ValueError: Expected n_neighbors <= n_samples, but n_samples = 93, n_neighbors = 100

@shubham-shahh can you look into this?

Traceback (most recent call last):
  File "test.py", line 324, in sgie_sink_pad_buffer_probe
    result = predict_using_classifier(faces_embeddings, labels, face_to_predict_embedding)
  File "/opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/facenet-python/facenet_utils.py", line 51, in predict_using_classifier
    yhat_class = classifier.predict(samples)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/sklearn/neighbors/_classification.py", line 197, in predict
    neigh_dist, neigh_ind = self.kneighbors(X)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/sklearn/neighbors/_base.py", line 683, in kneighbors
    (n_samples_fit, n_neighbors)
ValueError: Expected n_neighbors <= n_samples,  but n_samples = 93, n_neighbors = 100
[INFO] Classifier Training Done...

RTSP link is not working

I tried VideoStreamer videoStreamer = VideoStreamer("rtsp://192.168.10.21:8554/unicast", videoFrameWidth, videoFrameHeight); in main.cpp, but I get the error shown below.

End generating TensorRT runtime models
Parsing Directory: ../imgs
Segmentation fault (core dumped)

In videoStreamer.h and videoStreamer.cpp I changed
VideoStreamer(int nmbrDevice, int videoWidth, int videoHeight, int frameRate, bool isCSICam);
to
VideoStreamer(const char* nmbrDevice, int videoWidth, int videoHeight, int frameRate, bool isCSICam);

but I still get the error:
Segmentation fault (core dumped)
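A segmentation fault right after "Parsing Directory" usually means the capture was never opened and the first read touches an empty frame. Below is a hedged, illustrative sketch (not the repo's VideoStreamer code; it assumes OpenCV was built with FFmpeg support) of opening an RTSP source and failing loudly when the open does not succeed:

#include <cstdlib>
#include <iostream>
#include <string>
#include <opencv2/videoio.hpp>

cv::VideoCapture openRtsp(const std::string& uri) {
    // e.g. "rtsp://192.168.10.21:8554/unicast"
    cv::VideoCapture cap(uri, cv::CAP_FFMPEG);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open RTSP stream: " << uri << std::endl;
        std::exit(EXIT_FAILURE);   // fail loudly instead of segfaulting on the first read
    }
    return cap;
}

A URL-based VideoStreamer constructor would need the same isOpened() check before the main loop starts reading frames.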

No OpenCV window comes up after all the steps

I am trying to run the code on a Jetson Nano and everything works like a charm, but in the end I don't see any window, only continuous logs in the terminal. What could be the reason for this?
I have also switched to a USB camera and tested it on the Jetson with a Python script, and the camera works perfectly fine. Any help would be appreciated!

Thanks!

RTMP reconnect

Hi,
I am using FaceNet with TensorRT for face recognition on a Jetson Nano. Everything works fine until my RTMP server restarts; whenever it does, I get the error "rtmpsink could not write to resource" on the Nano. I tried searching for how to reconnect to an RTMP server after it fails but didn't find a proper solution. Any suggestions?
I am using the pipeline below:

appsrc ! videoconvert ! x264enc speed-preset=ultrafast tune=zerolatency ! flvmux streamable=true !
rtmpsink location=rtmp://IPADDRESS:1935?pwd=2021

Thanks in advance.
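OpenCV gives very little feedback when rtmpsink dies, so the usual workaround is to release() and re-open the writer when the sink fails. A hedged sketch, assuming the stream is pushed through cv::VideoWriter with the GStreamer pipeline quoted above (function names are illustrative):

#include <chrono>
#include <string>
#include <thread>
#include <opencv2/videoio.hpp>

static const std::string kPipeline =
    "appsrc ! videoconvert ! x264enc speed-preset=ultrafast tune=zerolatency ! "
    "flvmux streamable=true ! rtmpsink location=rtmp://IPADDRESS:1935?pwd=2021";

bool openWriter(cv::VideoWriter& writer, double fps, const cv::Size& size) {
    // fourcc is ignored for GStreamer pipelines, so 0 is fine here
    return writer.open(kPipeline, cv::CAP_GSTREAMER, 0, fps, size, true);
}

void pushFrame(cv::VideoWriter& writer, const cv::Mat& frame, double fps) {
    if (!writer.isOpened()) {
        if (!openWriter(writer, fps, frame.size())) {
            std::this_thread::sleep_for(std::chrono::seconds(2));  // server still down: back off
            return;
        }
    }
    writer.write(frame);
}

// When "could not write to resource" shows up in the logs, call writer.release()
// so the next pushFrame() attempt re-opens the pipeline against the restarted server.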

Using the model with TensorRT and Python

Hello! I've just tested this C++ implementation and it is really great! Thank you for sharing your knowledge.
I was wondering if this model could be used in Python with TensorRT.
Thanks in advance!

Error converting facenet.pb to facenet.uff

Using output node dimension/LeakyRelu
Using output node orientation/l2_normalize
Using output node confidence/Softmax
Converting to UFF graph
Traceback (most recent call last):
File "./step01_pb_to_uff.py", line 32, in
output_filename=uff_file, text=False)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 62, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: confidence/Softmax was not found in the graph. Please use the -l option to list nodes in the graph.

FaceNet is not accurate

I have run through the whole process, but FaceNet is not accurate.
I tested the face recognition accuracy, and it is only about 20%.
I suspect there is a mismatch between the image and the model, such as normalization or color-channel order; I tried to find the error but had no success.
I looked at the feature vector: the values are all very small, most of them below 0.1.
I need help, please give me some suggestions. Thanks.
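Embeddings whose components are all close to zero often point at a preprocessing mismatch rather than a broken engine. For comparison, here is a hedged sketch of the "fixed standardization" that many FaceNet ports use (RGB channel order, (x - 127.5) / 128.0, 160 x 160 input); the repo's own preprocessing may differ, so treat this only as something to diff against:

#include <opencv2/imgproc.hpp>

cv::Mat preprocessFace(const cv::Mat& bgrCrop) {
    cv::Mat rgb, resized, floatImg;
    cv::cvtColor(bgrCrop, rgb, cv::COLOR_BGR2RGB);   // FaceNet models are usually fed RGB
    cv::resize(rgb, resized, cv::Size(160, 160));    // model input size
    // (x - 127.5) / 128.0  ==  x * (1/128) + (-127.5/128)
    resized.convertTo(floatImg, CV_32FC3, 1.0 / 128.0, -127.5 / 128.0);
    return floatImg;                                 // values roughly in [-1, 1]
}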

Detection on small faces

Hi, first of all thanks for the repo; the performance on Jetson is quite remarkable. Really cool.

I am trying to run the face detection on small faces, and I struggle to get good results.

I saw there are interesting parameters in mtcnn.cpp, like the lines below:

int minsize = 60; 
int MIN_DET_SIZE = 12;

I have tried different values but the results are not intuitive and are usually really bad (most detections disappear, even for very large and clear faces).

Any advice on how to get better results on smaller faces in images?

thanks
edit: below an example with the default min size values unchanged:
[image: worlds-largest-selfie_modified]
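For context, most MTCNN implementations derive the image pyramid from minsize roughly as sketched below; this is generic MTCNN logic, not code copied from this repo. Lowering minsize adds larger scales (more PNet work), and since this repo pre-builds TensorRT engines for fixed input sizes, changing these parameters may also require regenerating the engines, which could explain the unintuitive results.

#include <algorithm>
#include <vector>

// Illustrative only: how a typical MTCNN builds its scale pyramid from minsize.
std::vector<float> buildScales(int imgH, int imgW, int minsize,
                               int minDetSize = 12, float factor = 0.709f) {
    std::vector<float> scales;
    float m = static_cast<float>(minDetSize) / minsize;   // first (largest) scale
    float minSide = std::min(imgH, imgW) * m;
    while (minSide >= minDetSize) {   // stop once the smaller side shrinks below the PNet input
        scales.push_back(m);
        m *= factor;
        minSide *= factor;
    }
    return scales;
}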

Failed to convert facenet.pb model to facenet.onnx model

Hi @nwesem,
Since converting TensorFlow models to UFF will be deprecated in the future and converting deep learning models to ONNX is becoming popular, I have tried to convert facenet.pb to facenet.onnx using tf2onnx, but I have not succeeded. The problem is that some layers in facenet.pb are not supported by tf2onnx.
Have you ever tried this? Could anyone give me some suggestions on how to debug it? One of the errors is:
File "/home/vanlong/.virtualenvs/facerecog/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 430, in import_graph_def raise ValueError(str(e)) ValueError: Shape must be rank 4 but is rank 0 for 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/batchnorm/mul' (op: 'Conv2D') with input shapes: [], [3,3,3,32].

Error loading the model with Python

Hi,
I tested your C++ implementation and I would like to implement it in Python. I'm trying to load the engine file but the problem is that the plugin is not found when loading the engine with:

with open(engineFile, "rb") as f, trt.Runtime(G_LOGGER) as runtime: engine = runtime.deserialize_cuda_engine(f.read())

I get the following error:

[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin L2Norm_Helper_TRT version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (259) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.

Do you know how I could solve this issue?
Thanks

Embeddings do not match with the original facenet_keras.h5

Hey, I created a facenet_engine.plan file from the facenet.onnx file. When I ran inference, it didn't give any correct recognition results. I compared the embeddings (for the same image) generated by facenet.h5 and by the facenet_engine.plan file, and it turns out they are COMPLETELY DIFFERENT. What could possibly be the reason for this strange result?

Also, I got the error "AssertionError: Bottleneck_BatchNorm/batchnorm_1/add_1:0 is not in graph" while converting from facenet.pb to facenet.onnx, so I used the command "python -m tf2onnx.convert --input path-to-facenet.pb --inputs input:0[1,160,160,3] --inputs-as-nchw input_1:0 --outputs Bottleneck/BatchNorm/batchnorm/add_1:0 --output path-to-save-facenet.onnx" instead. The latter worked without any errors.

Please help me to get the right embeddings!
Thank you!

UNKNOWN: messages & ERROR: UffParser : Could not open ../facenetModels/facenet.uff

I followed all your steps, but when I run ./mtcnn_facenet_cpp_tensorRT these messages pop up.
How can I fix it?

My TensorFlow version: 1.15.4 with JetPack 4.4

UNKNOWN: Registered plugin creator - ::GridAnchor_TRT version 1
UNKNOWN: Registered plugin creator - ::NMS_TRT version 1
UNKNOWN: Registered plugin creator - ::Reorg_TRT version 1
UNKNOWN: Registered plugin creator - ::Region_TRT version 1
UNKNOWN: Registered plugin creator - ::Clip_TRT version 1
UNKNOWN: Registered plugin creator - ::LReLU_TRT version 1
UNKNOWN: Registered plugin creator - ::PriorBox_TRT version 1
UNKNOWN: Registered plugin creator - ::Normalize_TRT version 1
UNKNOWN: Registered plugin creator - ::RPROI_TRT version 1
UNKNOWN: Registered plugin creator - ::BatchedNMS_TRT version 1
UNKNOWN: Registered plugin creator - ::FlattenConcat_TRT version 1
UNKNOWN: Registered plugin creator - ::CropAndResize version 1
UNKNOWN: Registered plugin creator - ::DetectionLayer_TRT version 1
UNKNOWN: Registered plugin creator - ::Proposal version 1
UNKNOWN: Registered plugin creator - ::ProposalLayer_TRT version 1
UNKNOWN: Registered plugin creator - ::PyramidROIAlign_TRT version 1
UNKNOWN: Registered plugin creator - ::ResizeNearest_TRT version 1
UNKNOWN: Registered plugin creator - ::Split version 1
UNKNOWN: Registered plugin creator - ::SpecialSlice_TRT version 1
UNKNOWN: Registered plugin creator - ::InstanceNormalization_TRT version 1
ERROR: UffParser: Could not open ../facenetModels/facenet.uff
Failed to parse UFF
terminate called after throwing an instance of 'std::exception'
what(): std::exception
Aborted (core dumped)

Failed to parse UFF - custom UFF file

I created a new uff file named new.uff by the following steps:

  1. Train Tensorflow model: https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1.
    The result of this step is a directory containing 4 files (checkpoint, model....index, model...data, ...)

  2. Freeze the model by https://github.com/davidsandberg/facenet/blob/master/src/freeze_graph.py
    python freeze_graph.py path_to_directory_step_1 new.pb
    new.pb is my new output model

  3. Modify the code and convert that new.pb to new.uff

  4. Copy to facenetModels folder, modify code and run

The error:
UffParser: Validator error: InceptionResnetV1/Repeat_2/block8_5/Branch_0/Conv2d_1x1/BatchNorm/cond/Switch: Unsupported operation _Switch
...
Failed to parse UFF

Was I training the model incorrectly?
Everything runs fine with the provided facenet.pb.

Failed to Compile facenet

Hi

I followed your GitHub instructions closely, but I got the following errors:

dlinano@jetson-nano:~/mtcnn_facenet_cpp_tensorRT/build$ make -j${nproc}
[ 7%] Building NVCC (Device) object trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o
Scanning dependencies of target trt_l2norm_helper
[ 14%] Building CXX object trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/l2norm_helper.cpp.o
[ 21%] Linking CXX static library libtrt_l2norm_helper.a
[ 21%] Built target trt_l2norm_helper
Scanning dependencies of target mtcnn_facenet_cpp_tensorRT
[ 28%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/baseEngine.cpp.o
[ 35%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/faceNet.cpp.o
[ 42%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/common.cpp.o
[ 50%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/network.cpp.o
[ 57%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/pnet_rt.cpp.o
[ 64%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/main.cpp.o
[ 78%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/mtcnn.cpp.o
[ 78%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/videoStreamer.cpp.o
[ 85%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/onet_rt.cpp.o
[ 92%] Building CXX object CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/rnet_rt.cpp.o
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp: In member function ‘void FaceNetClassifier::createOrLoadEngine()’:
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:54:62: warning: ‘virtual nvinfer1::INetworkDefinition* nvinfer1::IBuilder::createNetwork()’ is deprecated [-Wdeprecated-declarations]
INetworkDefinition network = builder->createNetwork();
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:5431:58: note: declared here
TRT_DEPRECATED virtual nvinfer1::INetworkDefinition
createNetwork() TRTNOEXCEPT = 0;
^~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:56:59: warning: ‘DimsCHW’ is deprecated [-Wdeprecated-declarations]
parser->registerInput("input", DimsCHW(160, 160, 3), UffInputOrder::kNHWC);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:206:22: note: declared here
class TRT_DEPRECATED DimsCHW : public Dims3
^~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:71:38: warning: ‘virtual void nvinfer1::IBuilder::setFp16Mode(bool)’ is deprecated [-Wdeprecated-declarations]
builder->setFp16Mode(true);
^
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp: In member function ‘virtual void baseEngine::caffeToGIEModel(const string&, const string&, const std::vector<std::__cxx11::basic_string >&, unsigned int, nvinfer1::IHostMemory*&)’:
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:71:62: warning: ‘virtual nvinfer1::INetworkDefinition* nvinfer1::IBuilder::createNetwork()’ is deprecated [-Wdeprecated-declarations]
INetworkDefinition network = builder->createNetwork();
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:5757:33: note: declared here
TRT_DEPRECATED virtual void setFp16Mode(bool mode) TRTNOEXCEPT = 0;
^~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:74:38: warning: ‘virtual void nvinfer1::IBuilder::setInt8Mode(bool)’ is deprecated [-Wdeprecated-declarations]
builder->setInt8Mode(true);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:5595:33: note: declared here
TRT_DEPRECATED virtual void setInt8Mode(bool mode) TRTNOEXCEPT = 0;
^~~~~~~~~~~
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/common.h:7:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.h:4,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:5431:58: note: declared here
TRT_DEPRECATED virtual nvinfer1::INetworkDefinition
createNetwork() TRTNOEXCEPT = 0;
^~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:79:43: warning: ‘virtual void nvinfer1::IBuilder::setMaxWorkspaceSize(std::size_t)’ is deprecated [-Wdeprecated-declarations]
builder->setMaxWorkspaceSize(1<<30);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:5462:33: note: declared here
TRT_DEPRECATED virtual void setMaxWorkspaceSize(std::size_t workspaceSize) TRTNOEXCEPT = 0;
^~~~~~~~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:87:53: warning: ‘virtual nvinfer1::ICudaEngine* nvinfer1::IBuilder::buildCudaEngine(nvinfer1::INetworkDefinition&)’ is deprecated [-Wdeprecated-declarations]
m_engine = builder->buildCudaEngine(network);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.h:15:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:1:
/usr/include/aarch64-linux-gnu/NvInfer.h:5566:51: note: declared here
TRT_DEPRECATED virtual nvinfer1::ICudaEngine
buildCudaEngine(
^~~~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp: In member function ‘void FaceNetClassifier::preprocessFaces()’:
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:128:76: error: ‘CV_RGB2BGR’ was not declared in this scope
cv::cvtColor(m_croppedFaces[i].faceMat, m_croppedFaces[i].faceMat, CV_RGB2BGR);
^~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:84:45: warning: ‘virtual void nvinfer1::IBuilder::setMaxWorkspaceSize(std::size_t)’ is deprecated [-Wdeprecated-declarations]
builder->setMaxWorkspaceSize(1 << 25);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/common.h:7:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.h:4,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:5462:33: note: declared here
TRT_DEPRECATED virtual void setMaxWorkspaceSize(std::size_t workspaceSize) TRTNOEXCEPT = 0;
^~~~~~~~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:85:64: warning: ‘virtual nvinfer1::ICudaEngine* nvinfer1::IBuilder::buildCudaEngine(nvinfer1::INetworkDefinition&)’ is deprecated [-Wdeprecated-declarations]
ICudaEngine *engine = builder->buildCudaEngine(network);
^
In file included from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/common.h:7:0,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.h:4,
from /home/dlinano/mtcnn_facenet_cpp_tensorRT/src/baseEngine.cpp:5:
/usr/include/aarch64-linux-gnu/NvInfer.h:5566:51: note: declared here
TRT_DEPRECATED virtual nvinfer1::ICudaEngine
buildCudaEngine(
^~~~~~~~~~~~~~~
/home/dlinano/mtcnn_facenet_cpp_tensorRT/src/faceNet.cpp:128:76: note: suggested alternative: ‘CV_RGB’
cv::cvtColor(m_croppedFaces[i].faceMat, m_croppedFaces[i].faceMat, CV_RGB2BGR);
^~~~~~~~~~
CV_RGB
CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/build.make:110: recipe for target 'CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/faceNet.cpp.o' failed
make[2]: *** [CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/faceNet.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/all' failed
make[1]: *** [CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
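The actual failure in this log is the CV_RGB2BGR error; the deprecation messages are only warnings. With OpenCV 4 the old C-style constants are no longer visible by default, so a hedged fix is to switch the failing line in faceNet.cpp to the cv::COLOR_* enum (alternatively, including <opencv2/imgproc/types_c.h> restores the legacy names):

#include <opencv2/imgproc.hpp>

// replacement for the failing line in FaceNetClassifier::preprocessFaces()
cv::cvtColor(m_croppedFaces[i].faceMat, m_croppedFaces[i].faceMat, cv::COLOR_RGB2BGR);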

Cannot read from USB camera

Hi @nwesem
I am using a Jetson Nano and followed all the steps you've mentioned in the readme.
When I run ./mtcnn_facenet_cpp_tensorRT I get the output attached in the .txt file:
iss.txt

Empty frame! Exiting... Try restarting nvargus-daemon by doing: sudo systemctl restart nvargus-daemon

Hi @nwesem,

Thanks for this repo.
I am testing this on my laptop, which has an RTX 2060.
When I run ./mtcnn_facenet_cpp_tensorRT I get the following:
End generating TensorRT runtime models
Parsing Directory: ../imgs
Empty frame! Exiting...
Try restarting nvargus-daemon by doing: sudo systemctl restart nvargus-daemon
Counted 0 frames in 0 seconds! This equals -nanfps.

May I know what I am doing wrong?

Thanks in advance
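The pipeline this binary builds uses nvarguscamerasrc, which only exists on Jetson boards with a CSI camera, so an empty frame on a desktop RTX 2060 laptop is expected. A hedged sketch of switching main.cpp to a USB/V4L2 camera, using the constructor signature quoted in the RTSP issue above (the frame-rate value here is illustrative):

// USB webcam at /dev/video0 instead of the CSI camera
VideoStreamer videoStreamer(0 /*nmbrDevice*/, videoFrameWidth, videoFrameHeight,
                            30 /*frameRate, illustrative*/, false /*isCSICam*/);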

Build failed in trt_l2norm_helper_generated_l2norm_helper.cu.o

Hi,

I'm attempting to build this project on a Jetson Nano. CMake completes without warnings or errors, but when I run make -j${nproc} I receive the following errors:

$ make -j${nproc}
[ 7%] Building NVCC (Device) object trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o
/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(52): error: member function declared with "override" does not override a base class member

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(52): warning: function "nvinfer1::IPluginV2::enqueue(int32_t, const void *const *, void *const *, void *, cudaStream_t)" is hidden by "L2NormHelper::enqueue" -- virtual function override intended?

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(73): error: exception specification for virtual function "L2NormHelper::configureWithFormat" is incompatible with that of overridden function "nvinfer1::IPluginV2::configureWithFormat"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(71): error: exception specification for virtual function "L2NormHelper::getPluginNamespace" is incompatible with that of overridden function "nvinfer1::IPluginV2::getPluginNamespace"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(69): error: exception specification for virtual function "L2NormHelper::setPluginNamespace" is incompatible with that of overridden function "nvinfer1::IPluginV2::setPluginNamespace"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(67): error: exception specification for virtual function "L2NormHelper::clone" is incompatible with that of overridden function "nvinfer1::IPluginV2::clone"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(65): error: exception specification for virtual function "L2NormHelper::destroy" is incompatible with that of overridden function "nvinfer1::IPluginV2::destroy"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(63): error: exception specification for virtual function "L2NormHelper::getPluginVersion" is incompatible with that of overridden function "nvinfer1::IPluginV2::getPluginVersion"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(61): error: exception specification for virtual function "L2NormHelper::getPluginType" is incompatible with that of overridden function "nvinfer1::IPluginV2::getPluginType"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(59): error: exception specification for virtual function "L2NormHelper::supportsFormat" is incompatible with that of overridden function "nvinfer1::IPluginV2::supportsFormat"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(57): error: exception specification for virtual function "L2NormHelper::serialize" is incompatible with that of overridden function "nvinfer1::IPluginV2::serialize"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(55): error: exception specification for virtual function "L2NormHelper::getSerializationSize" is incompatible with that of overridden function "nvinfer1::IPluginV2::getSerializationSize"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(50): error: exception specification for virtual function "L2NormHelper::getWorkspaceSize" is incompatible with that of overridden function "nvinfer1::IPluginV2::getWorkspaceSize"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(48): error: exception specification for virtual function "L2NormHelper::terminate" is incompatible with that of overridden function "nvinfer1::IPluginV2::terminate"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(46): error: exception specification for virtual function "L2NormHelper::initialize" is incompatible with that of overridden function "nvinfer1::IPluginV2::initialize"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(44): error: exception specification for virtual function "L2NormHelper::getOutputDimensions" is incompatible with that of overridden function "nvinfer1::IPluginV2::getOutputDimensions"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(42): error: exception specification for virtual function "L2NormHelper::getNbOutputs" is incompatible with that of overridden function "nvinfer1::IPluginV2::getNbOutputs"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(104): error: exception specification for virtual function "L2NormHelperPluginCreator::deserializePlugin" is incompatible with that of overridden function "nvinfer1::IPluginCreator::deserializePlugin"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(102): error: exception specification for virtual function "L2NormHelperPluginCreator::createPlugin" is incompatible with that of overridden function "nvinfer1::IPluginCreator::createPlugin"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(100): error: exception specification for virtual function "L2NormHelperPluginCreator::getFieldNames" is incompatible with that of overridden function "nvinfer1::IPluginCreator::getFieldNames"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(98): error: exception specification for virtual function "L2NormHelperPluginCreator::getPluginNamespace" is incompatible with that of overridden function "nvinfer1::IPluginCreator::getPluginNamespace"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(96): error: exception specification for virtual function "L2NormHelperPluginCreator::setPluginNamespace" is incompatible with that of overridden function "nvinfer1::IPluginCreator::setPluginNamespace"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(94): error: exception specification for virtual function "L2NormHelperPluginCreator::getPluginVersion" is incompatible with that of overridden function "nvinfer1::IPluginCreator::getPluginVersion"

/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h(92): error: exception specification for virtual function "L2NormHelperPluginCreator::getPluginName" is incompatible with that of overridden function "nvinfer1::IPluginCreator::getPluginName"

23 errors detected in the compilation of "/tmp/tmpxft_000033fe_00000000-8_l2norm_helper.compute_72.cpp1.ii".
CMake Error at trt_l2norm_helper_generated_l2norm_helper.cu.o.Release.cmake:279 (message):
Error generating file
/home/marc/src/facialRecognition/mtcnn_facenet_cpp_tensorRT/build/trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir//./trt_l2norm_helper_generated_l2norm_helper.cu.o

trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/build.make:306: recipe for target 'trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o' failed
make[2]: *** [trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o] Error 1
CMakeFiles/Makefile2:122: recipe for target 'trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/all' failed
make[1]: *** [trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Any help you could provide would be appreciated.
Thanks!
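These "exception specification ... is incompatible" errors are typical of building this plugin against TensorRT 8, where the IPluginV2 and IPluginCreator virtual methods are declared noexcept. A hedged sketch of the kind of change needed in l2norm_helper.h; only a few overrides are shown, and the same noexcept would need to be added to every method flagged above:

// excerpt of L2NormHelper declarations adjusted for TensorRT 8 (illustrative)
int32_t enqueue(int32_t batchSize, const void* const* inputs, void* const* outputs,
                void* workspace, cudaStream_t stream) noexcept override;
size_t getSerializationSize() const noexcept override;
void serialize(void* buffer) const noexcept override;
const char* getPluginType() const noexcept override;
const char* getPluginVersion() const noexcept override;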

Failed to parse UFF


ERROR: UffParser: Unsupported number of graph 0
Failed to parse UFF
terminate called after throwing an instance of 'std::exception'
what(): std::exception
Aborted (core dumped)

Hi, above is the stack trace for the error while executing ./mtcnn_facenet_cpp_tensorRT.
Hardware: Jetson Nano with JetPack 4.2.1
TensorFlow version: tried both 1.14 and 1.15
TensorRT: 5.1.6.1

Video file test

Is there a way I can use a video file as input instead of an image/photo?
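There is no built-in flag for this, but OpenCV can read a file through the same cv::VideoCapture interface the camera path uses. A hedged sketch of replacing the camera loop in main.cpp (the file path is illustrative):

#include <opencv2/videoio.hpp>

cv::VideoCapture cap("../test_video.mp4");   // illustrative path
cv::Mat frame;
while (cap.isOpened() && cap.read(frame)) {
    // run the same per-frame pipeline as main.cpp:
    // MTCNN face detection -> FaceNet embedding -> matching -> display
}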

Question about input size

Hi, I had an error while running this program and I wonder if there's a solution.
The input size of the model is set to 640 x 480, and the minsize of MTCNN is 60. But when I change these two parameters, the program does not run successfully.
Since the model builds an image pyramid, any image size should work. Is there any rule for setting these parameters? I checked the program and cannot find the reason, so I wonder if you could give some advice. Thanks a lot.

Inference over FaceNet gives different results on Jetson Nano and dGPU

Hi, I'm using this repo on a laptop with TensorRT (the TensorRT 20.03 Docker image) and also on a Jetson. The code works without problems on the Jetson; the detections are OK and within the expected values.
But when I execute this repo on my laptop, it returns values between 500 and 1000. Do you know why that is?

Jetson Nano 2GB & JetPack 4.5

Hi Niclas, @nwesem

I have been running this project on a Jetson Nano 4GB with JetPack 4.4 successfully for the last few months, but this past week I was trying to get it up and running on a Jetson Nano 2GB with no luck.

The system keeps running out of memory during graph construction and optimization (please refer to the attached picture).

Do you know if there's a way to run this project on the 2GB Jetson Nano, or should I just stick with the 4GB for this project?

I ran the command dmesg to confirm that the process was getting killed due to memory running out.

Thanks in advance.

[image: issue_nano_2gb]

JetPack 4.6

Does it work with JetPack 4.6?
TensorRT 8.0
cuDNN 8.2
CUDA 10.2
Please advise.

Changing the screen resolution

I am not familiar with OpenCV, so pardon my question.

I would like to ask if it is possible to run the program at 720p resolution?
I tried doing that and face detection did not work.
I am currently working through the code in faceNet.cpp.
Can you help me with this?

Thank you.

JetPack 4.6

Has anyone tested this with JetPack 4.6?
Is everything working as expected?

On my Jetson device it is not working.

Deepstream

Is there a way I can deploy this using NVIDIA DeepStream, or create a DeepStream app from it?

Face Registration with multiple faces of a Single Person

Hi @nwesem,

Thank you for your repo.
I have a question about registering multiple faces of a single person. How can I do that?
If I register only a single image per person, the accuracy is not good, so I thought of providing multiple images per person to get better accuracy.

Thanks in advance
Darshan
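One common way to enroll several images of the same person (not something the current code does out of the box; names below are illustrative) is to average that person's L2-normalized embeddings and re-normalize the mean, then match new faces against the averaged vector. Keeping all individual embeddings and taking the minimum distance per label is an equally valid alternative.

#include <vector>
#include <opencv2/core.hpp>

cv::Mat averageEmbedding(const std::vector<cv::Mat>& embeddings) {
    CV_Assert(!embeddings.empty());
    cv::Mat mean = cv::Mat::zeros(embeddings[0].size(), CV_32F);
    for (const auto& e : embeddings) mean += e;        // each e: 1x128 (or 1x512), L2-normalized float
    mean /= static_cast<float>(embeddings.size());
    return mean / cv::norm(mean);                       // re-normalize so distances stay comparable
}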

Error with "make -j${4}"

Hello all!

I am trying to launch this project on Jetson Nano.
Specifications:

  • CUDA 10.2
  • TensorRT 7.1.3.0.
  • Cmake 3.13.0
  • JetPack 4.4

I am getting the following error message when typing the last command in the fifth step of the instructions.

Command: make -j${nproc}
Result:
[image]

Any help will be appreciated.

JetPack 4.5

Has anyone tested this with JetPack 4.5? Is everything working as expected?

Issue with .pb to uff conversion

The following issue is observed:
Warning: No conversion function registered for layer: L2Norm_Helper_TRT yet.
Converting embeddings/Rsqrt as custom op: L2Norm_Helper_TRT
Traceback (most recent call last):
File "step01_pb_to_uff.py", line 33, in
output_filename=uff_file, text=False)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 225, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 141, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 126, in convert_tf2uff_node
op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 88, in convert_layer
fields = cls.parse_tf_attrs(tf_node.attr)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 269, in parse_tf_attrs
return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 269, in
return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 265, in parse_tf_attr_value
return cls.convert_tf2uff_field(code, val)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 226, in convert_tf2uff_field
if isinstance(val, tf.AttrValue):
This was mentioned in #9, but I'm also using JetPack 4.4 and it seems the project has been updated for that.
TensorFlow version 2.321 is used.

Aborted (core dumped)

Hi @nwesem
I am using a Jetson Nano and followed all the steps you've mentioned in the readme.
When I run ./mtcnn_facenet_cpp_tensorRT I get the following output:
quan@quan:~/Desktop/PyProject/mtcnn_facenet_cpp_tensorRT/build$ ./mtcnn_facenet_cpp_tensorRT
UNKNOWN: Registered plugin creator - ::GridAnchor_TRT version 1
UNKNOWN: Registered plugin creator - ::NMS_TRT version 1
UNKNOWN: Registered plugin creator - ::Reorg_TRT version 1
UNKNOWN: Registered plugin creator - ::Region_TRT version 1
UNKNOWN: Registered plugin creator - ::Clip_TRT version 1
UNKNOWN: Registered plugin creator - ::LReLU_TRT version 1
UNKNOWN: Registered plugin creator - ::PriorBox_TRT version 1
UNKNOWN: Registered plugin creator - ::Normalize_TRT version 1
UNKNOWN: Registered plugin creator - ::RPROI_TRT version 1
UNKNOWN: Registered plugin creator - ::BatchedNMS_TRT version 1
UNKNOWN: Registered plugin creator - ::FlattenConcat_TRT version 1
UNKNOWN: Registered plugin creator - ::CropAndResize version 1
UNKNOWN: Registered plugin creator - ::DetectionLayer_TRT version 1
UNKNOWN: Registered plugin creator - ::Proposal version 1
UNKNOWN: Registered plugin creator - ::ProposalLayer_TRT version 1
UNKNOWN: Registered plugin creator - ::PyramidROIAlign_TRT version 1
UNKNOWN: Registered plugin creator - ::ResizeNearest_TRT version 1
UNKNOWN: Registered plugin creator - ::Split version 1
UNKNOWN: Registered plugin creator - ::SpecialSlice_TRT version 1
UNKNOWN: Registered plugin creator - ::InstanceNormalization_TRT version 1
size48841475
WARNING: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
UNKNOWN: Deserialize required 3201545 microseconds.

Using pipeline:
nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)NV12, framerate=(fraction)60/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)640, height=(int)480, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:568 Failed to create CaptureSession
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1

Start generating mtCNN TenosrRT runtime models
changedH = 96, changedW = 128
terminate called after throwing an instance of 'std::out_of_range'
what(): basic_string::replace: __pos (which is 15) > this->size() (which is 0)
Aborted (core dumped)

Parse UFF ERROR

When I try to convert the .pb to .uff, it shows:
Warning: keepdims is ignored by the UFF Parser and defaults to True
DEBUG: convert reshape to flatten node

and when I move the .uff into place and run, it shows:
ERROR: InceptionResnetV1/Conv2d_1a_3x3/convolution: kernel weights has count 864 but 46080 was expected
ERROR: UFFParser: Parser error: InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/batchnorm/mul: The input to the Scale Layer is required to have a minimum of 3 dimensions.
Failed to parse UFF

Do you have any idea what is going wrong?

IndexError: list index out of range

I tried to convert the .pb to a .uff file on a Jetson AGX Xavier with JetPack 5.0 and TensorFlow 1.15, and got this error:
Traceback (most recent call last):
File "step01_pb_to_uff.py", line 32, in
uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), output_nodes=output_nodes,
File "/usr/lib/python3.8/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 219, in from_tensorflow
uff_graph = tf2uff.convert_tf2uff_graph(
File "/usr/lib/python3.8/dist-packages/uff/converters/tensorflow/converter.py", line 140, in convert_tf2uff_graph
nodes_to_convert += cls.convert_tf2uff_node(nodes_to_convert.pop(), tf_nodes,
File "/usr/lib/python3.8/dist-packages/uff/converters/tensorflow/converter.py", line 125, in convert_tf2uff_node
uff_node = cls.convert_layer(
File "/usr/lib/python3.8/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_layer
return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
File "/usr/lib/python3.8/dist-packages/uff/converters/tensorflow/converter_functions.py", line 97, in convert_mul
uff_graph.binary(inputs[0], inputs[1], 'mul', name)
IndexError: list index out of range

CMake error: Makefile target 'all' failed

CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/build.make:110: recipe for target 'CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/faceNet.cpp.o' failed

make[2]: *** [CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/src/faceNet.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/all' failed
make[1]: *** [CMakeFiles/mtcnn_facenet_cpp_tensorRT.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Can't make MTCNN engines

Hi, I followed the step-by-step instructions, but the build can't seem to create the MTCNN model engines by itself. Here's the output I'm getting:

Start generating mtCNN TenosrRT runtime models
changedH = 96, changedW = 128
rawName = ../mtCNNModels/det1_relu1.engine
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 7:1: Expected identifier, got: <
ERROR: CaffeParser: Could not parse deploy file
Segmentation fault (core dumped)

"det1_relu1" i had succesfully created my own engines placed them in the folder, apparently the code seems to want to make its own hence adding '1' at the end the name
Can i run the code with my custom made engines?

Unable to find jpg in imgs folder

I followed the steps and added person1.jpg and person2.jpg to the imgs folder, but when I run ./mtcnn_facenet_cpp_tensorRT inside the build folder I get the error below. Please advise what I am doing wrong, thank you.

End generate rnet runtime models
rawName = ../mtCNNModels/det3_relu.engine
size1924528
size1924528UNKNOWN: Deserialize required 44143 microseconds.

End generating TensorRT runtime models
Parsing Directory: ../imgs
Empty frame! Exiting... 

CUDA 10.2

Hi,

I got my Jetson Nano yesterday and I get the following errors:

2020-04-23 20:54:33.340662: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory
2020-04-23 20:54:33.340727: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-04-23 20:54:36.024791: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-04-23 20:54:36.024987: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-04-23 20:54:36.025035: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.7
=== Automatically deduced input nodes ===
[name: "input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
    }
  }
}
]
=========================================

Using output node embeddings
Converting to UFF graph
Warning: No conversion function registered for layer: L2Norm_Helper_TRT yet.
Converting embeddings/Rsqrt as custom op: L2Norm_Helper_TRT
Traceback (most recent call last):
  File "step01_pb_to_uff.py", line 33, in <module>
    output_filename=uff_file, text=False)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 41, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 222, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 222, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 218, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 179, in convert_tf2uff_field
    if isinstance(val, tf.AttrValue):
AttributeError: module 'tensorflow' has no attribute 'AttrValue'

regards

trt_l2norm_helper build error

I'm trying to build and it shows the following error:

In file included from /usr/include/c++/5/cstdint:35:0,
from /usr/include/x86_64-linux-gnu/NvInferRuntimeCommon.h:54,
from /usr/include/x86_64-linux-gnu/NvInferRuntime.h:59,
from /usr/include/x86_64-linux-gnu/NvInfer.h:53,
from /usr/include/x86_64-linux-gnu/NvInferPlugin.h:53,
from /mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.h:3,
from /mtcnn_facenet_cpp_tensorRT/trt_l2norm_helper/l2norm_helper.cu:1:
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support
^
CMake Error at trt_l2norm_helper_generated_l2norm_helper.cu.o.Release.cmake:220 (message):
Error generating
/mtcnn_facenet_cpp_tensorRT/build/trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir//./trt_l2norm_helper_generated_l2norm_helper.cu.o

trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/build.make:82: recipe for target 'trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o' failed
make[2]: *** [trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/trt_l2norm_helper_generated_l2norm_helper.cu.o] Error 1
CMakeFiles/Makefile2:141: recipe for target 'trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/all' failed
make[1]: *** [trt_l2norm_helper/CMakeFiles/trt_l2norm_helper.dir/all] Error 2

Please help; what should I do next?

Low performance

Great work,
but I am not able to reach the desired FPS (15 FPS on a Jetson Nano).

My results are quite low (3 FPS).

With LOG_TIMES it shows:
mtCNN took 201ms
Forward took 135ms
Feature matching took 0ms.

I am using the latest JetPack, 4.4.1.

#jetson_release

  • NVIDIA Jetson Nano (Developer Kit Version)
    • Jetpack 4.4.1 [L4T 32.4.4]
    • NV Power Mode: MAXN - Type: 0
    • jetson_stats.service: active
  • Libraries:
    • CUDA: 10.2.89
    • cuDNN: 8.0.0.180
    • TensorRT: 7.1.3.0
    • Visionworks: 1.6.0.501
    • OpenCV: 4.5.0 compiled CUDA: YES
    • VPI: 0.4.4
    • Vulkan: 1.2.70

Could it be connected with:
opencv/opencv#18340
or
https://forums.developer.nvidia.com/t/darknet-slower-using-jetpack-4-4-cudnn-8-0-0-cuda-10-2-than-jetpack-4-3-cudnn-7-6-3-cuda-10-0/121579

Thanks

Performance & Findings with 300 pictures.

Hi there @nwesem,
First of all - thank you for this repo.

Second, as of today I have loaded the imgs folder with 300 pictures of people, and the system seems to be working at around 70% accuracy or less.

Please see below some of the findings I had during deployment:

  • Lighting during the recognition process: the system is located outdoors, and usually in the mornings (when the sun hits the camera directly) it gets the lowest accuracy.

  • The enrollment picture should be taken with a high-resolution camera; so far I have seen that pictures taken at 3000 x 4000 give better results than pictures taken with a high-resolution webcam at 1080 x 1920.

  • Another point is the use of face masks, which throws off the system and triggers false positives, sometimes "recognizing" the same unregistered person wearing a face mask as two or three different people.

Questions:

  • Is there a way to increase accuracy by making the system stricter when comparing faces, in order to reduce false positives? (See the matching-threshold sketch at the end of this issue.)

  • What are your suggestions in terms of lighting?

  • If I would like to reduce the number of faces to be detected down to one, which parameter do I need to change, and where? After I changed it in main.cpp, the system still recognizes two or three faces even though the parameter was set to one.

Thank you again for your repo and help, take it easy!

FYI: the system has been running for a week or so; as I get more findings I will post them here for your reference.
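On the first question, a hedged sketch of the usual way to make matching stricter: compute the distance to every known embedding and only accept the best match when it falls below a threshold. The threshold value below is illustrative (roughly 0.8-1.0 is a typical L2-distance starting range for normalized FaceNet embeddings) and is not taken from this repo; tune it on your own data to trade false positives against false negatives.

#include <limits>
#include <string>
#include <vector>
#include <opencv2/core.hpp>

struct Known { std::string name; cv::Mat embedding; };   // illustrative enrollment record

std::string matchFace(const cv::Mat& emb, const std::vector<Known>& known,
                      double threshold = 0.9) {
    std::string best = "Unknown";
    double bestDist = std::numeric_limits<double>::max();
    for (const auto& k : known) {
        double d = cv::norm(emb, k.embedding, cv::NORM_L2);
        if (d < bestDist) { bestDist = d; best = k.name; }
    }
    // stricter (smaller) threshold -> fewer false positives, more "Unknown" results
    return (bestDist < threshold) ? best : "Unknown";
}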
