enoxsoftware / opencvforunity
OpenCV for Unity (Unity Asset Plugin)
Home Page: https://assetstore.unity.com/packages/tools/integration/opencv-for-unity-21088
I tried to use OpenCV on Unity 5 for iOS. The build for iOS succeeded, but then the app crashed.
Both the IL2CPP and Mono compiled apps crashed.
Could you update this asset for Unity 5?
The #define directives are restricted to UNITY_5, so importing into Unity 2017 produces ~23 errors. I was able to make a hacky fix by changing UNITY_5 to UNITY_5_3_OR_NEWER across several files, including Mat.cs.
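For reference, Unity defines UNITY_5_3_OR_NEWER in 5.3 and every later release (including 2017+), so the hacky fix above amounts to widening each guard. The snippet below is only an illustrative sketch, not the actual contents of Mat.cs:

```csharp
// Before: compiles only under Unity 5.x, so a Unity 2017 import fails.
#if UNITY_5
    // version-specific plugin code ...
#endif

// After: UNITY_5_3_OR_NEWER covers Unity 5.3 through 2017 and beyond.
#if UNITY_5_3_OR_NEWER
    // version-specific plugin code ...
#endif
```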
As per the subject: is it possible to do face recognition with OpenCVForUnity, comparing a face against an existing database to check whether it matches or not? Please advise. Thank you.
Unity Version : 2018.3.4
Xcode: 10.2.1
When I archive the project and try to publish it, I get the error: Code signing "opencv2.framework" failed.
Please help
The ArUco samples do not work because opencvforunity.dll is missing the entry points.
Some contents of opencv_contrib are already implemented in OpenCVForUnity. However, cnn_3dobj
is missing. Is there a chance that this module will come to OpenCVForUnity as well?
I'm trying to get the value of CAP_PROP_FRAME_COUNT.
It always returns 1, no matter which video I open.
In the Unity Editor it returns the correct value,
but when I build for iOS it always returns 1.
What could be the reason?
This is my code for reference:

```csharp
VideoCapture cap = new VideoCapture();
cap.open(path);
int max_frames = (int)cap.get(Videoio.CAP_PROP_FRAME_COUNT);
```
Your reply would be appreciated.
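When CAP_PROP_FRAME_COUNT is unreliable on a particular backend (as reported above for iOS), a generic fallback is to count frames by grabbing until the stream ends. This is only a sketch: it reads through the entire file, and the final seek back to frame 0 may itself not work on every backend. `path` is a placeholder for the video file path.

```csharp
// Count frames manually when cap.get(Videoio.CAP_PROP_FRAME_COUNT)
// returns a bogus value.
VideoCapture cap = new VideoCapture();
cap.open(path);
int frameCount = 0;
while (cap.grab())      // grab() returns false once the stream is exhausted
    frameCount++;
cap.set(Videoio.CAP_PROP_POS_FRAMES, 0); // rewind before actually playing
```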
Hi,
We've been using the DLib face tracker and the OpenCV library successfully on the 32-bit architecture. However, since the Google Play Store mandates 64-bit support from 1 August onward, we need to build for 64-bit as well.
We are finding that it doesn't work on 64-bit devices: Unity doesn't throw any errors during the build, and the app even asks for permission to access the camera, but it isn't able to capture or display video from the camera.
I want to use VideoCapture.set() for frame-position seeking,
but the constant CV_CAP_PROP_POS_FRAMES does not exist.
(Shouldn't it exist in Highgui.cs?)
I think this is an issue; what do you think?
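For what it's worth, other code in this tracker reaches the same property through the Videoio class rather than Highgui, so frame seeking may already be possible along these lines (a sketch, not verified against every plugin version; `videoPath` is a placeholder):

```csharp
// Seek by frame index using the Videoio constant instead of the old
// CV_CAP_PROP_POS_FRAMES name.
VideoCapture capture = new VideoCapture();
capture.open(videoPath);
capture.set(Videoio.CAP_PROP_POS_FRAMES, 120); // jump to frame 120
Mat frame = new Mat();
if (capture.grab())
    capture.retrieve(frame, 0); // frame now holds the decoded image
```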
how to open rtsp camera
I'm getting the following error when trying to run the WebCamTextureToMatSample example.
I'm on Unity 4.6.1f1 on a MacBook Air, OS X 10.10.2.

```
ArgumentException: The output Mat object has to be of the same size
OpenCVForUnity.Utils.WebCamTextureToMat (UnityEngine.WebCamTexture webCamTexture, OpenCVForUnity.Mat mat, UnityEngine.Color32[] bufferColors) (at Assets/OpenCVForUnity/org/opencv/unity/Utils.cs:360)
OpenCVForUnitySample.WebCamTextureToMatSample.Update () (at Assets/OpenCVForUnity/Samples/WebCamTextureToMatSample/WebCamTextureToMatSample.cs:101)
```
Hi,
I want to use the webcam, and I am using the provided example that transforms a WebCamTexture to a Mat.
The problem is that I want to use the same webcam in two different applications, and an error occurs after the webCamTexture.Play() call.
Is there anything I can do?
I need to use estimateRigidTransform, but it's not in ver 2.3.3.
I noticed it is in ver 2.3.2: https://enoxsoftware.github.io/OpenCVForUnity/3.0.0/doc/html/class_open_c_v_for_unity_1_1_video.html
How can I use this method?
Is there a new equivalent?
Please respond.
Thank you.
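In upstream OpenCV, estimateRigidTransform was deprecated and later removed in favor of estimateAffine2D / estimateAffinePartial2D in the calib3d module. Assuming the plugin mirrors the OpenCV 4.x Java API (not verified against this exact asset version), the replacement would look roughly like this, with placeholder point sets:

```csharp
// srcPoints/dstPoints: matched 2D point correspondences, placeholders here.
MatOfPoint2f srcPoints = new MatOfPoint2f(
    new Point(0, 0), new Point(1, 0), new Point(0, 1));
MatOfPoint2f dstPoints = new MatOfPoint2f(
    new Point(1, 1), new Point(2, 1), new Point(1, 2));

// estimateRigidTransform(src, dst, fullAffine: false)
// roughly corresponds to rotation + uniform scale + translation:
Mat partial = Calib3d.estimateAffinePartial2D(srcPoints, dstPoints);

// estimateRigidTransform(src, dst, fullAffine: true)
// roughly corresponds to the full 2x3 affine estimate:
Mat full = Calib3d.estimateAffine2D(srcPoints, dstPoints);
```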
I'm running the WebCamTextureToMat sample just fine on my iPad mini, iPad 3, and iPad 4, but when I try it on my new iPad Air 2 it doesn't work: the screen flashes briefly and goes blue. This happens with all the webcam samples except ComicFilter, which just shows a black screen.
Hi,
Could you please release a version with OpenCV 4.1.1?
There are some fixes that would be quite useful on some of the apps we are using OpenCV for Unity with (commercial version).
Thx
I cloned the repo and it downloaded completely, but I'm getting this error:
Assets/OpenCVForUnity/Examples/Advanced/HandPoseEstimationExample/ColorBlobDetector.cs(5,7): error CS0246: The type or namespace name `OpenCVForUnity' could not be found. Are you missing an assembly reference?
How can I solve it? There is no documentation available.
Does the package include the HDR processing?
Thanks,
T
Hi,
I am trying to play a video of my own, but somehow I could not get it to play.
I am using HOGDescriptorSample.cs to play the video, but nothing works; it's just a black screen.
This is the code of the Start method:
```csharp
void Start ()
{
    rgbMat = new Mat ();
    capture = new VideoCapture ();
    // capture.open (Utils.getFilePath ("768x576_mjpeg.mjpeg"));
    capture.open (Utils.getFilePath ("GOPR0082 (240P).avi"));
    Debug.Log (" path utilss " + Utils.getFilePath ("GOPR0082 (240P).avi"));
    if (capture.isOpened ()) {
        Debug.Log ("capture.isOpened() true");
    } else {
        Debug.Log ("capture.isOpened() false");
    }
    Debug.Log ("CAP_PROP_FORMAT: " + capture.get (Videoio.CAP_PROP_FORMAT));
    Debug.Log ("CV_CAP_PROP_PREVIEW_FORMAT: " + capture.get (Videoio.CV_CAP_PROP_PREVIEW_FORMAT));
    //My Addition
    //Debug.Log("CV_CAP_PROP_PREVIEW_FORMAT: " + capture.get(Videoio));
    Debug.Log ("CAP_PROP_POS_MSEC: " + capture.get (Videoio.CAP_PROP_POS_MSEC));
    Debug.Log ("CAP_PROP_POS_FRAMES: " + capture.get (Videoio.CAP_PROP_POS_FRAMES));
    Debug.Log ("CAP_PROP_POS_AVI_RATIO: " + capture.get (Videoio.CAP_PROP_POS_AVI_RATIO));
    Debug.Log ("CAP_PROP_FRAME_COUNT: " + capture.get (Videoio.CAP_PROP_FRAME_COUNT));
    Debug.Log ("CAP_PROP_FPS: " + capture.get (Videoio.CAP_PROP_FPS));
    Debug.Log ("CAP_PROP_FRAME_WIDTH: " + capture.get (Videoio.CAP_PROP_FRAME_WIDTH));
    Debug.Log ("CAP_PROP_FRAME_HEIGHT: " + capture.get (Videoio.CAP_PROP_FRAME_HEIGHT));
    capture.grab ();
    capture.retrieve (rgbMat, 0);
    int frameWidth = rgbMat.cols ();
    int frameHeight = rgbMat.rows ();
    colors = new Color32[frameWidth * frameHeight];
    texture = new Texture2D (frameWidth, frameHeight, TextureFormat.RGBA32, false);
    gameObject.transform.localScale = new Vector3 ((float)frameWidth, (float)frameHeight, 1);
    float widthScale = (float)Screen.width / (float)frameWidth;
    float heightScale = (float)Screen.height / (float)frameHeight;
    if (widthScale < heightScale) {
        Camera.main.orthographicSize = ((float)frameWidth * (float)Screen.height / (float)Screen.width) / 2;
        Debug.Log ("Camera main orthographicSize " + Camera.main.orthographicSize);
    } else {
        Camera.main.orthographicSize = (float)frameHeight / 2;
    }
    capture.set (Videoio.CAP_PROP_POS_FRAMES, 0);
    gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    des = new HOGDescriptor ();
}
```
I try to use A*B, but the compiler says that the operator * cannot be applied to Mat.
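In the Java-style OpenCV API this plugin wraps, Mat does not overload arithmetic operators, so A*B won't compile; matrix multiplication goes through Core.gemm (generalized matrix multiply, dst = alpha*src1*src2 + beta*src3):

```csharp
// C = A * B, expressed as gemm with alpha = 1 and beta = 0.
Mat A = Mat.eye(3, 3, CvType.CV_64FC1);
Mat B = Mat.ones(3, 3, CvType.CV_64FC1);
Mat C = new Mat();
Core.gemm(A, B, 1.0, new Mat(), 0.0, C); // empty src3 is ignored when beta = 0
```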
Could you please update the WebCamTexture Text Reco sample?
I need to perform OCR text recognition on a HoloLens, so I'm building a UWP app with the HoloToolkit and the HoloLensCameraStream package. I followed along with your HoloLens example scenes to make sure things were building and running, but when it came time to start writing OCR code, I discovered that all of the classes in org/opencv_contrib/text are wrapped in #if !UNITY_WSA_10_0,
which makes that entire namespace inaccessible.
What's the reason for this? I commented out that #if
statement in all 9 of the files currently in the text folder, and everything appears to compile and export just fine.
If there is a specific reason for excluding the text detection module from UWP/WSA builds, please document which modules are available on which runtime platforms. I haven't found such documentation, and I purchased this plugin with the understanding that it fully supported UWP (I had no reason to suspect only "partial" support).
Otherwise, please add text/OCR support to UWP/WSA builds.
Hi,
I am working with the ArUco example that comes with the OpenCVForUnity plugin. I am trying to call the class Aruco.cs, but I can't seem to figure out how the example knows the location of this file: at the top of the example it does not reference the file anywhere, yet it is able to call functions from it. I am not new to programming, but I am new to Unity and C#. Any direction would help.
Thanks!
I need equivalents for the following classes and methods:
Pointer (Class)
GetSeqElem (Method)
ConvexityDefect (Class)
If you could show me the existing equivalent or create a new one, that'd be great.
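For context, GetSeqElem and ConvexityDefect come from the old OpenCV 1.x C API (CvSeq sequences), which the modern Java-style API replaced with Mat-based containers. Assuming the plugin follows the standard OpenCV signatures, convexity defects are computed like this (a sketch with a made-up contour):

```csharp
// A small concave contour; the point (5, 4) dents the shape inward.
MatOfPoint contour = new MatOfPoint(
    new Point(0, 0), new Point(10, 0), new Point(10, 10),
    new Point(5, 4), new Point(0, 10));
MatOfInt hull = new MatOfInt();
Imgproc.convexHull(contour, hull);            // hull as contour-point indices
MatOfInt4 defects = new MatOfInt4();
Imgproc.convexityDefects(contour, hull, defects);
// Each defect row holds four ints: start index, end index,
// farthest-point index, and fixed-point depth (depth / 256.0 = pixels).
```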
Hi, I was just wondering if it is possible to use this library to develop a HoloLens application for object detection using YOLO?
thank you for the support
Hi,
We have purchased your Unity plugin from the Asset Store and followed the instructions in the ReadMe.pdf; however, when compiling in Xcode we get the following linker errors. Could you please help?
Undefined symbols for architecture armv7:
"std::ostream& std::ostream::_M_insert<void const*>(void const*)", referenced from:
cvflann::anyimpl::small_any_policy<cvflann::KDTreeIndex<cvflann::L1<float> >::Node**>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
cvflann::anyimpl::small_any_policy<cvflann::KDTreeIndex<cvflann::L2<float> >::Node**>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
"std::_Rb_tree_decrement(std::_Rb_tree_node_base const*)", referenced from:
cvflann::KNNUniqueResultSet<float>::addPoint(float, int) in opencv2(miniflann.o)
cvflann::KNNUniqueResultSet<int>::addPoint(int, int) in opencv2(miniflann.o)
"std::basic_ios<char, std::char_traits<char> >::widen(char) const", referenced from:
cvCreateCameraCapture_AVFoundation(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::queryFrame() in opencv2(cap_avfoundation.o)
cv::LDA::lda(cv::_InputArray const&, cv::_InputArray const&) in opencv2(lda.o)
cvflann::LshIndex<cvflann::L1<float> >::getNeighbors(float const*, cvflann::ResultSet<float>&) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::add(unsigned int, float const*) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::LshTable(unsigned int, unsigned int) in opencv2(miniflann.o)
...
"vtable for std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
"std::ios_base::Init::Init()", referenced from:
__GLOBAL__I_a in libopencvforunity.a(ml.o)
__GLOBAL__I_a in opencv2(knearest.o)
__GLOBAL__I_a in opencv2(rtrees.o)
__GLOBAL__I_a in opencv2(svm.o)
__GLOBAL__I_a in opencv2(boost.o)
__GLOBAL__I_a in opencv2(tree.o)
__GLOBAL__I_a in opencv2(gbt.o)
...
"std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&)", referenced from:
_core_Core_getBuildInformation_10 in libopencvforunity.a(core.o)
_gpu_DeviceInfo_name_10 in libopencvforunity.a(gpu.o)
cv::operator<<(cv::FileStorage&, std::string const&) in opencv2(persistence.o)
cv::Exception::Exception(int, std::string const&, std::string const&, std::string const&, int) in opencv2(system.o)
cv::tempfile(char const*) in opencv2(system.o)
cv::Exception::Exception(cv::Exception const&) in opencv2(system.o)
cv::HOGDescriptor::save(std::string const&, std::string const&) const in opencv2(hog.o)
...
"std::string::_Rep::_M_destroy(std::allocator<char> const&)", referenced from:
_contrib_FaceRecognizer_load_10 in libopencvforunity.a(contrib.o)
_contrib_FaceRecognizer_save_10 in libopencvforunity.a(contrib.o)
_core_Core_getBuildInformation_10 in libopencvforunity.a(core.o)
_core_Core_putText_10 in libopencvforunity.a(core.o)
_core_Core_putText_11 in libopencvforunity.a(core.o)
_core_Core_putText_12 in libopencvforunity.a(core.o)
_core_Core_n_1getTextSize in libopencvforunity.a(core.o)
...
"std::ostream& std::ostream::_M_insert<double>(double)", referenced from:
cv::writeElems(std::ostream&, void const*, int, int, char) in opencv2(out.o)
cvflann::anyimpl::big_any_policy<double>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
cvflann::anyimpl::small_any_policy<float>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
"std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(unsigned long, char, std::allocator<char> const&)", referenced from:
cv::imdecode_(cv::Mat const&, int, int, cv::Mat*) in opencv2(loadsave.o)
cv::findDecoder(std::string const&) in opencv2(loadsave.o)
"std::cerr", referenced from:
cvflann::LshIndex<cvflann::L1<float> >::getNeighbors(float const*, cvflann::ResultSet<float>&) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::add(unsigned int, float const*) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::LshTable(unsigned int, unsigned int) in opencv2(miniflann.o)
cvflann::LshIndex<cvflann::L2<float> >::getNeighbors(float const*, cvflann::ResultSet<float>&) in opencv2(miniflann.o)
"std::string::reserve(unsigned long)", referenced from:
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::GenericDescriptorMatcher::create(std::string const&, std::string const&) in opencv2(matchers.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
cv::getErrorMessageForWrongArgumentInSetter(std::string, std::string, int, int) in opencv2(algorithm.o)
cv::getErrorMessageForWrongArgumentInGetter(std::string, std::string, int, int) in opencv2(algorithm.o)
cv::BaseImageEncoder::throwOnEror() const in opencv2(grfmt_base.o)
"std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tree_node_base*, std::_Rb_tree_node_base&)", referenced from:
std::_Rb_tree<int, int, std::_Identity<int>, std::less<int>, std::allocator<int> >::_M_insert_unique(int const&) in opencv2(facerec.o)
std::_Rb_tree<int, std::pair<int const, int>, std::_Select1st<std::pair<int const, int> >, std::less<int>, std::allocator<std::pair<int const, int> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<int const, int> >, std::pair<int const, int> const&) in opencv2(calibinit.o)
std::_Rb_tree<unsigned long, unsigned long, std::_Identity<unsigned long>, std::less<unsigned long>, std::allocator<unsigned long> >::_M_insert_unique(unsigned long const&) in opencv2(circlesgrid.o)
std::_Rb_tree<unsigned long, std::pair<unsigned long const, Graph::Vertex>, std::_Select1st<std::pair<unsigned long const, Graph::Vertex> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, Graph::Vertex> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<unsigned long const, Graph::Vertex> >, std::pair<unsigned long const, Graph::Vertex> const&) in opencv2(circlesgrid.o)
std::_Rb_tree<unsigned long, std::pair<unsigned long const, Graph::Vertex>, std::_Select1st<std::pair<unsigned long const, Graph::Vertex> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, Graph::Vertex> > >::_M_insert_unique(std::pair<unsigned long const, Graph::Vertex> const&) in opencv2(circlesgrid.o)
std::_Rb_tree<int, std::pair<int const, int>, std::_Select1st<std::pair<int const, int> >, std::less<int>, std::allocator<std::pair<int const, int> > >::_M_insert_unique(std::pair<int const, int> const&) in opencv2(lda.o)
std::_Rb_tree<cvflann::UniqueResultSet<float>::DistIndex, cvflann::UniqueResultSet<float>::DistIndex, std::_Identity<cvflann::UniqueResultSet<float>::DistIndex>, std::less<cvflann::UniqueResultSet<float>::DistIndex>, std::allocator<cvflann::UniqueResultSet<float>::DistIndex> >::_M_insert_unique(cvflann::UniqueResultSet<float>::DistIndex const&) in opencv2(miniflann.o)
...
"std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string()", referenced from:
cv::getBuildInformation() in opencv2(system.o)
"std::string::assign(std::string const&)", referenced from:
cv::operator<<(cv::FileStorage&, std::string const&) in opencv2(persistence.o)
cv::Exception::formatMessage() in opencv2(system.o)
cv::FileNode::operator std::string() const in opencv2(cascadedetect.o)
cv::gpu::DeviceInfo::query() in opencv2(gpumat.o)
cv::imdecode_(cv::Mat const&, int, int, cv::Mat*) in opencv2(loadsave.o)
cv::findDecoder(std::string const&) in opencv2(loadsave.o)
cv::AlgorithmInfo::set(cv::Algorithm*, char const*, int, void const*, bool) const in opencv2(algorithm.o)
...
"std::string::resize(unsigned long, char)", referenced from:
icvClose(CvFileStorage*, std::string*) in opencv2(persistence.o)
"std::string::_M_mutate(unsigned long, unsigned long, unsigned long)", referenced from:
icvClose(CvFileStorage*, std::string*) in opencv2(persistence.o)
"std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tree_node_base&)", referenced from:
std::set<unsigned long, std::less<unsigned long>, std::allocator<unsigned long> >::erase(unsigned long const&) in opencv2(circlesgrid.o)
std::_Rb_tree<cvflann::UniqueResultSet<float>::DistIndex, cvflann::UniqueResultSet<float>::DistIndex, std::_Identity<cvflann::UniqueResultSet<float>::DistIndex>, std::less<cvflann::UniqueResultSet<float>::DistIndex>, std::allocator<cvflann::UniqueResultSet<float>::DistIndex> >::erase(cvflann::UniqueResultSet<float>::DistIndex const&) in opencv2(miniflann.o)
std::_Rb_tree<cvflann::UniqueResultSet<int>::DistIndex, cvflann::UniqueResultSet<int>::DistIndex, std::_Identity<cvflann::UniqueResultSet<int>::DistIndex>, std::less<cvflann::UniqueResultSet<int>::DistIndex>, std::allocator<cvflann::UniqueResultSet<int>::DistIndex> >::erase(cvflann::UniqueResultSet<int>::DistIndex const&) in opencv2(miniflann.o)
"std::_List_node_base::hook(std::_List_node_base*)", referenced from:
CirclesGridClusterFinder::hierarchicalClustering(std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >, cv::Size_<int> const&, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >&) in opencv2(circlesgrid.o)
void std::__uninitialized_fill_n_aux<std::list<unsigned long, std::allocator<unsigned long> >*, unsigned long, std::list<unsigned long, std::allocator<unsigned long> > >(std::list<unsigned long, std::allocator<unsigned long> >*, unsigned long, std::list<unsigned long, std::allocator<unsigned long> > const&, std::__false_type) in opencv2(circlesgrid.o)
"std::string::compare(char const*) const", referenced from:
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::CascadeClassifier::Data::read(cv::FileNode const&) in opencv2(cascadedetect.o)
cv::DescriptorMatcher::create(std::string const&) in opencv2(matchers.o)
cv::AdjusterAdapter::create(std::string const&) in opencv2(dynamic.o)
"std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&, unsigned long, unsigned long)", referenced from:
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
cv::findDecoder(std::string const&) in opencv2(loadsave.o)
"std::ostream& std::ostream::_M_insert<bool>(bool)", referenced from:
cvflann::anyimpl::small_any_policy<bool>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
"std::_Rb_tree_increment(std::_Rb_tree_node_base*)", referenced from:
std::_Rb_tree<int, std::pair<int const, int>, std::_Select1st<std::pair<int const, int> >, std::less<int>, std::allocator<std::pair<int const, int> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<int const, int> >, std::pair<int const, int> const&) in opencv2(calibinit.o)
std::set<unsigned long, std::less<unsigned long>, std::allocator<unsigned long> >::erase(unsigned long const&) in opencv2(circlesgrid.o)
std::_Rb_tree<unsigned long, std::pair<unsigned long const, Graph::Vertex>, std::_Select1st<std::pair<unsigned long const, Graph::Vertex> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, Graph::Vertex> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<unsigned long const, Graph::Vertex> >, std::pair<unsigned long const, Graph::Vertex> const&) in opencv2(circlesgrid.o)
std::_Rb_tree<cvflann::UniqueResultSet<float>::DistIndex, cvflann::UniqueResultSet<float>::DistIndex, std::_Identity<cvflann::UniqueResultSet<float>::DistIndex>, std::less<cvflann::UniqueResultSet<float>::DistIndex>, std::allocator<cvflann::UniqueResultSet<float>::DistIndex> >::erase(cvflann::UniqueResultSet<float>::DistIndex const&) in opencv2(miniflann.o)
std::_Rb_tree<unsigned int, std::pair<unsigned int const, std::vector<unsigned int, std::allocator<unsigned int> > >, std::_Select1st<std::pair<unsigned int const, std::vector<unsigned int, std::allocator<unsigned int> > > >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, std::vector<unsigned int, std::allocator<unsigned int> > > > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<unsigned int const, std::vector<unsigned int, std::allocator<unsigned int> > > >, std::pair<unsigned int const, std::vector<unsigned int, std::allocator<unsigned int> > > const&) in opencv2(miniflann.o)
std::_Rb_tree<cvflann::UniqueResultSet<int>::DistIndex, cvflann::UniqueResultSet<int>::DistIndex, std::_Identity<cvflann::UniqueResultSet<int>::DistIndex>, std::less<cvflann::UniqueResultSet<int>::DistIndex>, std::allocator<cvflann::UniqueResultSet<int>::DistIndex> >::erase(cvflann::UniqueResultSet<int>::DistIndex const&) in opencv2(miniflann.o)
std::_Rb_tree<std::string, std::pair<std::string const, cvflann::any>, std::_Select1st<std::pair<std::string const, cvflann::any> >, std::less<std::string>, std::allocator<std::pair<std::string const, cvflann::any> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<std::string const, cvflann::any> >, std::pair<std::string const, cvflann::any> const&) in opencv2(miniflann.o)
...
"std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >::basic_stringstream(std::_Ios_Openmode)", referenced from:
_core_Mat_nDump in libopencvforunity.a(Mat.o)
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::ostream::operator<<(int)", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cv::writeElems(std::ostream&, void const*, int, int, char) in opencv2(out.o)
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
cvflann::anyimpl::big_any_policy<cvflann::flann_centers_init_t>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
cvflann::anyimpl::big_any_policy<cvflann::flann_algorithm_t>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
cvflann::anyimpl::small_any_policy<int>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
...
"std::string::_Rep::_S_empty_rep_storage", referenced from:
_contrib_FaceRecognizer_load_10 in libopencvforunity.a(contrib.o)
_contrib_FaceRecognizer_save_10 in libopencvforunity.a(contrib.o)
_core_Core_getBuildInformation_10 in libopencvforunity.a(core.o)
_core_Core_putText_10 in libopencvforunity.a(core.o)
_core_Core_putText_11 in libopencvforunity.a(core.o)
_core_Core_putText_12 in libopencvforunity.a(core.o)
_core_Core_n_1getTextSize in libopencvforunity.a(core.o)
...
"std::locale::~locale()", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::ostream::put(char)", referenced from:
cvCreateCameraCapture_AVFoundation(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::queryFrame() in opencv2(cap_avfoundation.o)
cv::LDA::lda(cv::_InputArray const&, cv::_InputArray const&) in opencv2(lda.o)
cvflann::LshIndex<cvflann::L1<float> >::getNeighbors(float const*, cvflann::ResultSet<float>&) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::add(unsigned int, float const*) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::LshTable(unsigned int, unsigned int) in opencv2(miniflann.o)
...
"std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&)", referenced from:
_contrib_FaceRecognizer_load_10 in libopencvforunity.a(contrib.o)
_contrib_FaceRecognizer_save_10 in libopencvforunity.a(contrib.o)
_core_Core_putText_10 in libopencvforunity.a(core.o)
_core_Core_putText_11 in libopencvforunity.a(core.o)
_core_Core_putText_12 in libopencvforunity.a(core.o)
_core_Core_n_1getTextSize in libopencvforunity.a(core.o)
_core_Algorithm_getBool_10 in libopencvforunity.a(core.o)
...
"vtable for std::basic_streambuf<char, std::char_traits<char> >", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
NOTE: a missing vtable usually means the first non-inline virtual member function has no definition.
"std::_Rb_tree_increment(std::_Rb_tree_node_base const*)", referenced from:
std::vector<int, std::allocator<int> > cv::remove_dups<int>(std::vector<int, std::allocator<int> > const&) in opencv2(facerec.o)
Graph::floydWarshall(cv::Mat&, int) const in opencv2(circlesgrid.o)
CirclesGridFinder::rng2gridGraph(Graph&, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >&) const in opencv2(circlesgrid.o)
cv::flann::IndexParams::getAll(std::vector<std::string, std::allocator<std::string> >&, std::vector<int, std::allocator<int> >&, std::vector<std::string, std::allocator<std::string> >&, std::vector<double, std::allocator<double> >&) const in opencv2(miniflann.o)
cvflann::UniqueResultSet<float>::copy(int*, float*, int) const in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::optimize() in opencv2(miniflann.o)
cvflann::AutotunedIndex<cvflann::L1<float> >::buildIndex() in opencv2(miniflann.o)
...
"std::ios_base::~ios_base()", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::string::append(std::string const&)", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::GenericDescriptorMatcher::create(std::string const&, std::string const&) in opencv2(matchers.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
cv::getErrorMessageForWrongArgumentInSetter(std::string, std::string, int, int) in opencv2(algorithm.o)
cv::getErrorMessageForWrongArgumentInGetter(std::string, std::string, int, int) in opencv2(algorithm.o)
...
"std::__throw_out_of_range(char const*)", referenced from:
cv::ChamferMatcher::matching(cv::ChamferMatcher::Template&, cv::Mat&) in opencv2(chamfermatching.o)
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
CirclesGridClusterFinder::parsePatternPoints(std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > > const&, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > > const&, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >&) in opencv2(circlesgrid.o)
CirclesGridFinder::isDetectionCorrect() in opencv2(circlesgrid.o)
CirclesGridFinder::findLongestPath(std::vector<Graph, std::allocator<Graph> >&, Path&) in opencv2(circlesgrid.o)
cv::BriskScaleSpace::getKeypoints(int, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >&) in opencv2(brisk.o)
...
"std::string::append(char const*, unsigned long)", referenced from:
cv::javaFeatureDetector::create(int) in libopencvforunity.a(features2d.o)
cv::javaDescriptorExtractor::create(int) in libopencvforunity.a(features2d.o)
cv::tempfile(char const*) in opencv2(system.o)
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::GenericDescriptorMatcher::create(std::string const&, std::string const&) in opencv2(matchers.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
cv::getErrorMessageForWrongArgumentInSetter(std::string, std::string, int, int) in opencv2(algorithm.o)
...
"std::__throw_length_error(char const*)", referenced from:
std::vector<cv::Mat, std::allocator<cv::Mat> >::reserve(unsigned long) in libopencvforunity.a(converters.o)
std::vector<std::vector<char, std::allocator<char> >, std::allocator<std::vector<char, std::allocator<char> > > >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::vector<char, std::allocator<char> >*, std::vector<std::vector<char, std::allocator<char> >, std::allocator<std::vector<char, std::allocator<char> > > > >, std::vector<char, std::allocator<char> > const&) in libopencvforunity.a(converters.o)
std::vector<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >, std::allocator<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> > > >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >*, std::vector<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> >, std::allocator<std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> > > > >, std::vector<cv::KeyPoint, std::allocator<cv::KeyPoint> > const&) in libopencvforunity.a(converters.o)
std::vector<std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > >, std::allocator<std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > > > >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > >*, std::vector<std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > >, std::allocator<std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > > > > >, std::vector<cv::Point3_<float>, std::allocator<cv::Point3_<float> > > const&) in libopencvforunity.a(converters.o)
std::vector<std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >, std::allocator<std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > > > >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >*, std::vector<std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > >, std::allocator<std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > > > > >, std::vector<cv::Point_<int>, std::allocator<cv::Point_<int> > > const&) in libopencvforunity.a(converters.o)
std::vector<float, std::allocator<float> >::_M_fill_insert(__gnu_cxx::__normal_iterator<float*, std::vector<float, std::allocator<float> > >, unsigned long, float const&) in opencv2(rtrees.o)
std::vector<unsigned char, std::allocator<unsigned char> >::_M_fill_insert(__gnu_cxx::__normal_iterator<unsigned char*, std::vector<unsigned char, std::allocator<unsigned char> > >, unsigned long, unsigned char const&) in opencv2(hough.o)
...
"std::_Rb_tree_decrement(std::_Rb_tree_node_base*)", referenced from:
std::_Rb_tree<int, int, std::_Identity<int>, std::less<int>, std::allocator<int> >::_M_insert_unique(int const&) in opencv2(facerec.o)
std::_Rb_tree<int, std::pair<int const, int>, std::_Select1st<std::pair<int const, int> >, std::less<int>, std::allocator<std::pair<int const, int> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<int const, int> >, std::pair<int const, int> const&) in opencv2(calibinit.o)
std::_Rb_tree<unsigned long, unsigned long, std::_Identity<unsigned long>, std::less<unsigned long>, std::allocator<unsigned long> >::_M_insert_unique(unsigned long const&) in opencv2(circlesgrid.o)
std::_Rb_tree<unsigned long, std::pair<unsigned long const, Graph::Vertex>, std::_Select1st<std::pair<unsigned long const, Graph::Vertex> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, Graph::Vertex> > >::_M_insert_unique(std::_Rb_tree_iterator<std::pair<unsigned long const, Graph::Vertex> >, std::pair<unsigned long const, Graph::Vertex> const&) in opencv2(circlesgrid.o)
std::_Rb_tree<unsigned long, std::pair<unsigned long const, Graph::Vertex>, std::_Select1st<std::pair<unsigned long const, Graph::Vertex> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, Graph::Vertex> > >::_M_insert_unique(std::pair<unsigned long const, Graph::Vertex> const&) in opencv2(circlesgrid.o)
std::_Rb_tree<int, std::pair<int const, int>, std::_Select1st<std::pair<int const, int> >, std::less<int>, std::allocator<std::pair<int const, int> > >::_M_insert_unique(std::pair<int const, int> const&) in opencv2(lda.o)
std::_Rb_tree<cvflann::UniqueResultSet<float>::DistIndex, cvflann::UniqueResultSet<float>::DistIndex, std::_Identity<cvflann::UniqueResultSet<float>::DistIndex>, std::less<cvflann::UniqueResultSet<float>::DistIndex>, std::allocator<cvflann::UniqueResultSet<float>::DistIndex> >::_M_insert_unique(cvflann::UniqueResultSet<float>::DistIndex const&) in opencv2(miniflann.o)
...
"std::ios_base::Init::~Init()", referenced from:
__GLOBAL__I_a in libopencvforunity.a(ml.o)
__GLOBAL__I_a in opencv2(knearest.o)
__GLOBAL__I_a in opencv2(rtrees.o)
__GLOBAL__I_a in opencv2(svm.o)
__GLOBAL__I_a in opencv2(boost.o)
__GLOBAL__I_a in opencv2(tree.o)
__GLOBAL__I_a in opencv2(gbt.o)
...
"std::ostream::flush()", referenced from:
cvCreateCameraCapture_AVFoundation(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::queryFrame() in opencv2(cap_avfoundation.o)
cv::LDA::lda(cv::_InputArray const&, cv::_InputArray const&) in opencv2(lda.o)
cvflann::LshIndex<cvflann::L1<float> >::getNeighbors(float const*, cvflann::ResultSet<float>&) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::add(unsigned int, float const*) in opencv2(miniflann.o)
cvflann::lsh::LshTable<float>::LshTable(unsigned int, unsigned int) in opencv2(miniflann.o)
...
"std::runtime_error::runtime_error(std::string const&)", referenced from:
cvflann::flann_algorithm_t cvflann::get_param<cvflann::flann_algorithm_t>(std::map<std::string, cvflann::any, std::less<std::string>, std::allocator<std::pair<std::string const, cvflann::any> > > const&, std::string) in opencv2(miniflann.o)
std::string cvflann::get_param<std::string>(std::map<std::string, cvflann::any, std::less<std::string>, std::allocator<std::pair<std::string const, cvflann::any> > > const&, std::string) in opencv2(miniflann.o)
cvflann::FLANNException::FLANNException(char const*) in opencv2(miniflann.o)
int cvflann::get_param<int>(std::map<std::string, cvflann::any, std::less<std::string>, std::allocator<std::pair<std::string const, cvflann::any> > > const&, std::string) in opencv2(miniflann.o)
"std::basic_stringbuf<char, std::char_traits<char>, std::allocator<char> >::str() const", referenced from:
_core_Mat_nDump in libopencvforunity.a(Mat.o)
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::string::assign(char const*, unsigned long)", referenced from:
cv::javaFeatureDetector::create(int) in libopencvforunity.a(features2d.o)
cv::javaDescriptorMatcher::create(int) in libopencvforunity.a(features2d.o)
cv::javaDescriptorExtractor::create(int) in libopencvforunity.a(features2d.o)
cv::javaGenericDescriptorMatcher::create(int) in libopencvforunity.a(features2d.o)
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cv::tempfile(char const*) in opencv2(system.o)
...
"VTT for std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >", referenced from:
CvGBTrees::write(CvFileStorage*, char const*) const in opencv2(gbt.o)
CvGBTrees::read(CvFileStorage*, CvFileNode*) in opencv2(gbt.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::ostream& std::ostream::_M_insert<unsigned long>(unsigned long)", referenced from:
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
cvflann::anyimpl::small_any_policy<unsigned int>::print(std::ostream&, void* const*) in opencv2(miniflann.o)
cvflann::lsh::LshTable<unsigned char>::initialize(unsigned long) in opencv2(miniflann.o)
"std::_List_node_base::transfer(std::_List_node_base*, std::_List_node_base*)", referenced from:
CirclesGridClusterFinder::hierarchicalClustering(std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >, cv::Size_<int> const&, std::vector<cv::Point_<float>, std::allocator<cv::Point_<float> > >&) in opencv2(circlesgrid.o)
"std::string::find(char const*, unsigned long, unsigned long) const", referenced from:
cv::FeatureDetector::create(std::string const&) in opencv2(detectors.o)
cv::DescriptorExtractor::create(std::string const&) in opencv2(descriptors.o)
"std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, int)", referenced from:
cv::writeElems(std::ostream&, void const*, int, int, char) in opencv2(out.o)
cv::writeMat(std::ostream&, cv::Mat const&, char, char, bool) in opencv2(out.o)
cv::CFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
cv::CSVFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
cv::NumpyFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
cv::PythonFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
cv::MatlabFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
...
"std::cout", referenced from:
cvCreateCameraCapture_AVFoundation(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::startCaptureDevice(int) in opencv2(cap_avfoundation.o)
CvCaptureCAM::queryFrame() in opencv2(cap_avfoundation.o)
cv::LDA::lda(cv::_InputArray const&, cv::_InputArray const&) in opencv2(lda.o)
cvflann::AutotunedIndex<cvflann::L1<float> >::buildIndex() in opencv2(miniflann.o)
cvflann::AutotunedIndex<cvflann::L2<float> >::buildIndex() in opencv2(miniflann.o)
"std::basic_stringstream<char, std::char_traits<char>, std::allocator<char> >::~basic_stringstream()", referenced from:
_core_Mat_nDump in libopencvforunity.a(Mat.o)
"std::string::_M_leak_hard()", referenced from:
icvClose(CvFileStorage*, std::string*) in opencv2(persistence.o)
cv::tempfile(char const*) in opencv2(system.o)
cv::imdecode_(cv::Mat const&, int, int, cv::Mat*) in opencv2(loadsave.o)
cv::findDecoder(std::string const&) in opencv2(loadsave.o)
"std::basic_ios<char, std::char_traits<char> >::clear(std::_Ios_Iostate)", referenced from:
cv::NumpyFormatter::write(std::ostream&, cv::Mat const&, int const*, int) const in opencv2(out.o)
ld: symbol(s) not found for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Hi,
I have purchased this product off the Unity Asset Store and came across a possible bug in org.opencv.utils.Converters.cs
In the method starting on line 637, it would seem that this block would always throw, because if you take line 639 and line 645, no matter what m is, it will throw. So in my source I commented out lines 639 and 640 to get past it. Can you please let me know if my assumptions are correct?
Thank you.
637 public static void Mat_to_vector_vector_Point(Mat m, List<MatOfPoint> pts)
638 {
639 if (m != null)
640 m.ThrowIfDisposed();
641
642 if (pts == null)
643 throw new CvException("Output List can't be null");
644
645 if (m == null)
646 throw new CvException("Input Mat can't be null");
List<Mat> mats = new List<Mat>(m.rows());
Mat_to_vector_Mat(m, mats);
foreach (Mat mi in mats)
{
MatOfPoint pt = new MatOfPoint(mi);
pts.Add(pt);
mi.release();
}
mats.Clear();
}
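As a side note on the control flow: a minimal Python mirror of those guard clauses (hypothetical names; `throw_if_disposed` stands in for Mat.ThrowIfDisposed, which throws only after the Mat has been released) suggests the block does not throw unconditionally:

```python
class CvException(Exception):
    """Stand-in for OpenCVForUnity's CvException."""

class Mat:
    """Minimal stand-in: tracks only whether the native Mat was released."""
    def __init__(self):
        self.disposed = False

    def throw_if_disposed(self):
        if self.disposed:
            raise CvException("Mat has already been disposed")

def mat_to_vector_vector_point(m, pts):
    # Same guard order as Converters.cs lines 639-645.
    if m is not None:
        m.throw_if_disposed()          # throws only for a *disposed* Mat
    if pts is None:
        raise CvException("Output List can't be null")
    if m is None:
        raise CvException("Input Mat can't be null")
    return "conversion would proceed"  # conversion body omitted
```

With a live Mat and a non-null output list, all three guards pass, so the throw would only be hit if the Mat had been released (or was null) before the call.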
To accelerate video capturing, I modified WebCamTextureToMatSample by using VideoCapture directly instead of using WebCamTexture.
It worked well on OSX, WIN, and iOS, but not on Android 6.
It just showed a black texture.
According to the old document below:
http://enoxsoftware.github.io/OpenCVForUnity/doc/html/class_open_c_v_for_unity_1_1_video_capture.html
it seems that VideoCapture was not supported on Android 5 and above.
However, I'm using the latest asset and there are no similar words in the latest document:
https://enoxsoftware.github.io/OpenCVForUnity/3.0.0/doc/html/class_open_c_v_for_unity_1_1_video_capture.html
I would appreciate it if someone knows whether VideoCapture is supported on Android in the latest asset.
In the CamShift example, how can I display an arbitrary image while enclosing the tracked object with a red or green frame?
Hi,
With OpenCVForUnity, how can I detect hand gestures, including individual fingers, from a web camera? Thanks in advance for any clues or code snippets.
It looks like SimpleBlobDetector, which is part of the OpenCV 4.0.x package, is not in OpenCVForUnity. Is this intentional, or could it be an oversight?
Hi,
Can I get an older version of OpenCVForUnity (ver 2.3.2)?
https://enoxsoftware.github.io/OpenCVForUnity/3.0.0/doc/html/class_open_c_v_for_unity_1_1_video.html
I really need to use the function estimateRigidTransform.
After testing in OpenCV C++, estimateRigidTransform gives better results than estimateAffine2D.
I'm using it for video stabilization.
estimateRigidTransform is deprecated in the latest version.
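For background, estimateRigidTransform with fullAffine=false fits a 4-DOF similarity transform (rotation, uniform scale, translation). A stdlib-Python sketch of the closed-form least-squares fit, without the outlier filtering OpenCV's implementation adds, assuming equal-length 2D point lists:

```python
def fit_similarity(src, dst):
    """Least-squares fit of dst ~ s*R*src + t for 2D point lists.

    Returns the 2x3 matrix [[a, -b, tx], [b, a, ty]], where
    a = s*cos(theta) and b = s*sin(theta). This is the transform family
    estimateRigidTransform(fullAffine=false) estimates (sketch only:
    no RANSAC-style outlier rejection).
    """
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    mu = sum(q[0] for q in dst) / n
    mv = sum(q[1] for q in dst) / n
    sxx = a_num = b_num = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - mx, y - my, u - mu, v - mv
        a_num += xc * uc + yc * vc   # projection onto rotation cosine
        b_num += xc * vc - yc * uc   # projection onto rotation sine
        sxx += xc * xc + yc * yc
    a, b = a_num / sxx, b_num / sxx
    tx = mu - (a * mx - b * my)      # translation recovers the centroids
    ty = mv - (b * mx + a * my)
    return [[a, -b, tx], [b, a, ty]]
```

For stabilization between consecutive frames, src/dst would typically be matched feature points (e.g. from goodFeaturesToTrack plus optical flow).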
In the vanilla ArUcoWebCamTextureExample I get:
OpenCVForUnityExample.ARUtils.ConvertTvecToPos (System.Double[] tvec) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ARUtils.cs:40)
OpenCVForUnityExample.ARUtils.ConvertRvecTvecToPoseData (System.Double[] rvec, System.Double[] tvec) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ARUtils.cs:52)
OpenCVForUnityExample.ArUcoWebCamTextureExample.UpdateARObjectTransform (OpenCVForUnity.Mat rvec, OpenCVForUnity.Mat tvec) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoWebCamTextureExample.cs:661)
OpenCVForUnityExample.ArUcoWebCamTextureExample.EstimatePoseChArUcoBoard (OpenCVForUnity.Mat rgbMat) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoWebCamTextureExample.cs:634)
OpenCVForUnityExample.ArUcoWebCamTextureExample.Update () (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoWebCamTextureExample.cs:562)
Also, in UpdateARObjectTransform(Mat rvec, Mat tvec), I find that in the working CanonicalMarker case rvec and tvec both have rows and cols equal to 1, and the array returned by .get(0,0) has length 3. With ChArUcoBoard they are 1 col by 3 rows, and the array returned by .get(0,0) has length 1.
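For anyone hitting the same shape mismatch: a hedged sketch (Python stand-in for the Mat.get access pattern; the helper name is hypothetical) of normalizing either layout to three doubles before converting to a pose:

```python
def read_vec3(rows, cols, get):
    """Normalize a 3-element OpenCV vector Mat to [x, y, z].

    `get(r, c)` mimics Mat.get(r, c) and returns the list of channel
    values at that element. Handles both the 1x1 three-channel layout
    (the CanonicalMarker case above) and the 3x1 one-channel layout
    (the ChArUcoBoard case above).
    """
    if rows == 1 and cols == 1:
        vals = get(0, 0)                 # one element, three channels
        if len(vals) == 3:
            return list(vals)
    if rows == 3 and cols == 1:
        # three elements, one channel each
        return [get(r, 0)[0] for r in range(3)]
    raise ValueError("unexpected vector layout: %dx%d" % (rows, cols))
```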
Using the sample code from the face mask example, the video plays only once and then the grabbed frame size is 0 x 0, though it works fine in the Unity editor.
How do I loop the video on iOS?
Hi,
With OpenCVForUnity, how can I extract a full human body from web-camera video? Thanks in advance for any clues or code snippets.
Hi, I have encountered a problem: my iOS app built from a Unity project shows a totally black camera video on screen. The camera video source is the ARCamera prefab provided by Vuforia 4.2.3, and the project contains only the Vuforia and OpenCVForUnity libraries.
I have tested the iOS app with only the Vuforia library and it obtains the camera video normally, but if I add the OpenCVForUnity library to the project, the app running on iPhone or iPad shows the situation described above.
Since I saw there is an example that combines Vuforia and OpenCVForUnity on Unity 5, have you tested these settings on Unity 4.6, and does it work the same as on Unity 5? If you have done this before, please tell me the build settings in Xcode.
Hey,
I'm trying to compile the examples (mainly the MobileNet SSD, Caffe, and TensorFlow ones) from this repo (2.2.8) with the OpenCVForUnity I downloaded from the Asset Store (which also says 2.2.8), and these three packages seem to be missing, though I do see them on the documentation page.
I also had to fix the return type of several methods to get it to compile (VS2017, Unity 2017.3.1), but everything else seems to work.
`Failed running "D:\Unity Installs\2019.1.2f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten_Win\python\2.7.5.3_64bit\python.exe" -E "D:\Unity Installs\2019.1.2f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\emcc" @"D:\Projects\2019WebGLTest\Assets..\Temp\emcc_arguments.resp"
stdout:
stderr:error: Linking globals named 'z_inflateInit2_': symbol multiply defined!ERROR:root:Failed to run llvm optimizations:
UnityEngine.GUIUtility: ProcessEvent(Int32, IntPtr)`
Using Unity 2018.4.3f1
OpenCVForUnity: creation of a Mat worked perfectly fine in the Unity 5 series. Now we are working on Unity 2017.3, and a Mat object of the desired size is not getting created.
For example: Mat img = new Mat(480, 480, CvType.CV_8UC3);
If we check the size of the Mat, it returns 0 or null.
Function: Utils.matToTexture2D(f8UC3,imgTexture);
We get the exception: "ArgumentException: The output Texture2D object has to be of the same size
OpenCVForUnity.Utils.matToTexture2D (OpenCVForUnity.Mat mat, UnityEngine.Texture2D texture2D, UnityEngine.Color32[] bufferColors) (at Assets/OpenCVForUnity/org/opencv/unity/Utils.cs:303)
CameraDisplay.LateUpdate () (at Assets/CameraDisplay.cs:103)
"
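For reference, Utils.matToTexture2D requires the texture and the Mat to have matching dimensions (texture width = Mat cols, texture height = Mat rows). A minimal Python sketch of that precondition (hypothetical helper name, not the plugin's actual code) shows why a Mat that reports 0 x 0 always trips this exception:

```python
def check_mat_texture_match(mat_rows, mat_cols, tex_width, tex_height):
    """Sketch of the size precondition behind matToTexture2D's exception.

    A Mat whose native allocation failed reports 0x0 and therefore can
    never match a real texture, so this check fails before any pixel
    copy is attempted.
    """
    if mat_cols != tex_width or mat_rows != tex_height:
        raise ValueError(
            "The output Texture2D object has to be of the same size "
            "(mat %dx%d vs texture %dx%d)"
            % (mat_cols, mat_rows, tex_width, tex_height))
    return True
```

So the exception here is a downstream symptom: the underlying problem is the Mat reporting size 0 after construction.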
To convert from OpenCV's right-handed coordinate system to Unity's left-handed one, it should only require a flip of the y-axis. Why does the ArUco example also flip the z-axis?
Hi there, I would like to know if there is a port for smile detection and other facial expressions. I would like to develop an application that detects a few such expressions (like happy, smiling, sad).
I am trying to run object detection via a Caffe model, but whenever I try, my Unity editor crashes.
Assets/OpenCVForUnity/Examples/Advanced/AlphaBlendingExample/AlphaBlendingExample.cs(7,7): error CS0246: The type or namespace name `OpenCVForUnity' could not be found. Are you missing an assembly reference?
How can I solve this error?
ArgumentOutOfRangeException: Argument is out of range.
Parameter name: index
System.Collections.Generic.List`1[OpenCVForUnity.KeyPoint].get_Item (Int32 index) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.Collections.Generic/List.cs:633)
PatternDetector.refineMatchesWithHomography (OpenCVForUnity.MatOfKeyPoint queryKeypoints, OpenCVForUnity.MatOfKeyPoint trainKeypoints, Single reprojectionThreshold, OpenCVForUnity.MatOfDMatch matches, OpenCVForUnity.Mat homography) (at Assets/MarkerLessARExample/MarkerLessAR/PatternDetector.cs:438)
PatternDetector.findPattern (OpenCVForUnity.Mat image, .PatternTrackingInfo info) (at Assets/MarkerLessARExample/MarkerLessAR/PatternDetector.cs:226)
CVAR.Update () (at Assets/MarkerLessARExample/Scripts/CVAR.cs:222)
This happens when you move the tracked object out of view.
Trying to create an IPA with an enterprise signature. It fails every time on Xcode 11.1 and OpenCV 2.3.7. The logs don't print anything interesting except:
/Applications/Xcode.app/Contents/Developer/usr/bin/ipatool exited with 1
I tried disabling bitcode as suggested in issue #40, but it didn't help.
Xcode 10.3 works great though.
How can I capture the detected object, either every frame or on command, when using your CamShift example? I'm able to detect the particular object, but I'm not able to create an image for each frame in which the object was detected, showing only the detected area.
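One common approach (a sketch under assumptions, not the CamShift example's own code): take the bounding box of the rotated rect that CamShift returns and copy that sub-region of the frame whenever you want a capture. In Python, with the frame as a row-major nested list and the ROI clamped to the frame bounds:

```python
def crop_roi(frame, x, y, w, h):
    """Copy the w x h region at (x, y) out of a row-major frame.

    Clamps the ROI to the frame bounds, much as intersecting a Rect
    with Rect(0, 0, cols, rows) would in OpenCV before calling submat.
    """
    rows, cols = len(frame), len(frame[0])
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(cols, x + w), min(rows, y + h)
    return [row[x0:x1] for row in frame[y0:y1]]
```

In OpenCVForUnity the equivalent step would be taking a submat of the frame over that clamped Rect and converting it to a Texture2D, saved per frame or on demand.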
I get this error:
The type or namespace name `DisposableOpenCVObject' could not be found. Are you missing a using directive or an assembly reference?
in all referencing classes.