hololensforcv's Introduction

page_type: sample
name: HoloLensForCV samples
description: HoloLens Research Mode samples for building and deploying OpenCV projects with sensor data streaming and recording.
languages: cpp
products: windows-mixed-reality, hololens

HoloLensForCV samples

We want to help people use the HoloLens as a Computer Vision and Robotics research device. The project was launched at CVPR 2017, and we intend to extend it as new capabilities become available on the HoloLens.

Contents

This repository contains reusable components, samples and tools aimed at making it easier to use the HoloLens as a tool for Computer Vision and Robotics research.

Take a quick look at the HoloLensForCV UWP component. This component is used by most of our samples and tools to access, stream and record HoloLens sensor data.

Learn how to build the project and deploy the samples.

Learn how to use OpenCV on HoloLens.

Learn how to stream sensor data and how to process it online on a companion PC.

Learn how to record sensor data and how to process it offline on a companion PC.

Universal Windows Platform development

All of the samples require Visual Studio 2017 Update 3 and the Windows Software Development Kit (SDK) for Windows 10 to build, test, and deploy your Universal Windows Platform apps. In addition, samples specific to Windows 10 Holographic require a Windows Holographic device to execute. Windows Holographic devices include the Microsoft HoloLens and the Microsoft HoloLens Emulator.

Get a free copy of Visual Studio 2017 Community Edition with support for building Universal Windows Platform apps

Install the Windows Holographic tools.

Learn how to build great apps for Windows by experimenting with our sample apps.

Find more information on Microsoft Developer site.

Additionally, to stay on top of the latest updates to Windows and the development tools, become a Windows Insider by joining the Windows Insider Program.

Become a Windows Insider

Using the samples

The easiest way to use these samples without using Git is to download the zip file containing the current version (using the following link or by clicking the "Download ZIP" button on the repo page). You can then unzip the entire archive and use the samples in Visual Studio 2017.

Download the samples ZIP

Notes:

  • Before you unzip the archive, right-click it, select Properties, and then select Unblock.
  • Be sure to unzip the entire archive, and not just individual samples. The samples all depend on the Shared folder in the archive.
  • In Visual Studio 2017, the platform target defaults to ARM, so be sure to change that to x64 or x86 if you want to test on a non-ARM device.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

hololensforcv's People

Contributors

ahojnnes, bschoun, davidgedye, evrova, haochihlin, hferrone, lwallmicrosoft, microsoft-github-policy-service[bot], microsoftopensource, msftgits, pablospe, polszta, sshiv


hololensforcv's Issues

Unable to receive sensor data on the ReceiverVLC project

I am unable to receive sensor data in the ReceiverVLC project. I can connect successfully to the HoloLens server running StreamerVLC, but after that I appear to get stuck in the ReceiveAsync function and never receive any sensorFrame images. I may also just be running the Receiver or Streamer project incorrectly (I hadn't attempted to run the Streamer/Receiver pair on the RS4 build, since I was only using the Recorder, so this was my first attempt at running them). I mostly tried it now to see if there's some systemic issue with the Research Mode sensors on the latest build.

Recorder doesn't work on HoloLens RS5

The Recorder program works fine on the HoloLens RS4 build (17143).
However, the device automatically updated to the RS5 build (17763.134), and the Recorder program doesn't work on it.
It still creates the tar files, but nothing is recorded.
Here is part of the output:

MediaFrameSourceGroup::GetSensorType:: assuming SensorType::Undefined given missing MF_MT_USER_DATA
MediaFrameSourceGroup::InitializeMediaSourceWorkerAsync: could not map the media frame source to a Research Mode sensor type!
MediaFrameSourceGroup::InitializeMediaSourceWorkerAsync: no eligible sources in 'MN34150'

Tools/Recorder No Longer Working on OS Build 10.0.17763.134

After the HoloLens was updated to OS build 10.0.17763.134, the Recorder tool, which worked fine previously, no longer works. Specifically, when I air tap to stop recording, the "Ending recording, wait a moment to finish" message is skipped and the message "Finished recording" is played instead. The archives are also corrupted and the CSVs are empty.

Make the hologram follow the user's head

Hello,

Is there a simple way to make the hologram in the Streamer or ComputeOnDevice samples follow the user's head instead of repositioning it by air tapping?

Thanks in advance

Streamer/Receiver

Hi, I am new to Visual Studio and the HoloLens.
I am trying to figure out how to use the Streamer and the Receiver. From Visual Studio, should the project be deployed to the HoloLens (remote machine) or to the companion PC (local machine)? And then, what is expected as the host?
Thanks

Sensor Stream Viewer error

Hi everyone,

I'm pretty new to the Hololens and UWP, and I can't seem to figure out why I get an error message saying "[1] (time of app launch) : No Sensor Streaming groups found" when I launch the Sensor Stream Viewer on my Hololens device.

Any suggestions?
[screenshot: ssv_error]

Display holograms on a moving window.

I am working with ComputeOnDevice. Is there an easy way to render the processed frame in the user's head direction rather than in a fixed window?
I was trying to follow the Windows sample HolographicFaceDetection, but the texture rendering scheme appears to be quite different. Thank you for your help.

Simultaneous remote rendering and camera access

I have previously brought up this issue here in April.

For the AR application I am trying to make, I need to simultaneously remotely render and access the camera stream. I can remotely render with this and access the camera stream with this or this current repository. Unfortunately, I can't remotely render and access the camera stream simultaneously because the functionality is provided by separate apps on the HoloLens side.

I would like to merge the functionality into a single app, but I have not been able to reproduce the functionality of the closed-source Holographic Remoting Player app. I don't know who to contact about this, so I am requesting it again here. It would be nice to have the Holographic Remoting Player app open sourced - or at least a minimal example provided - so I can mix it in with the streamer in this repository.

FrameRenderer hard-codes depth range to 0.2f to 1.0f ?

While running the StreamViewer sample on my HoloLens, I noticed that the 'far' depth stream didn't seem to display anything other than nearby artefacts. This turned out to be because (AFAICT) FrameRenderer.cpp hard-codes the acceptable depth range to 0.2m to 1.0m, whereas the 'long range' depth stream seems to work beyond about 0.8m. I reworked mine to 0.2m to 4.0m and got better results for that stream, so I'm wondering why the sample doesn't allow that range and/or why it doesn't use the range values from the APIs themselves.
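For illustration, a configurable depth-to-grayscale mapping would avoid the hard-coded window. This is a hedged sketch; DepthToGray is a hypothetical helper, not the sample's actual FrameRenderer code:

```cpp
#include <cstdint>

// Hypothetical helper (not the sample's actual code): map a depth sample
// in meters to an 8-bit grayscale value over a configurable window,
// instead of the hard-coded 0.2f..1.0f range in FrameRenderer.cpp.
uint8_t DepthToGray(float depthMeters, float minDepth, float maxDepth)
{
    if (depthMeters < minDepth || depthMeters > maxDepth)
        return 0; // out-of-range samples render as black
    const float t = (depthMeters - minDepth) / (maxDepth - minDepth);
    return static_cast<uint8_t>(t * 255.0f + 0.5f);
}
```

With a 0.2–1.0 m window a wall at 2.5 m renders black; widening the window to 0.2–4.0 m makes the long-throw stream visible, matching the behavior described above.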

ArUco Marker Tracker without Research Mode

Hi!

I started writing an ArUco-based marker tracker for the HoloLens, using the HoloLensForCV project as a base, and came across a few issues that maybe you could help me with.
In contrast to the ArUcoMarkerTracker, I only want to access the PhotoVideo SensorType. So instead of triangulating between the sensors and the detected corner locations of the marker, I'm using the camera intrinsics provided by SensorFrame and cv::aruco::estimatePoseSingleMarkers(InputArrayOfArrays corners, float markerLength, InputArray cameraMatrix, InputArray distCoeffs, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs) to get the estimated position and orientation of the tracked marker. To this pose I'm applying different transformations to get the position into the right coordinate system:

_markerSlateRenderer->SetMarkerMatrix(markerTransformationCV * (frameToUsedCoordinateSystem * headCoordinates));

with

  • markerSlateRenderer: a SlateRenderer extended with SetMarkerMatrix, which sets the position and rotation of the hologram
  • markerTransformationCV: a Windows::Foundation::Numerics::float4x4 representation of the rvec and tvec provided by estimatePoseSingleMarkers (with the coordinate system already adapted and the Rodrigues vector already converted into a rotation matrix)
  • frameToUsedCoordinateSystem: the result of latestFrame->FrameToOrigin (with latestFrame of type HoloLensForCV::SensorFrame)
  • headCoordinates: the result of float4x4 headCoordinates = make_float4x4_world(poseOriginPos, poseOriginForward, poseOriginUp), with the head pose information coming from Windows::UI::Input::Spatial::SpatialPointerPose^ poseOrigin = Windows::UI::Input::Spatial::SpatialPointerPose::TryGetAtTimestamp(_spatialPerception->GetCoordinateSystemToUse(), holographicFrame->CurrentPrediction->Timestamp);

While the orientation of the virtual (hologram) marker looks fine and aligns with the real world marker, the position seems a bit off (a small, but significant offset).
I also noticed that when reducing the frame resolution (and with that increasing the fps) the positioning gets a little bit more accurate.
Do you have any idea where this offset might come from? Is there a mistake in those transformations or could this inaccuracy be caused by only using one camera frame for position prediction?
I'm relatively new to this marker tracking for HoloLens topic, so any suggestions might be really helpful.
Thank you in advance!
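One thing worth double-checking in a chain like the one above is the multiplication order: Windows::Foundation::Numerics::float4x4 uses a row-major, row-vector convention (translation in the fourth row), so transforms compose left to right: local, then frame, then head/world. A minimal, self-contained sketch of that convention (Mat4, Mul and Translation are illustrative stand-ins, not the project's types):

```cpp
#include <array>

// Illustrative stand-in for float4x4: row-major storage, row-vector
// convention, translation in the fourth row.
using Mat4 = std::array<float, 16>;

// Standard 4x4 matrix product r = a * b.
Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

Mat4 Translation(float x, float y, float z)
{
    return { 1, 0, 0, 0,
             0, 1, 0, 0,
             0, 0, 1, 0,
             x, y, z, 1 };
}
```

Composing Translation(1, 0, 0) with Translation(0, 2, 0) yields a translation by (1, 2, 0); if the virtual marker shows a consistent offset, verifying each factor of the chain against this convention is a cheap first step.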

rendered frame ComputeOnDevice

Hi,

After a couple of experiments, I found that the frame is rendered not 2 meters away from the user but closer than that, although the transformation matrix in SlateRenderer.cpp is correct. Do you have an idea why?

Converting raw depth data into depth image in meters

Hi all,

Using HoloLensForCV I am able to get a raw short throw depth image as a .pgm file. The .pgm file just contains the 16-bit data. The next step is to convert this to an actual depth image with distances in meters. Does someone know how to do this?
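A minimal sketch of such a conversion, under the assumption (which should be verified against your own recordings) that each 16-bit sample encodes depth in millimeters, with 0 marking an invalid pixel:

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch: treat each 16-bit PGM sample as millimeters and
// convert to meters. 0 is assumed to mark an invalid pixel and is
// mapped to -1; verify the unit and sentinel against your data.
std::vector<float> DepthRawToMeters(const std::vector<uint16_t>& raw)
{
    std::vector<float> meters;
    meters.reserve(raw.size());
    for (const uint16_t v : raw)
        meters.push_back(v == 0 ? -1.0f : v / 1000.0f);
    return meters;
}
```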

Additionally, I noticed that in some cases distances are measured incorrectly. In the image below you can see my two screens with the wall above; black means closer than white. In this case the wall appears to be closer than the screens, which is wrong. Does someone have an explanation for that?

[screenshot: test2]

Best regards
Felix

Cannot connect to streamer app running on HL

When I try running ComputeOnDesktop on my local machine, I receive the following exception:

Exception thrown at 0x74F2D722 in ComputeOnDesktop.exe: Microsoft C++ exception: Platform::COMException ^ at memory location 0x1366F4C0. HRESULT:0x8007274C A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
WinRT information: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

I have the correct IP address of the HoloLens entered, and I am letting it connect to the default 23940 PV camera service name.

Failure to initialize media capture: Access is denied.

First, I confirmed that the SensorStreamViewer project worked, but a few days later I updated the HoloLens and now it doesn't work. The error I receive is "Access is denied":

Exception thrown at 0x772C3332 in SensorStreamViewer.exe: Microsoft C++ exception: Platform::AccessDeniedException ^ at memory location 0x0180E680. HRESULT:0x80070005 Access is denied. WinRT information: The required device capability has not been declared in the manifest.

Of course, I have enabled the WebCam capability.

When I start other projects like ComputeOnDevice, I see an alert window asking me to allow camera access. However, when I start SensorStreamViewer, I don't see any alert asking about camera access.

How can I solve this problem?

Tarball File

Hello,
I was trying the recording tool and wanted to download the tarball file, but I couldn't find it. The files that I found (.csv and .pgm) couldn't be read by the batch tool. I don't know what the problem is. Any idea?

Thank you,

Transforming short throw depth's pixel point to world coordinate

I am trying to transform a certain pixel point in the short throw depth frame to Unity's world coordinates for object tracking and hologram augmentation. Since the projection matrix is not available, I used the HoloLensForCV project as a DLL in Unity to get the unprojected point through the MapImagePointToCameraUnitPlane method. I also made a small modification in HoloLensForCV to access the MediaFrameReference, so I could call SpatialCoordinateSystem's TryGetTransformTo method and get the frame-to-Unity-world transformation matrix. I got the inverted view transform matrix as well. All matrices (except the projection matrix) are obtained through the locatable camera.
I computed the camera-to-world transform matrix as frame-to-Unity-world transform matrix * inverted view transform matrix, and negated the third row to convert from UWP's right-handed coordinate system to Unity's left-handed coordinate system.

So I tried this process to transform the short throw depth frame's pixel point to Unity's world coordinates.

  1. Enable and start the short depth sensor streaming through HoloLensForCV DLL.
  2. Get sensor's software bitmap and get certain pixel point (depth point).
  3. Push the pixel point to MapImagePointToCameraUnitPlane to get unprojected coordinate.
  4. Pack the returned value from MapImagePointToCameraUnitPlane to vector (-X, -Y, -1) (from #63 it looks like the output of MapImagePointToCameraUnitPlane is inverted in X and Y).
  5. Then multiply the pixel's intensity (depth) / 1000 (converting to meters) by (-X, -Y, -1) to get the camera-space point.
  6. Transform the point to Unity's world coordinates through the camera-to-world transform matrix from above, using MultiplyPoint3x4.

From my understanding and some experiments with the RGB camera, the transformed world coordinate should be the point where the captured real-world object is in Unity's world coordinates (for example, a hologram placed at the transformed world coordinate should sit on the object, like AR). However, the output looks as if the depth camera's pose is not accounted for in the transformation. For example, since the depth camera is looking down, I need to place the object significantly below my eye line to see the hologram (which should be augmented on the object) in front of me.

I read all the issues related to depth in this repository, including #37, #38, #63 and #64, but I really have no idea why this problem is happening. Could anyone explain why this is happening and how to solve it? Thank you in advance.
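For reference, steps 3–6 above can be sketched in plain C++. PixelToWorld is an illustrative helper, not project API; (ux, uy) stands for the value returned by MapImagePointToCameraUnitPlane, and camToWorld for a row-major, row-vector-convention camera-to-world matrix built as described in the issue:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Hedged sketch of steps 3-6: (ux, uy) is the unprojected value for the
// chosen pixel, depthMeters the pixel's depth, and camToWorld a
// row-major 4x4 camera-to-world matrix (row-vector convention).
Vec3 PixelToWorld(float ux, float uy, float depthMeters,
                  const std::array<float, 16>& camToWorld)
{
    // Steps 4-5: negate x/y (per issue #63) and scale onto the -Z ray.
    const Vec3 p{ -ux * depthMeters, -uy * depthMeters, -depthMeters };

    // Step 6: apply the transform as a point (implicit w = 1).
    return {
        p.x * camToWorld[0] + p.y * camToWorld[4] + p.z * camToWorld[8]  + camToWorld[12],
        p.x * camToWorld[1] + p.y * camToWorld[5] + p.z * camToWorld[9]  + camToWorld[13],
        p.x * camToWorld[2] + p.y * camToWorld[6] + p.z * camToWorld[10] + camToWorld[14],
    };
}
```

A quick sanity check with the identity matrix isolates steps 4–5 from the matrix chain, which helps narrow an offset like the one described down to either the unprojection or the camera-to-world transform.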

Null frames in Streamer

Overview

Testing the Streamer function for accessing Research Mode streams (on RS4 with Research Mode enabled), I am getting errors for null frames in the Visible Light Left/Right Front and Short Throw ToF Depth (but not the Short Throw ToF Reflectivity) data streams. The sensors are recognized and initialized properly, but when the stream begins I get a series of null-frame errors.

MediaFrameSourceGroup::GetSensorType:: found sensor name 'Short Throw ToF Depth' in MF_MT_USER_DATA (blob has 44 bytes)
MediaFrameSourceGroup::GetSubtypeForFrameReader: evaluating MediaFrameSourceKind::Depth with format Video-D16 @15/1Hz
MediaFrameSourceGroup::InitializeMediaSourceWorkerAsync: created the 'ShortThrowToFDepth' frame reader
The thread 0xda4 has exited with code 0 (0x0).

MediaFrameReaderContext::FrameArrived: _sensorType=ShortThrowToFDepth (1), frame is null

Attached is a screenshot of the data stream when running Streamer.
[screenshot: 20180517_151858_hololens]

Steps to reproduce

Running Streamer from Master branch with default settings.

RS4 Update 6 - Build 17134.1011 - Research Mode enabled
VS 2017 Community - V 15.7.1

Compute on Desktop and Render on Desktop - HoloRemoting

Hello, is it possible using these samples to make an app that processes the stream of frames/data on a desktop computer, also renders the 3D scene on the desktop, and displays the result on the device (as in Holographic Remoting)?

BatchProcessing can't find camera_calibration.csv file

Hello,

I'm trying to process offline sensor data with BatchProcessing, but I can't because it tries to open a file (camera_calibration.csv) that is nowhere to be found.
I used the provided recorder tool and recorder_console.py to download and extract the data, but I can't find that file anywhere.
Am I doing something wrong?

Thanks in advance for your help

Failed to connect to IP ComputeOnDesktop

The build of ComputeOnDesktop completes properly, and on the HoloLens I see a window where I have to enter an IP address.
I thought it might be an IP targeting a running app on the HoloLens, but I don't know how to find the addresses of individual apps on the HoloLens. Or is the IP that I have to enter something completely different?

Short depth camera's focal length ?

Hi all,
I can get the depth image from HoloLens, but i need to convert it to 3D point cloud for other purposes,
thus i want get the focal_x, focal_y, and u0, v0?

Anyone can tell me the value of those parameters,or any method?
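For the back-projection itself, the standard pinhole relation applies; a minimal sketch (fx, fy, u0, v0 are the intrinsics asked about — note that the research-mode depth sensors do not expose them directly, and MapImagePointToCameraUnitPlane in this repository plays the equivalent role):

```cpp
struct Point3 { float x, y, z; };

// Standard pinhole back-projection: given focal lengths (fx, fy) and
// principal point (u0, v0), plus a depth z in meters at pixel (u, v),
// recover the 3D camera-space point.
Point3 Unproject(float u, float v, float z,
                 float fx, float fy, float u0, float v0)
{
    return { (u - u0) / fx * z,
             (v - v0) / fy * z,
             z };
}
```

Applied per pixel over the whole depth image, this yields the point cloud; only the intrinsics themselves still have to come from a calibration.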

Update sample documentation

Add or update walk-throughs documenting the steps to get each of the samples running either on HoloLens or the companion PC as appropriate to demonstrate the intent of the sample.

For instance, ComputeOnDevice now has a new air-tap behavior that is not documented.

The pixel value of captured depth sensor data (pgm files)

I am doing some work with short throw depth sensor data (including reflectivity) captured by the Recorder program in this project.
I am curious what each pixel value in the .pgm files indicates. It seems like the value is not simply related to distance from the device, e.g., lower is further and higher is closer, or vice versa. I googled this but didn't find any useful information.
Does anybody know what the values indicate?
Thank you in advance.

edit 1: I found that the intensity decreases as I move further away, but once the distance passes a certain point, the intensity drops sharply. I attached an example at the bottom (gray lookup table, black to white). I separated the ranges (1 and 2) at that point; there is a clear boundary where range 2 starts. It seems like the intensity resets and increases again as the distance goes beyond the start of range 2. The intensity closer than range 1 doesn't seem right either. I watched the CVPR 2018 HoloLens Research Mode tutorial video, and short throw captures depth up to 1 meter, so the boundary may be the 1 meter point, but it would be great to know what those intensities really stand for and how they are calculated. Thanks.
[screenshot: short_throw_depth_annot]

SensorStreamViewer: “Failed to initialize media capture: Access is denied.”

Hello, SensorStreamViewer does not show any camera preview and gives me the following message: Failed to initialize media capture: Access is denied.

How to reproduce:

  • Enable Research Mode (as instructed at aka.ms/hololensresearchmode)
  • Clone this repository
  • Open HoloLensForCV solution, and build SensorStreamViewer
  • Create an app package and side load it using the Windows Device Portal
  • Run the app. Failed to initialize media capture: Access is denied.
    The app will not ask for Camera permission, so you will have to enable it on Settings > Privacy > Camera.
  • Run the app again. Same message.

I am running Windows 17686.1003.x86fre.rs_prerelease.180603-1447 and cannot find a way of updating it / leaving Insider Preview.

Someone else had the same problem: https://stackoverflow.com/questions/51165223/sensorstreamviewer-hololensforcv-failed-to-initialized-media-capture-access

Software bitmap resolution

Hello,

While trying to find the resolution of the frames captured with the VisibleLight cameras, I found out using PixelHeight and PixelWidth that the resolution is 160x480, which is different from the resolution of the images saved by the Recorder in .pgm format (640x480).
How can we use this 640x480 resolution with the Streamer?

Thank you

ComputeOnDevice

I saw that you are using the Canny edge detector; is it possible to use another detector?
Thanks in advance

Banding on depth output

When using the recorder to capture depth data we are seeing a circular image with significant banding. Any idea what's causing this? Since the API providing the image only returns frames and not raw data, we do not have much access to troubleshoot.


Connecting to Streamer issue

Hello there,

I have a problem when running Streamer app on HL and trying to connect to it through ComputeOnDesktop.

Streamer on the HL works fine: I can air tap and I see the updated video stream object. When I launch ComputeOnDesktop and enter the IP, the app waits 30 seconds and then throws an exception:
WinRT information: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

The IP address is correct (the app is deployed wirelessly). I ran nmap against the HL device for open ports while Streamer was running, but only http/s were open (I was expecting 23940 to be open as well).

README is out of date at Shared/HoloLensForCV?

The README at this location says:

Note that support for additional HoloLens sensors (ToF Depth, Visible Light, ...) is not currently available publicly. Stay tuned for updates!

I'm assuming these sensors are now exposed, and the README needs an update?

No connection could be made because target machine actively refused it - ComputeOnDesktop

Hi, I'm trying to run the ComputeOnDesktop sample and I'm getting this error:

Exception thrown at 0x76C62552 (KernelBase.dll) in ComputeOnDesktop.exe: WinRT originate error - 0x8007274D : 'No connection could be made because the target machine actively refused it.'.
Exception thrown at 0x76C62552 in ComputeOnDesktop.exe: Microsoft C++ exception: Platform::COMException ^ at memory location 0x015FF400. HRESULT:0x8007274D No connection could be made because the target machine actively refused it.
WinRT information: No connection could be made because the target machine actively refused it.

This happens after I input the IP of my computer as the host name on the HoloLens.

Any ideas why this is happening? Am I supposed to input something else instead of the IP?

Thank you.

Streamer Exception thrown

Hi, I have this message when I am trying to run Streamer on device:

Exception thrown: read access violation.
_mtx was 0x30.

Compute on device sample empty?

Trying to run the compute on device sample. It builds fine, but all I see is the empty default skybox Unity space in a head-locked square. No response from air taps either.

Duplicating laptop/computer desktop on-to hololens?

Hello,

First time using GitHub, and I am quite new to programming. My question is: is it possible to duplicate what's happening on my computer in the HoloLens?
In other words, if I have 3ds Max open on my PC, can I then stream that to my HoloLens?

Thanks

Timble.

HoloLensForCV.winmd could not be found

Hey guys,

I'm new to this, so please forgive me if the below is a rookie error. I've installed VS17 and required HoloLens tools (https://docs.microsoft.com/en-us/windows/mixed-reality/install-the-tools#installation-checklist-for-hololens).

Unfortunately I can't build the project and I get the following:

1>------ Build started: Project: Debugging, Configuration: Debug Win32 ------
1>Object reference not set to an instance of an object.
2>------ Build started: Project: Io, Configuration: Debug Win32 ------
2>Object reference not set to an instance of an object.
3>------ Build started: Project: Audio, Configuration: Debug Win32 ------
3>Object reference not set to an instance of an object.
4>------ Build started: Project: Rendering, Configuration: Debug Win32 ------
4>Object reference not set to an instance of an object.
5>------ Build started: Project: SensorStreamViewer, Configuration: Debug Win32 ------
5>Object reference not set to an instance of an object.
6>------ Build started: Project: Graphics, Configuration: Debug Win32 ------
6>Object reference not set to an instance of an object.
7>------ Build started: Project: HoloLensForCV, Configuration: Debug Win32 ------
7>Object reference not set to an instance of an object.
8>------ Build started: Project: BatchProcessing, Configuration: Debug Win32 ------
8>Object reference not set to an instance of an object.
9>------ Build started: Project: OpenCVHelpers, Configuration: Debug Win32 ------
9>Object reference not set to an instance of an object.
10>------ Build started: Project: Holographic, Configuration: Debug Win32 ------
11>------ Build started: Project: ReceiverPV, Configuration: Debug x86 ------
12>------ Build started: Project: ReceiverVLC, Configuration: Debug x86 ------
10>Object reference not set to an instance of an object.
13>------ Build started: Project: ComputeOnDesktop, Configuration: Debug Win32 ------
13>Object reference not set to an instance of an object.
14>------ Build started: Project: ComputeOnDevice, Configuration: Debug Win32 ------
14>Object reference not set to an instance of an object.
15>------ Build started: Project: StreamerPV, Configuration: Debug Win32 ------
15>Object reference not set to an instance of an object.
16>------ Build started: Project: StreamerVLC, Configuration: Debug Win32 ------
16>Object reference not set to an instance of an object.
17>------ Build started: Project: Recorder, Configuration: Debug Win32 ------
17>Object reference not set to an instance of an object.
18>------ Build started: Project: ArUcoMarkerTracker, Configuration: Debug Win32 ------
18>Object reference not set to an instance of an object.
11>C:\Users\wojci\Workspace\HoloLensForCV\Tools\ReceiverPV\ReceiverPV.csproj : XamlCompiler error WMC1006: Cannot resolve Assembly or Windows Metadata file 'C:\Users\wojci\Workspace\HoloLensForCV\Debug\HoloLensForCV\HoloLensForCV.winmd'
11>CSC : error CS0006: Metadata file 'C:\Users\wojci\Workspace\HoloLensForCV\Debug\HoloLensForCV\HoloLensForCV.winmd' could not be found
12>C:\Users\wojci\Workspace\HoloLensForCV\Tools\ReceiverVLC\ReceiverVLC.csproj : XamlCompiler error WMC1006: Cannot resolve Assembly or Windows Metadata file 'C:\Users\wojci\Workspace\HoloLensForCV\Debug\HoloLensForCV\HoloLensForCV.winmd'
12>CSC : error CS0006: Metadata file 'C:\Users\wojci\Workspace\HoloLensForCV\Debug\HoloLensForCV\HoloLensForCV.winmd' could not be found
========== Build: 0 succeeded, 18 failed, 0 up-to-date, 0 skipped ==========

The recorder_console.py results in error

Hi,
I have deployed the CV recorder app on Hololens. It is recording just fine and I can see the files in File Explorer.

However, when I try to use the recorder_console script to download/construct/view the files, it runs into the following problem:

Searching for recordings...
Traceback (most recent call last):
  File "recorder_console.py", line 667, in <module>
    main()
  File "recorder_console.py", line 602, in main
    args.dev_portal_password)
  File "recorder_console.py", line 116, in connect
    self.url, self.package_full_name))
  File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File 
....
"/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/local/Cellar/python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

I traced back the error (the URL is http://172.22.61.5/api/filesystem/apps/files?knownfolderid=LocalAppData&packagefullname=7A37D94C-C432-4875-8C57-FA185E1C92B4_1.0.0.0_x86__1e2jkdcqpv8b4&path=\\TempState) and the returned result is

{"Code" : -2135228013, "CodeText" : "Unavailable", "Reason" : "Unsupported known folder id", "Success" : false}

@ahojnnes

How to acquire synchronized frames from depth and main(1280x720) RGB cameras?

Depth and RGB frames on the HoloLens are not aligned. I'm trying to get depth information for the scene the HoloLens user is seeing, but to do so I must establish a correspondence between the coordinates of the two kinds of frames, probably with some stereo calibration algorithm or something similar. The problem is that any way I can think of to achieve the alignment requires the RGB frames to be acquired "at the same time" as the depth frames I can easily get from the Recorder tool. Any idea how to do that?

ComputeOnDevice object location

For this sample, how does the program or HL choose where to locate the slate object? For me, it's initializing fairly far away, in an area of the room where I haven't been since turning on the HL. I didn't even notice it the first few times I deployed - I thought the app wasn't working because I didn't see anything.

Why I can not take Photos when in ResearchMode

When in Research Mode, the function PhotoCapture.CreateAsync(bool showHolograms, OnCaptureResourceCreatedCallback onCreatedCallback) won't run. Why? Can I do something to make the function run? Or is there another way to take a photo?

ComputeOnDevice dll error

Hey! I'm having some trouble running ComputeOnDevice. I keep getting an error with the ucrtbased.dll file when running on my device (and remoting). I'm able to run the Streamer and Receivers on my device with no issues. The error is below. Any help would be much appreciated!

Unhandled exception at 0x630F1FD5 (ucrtbased.dll) in ComputeOnDevice.exe: An invalid parameter was passed to a function that considers invalid parameters fatal. occurred
