
Comments (13)

DanAndersen commented on May 30, 2024

@danilogr Thank you for linking the sample code. I am currently making some commits in a fork of HoloLensARToolkit (https://github.com/DanAndersen/HoloLensARToolKit) along the lines you are talking about. By using the approach shown in https://github.com/camnewnham/HoloLensCameraStream I found I didn't have to edit any C++ code and could do it all in C#.

I need to look further into the question of precise alignment of AR and real markers (still not exactly aligned), but at the very least I've been able to use the per-frame pose for the locatable camera as the reference point from which the AR markers are placed, rather than doing it based on the coordinate system of the "Main Camera" (which represents the HoloLens IMU and not the locatable camera).

One benefit of this approach is that markers remain world-anchored even if the user's head moves quickly. Currently with HoloLensARToolKit, if you have a world-anchored marker at the left side of your camera, and then rotate your head quickly (holding the marker fixed) so that the marker ends up on the right side of the camera view, the marker will appear head-anchored and not world-anchored during the transition. This is because the marker isn't detected due to motion blur, and so it remains at the same pose relative to the Main Camera pose, which is changing. Instead, I create a reference GameObject for the locatable camera pose, which is in world space, and off of which the markers update their relative position, which resolves that issue.
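The composition described above can be sketched language-agnostically. The helper and matrix names below are illustrative (not from the actual HoloLensARToolKit code), assuming 4x4 homogeneous transforms with the column-vector convention:

```python
import numpy as np

def world_from_marker(world_T_camera, camera_T_marker):
    """Compose the marker's world pose from the locatable-camera pose
    captured with the frame (world_T_camera) and the ARToolKit detection
    result (camera_T_marker). Because world_T_camera is the pose at
    capture time, the result stays world-anchored even when detection
    drops out (e.g. motion blur): you keep the last world pose rather
    than the last camera-relative pose."""
    return world_T_camera @ camera_T_marker

# Example: camera 1 m in front of the world origin, marker 0.5 m
# in front of the camera along its viewing axis.
world_T_camera = np.eye(4)
world_T_camera[2, 3] = 1.0
camera_T_marker = np.eye(4)
camera_T_marker[2, 3] = 0.5

pose = world_from_marker(world_T_camera, camera_T_marker)
print(pose[:3, 3])  # marker position in world space
```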

from hololensartoolkit.

qian256 commented on May 30, 2024

@DanAndersen Thank you for doing this!
I will take a look at the code soon and merge it back. Apologies for the slow updates on this repo.
On my local copy, I also switched to HoloLensCameraStream as a replacement for ARUWPVideo.cs, and use the pose at the time of capture instead of the pose of the Unity virtual camera, which lags the capture by tens of milliseconds. Two observations:

  1. When you rotate your head left, the virtual cube no longer drifts left with the head, but it does drift very slightly to the right.
  2. Even when I use the camera pose from UWP, the alignment is still not perfect, although it is much better than before. For perfect alignment, a user-specific calibration is still needed. Thank you @danilogr for referring to my updated arXiv paper.

The code to perform the calibration is not yet available in this repo, due to some coordination issues within our team. My apologies again. I hope the "hold" will be lifted in 2 months. At that time, I will upload the calibration code and apply the calibration results to the alignment (closing the loop to update magicMatrix).


pocketmagic commented on May 30, 2024

Hi @qian256, we do not understand the meaning of the magicMatrix in the magic functions. How should we adjust the magicMatrix?


tsteffelbauer commented on May 30, 2024

You could try to calibrate your Hololens camera with this repository: https://github.com/qian256/HoloLensCamCalib


qian256 commented on May 30, 2024

Hi @pocce90
It is very common for the virtual object not to align well with the marker, because no calibration is ever performed between the coordinate system of the HoloLens and the coordinate system of the tracking camera.
The magic functions in the ARUWPMarker.cs script are where you can tune the mapping from the tracking transformation to the display transformation (for alignment). Currently, you have to supply the numbers in the magic function manually.
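A minimal sketch of what such a correction amounts to; the function names are illustrative, not the actual ARUWPMarker.cs code, and the numbers are the translation values reported later in this thread:

```python
import numpy as np

def magic_correction(tx, ty, tz):
    """Build a correction matrix (translation only, in meters) that
    maps the tracking-camera pose toward the display coordinate
    system. A full correction could also carry a rotation block."""
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def displayed_pose(magic, tracked_pose):
    # Apply the hand-tuned correction before handing the pose
    # to the renderer.
    return magic @ tracked_pose

magic = magic_correction(0.002, 0.04, 0.08)
tracked = np.eye(4)  # stand-in for an ARToolKit marker pose
corrected = displayed_pose(magic, tracked)
print(corrected[:3, 3])
```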


araujokth commented on May 30, 2024

Hi @qian256, I finally found some nice parameters for the magic functions for my HoloLens, which work very well at distances between the HoloLens and the object of up to 60-70 cm. No rotation adjustment was needed, only a translation of [0.002, 0.04, 0.08]. However, beyond 1 meter the error between the virtual and real object increases considerably. I guess this may be due to tracking errors from ARToolKit, IPD error, and other factors? Did you also experience this in the experiments for your paper?


qian256 commented on May 30, 2024

@araujokth, good to hear that you found working parameters.
The calibration in the paper is also restricted to a volume: the result favors that volume and degrades progressively outside it. The mapping between the 3D tracking space and the 3D display space may not be purely affine or even perspective; there is distortion as well, including distortion in the camera tracking parameters.


araujokth commented on May 30, 2024

@qian256 makes sense! I guess that will be good material for your next paper? :)


RaymondKen commented on May 30, 2024

Hi @qian256, same as @pocketmagic, I don't understand the meaning of magicMatrix1, and how can we adjust it?


tsteffelbauer commented on May 30, 2024

The magicMatrix is a rotation matrix (https://en.wikipedia.org/wiki/Rotation_matrix) that is applied to the transformation to reduce the error manually. Look up which parameter affects which component of the transformation, and edit the matrix according to the error you see.


araujokth commented on May 30, 2024

Hi @tsteffelbauer, both magic matrices are 3D transformation matrices (https://en.wikipedia.org/wiki/Transformation_matrix), since they perform both translation (magicMatrix1) and rotation (magicMatrix2); of course, one could perform the operation in the MagicFunction in a single step using a single combined transformation matrix instead.
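For example, the two steps collapse into one homogeneous transform (a numpy sketch with purely illustrative values; note that the multiplication order matters):

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

def translation(t):
    """4x4 homogeneous translation by vector t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# A translation (playing the role of magicMatrix1) and a rotation
# (playing the role of magicMatrix2) combined into one matrix.
single = translation([0.002, 0.04, 0.08]) @ rot_z(np.deg2rad(5))
print(single)
```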

From my experience, I would not recommend doing this manually by looking at the error you see, since that takes quite a lot of tedious time; it is quicker to implement a calibration method and do it properly, especially if a rotation is needed. I would suggest implementing the method described in @qian256's paper https://arxiv.org/abs/1703.05834, since it's quite quick to implement.


danilogr commented on May 30, 2024

Hello everyone,
I am dealing with the same issue of misaligned holograms. The issue stems from ARUWPVideo.cs using Windows.Media.Capture (C#) to receive video frames. As far as I can tell from the Microsoft documentation, this API does not provide a CameraViewTransform per frame. As a result (and in accordance with what @qian256 said before), the ARToolKit coordinates are in the locatable-camera space and not in the app space.

Solving this problem might be a little involved, as one would have to rewrite ARUWPVideo.cs from the WinRT/C++ side, exporting the CameraViewTransform of each frame and applying it to the marker coordinates.

Also, I don't believe that per-user calibration is needed unless you are fine-tuning for a specific application and viewpoint (e.g., surgical procedures). I've been fairly successful with Vuforia for various marker sizes. All in all, if anyone is willing to improve that part of things, this repository has some sample code for capturing frames and applying transforms to the locatable camera.


danilogr commented on May 30, 2024

Hey @DanAndersen, I just went through your code, and I am mind-blown! I could not imagine that it would be so simple to get the extension data from the frame reference in Unity.

Perhaps we should merge back to this repository so other people can benefit from it? @qian256 ?

As for the proper alignment, there are a couple of things:

First, the locatable camera is not perfect and has some distortion.

On HoloLens, the video and still image streams are undistorted in the system's image processing pipeline before the frames are made available to the application (the preview stream contains the original distorted frames). Because only the projection matrix is made available, applications must assume image frames represent a perfect pinhole camera, however the undistortion function in the image processor may still leave an error of up to 10 pixels when using the projection matrix in the frame metadata. In many use cases, this error will not matter, but if you are aligning holograms to real world posters/markers, for example, and you notice a <10px offset (roughly 11mm for holograms positioned 2 meters away) this distortion error could be the cause.
(Source: Microsoft's documentation on the locatable camera.)
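The "10 px is roughly 11 mm at 2 meters" figure from the quote checks out with a quick back-of-the-envelope calculation. The field of view and resolution below are assumed values for illustration only, not documented HoloLens camera parameters:

```python
import math

# Rough angular size of one PV-camera pixel, assuming a ~45 degree
# horizontal field of view spread over 1408 pixels (illustrative).
fov_rad = math.radians(45)
width_px = 1408
rad_per_px = fov_rad / width_px

# A 10 px undistortion residual viewed at 2 m then maps to a
# lateral offset of roughly:
offset_m = 2.0 * math.tan(10 * rad_per_px)
print(f"{offset_m * 1000:.1f} mm")  # on the order of ~11 mm
```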

Second, the interpupillary distance should be configured per user in order to minimize alignment errors.

If you want to calibrate with high accuracy (<= 4 mm error), you might want to take a look at @qian256's updated paper here: https://arxiv.org/abs/1703.05834. The downside is that you have to calibrate per user, and the user might not be able to move around much.

Once again, amazing job tackling this problem. Thanks for sharing!

