Comments (8)
The magic function is a calibration of position and rotation that maps the tracked result to the correct visualization. If you turn it off, you will observe that the overlay is not registered with the user's view. That is expected, because there is no guarantee that the "tracking space" is the same as the "display space": there are offsets in position, rotation, and scale (even skew). The current magic calibration is one that fits me; I obtained these values from a calibration procedure (I will push it up around mid-March).
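To make the role of the calibration concrete, here is a minimal sketch in plain Python (not the repository's C#) of applying a 4x4 "magic" matrix to a tracked pose before display. All matrix values here are placeholders for illustration, not the real calibration values:

```python
# Sketch: left-multiplying a tracked pose by a hypothetical calibration
# ("magic") matrix, so the result lives in the display coordinate system.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Pose reported by the tracker (homogeneous transform, illustrative values).
tracked_pose = [
    [1.0, 0.0, 0.0, 0.10],
    [0.0, 1.0, 0.0, 0.05],
    [0.0, 0.0, 1.0, 0.50],
    [0.0, 0.0, 0.0, 1.0],
]

# Hypothetical calibration matrix: identity rotation plus a small offset,
# standing in for the per-user values found by an alignment procedure.
magic_matrix = [
    [1.0, 0.0, 0.0, 0.02],
    [0.0, 1.0, 0.0, -0.01],
    [0.0, 0.0, 1.0, 0.03],
    [0.0, 0.0, 0.0, 1.0],
]

# Pose actually used for display: the calibration corrects the offset
# between "tracking space" and "display space".
display_pose = matmul4(magic_matrix, tracked_pose)
```

With the calibration turned off (identity `magic_matrix`), the display pose equals the raw tracked pose, which is exactly the mis-registration described above.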
For HoloLens, different people with different IPDs may need different calibration matrices. I am trying to create a user-friendly calibration procedure (another Unity scene) that lets an arbitrary user find their best calibration matrices via several alignment tasks.
Currently I don't call the conversion function because the result is already correct. Your extra note about the coordinates is exactly what I was trying to find out. Thanks a lot!
from hololensartoolkit.
Thank you for explaining! Everything makes more sense now.
@qian256, I was trying to understand the impact of the magic function on the position of the virtual object and had a question.
From what I understood, the target object is the one being displayed, obtained by transforming the dummyGameObject into world coordinates, and the dummyGameObject is in turn computed from the position and rotation of latestTransMatrix. To understand the impact of the magic functions on the object, as a simple trial I defined the 4th column of magicMatrix1 as [0.1; 0.1; 0.1; 1.0] to see what would happen. I then printed the position of both the dummyGameObject and the target before and after applying magicMatrix1. While the position of the dummyGameObject gets translated by [0.1; 0.1; 0.1], the position of the target gets translated by a different value.
I was then wondering what is happening with the transformation of the target: does the translation get modified when transformed into world coordinates, or did I miss something?
If one would then like to calculate the transform required to align the real and virtual objects, as you mentioned above and as described in your paper, both pi and qi would be acquired from the target positions, right? But if such a calibration transformation gets modified afterwards, I guess that would destroy the alignment, since the final transform is not the one intended?
Thanks for the help!
Hi @araujokth,
The purpose of the dummyGameObject is to act as an empty object that relays the tracked transformation into the world coordinate system. dummyGameObject is a child of the HoloLens Camera, but target sits at the root. The point is to make use of the SLAM of the HoloLens: when the tracking result stops updating, the target stays fixed in the world instead of fixed in front of the camera. This behavior is controlled by the AnchorToWorld parameter.
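This parent-child relationship also explains the observation above that the target's position shifts by a different value than the dummyGameObject's. A sketch in plain Python (outside Unity, with illustrative numbers) shows why a translation added in the camera-local frame of the child appears rotated once composed into world coordinates:

```python
import math

# Sketch: dummyGameObject is a child of the camera, so its world pose is
# camera_to_world composed with its local pose. A local offset is therefore
# rotated by the camera's orientation before it reaches world coordinates.

def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists (row-major)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Illustrative camera pose: rotated 90 degrees about the y axis.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
camera_to_world = [
    [  c, 0.0,   s, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [ -s, 0.0,   c, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

# dummyGameObject's local pose, translated by [0.1, 0.1, 0.1] as in the
# experiment described above.
dummy_local = [
    [1.0, 0.0, 0.0, 0.1],
    [0.0, 1.0, 0.0, 0.1],
    [0.0, 0.0, 1.0, 0.1],
    [0.0, 0.0, 0.0, 1.0],
]

# target lives at the root, so its world pose is the composition:
target_world = matmul4(camera_to_world, dummy_local)

# The local [0.1, 0.1, 0.1] offset has been rotated by the camera pose,
# so the world-space translation is a different vector.
world_offset = [target_world[i][3] for i in range(3)]
```

With this 90-degree camera rotation the local offset [0.1, 0.1, 0.1] lands at roughly [0.1, 0.1, -0.1] in world coordinates, which is why the printed target translation differs from the dummyGameObject's.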
Thanks for the clarification! Could you also clarify my other question?
"To understand the impact of the magic functions on the object, as a simple trial I defined the 4th column of magicMatrix1 as [0.1; 0.1; 0.1; 1.0] to see what would happen. I then printed the position of both the dummyGameObject and the target before and after applying magicMatrix1. While the position of the dummyGameObject gets translated by [0.1; 0.1; 0.1], the position of the target gets translated by a different value.
I was then wondering what is happening with the transformation of the target: does the translation get modified when transformed into world coordinates, or did I miss something?
If one would then like to calculate the transform required to align the real and virtual objects, as you mentioned above and as described in your paper, both pi and qi would be acquired from the target positions, right? But if such a calibration transformation gets modified afterwards, I guess that would destroy the alignment, since the final transform is not the one intended?" Or did I miss something?
Thanks for the help again!
Just to clarify: what I meant was whether the magic function should be applied to the target position (after AnchorToWorld is applied) rather than to latestTransMatrix as it is now, since during a calibration procedure pi and qi are obtained from the target in world coordinates. Thanks again!
Not sure why I thought this step could only be done in world coordinates. Of course it can be done in either coordinate system, but then the result has to be transformed correctly. Sorry for the misunderstanding!
@araujokth, you are right about this point. We assume the display coordinate system is Euclidean with respect to the world coordinate system, but the tracking coordinate system is a little bit off, so the tracking result needs to be passed through a calibration function to be suitable for display. Also, the implementation in the repository and the implementation in the paper are not exactly the same: here in the repository the calibration, or magic function, is applied separately for position and rotation, which is just a trick that makes it easier to tune.
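The "separate position and rotation" trick can be sketched in plain Python (not the repository's C#; the function name and all correction values are illustrative placeholders for whatever a per-user calibration produces):

```python
# Sketch: applying independent position and rotation corrections to a
# tracked pose, rather than one combined 4x4 calibration matrix. The two
# corrections can then be tuned separately.

def apply_calibration(position, rotation3x3, pos_offset, rot_correction3x3):
    """Apply independent position and rotation corrections to a pose."""
    # Position: a simple per-axis offset, tuned independently of rotation.
    calibrated_pos = [p + d for p, d in zip(position, pos_offset)]
    # Rotation: left-multiply by a small correction matrix.
    calibrated_rot = [[sum(rot_correction3x3[i][k] * rotation3x3[k][j]
                           for k in range(3)) for j in range(3)]
                      for i in range(3)]
    return calibrated_pos, calibrated_rot

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

# Illustrative tracked pose and corrections.
pos, rot = apply_calibration([0.10, 0.05, 0.50], identity,
                             [0.02, -0.01, 0.03], identity)
```

Compared with a single 4x4 matrix, this decoupling means adjusting the translation offset never perturbs the rotation correction, which is what makes manual tuning easier.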