I have four questions about the paper:
1. The original paper says that the authors trained an SVR model on the MPIIGaze dataset. How is that done? Do you use the GazeML model to predict landmarks on MPIIGaze images and then use those detected landmarks as the training data for an SVR?
2. Why not directly use the UnityEyes dataset's landmarks and its gaze-vector ground truth to train the SVR?
3. Why use an SVR? Would regressing the result with several FC layers work as well?
4. What does "calibration with 20 or more samples" mean in the paper? Is the calibration meant to recover camera parameters, or something else? I don't understand exactly.
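To make question 1 concrete, here is the pipeline I imagine, sketched with scikit-learn and random placeholder data standing in for MPIIGaze images and GazeML landmark detections (the feature layout and angle targets are my assumptions, not from the paper):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)

# Placeholder data: in the real pipeline, X would be the eye landmarks
# predicted by GazeML on MPIIGaze images (flattened to a feature vector),
# and y would be the MPIIGaze ground-truth gaze angles (pitch, yaw).
n_samples, n_landmarks = 200, 18
X = rng.random((n_samples, n_landmarks * 2))  # (x, y) per landmark
y = rng.random((n_samples, 2)) - 0.5          # pitch/yaw in radians

# SVR is single-output, so wrap it to regress pitch and yaw jointly
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
model.fit(X, y)

pred = model.predict(X[:5])
print(pred.shape)  # (5, 2)
```

Is this roughly what the authors did, or are the landmarks used differently?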
I would very much appreciate it if you could answer my questions. Thank you.
Hi @shaoanlu ,
Sorry to disturb you. I am wondering how to convert a trained GazeML model to Keras, as you did in this repo. I have trained a model with GazeML, but I would like to test it in your code.
Could you guide me on how to achieve this?
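For reference, my rough understanding is that the conversion boils down to exporting each TF checkpoint variable as a numpy array and assigning it to the matching Keras layer. A minimal sketch of the numpy round-trip (the variable names here are hypothetical, not the real GazeML checkpoint names):

```python
import numpy as np

# Hypothetical arrays -- on the TF side these would come from something like
# tf.train.load_checkpoint(ckpt_path).get_tensor(var_name)
tf_weights = {
    "conv1_kernel": np.random.rand(3, 3, 1, 64).astype(np.float32),
    "conv1_bias": np.zeros(64, dtype=np.float32),
}

# Dump to an intermediate .npz so the Keras side needs no TF dependency
np.savez("gazeml_weights.npz", **tf_weights)

# On the Keras side, load and assign layer by layer, e.g.
#   keras_layer.set_weights([loaded["conv1_kernel"], loaded["conv1_bias"]])
loaded = np.load("gazeml_weights.npz")
print(loaded["conv1_kernel"].shape)  # (3, 3, 1, 64)
```

Is this the general approach you used, or did you rebuild the architecture some other way?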
Thank you.