swook / eve
Towards End-to-end Video-based Eye-tracking. ECCV 2020. https://ait.ethz.ch/eve
License: MIT License
I have two questions.
Thanks!
Hi, @swook
I am writing to express my interest in the EVE dataset that you have proposed in your recent paper "Estimating Gaze from Visual Stimuli and Eye Images". I have submitted a request for access to the dataset on your website and would like to follow up with you regarding my request.
I am currently working on facial state monitoring and believe that the EVE dataset would be extremely valuable to my work. I am particularly interested in the temporal gaze tracking capabilities of the dataset and the potential applications for label-free refinement of gaze.
Thank you for your time and consideration, and I look forward to hearing from you soon.
Here is my affiliated institution and contact email.
Affiliated institution: South China Agricultural University (SCAU)
E-mail: [email protected]
Best regards,
Yulin Cai
Hi,
Can this code be used at inference time on in-the-wild MP4 files that do not come with an accompanying H5 file?
The more I work with this codebase, the more it seems that it will not work unless the MP4 was recorded alongside Tobii eye-tracker data. Is this true?
Thank you.
The file name parser could be made more robust to custom dataset files.
Currently it doesn't handle both webcam_l.mp4 and webcam_l_eyes.mp4.
Please see below for the relevant code and the correction I made to make it work.
src/core/inference.py
camera_type = components[-1][:-4]   # strip the '.mp4' extension
if camera_type.endswith('_eyes'):   # also handle webcam_l_eyes.mp4
    camera_type = camera_type[:-5]  # strip the '_eyes' suffix
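Going one step further, a more defensive version of this parsing could look like the sketch below. The camera names and the `_eyes` suffix convention are my assumptions based on the EVE file layout, not constants taken from the repository:

```python
import os

# Assumed EVE camera names; adjust to your own dataset layout.
KNOWN_CAMERAS = ('basler', 'webcam_l', 'webcam_c', 'webcam_r')

def parse_camera_type(path):
    """Return (camera_type, is_eyes_video) for file names like
    'webcam_l.mp4' or 'webcam_l_eyes.mp4'."""
    stem = os.path.splitext(os.path.basename(path))[0]
    is_eyes = stem.endswith('_eyes')
    camera_type = stem[:-len('_eyes')] if is_eyes else stem
    if camera_type not in KNOWN_CAMERAS:
        raise ValueError('Unrecognized camera type in %r' % path)
    return camera_type, is_eyes
```

Raising on unknown names fails loudly instead of silently producing a bad camera type.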
Hi,
Thanks for your great work.
Could you explain the exact definitions of the screen coordinate system and the camera coordinate system?
Specifically, the orientations of the X, Y, and Z axes, and the location of the origin.
Thanks a lot
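Whatever the exact axis conventions turn out to be, points are typically mapped between the two frames through a 4x4 homogeneous extrinsic matrix. A minimal sketch of that operation, with a made-up translation matrix for illustration:

```python
def apply_extrinsics(T, point):
    """Map a 3D point through a 4x4 homogeneous transformation given as
    row-major nested lists (e.g. camera-to-screen extrinsics)."""
    x, y, z = point
    h = (x, y, z, 1.0)
    # Only the first three rows are needed; the fourth is (0, 0, 0, 1).
    return [sum(T[i][j] * h[j] for j in range(4)) for i in range(3)]

# Hypothetical example: a pure translation of +100 mm along Z.
T = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 100],
     [0, 0, 0, 1]]
```

The answer to the question above then reduces to what rotation and translation this matrix encodes and where each frame's origin sits.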
404 page not found: https://ait.ethz.ch/projects/2020/EVE/
Hi, I am interested in the offset augmentation. Where is its implementation? I can't find this operation in the code.
In paper,
For every participant, we determined a person-specific inter-ocular distance value by exploiting our knowledge of relative camera positions. This inter-ocular distance (defined as the Euclidean distance in millimeters between the outer eye corner landmarks) is then used as a target scale value for scaling every fitted 3DMM.
Can you explain the process of determining the target scale?
Thanks
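For what it's worth, the scale factor implied by that description can be sketched as follows. This is my reading of the paper, not the authors' code, and the landmark values in the test are made up:

```python
import math

def scale_factor(outer_corner_left, outer_corner_right, target_iod_mm):
    """Ratio that rescales a fitted 3DMM so that the Euclidean distance
    between its outer eye-corner landmarks equals the person-specific
    target inter-ocular distance (in millimeters)."""
    fitted_iod = math.dist(outer_corner_left, outer_corner_right)
    return target_iod_mm / fitted_iod
```

Multiplying every fitted 3DMM vertex by this factor would then normalize the mesh to the target inter-ocular distance.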
The original website (https://competitions.codalab.org/competitions/28954) no longer accepts submissions. Could you provide an alternative way to evaluate on the test set?
Hi, @swook.
I use OpenCV to capture the frames. What bothers me is that I don't know how to attach a timestamp to each frame while keeping the interval between timestamps nearly constant. Using datetime, I can get the current time and treat it as the timestamp, but the intervals between timestamps vary widely. Could you share some details about the method you used to synchronize the data? It would be great if you could share the source code or describe your approach. Thanks.
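Not the authors' synchronization code, but a common workaround is to pace capture against time.monotonic() (which, unlike wall-clock time from datetime, never jumps) and to schedule each frame relative to the start time so that jitter does not accumulate. A sketch, with a hypothetical grab() standing in for cv2.VideoCapture.read():

```python
import time

def capture_timestamped(n_frames, fps=30.0, grab=lambda: None):
    """Grab n_frames at roughly fps, attaching a monotonic timestamp
    (seconds since start) to each frame."""
    interval = 1.0 / fps
    t0 = time.monotonic()
    frames = []
    for i in range(n_frames):
        target = t0 + i * interval          # absolute schedule: drift-free
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)               # only sleep the remaining time
        frames.append((time.monotonic() - t0, grab()))
    return frames
```

Sleeping toward an absolute target (rather than a fixed `time.sleep(interval)` per loop) keeps the average interval at 1/fps even when individual grabs are slow.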
Hi, EVE is an excellent work, and I have benefited a lot from it. But I have a question about the HDF5 data field camera_transformation. I would appreciate it very much if you could show me how this field is obtained.
Did anyone train refinenet with multiple GPUs?
I tried to use DataParallel, but it doesn't seem to work.
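Not a fix for refinenet specifically, but one general DataParallel pitfall is that the wrapped forward() must accept and return tensors (or nested structures of tensors) so the batch can be scattered and the outputs gathered; modules that return plain Python objects break the gather step. A minimal sketch with a hypothetical toy module (not refinenet itself):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in module; refinenet's actual forward signature may differ."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc(x)  # return tensors, not arbitrary Python objects

model = TinyNet()
if torch.cuda.device_count() > 1:
    # Splits dim 0 of the input batch across all visible GPUs.
    model = nn.DataParallel(model)
out = model(torch.randn(4, 8))
```

If that constraint holds and DataParallel still fails, DistributedDataParallel is the commonly recommended alternative for multi-GPU training.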
I trained the EVE model on the EVE data, ran eval_codalab.py, and got a pkl file as a result. I also ran eval_codalab.py with the pretrained model weights (eve_refinenet_CGRU_oa_skip.pt from https://github.com/swook/EVE/releases/tag/v0.0) and got a pkl file.
Then I compared these two results, and the numbers seem comparable.
For example, the pretrained model gives [960. 540.] for PoG_px_final, and my model gives [963.0835 650.5635].
However, Table 3 in the EVE paper reports a PoG_px of 95.59 for the GRU model with oa+skip.
The numbers in the paper are about 1/10 of the numbers I get from eval_codalab.py, and I am not sure what went wrong.
Are they supposed to match?
If they are not supposed to match, how do you calculate the numbers?
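If it helps: values like [960. 540.] are raw PoG predictions in screen pixels, while Table 3 reports the mean Euclidean error between predicted and ground-truth PoG, so the two are not directly comparable. A sketch of that metric as I understand it (not taken from eval_codalab.py):

```python
import math

def mean_pog_error_px(preds, trues):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth points of gaze, averaged over frames."""
    return sum(math.dist(p, t) for p, t in zip(preds, trues)) / len(preds)
```

Comparing per-frame predictions against the ground truth this way, rather than reading off the raw coordinates, should land in the same range as the paper's numbers.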
Also, the CodaLab results page shows the gaze-direction (angular) error, but eval_codalab.py doesn't store gaze direction (keys_to_store = ['left pupil size', 'right pupil', 'pog_px_initial', 'pog_px_final', 'timestamp']).
How should I get gaze direction error in degree?
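For the angular error itself, the standard gaze-estimation metric is the angle between the two 3D gaze vectors. Assuming gaze is stored as (pitch, yaw) in radians, a sketch (again, not the repository's own code) would be:

```python
import math

def to_vector(pitch, yaw):
    """Convert (pitch, yaw) in radians to a 3D unit gaze vector."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def angular_error_deg(gaze_true, gaze_pred):
    """Angle in degrees between two gaze directions given as (pitch, yaw)."""
    a, b = to_vector(*gaze_true), to_vector(*gaze_pred)
    dot = sum(x * y for x, y in zip(a, b))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    return math.degrees(math.acos(dot))
```

Averaging this over frames would reproduce a mean angular error in degrees comparable to the leaderboard's.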
Hi, @swook
Thanks for your great work, but I have a question about how to obtain the 3D gaze origin (determined during data pre-processing). The paper says: "In pre-processing the EVE dataset, we apply a 3DMM fitting approach with inter-ocular-distance-based scale-normalization to alleviate these issues." However, I'm not sure about the specific procedure for this step. What should I do to convert from landmarks to the 3D gaze origin? Also, would it be possible to open-source the code for this part?
Thanks a lot!