Project to infer emotional expressions and benchmark datasets, by Niklas Wagner, Felix Mätzler, Samed R. Vossberg, Helen Schneider and Svetlana Pavlitska.
I am trying to train the models to reproduce the results. I found that the image keys in the AffectNet dataset are numeric (as shown in *_set_annotation_without_lnd.csv), and I am wondering how to map the original images to these image numbers.
Could you please provide the data-preprocessing code for the image data, including the renaming step? Thanks!
Hi,
I am trying to use these models for valence estimation.
However, I am unable to find the trained weights.
Many scripts load the file best_model_affectnet_improved7VA, which is not part of the repository.
Where can I find that file?
Thanks.