Comments (18)
Hi @hitymz,
I think it is only centered (it's a bit old now), what kind of preprocessing are you thinking about?
Cheers,
Thibault
from 3d-coded.
Good point. No, there is no good reason for keeping --patch_deformation as the default. I guess when I refactored the code I had Learning Elementary Structures in mind, but I agree this flag could be disabled by default since this is the 3D-CODED codebase.
Hello, I'm also having issues with fine-tuning/re-training on the FAUST training set. In essence, the accuracy seems to be poorer when I train/fine-tune on the FAUST dataset than when I use the pre-trained weights. Could this be related to preprocessing? To preprocess the meshes, I apply the following functions from my_utils, in this order: 1) scaling, 2) cleaning, 3) centering. Am I missing something?
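For concreteness, here is roughly what I mean by those steps: a minimal numpy sketch with my own helper names, not the actual my_utils functions (the cleaning step, i.e. removing degenerate geometry, is omitted):

```python
import numpy as np

def scale_to_unit(points):
    # scale so the largest absolute coordinate becomes 1
    return points / np.abs(points).max()

def center(points):
    # translate the centroid to the origin
    return points - points.mean(axis=0)

def preprocess(points):
    # points: (N, 3) array of mesh vertices
    return center(scale_to_unit(points))
```

If the original training data was only centered, as suggested above, the extra scaling step could be one source of mismatch with the pretrained weights.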
Thank you
Hi @Sentient07,
Can you clarify what you are trying to do (train set, fine-tuning set, test set)? What is the accuracy you are referring to?
Best regards,
Thibault
Hello @ThibaultGROUEIX ,
Apologies for being unclear; I'm referring to the dense shape correspondence problem. I'm comparing 3D-CODED with other methods on the FAUST-Remesh dataset: I train on the first 80 meshes and evaluate on the last 20. For this experiment, I consider ZoomOut, BCICP, and two versions of 3D-CODED. The first, denoted TDC-PTw, establishes correspondences using the weights you and others have released. In the second, denoted TDC-FTw, I fine-tune on the training meshes of the FAUST-Remesh dataset (using the unsupervised loss). What I observe is that accuracy sharply deteriorates in the second case. I was wondering what the reason could be. Am I preprocessing the dataset incorrectly?
There could be a number of reasons, but I think the most probable is that your fine-tuning set is too small: 3D-CODED was trained on 230,000 meshes. You could evaluate on your fine-tuning set to check whether you observe overfitting.
Thank you @ThibaultGROUEIX for your very prompt response. The reason I was expecting a much better result is Table 1 and Figure 6 of this paper: they report good performance with 3D-CODED when the training shapes match the poses of the test shapes, irrespective of their number. Is this the case with 3D-CODED or with AtlasNet? In my case, I observe the following reconstruction between shapes that belong to the fine-tuning training set. The ground-truth meshes are attached below; correspondences are color-coded (target on the right, source on the left).
Hello,
Just to confirm, master seems much better than the v2.0.0 tag. After switching to master, the results are much better and consistent with Table 1 of the paper. I'm a little curious as to what the reason could be. Thank you
@ThibaultGROUEIX Thanks for your answer. I want to know: does the dataset downloaded by download_dataset.sh contain the 230,000 meshes?
@hitymz: Yes.
@Sentient07: I did a major refactor of the code; you should definitely use the latest version. I don't know why the v2.0.0 tag doesn't work; that should not be the case.
Hello @ThibaultGROUEIX, thanks a lot for the clarification. What made me use the v2.0.0 branch was that the pretrained weights seem to fit that branch alone (i.e., the pre-trained weights contain the STN of the PointNet encoder, which the master branch is missing). Could you also provide the pretrained weights for the refactored branch, if you still have them? Thank you.
I am confused: you mean that the pretrained weights provided by the latest commit on master are not compatible with the latest code on master?
Hi @ThibaultGROUEIX, just to be sure we're referring to the same model: I tried to reload the weights from https://cloud.enpc.fr/s/n4L7jqD486V8IJn, provided in this comment. Is that not the right one? From the name of the directory (and also the size), I assumed the one provided in the master branch is for the Learning Elementary Structures paper. Please let me know if I'm confused here. 😅
Right, in that comment the user wanted to use v2.0.0 because it has the unsupervised training code, so I provided the old model.
To use the latest code (the one I maintain), you need the latest model. You can get it by running: https://github.com/ThibaultGROUEIX/3D-CODED/blob/master/inference/download_trained_models.sh
Just to clarify, Learning Elementary Structures is a generalization of 3D-CODED. The script will download several models from Learning Elementary Structures; 3D-CODED is one of them, under the folder /3D-CODED.
Hi @ThibaultGROUEIX, thanks again for the elaborate response. I'm facing a problem while downloading: there is no explicit error, but the downloaded zip file is essentially empty.
$ bash -x inference/download_trained_models.sh
+ gdrive_download 1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs learning_elementary_structure_trained_models.zip
++ wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs' -O-
++ sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p'
+ CONFIRM=
+ wget --load-cookies /tmp/cookies.txt 'https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs' -O learning_elementary_structure_trained_models.zip
--2021-07-19 17:25:40-- https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs
Resolving docs.google.com (docs.google.com)... 142.250.74.238, 2a00:1450:4007:80b::200e
Connecting to docs.google.com (docs.google.com)|142.250.74.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘learning_elementary_structure_trained_models.zip’
learning_elementary_structure_trained_models.zip [ <=> ] 3.05K --.-KB/s in 0s
2021-07-19 17:25:40 (44.3 MB/s) - ‘learning_elementary_structure_trained_models.zip’ saved [3123]
+ rm -rf /tmp/cookies.txt
+ unzip learning_elementary_structure_trained_models.zip
Archive: learning_elementary_structure_trained_models.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
note: learning_elementary_structure_trained_models.zip may be a plain executable, not an archive
unzip: cannot find zipfile directory in one of learning_elementary_structure_trained_models.zip or
learning_elementary_structure_trained_models.zip.zip, and cannot find learning_elementary_structure_trained_models.zip.ZIP, period.
The download from the browser seems to work fine, but since I work from home, it'd be great to have this on the server too. Is there any way to fix this script?
Right, this is the same as ThibaultGROUEIX/AtlasNet#61
Can you try manually going to https://docs.google.com/uc?export=download&confirm=&id=1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs and clicking download?
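In case the manual download is inconvenient on a headless server: judging from the transcript, the wget/sed pipeline fails because Google Drive's confirmation page changed, so the sed no longer extracts a confirm token (CONFIRM ends up empty and wget saves the HTML page instead of the zip). One possible workaround is the third-party gdown package, which handles that confirmation step itself; a sketch, assuming `pip install gdown`, with the file id taken from the script:

```python
FILE_ID = "1ZAjOuTaeDrKJbFffzLnLn_K-C86fYCXs"  # id from download_trained_models.sh
URL = "https://drive.google.com/uc?id=" + FILE_ID

def fetch(output="learning_elementary_structure_trained_models.zip"):
    # gdown follows Drive's confirm/redirect flow that the wget+sed pair misses
    import gdown  # third-party: pip install gdown
    return gdown.download(URL, output, quiet=False)

if __name__ == "__main__":
    fetch()
```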
Hi @ThibaultGROUEIX, yes, the web download worked. Thanks a lot for releasing all the data (including experiments) and not just your model. For anyone in a similar situation, it may be easier to download just the trained models for the master branch from here; since it doesn't cost much, I'm hosting them on my own Google Drive: https://drive.google.com/drive/folders/1Fub5lpSrrJmV-kNF6ifQgkIzxqzd6gwr?usp=sharing
Just a quick follow-up: you seem not to have used the --patch_deformation option. Is there a specific reason why it is enabled by default in the current code when it appears not to have been used for the pre-trained weights?
Hello @ThibaultGROUEIX ,
I am now training and testing on a random, smaller subset of SURREAL. I found that the accuracy was quite low and training didn't converge. The dataset was generated with the script generate_data_humans.py. On closer examination, I found that the template is not in one-to-one correspondence with the generated SURREAL meshes. What could I be doing wrong? Can you please help me find my mistake? Thank you! (Attaching my code snippet.)
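For what it's worth, here is the sanity check I ran; a sketch with my own helper name, not code from the repo. Since the generated SURREAL meshes are SMPL deformations of the template, vertex i of a generated mesh should correspond to vertex i of the template:

```python
import numpy as np

def check_correspondence(template_verts, generated_verts, tol=2.0):
    # both inputs: (N, 3) vertex arrays; same SMPL topology implies
    # identical vertex count and ordering
    if template_verts.shape != generated_verts.shape:
        return False
    # after centering both, corresponding vertices should stay close
    # (tol is an arbitrary threshold chosen for a human-scale mesh)
    t = template_verts - template_verts.mean(axis=0)
    g = generated_verts - generated_verts.mean(axis=0)
    return float(np.linalg.norm(t - g, axis=1).mean()) < tol
```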