
rehg-lab / lowshot-shapebias

42 stars · 5 watchers · 0 forks · 2.05 MB

Learning low-shot object classification with explicit shape bias learned from point clouds

Home Page: https://rehg-lab.github.io/publication-pages/lowshot-shapebias/

Languages: Python 90.01%, Shell 9.99%

Topics: deep-learning, point-cloud, pytorch-implementation, pytorch, low-shot, few-shot-learning, few-shot, 3d, convolutional-neural-networks, computer-vision

lowshot-shapebias's People

Contributors

sstojanov


lowshot-shapebias's Issues

example code for generating images/point-clouds from 3D models?

Hi, thanks for the great work and for sharing your code!

I am wondering, if it is possible and convenient for you, whether you could also release the code for generating the 2D images / point clouds from the 3D models, or provide a minimal example (for instance, a snippet that processes a single 3D model given camera view parameters)? I am trying to reproduce your dataset-building pipeline, but I am not sure whether I am doing it the same way as in your paper, so a code example would be really helpful for following your work!
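While waiting for the authors' pipeline, here is a minimal sketch of the point-cloud half of the problem: area-weighted uniform sampling of points on a triangle-mesh surface, using only NumPy. This is my own illustration, not the repository's code; the paper's actual sampling and camera/rendering setup may differ.

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points=1024, rng=None):
    """Sample points uniformly on a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    """
    rng = np.random.default_rng(rng)
    tris = vertices[faces]                                   # (F, 3, 3)
    # Triangle areas via the cross product, used as sampling weights
    # so that larger faces receive proportionally more points.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle,
    # reflecting (u, v) back into the triangle when u + v > 1.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])

# Example: sample 1024 points from the surface of a unit tetrahedron.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
pts = sample_point_cloud(verts, faces, n_points=1024, rng=0)
```

Libraries such as trimesh provide the same operation off the shelf; the pure-NumPy version is just easier to inspect against the paper's description.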

Thanks for the insightful paper and great dataset again!

Conv4 file missing

Hi, thanks for the great code and for sharing your dataset. It seems a file is missing. Can you please look into it?

File "github/lowshot-shapebias/lssb/nets/__init__.py", line 4, in <module>
    from .conv4 import Conv4 as conv4
ModuleNotFoundError: No module named 'lssb.nets.conv4'
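Until the missing file is restored, a stand-in matching the standard "Conv4" backbone from the few-shot literature (four 3x3 conv blocks with 64 channels, BatchNorm, ReLU, and 2x2 max-pooling) might unblock the import. This is a reconstruction of the common SimpleShot/ProtoNet definition, not the authors' actual `lssb/nets/conv4.py`, so details such as the final flattening may differ:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Standard few-shot conv block: 3x3 conv -> BN -> ReLU -> 2x2 max-pool.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class Conv4(nn.Module):
    """Four-layer convolutional backbone commonly used in few-shot learning."""

    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_channels, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
            conv_block(hidden, hidden),
        )

    def forward(self, x):
        # Each block halves spatial size, so an 84x84 input ends at 5x5.
        return self.encoder(x).flatten(1)

# Example: 84x84 input -> 64 * 5 * 5 = 1600-dimensional feature.
x = torch.randn(2, 3, 84, 84)
out = Conv4()(x)
```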

AttributeError: can't set attribute

Did anyone meet this issue and know how to solve it? Thank you in advance!

Traceback (most recent call last):
  File "lssb/lowshot/train.py", line 112, in <module>
    main()
  File "lssb/lowshot/train.py", line 61, in main
    model = getattr(lowshot_models, hparams.model_type)(hparams)
  File "./lssb/lowshot/models/image_simpleshot_classifier.py", line 7, in __init__
    super().__init__(hparams)
  File "./lssb/lowshot/models/base_simpleshot_classifier.py", line 19, in __init__
    self.hparams = hparams
  File "/home/myang47/anaconda3/envs/ifsl_pytorch1.7.0/lib/python3.8/site-packages/torch/nn/modules/module.py", line 826, in __setattr__
    object.__setattr__(self, name, value)
AttributeError: can't set attribute
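This error typically appears when running the code under a newer PyTorch Lightning release, where `LightningModule.hparams` became a read-only property, so the direct assignment `self.hparams = hparams` fails. The usual fix is to call `self.save_hyperparameters(hparams)` instead (or pin Lightning to the version the repository's environment file specifies). The plain-Python sketch below reproduces the mechanism with a read-only property, since I cannot verify the exact Lightning version used here:

```python
class Base:
    # A read-only property, analogous to `hparams` in newer PyTorch Lightning.
    @property
    def hparams(self):
        return getattr(self, "_hparams", {})

    def save_hyperparameters(self, hparams):
        # Lightning-style setter that writes through a backing attribute.
        self._hparams = hparams

class Model(Base):
    def __init__(self, hparams):
        # self.hparams = hparams  # would raise AttributeError: can't set attribute
        self.save_hyperparameters(hparams)

m = Model({"lr": 1e-3})
```

Assigning `m.hparams = ...` still raises `AttributeError` because the property defines no setter, which is exactly what the traceback above shows.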

How to pre-train DGCNN

Hi @sstojanov ,

May I know how to pre-train DGCNN? It is hard for me to find the correct .sh and .py files for pre-training it. Also, may I know whether you pre-trained DGCNN on all training classes in the standard supervised way or in a few-shot meta-learning way? Thanks in advance.

Minmin

Issue about performance

Thank you for your work and for sharing the implementation.
I ran into a few issues, though, so I would appreciate your insight!

(1) Protocol of the whole experiment
It is not easy for me to work out the proper reproduction protocol from the code alone.
Please check whether my understanding is correct:

  1. Train SimpleShot point cloud classifier only.
    (bash training_scripts/train_dgcnn_simpleshot.sh 0)
  2. Extract point cloud embeddings using the trained one.
    (python lssb/feat_extract/extract_simpleshot_pc_feature_modelnet.py --ckpt_path=[])
  3. Train SimpleShot image classifier using the extracted embeddings.
    (bash training_scripts/train_resnet18_joint_simpleshot.sh 0)
    Here, the original code uses the config "modelnet-joint-simpleshot-resnet18-cfg", but it raises an error. I think this should be "modelnet-joint-simpleshot-resnet18-w-pc-cfg" (or "modelnet-joint-simpleshot-resnet18-wo-pc-cfg"). Is that right?
  4. Test trained image classifier.
    (bash testing_scripts/test_simpleshot_modelnet.sh)

(2) Performance
The numbers in the README table differ from those in the main paper.
For example, for ModelNet 1-shot 5-way, LSSB achieves 61.91 in the main paper,
but the README reports 57.57, which is much lower than the baseline (SimpleShot, 58.99).
What causes this large difference?

Further, when I tested the officially provided checkpoints, they returned even lower performance (54-55).
I ran the test five times, as the paper recommends.
For this, I used "bash testing_scripts/test_joint_simpleshot_modelnet.sh".
Here, I slightly changed the .sh file as follows, since there is no pretrained_models/simpleshot/modelnet/shape-biased directory:
python lssb/lowshot/test.py --log_dir=pretrained_models/simpleshot/modelnet/shape-biased-w-pc/ \
    --name=joint-modelnet-pairwise-simpleshot-w-pc \
    --version=0 \
    --gpu=1

It would be greatly appreciated if you could help me with these issues.
