Comments (4)
Follow-up questions for finetuning PointRCNN on KITTI
- When finetuning on the different splits (5%-50%), do you increase the number of epochs to match the number of iterations used when finetuning on 100% of the data (to ensure convergence)? For example, if you finetune on 100% of the data for 80 epochs, did you finetune on the 5% split for 1600 epochs?
- In your paper, you mention using the AdamW optimizer for finetuning PointRCNN, but in your OpenPCDet repo the finetuning cfg file says adam_onecycle. Which one did you actually use? If you used adam_onecycle, then dropping the learning rate at epoch 30 will have no effect, since adam_onecycle comes with its own one-cycle scheduler.
- In your main.py you don't convert the batch norm layers to SyncBatchNorm, e.g. model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model). Is there a reason why you don't do this?
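For context, the conversion referred to above is a one-liner applied before wrapping the model in DistributedDataParallel. A minimal sketch, using a toy module as a stand-in for PointRCNN (the conversion call itself is the standard torch.nn.SyncBatchNorm API):

```python
import torch.nn as nn

# Toy stand-in for the detector backbone; any nn.Module containing
# BatchNorm layers is handled the same way.
model = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=1),
    nn.BatchNorm1d(16),
    nn.ReLU(),
)

# Replace every BatchNorm layer with SyncBatchNorm so running statistics
# are aggregated across all DDP processes rather than computed per GPU.
# This is normally done right before wrapping in DistributedDataParallel.
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

assert isinstance(model[1], nn.SyncBatchNorm)
```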
Thanks in advance!
from depthcontrast.
@YurongYou For your questions: 1. Yes, those are means across multiple splits. 2. And yes, I resampled the GT points from the subsampled point clouds to build the _db_info files.
@barzanisar For your questions: 1. I did increase the number of iterations, but I don't think I increased it to 1600 epochs for the 5% split. I tried somewhat fewer iterations than that, because I found you don't need that many epochs to get the best performance. 2. I used adam_onecycle; the learning-rate drop in the config is just the default from OpenPCDet. 3. For SyncBatchNorm, I was following the shuffling-BN practice from the MoCo v2 codebase, so I didn't apply it. There was no particular reason for not using it. You are welcome to try!
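To illustrate the scheduler point: a one-cycle schedule controls the learning rate at every training step, so a separately configured step decay is overwritten anyway. A minimal sketch using PyTorch's built-in OneCycleLR (OpenPCDet's adam_onecycle uses its own fastai-style implementation, but the shape of the schedule is the same idea):

```python
import torch

# Dummy parameter and optimizer, just to drive the scheduler.
params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.Adam(params, lr=0.01)
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.01, total_steps=100)

lrs = []
for _ in range(100):
    opt.step()        # would normally follow loss.backward()
    sched.step()      # the scheduler sets the LR on every step
    lrs.append(opt.param_groups[0]["lr"])

# The LR warms up to max_lr and then anneals towards ~0 over the cycle,
# so a manual drop (e.g. at epoch 30) would be reset on the next step.
assert abs(max(lrs) - 0.01) < 1e-8
assert lrs[-1] < lrs[0] / 10
```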
Thanks @zaiweizhang for replying. How many epochs did you train for on the 5% split?
I think I tried 200 epochs, and the loss converged.
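To put that in iteration terms, a quick back-of-the-envelope check (assumed numbers: the commonly used KITTI train split has 3712 frames, and iterations per epoch scale linearly with frame count; the helper below is hypothetical, not from the repo):

```python
# How many full-data epochs do 200 epochs on a 5% split amount to,
# measured by total iteration count? (Assumes linear scaling.)
FULL_TRAIN_FRAMES = 3712  # commonly used KITTI train split size (assumed)

def equivalent_full_epochs(epochs_on_split: float, split_fraction: float) -> float:
    """Express a subsplit's iteration budget as epochs on the full set."""
    return epochs_on_split * split_fraction

frames_5pct = int(FULL_TRAIN_FRAMES * 0.05)      # roughly 185 frames
full_epochs = equivalent_full_epochs(200, 0.05)  # about 10 full-data epochs
print(frames_5pct, full_epochs)
```

So 200 epochs on the 5% split is roughly the iteration budget of 10 epochs on the full split, far below the 1600-epoch scaling asked about above, which matches the earlier reply that fewer iterations sufficed.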