
jotabravo / spacecraft-uda


Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus

Home Page: https://ieeexplore.ieee.org/document/10225381

License: MIT License

Python 99.50% Shell 0.50%
spacecraft keypoint keypoint-detection pose-estimation

spacecraft-uda's Introduction

Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus

Results

News

Update: October 2023

We are happy to announce that an extended version of our previous work has been published in the IEEE Transactions on Aerospace and Electronic Systems.

Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus

We have updated the repository to include:

  • Support for a lighter ResNet model from [1].
  • Faster, more efficient heatmap generation.
  • A bug fix in the pseudo-label generation process.


Cite

If you find our work or code useful, please cite:

@article{perez2023spacecraft,
  title={Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus},
  author={P{\'e}rez-Villar, Juan Ignacio Bravo and Garc{\'\i}a-Mart{\'\i}n, {\'A}lvaro and Besc{\'o}s, Jes{\'u}s and Escudero-Vi{\~n}olo, Marcos},
  journal={IEEE Transactions on Aerospace and Electronic Systems},
  year={2023},
  publisher={IEEE}
}

1. Summary

This paper presents the second-ranking solution to the Kelvins Pose Estimation 2021 Challenge. The proposed solution ranked second in both the Sunlamp and Lightbox categories, with the best total average error over the two datasets.

The main contributions of the paper are:

  • A spacecraft pose estimation algorithm that incorporates 3D structure information during training, providing robustness to intensity-based domain shift.
  • An unsupervised domain adaptation scheme based on robust pseudo-label generation and self-training.

The following figure depicts the proposed architecture and the losses that incorporate the 3D information:
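To give a flavour of the PnP-based loss, here is a minimal sketch (our illustration, not the authors' implementation; the function name and the heatmap-decoding step are assumptions): 2D keypoints decoded from the predicted heatmaps are fed to PnP, the 3D model points are re-projected with the estimated pose, and the discrepancy is penalised.

# Minimal sketch of a PnP reprojection error (illustrative, not the repo's loss).
import cv2
import numpy as np

def pnp_reprojection_error(pred_kpts_2d, kpts_3d, K):
    """pred_kpts_2d: (N, 2) keypoints decoded from heatmaps (pixel coords),
    kpts_3d: (N, 3) spacecraft model points, K: 3x3 camera intrinsics."""
    ok, rvec, tvec = cv2.solvePnP(kpts_3d.astype(np.float64),
                                  pred_kpts_2d.astype(np.float64),
                                  K.astype(np.float64), None,
                                  flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        return None
    reproj, _ = cv2.projectPoints(kpts_3d.astype(np.float64), rvec, tvec,
                                  K.astype(np.float64), None)
    # Mean pixel distance between decoded and re-projected keypoints
    return float(np.linalg.norm(reproj.squeeze(1) - pred_kpts_2d, axis=1).mean())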

2. Setup

This section contains the instructions to execute the code. The repository has been tested on a system with:

  • Ubuntu 18.04
  • CUDA 11.2
  • Conda 4.8.3

2.1. Download the datasets and generate the heatmaps

You can download the original SPEED+ dataset from Zenodo. The dataset has the following structure:

Dataset structure:
speedplus
│   LICENSE.md
│   camera.json  # Camera parameters 
│
└───synthetic
│   │   train.json
│   │   validation.json
│   │
│   └───images
│       │   img000001.jpg
│       │   img000002.jpg
│       │   ...
│   
└───sunlamp
│   │   test.json
│   │
│   └───images
│       │   img000001.jpg
│       │   img000002.jpg
│       │   ...
│   
└───lightbox
│   │   test.json
│   │
│   └───images
│       │   img000001.jpg
│       │   img000002.jpg
│       │   ...
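Once downloaded, a quick sanity check of the layout can look like the sketch below (our snippet, not repository code; adjust speedplus_root, and inspect the JSON keys of your copy rather than relying on any field names):

# Load the camera intrinsics and inspect one training label (illustrative).
import json, os

speedplus_root = "/path/to/speedplus"  # set to the dataset location ("root_dir")

with open(os.path.join(speedplus_root, "camera.json")) as f:
    camera = json.load(f)  # camera parameters

with open(os.path.join(speedplus_root, "synthetic", "train.json")) as f:
    labels = json.load(f)  # list of image/pose entries

print(sorted(camera.keys()))
print(labels[0])  # filename plus the pose fields of the first image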

SPEED+ provides the ground truth as pairs of images and poses (the relative position and orientation of the spacecraft w.r.t. the camera). Our method assumes the ground truth is provided as keypoint maps. We generate the keypoint maps prior to training to speed it up. You can either download our precomputed keypoint maps or create them yourself.
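For intuition, a keypoint map is typically a stack of single-channel images with a Gaussian peak at each projected keypoint. A minimal sketch follows (ours; the repository's create_maps scripts below are the reference implementation, and the coordinates here are made up):

import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Single-channel (h, w) map with a Gaussian peak at pixel (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

# One channel per keypoint; (u, v) are projected 2D keypoint coordinates
example_kpts = [(20, 30), (40, 12)]
heatmaps = np.stack([gaussian_heatmap(64, 64, u, v) for (u, v) in example_kpts])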

2.1.1. Download the heatmaps

Download and decompress the kptsmap.zip file. Place the kptsmap folder under the synthetic folder of the speedplus dataset.

Notes from update: These heatmaps only work with the data loader "loaders/speedplus_segmentation_precomputed.py".

2.1.2. Generate the heatmaps

We provide two methods to generate the heatmaps:

  • The legacy method based on .npz files:
python create_maps.py --cfg  configs/experiment.json

Note: if heatmaps based on .npz files are to be used, use them in conjunction with the data loader "loaders/speedplus_segmentation_precomputed.py".

  • The new method based on .png files. This method should be faster:
python create_maps_image.py --cfg  configs/experiment.json

Note: if heatmaps based on .png files are to be used, use them in conjunction with the data loader "loaders/speedplus_segmentation_precomputed_image.py".

Please make sure the "split_submission" field in the config file is set correctly before generating the maps.

2.1.3. Keypoints

Place the keypoints file "kpts.mat" into the speed_root folder.
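The file can be inspected with SciPy; a short sketch (ours; the variable name inside the .mat file may differ, so list the keys first):

from scipy.io import loadmat

mat = loadmat("/path/to/speed_root/kpts.mat")
data_keys = [k for k in mat if not k.startswith("__")]  # skip .mat metadata
print(data_keys)
kpts_3d = mat[data_keys[0]]
print(kpts_3d.shape)  # 3D model keypoints used by the PnP/3D losses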

2.2. Clone Repository and create a Conda environment

To clone the repository, type in your terminal:

git clone https://github.com/JotaBravo/spacecraft-uda.git

After installing Conda, go to the spacecraft-uda folder and type in your terminal:

conda env create -f env.yml
conda activate spacecraft-uda

3. Training process

3.1 Train a baseline model

The training process is controlled by configuration files in .json format. You can find example configuration files under the folder "configs/".

To train a model, simply modify the configuration file with your required values. NOTE: the current implementation only supports square images.

Configuration example:
{
    "root_dir"         : "path to your datasets",
    "path_pretrain"    : "path to your pretrained weights",  # Put "" for no weight initalization
    "path_results"     : "./results",
    "device"           : "cuda",

    "start_epoch"      :0,      # Starting epoch
    "total_epochs"     :20,     # Number of total epochs (N-1)
    "save_tensorboard" :100,    # Number of steps to save to tensorboard
    "save_epoch"       :5,      # Save every number of epochs
    "save_optimizer"   :false,  # Flag to save or not the optimzer

    "mean"     :41.3050, # Mean value of the training dataset
    "std"      :37.0706, # Standard deviation of training the dataset   
    "mean_val" :41.1280, # Mean value of the validation dataset
    "std_val"  :36.9064, # Mean value of the validation dataset    

    "batch_size"      :8,  # Batch size to input the GPU during training
    "batch_size_test" :1,  # Batch size to input the GPU during test
    "num_stacks"      :2,  # Number of stacks of the hourglass network
    "lr"              :2.5e-4, # Learning rate

    "num_workers"   :8,    # Number of CPU workers (might fail in Windows)
    "pin_memory"    :true, 
    "rows"          :640,  # Resize input image rows (currently only supporting rows=cols)
    "cols"          :640,  # Resize input image cols (currently only supporting rows=cols)

    "alpha_heatmap":10, # Flag to activate pnp loss

    "activate_lpnp":true, # Flag to activate pnp loss
    "activate_l3d": true, # Flag to activate 3D loss
    "weigth_lpnp": 1e-1,  # Weight of the PnP loss
    "weigth_l3d": 1e-1,   # Weight of the 3D loss

    "split_submission": "synthetic", # Dataset to use to generate labels

    "isloop":false # Flag to true if training with pseudo-labels, false otherwise
}
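Note that the inline # comments above are explanatory only; strict JSON has no comment syntax, so keep your actual experiment.json free of them. If the config is parsed with Python's json module, a quick sanity check (our snippet) looks like:

import json

with open("configs/experiment.json") as f:
    cfg = json.load(f)  # fails loudly if comments were left in the file

assert cfg["rows"] == cfg["cols"], "only square inputs are supported"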

Then, after properly modifying the configuration file, type from the repository folder:

python main.py --cfg "configs/experiment.json"

Notes from update: if you wish to use the lighter ResNet model, please execute the following command instead:

python main_resnet.py --cfg "configs/experiment_resnet34.json"

Make sure the "resnet_size" field is set in the config file.

3.2 Train a model with pseudo-labels

The script takes the initial configuration file and the training weights associated with it, generates pseudo-labels, and trains a new model. At every iteration a new configuration file is generated automatically so that earlier results are not overwritten.

3.2.1 Create the config file

To run the pseudo-labelling loop, first configure the "main_loop.py" script by specifying the folder where the configuration files will be stored, the initial configuration file, and the number of iterations. In each iteration a new configuration file is created in the BASE_CONFIG folder with an incremented niter counter. For example, first create the folder "configs_loop_sunlamp_10_epoch" and place the config file "loop_sunlamp_niter_0000.json" under it. For the next iteration of the pseudo-labelling, a new configuration file "loop_sunlamp_niter_0001.json" will be created.

NITERS      = 100
BASE_CONFIG = "configs_loop_sunlamp_10_epoch" # folder path
BASE_FILE   = "loop_sunlamp_niter_0000.json"
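The counter bookkeeping amounts to bumping the 4-digit suffix of BASE_FILE; roughly (our sketch, not the actual code in main_loop.py):

import os

def next_config(base_file):
    stem, ext = os.path.splitext(base_file)  # "loop_sunlamp_niter_0000", ".json"
    prefix, niter = stem.rsplit("_", 1)
    return f"{prefix}_{int(niter) + 1:04d}{ext}"

print(next_config("loop_sunlamp_niter_0000.json"))  # loop_sunlamp_niter_0001.json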

3.2.2 Place the first checkpoint

After you have created the configuration file, you need to manually place the weights used for the first iteration of the pseudo-labelling process. Under the "results" folder, create a folder with the BASE_CONFIG name, and inside it another subfolder with the BASE_FILE name, e.g. "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json". Under that folder, place a new subfolder called "ckpt" containing a weights file named "init.pth". The final path should look like "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json/ckpt/init.pth".

The init.pth file should contain the weights of the model trained on the synthetic domain. If you want to skip that training phase, you can use the weights provided in Section 5 of this page.
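The layout can be created in a few lines (our convenience sketch; replace the source path with your synthetic-domain checkpoint):

import os, shutil

base_config = "configs_loop_sunlamp_10_epoch"
base_file   = "loop_sunlamp_niter_0000.json"

ckpt_dir = os.path.join("results", base_config, base_file, "ckpt")
os.makedirs(ckpt_dir, exist_ok=True)
shutil.copy("/path/to/synthetic_weights.pth", os.path.join(ckpt_dir, "init.pth"))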

3.2.3 Create the Sunlamp and Lightbox train folders

Go to the folder where the dataset is stored and duplicate the Sunlamp and Lightbox folders, renaming the copies "sunlamp_train" and "lightbox_train". The new pseudo-labels will be generated and stored in these folders.
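Equivalently, in Python (our sketch; paths assume the dataset layout of Section 2.1, and the destination folders must not already exist):

import shutil

shutil.copytree("/path/to/speedplus/sunlamp", "/path/to/speedplus/sunlamp_train")
shutil.copytree("/path/to/speedplus/lightbox", "/path/to/speedplus/lightbox_train")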

3.2.4 Run the script main_loop.py

python main_loop.py

4. Use TensorBoard to observe the training process

You can monitor the training process via TensorBoard by typing in the command line:

tensorboard --logdir="path to your logs folder"

TensorBoard output

5. Training weights available at:

Acknowledgment

This work is supported by the Comunidad Autónoma de Madrid (Spain) under Grant IND2020/TIC-17515.

References

[1] - Xiao, B., Wu, H., & Wei, Y. (2018). Simple baselines for human pose estimation and tracking. In Proceedings of the European conference on computer vision (ECCV) (pp. 466-481).


spacecraft-uda's Issues

Training time of the baseline model?

Hi! Thanks for your nice work.
I would like to know the training time of the baseline model. I am using the hourglass model to train on the synthetic images of SPEED+ with batch size 4, and it takes 2 hours per epoch. Is this normal?

kptsmap.zip download expired

Hi! Your work is very appealing, but when I tried to download your kptsmap via OneDrive, I sadly found the link has expired. Could you please update the share link? It would be much appreciated. Thanks!

How to run a test with the training weights you have already provided

Thank you for the work. I would just like to test this network with the training weights you have already provided; my test dataset is small, containing 20 images from the Lightbox set. Could you please explain how I can use your model just to test Lightbox images and get pose-estimated output images?

Thank you.

Heatmaps structure and validation

Hey JotaBravo, congratulations to you and your team on your work on robust spacecraft pose estimation.

I am trying to reproduce the results of your work in order to understand the challenges of domain-gap adaptation in the space domain. To do so, I retrieved the weights of the 2-stack large hourglass model trained on the synthetic dataset and tried to view the results. I am struggling a bit to understand the output data. Is it possible to have a few details about it?

First, I managed to retrieve the weights and run a prediction on a random image of the synthetic dataset. The model gives me an output as a list with len(output) = 2:

  • output[0] = {'hm_c' : [...], 'depth' : [...]}
  • output[1] is also {'hm_c' : [...], 'depth' : [...]} but with different values.
    => Do these two outputs correspond to something like "without pseudo-labels" and "with pseudo-labels"?

Also, I tried to display the heatmaps to identify their keypoint ids, but I have a few concerns:

  • hm_c has shape [n, 64, 64] and is not normalized. Is that correct?
  • Is it normal that multiple keypoints don't match correctly?

Thank you for your time and your amazing work!

OneDrive link is not working

Thank you for your amazing work and for the code. I'm trying to get the heatmap file through the OneDrive link you provided, but the link is not working. Could you please check? Thanks in advance.

Any code to get the pose estimation images?

Thanks for your nice work.
I have read your paper, but I could not find the code that generates Fig. 9 and Fig. 10 in this repo.
Could you please give me some hints? Thanks a lot.

Share your paper?

Hi, nice work.
I find this work useful; where is the paper link? Is it published online anywhere?

The paper link?

Hi, is "Spacecraft Pose Estimation Based on Unsupervised Domain Adaptation and on a 3D-Guided Loss Combination" available online?
