levigty / aimclr


This is an official PyTorch implementation of "Contrastive Learning from Extremely Augmented Skeleton Sequences for Self-supervised Action Recognition" (AAAI 2022).

License: MIT License

Python 98.77% Shell 1.23%

aimclr's People

Contributors

levigty


aimclr's Issues

How to implement t-SNE clustering

The cluster diagram in your README is very attractive. Is there a corresponding part of the code that can draw this diagram?
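Not the authors' code, but a minimal sketch of how such a plot is typically produced: extract a feature vector per clip with the frozen pretrained encoder, then project the features to 2-D with scikit-learn's t-SNE and scatter-plot them coloured by class. The features and labels below are random stand-ins for the encoder output.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for encoder features: one 128-d vector per clip, plus class labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 128))
labels = rng.integers(0, 10, size=200)

# Project to 2-D for visualisation.
emb = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(features)
print(emb.shape)  # (200, 2)
```

With `emb` in hand, a call like `matplotlib.pyplot.scatter(emb[:, 0], emb[:, 1], c=labels)` gives the usual cluster diagram.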

PermissionError: [Errno 13] Permission denied: '/data/gty'

When I run:
$ python main.py linear_evaluation --config config/ntu60/linear_eval/linear_eval_aimclr_xview_joint.yaml
the output is:
Traceback (most recent call last):
File "main.py", line 50, in
p = Processor(sys.argv[2:])
File "/home/xxx/project1/AimCLR/processor/processor.py", line 43, in init
self.init_environment()
File "/home/xxx/project1/AimCLR/processor/processor.py", line 76, in init_environment
super().init_environment()
File "/home/xxx/project1/AimCLR/processor/io.py", line 55, in init_environment
self.io.save_arg(self.arg)
File "/home/xxx/anaconda3/envs/TEST2/lib/python3.7/site-packages/torchlight-1.0-py3.7.egg/torchlight/io.py", line 116, in save_arg
File "/home/xxx/anaconda3/envs/TEST2/lib/python3.7/os.py", line 211, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/home/xxx/anaconda3/envs/TEST2/lib/python3.7/os.py", line 211, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/home/xxx/anaconda3/envs/TEST2/lib/python3.7/os.py", line 211, in makedirs
makedirs(head, exist_ok=exist_ok)
[Previous line repeated 1 more time]
File "/home/xxx/anaconda3/envs/TEST2/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/data/gty'

my .yaml:

work_dir: /data/gty/AAAI_github/ntu60_cv/aimclr_joint/linear_eval
weights: /data/gty/released_model/ntu60_xview_joint.pt
weights: /data/gty/AAAI_github/ntu60_cv/aimclr_joint/pretext/epoch300_model.pt
ignore_weights: [encoder_q.fc, encoder_k, queue]

# feeder
train_feeder: feeder.ntu_feeder.Feeder_single
train_feeder_args:
  data_path: /home/wuyushan/project1/data/NTU60_frame50/xview/train_position.npy
  label_path: /home/wuyushan/project1/data/NTU60_frame50/xview/train_label.pkl
  shear_amplitude: -1
  temperal_padding_ratio: -1
  mmap: True
test_feeder: feeder.ntu_feeder.Feeder_single
test_feeder_args:
  data_path: /home/xxx/project1/data/NTU60_frame50/xview/val_position.npy
  label_path: /home/xxx/project1/data/NTU60_frame50/xview/val_label.pkl
  shear_amplitude: -1
  temperal_padding_ratio: -1
  mmap: True

# model
model: net.aimclr.AimCLR
model_args:
  base_encoder: net.st_gcn.Model
  pretrain: False
  feature_dim: 128
  queue_size: 32768
  momentum: 0.999
  Temperature: 0.07
  mlp: True
  in_channels: 3
  hidden_channels: 16
  hidden_dim: 256
  num_class: 60
  dropout: 0.5
  graph_args:
    layout: 'ntu-rgb+d'
    strategy: 'spatial'
  edge_importance_weighting: True

# optim
nesterov: False
weight_decay: 0.0
base_lr: 3.
optimizer: SGD
step: [80]

# training
device: [1] # 3
batch_size: 64 # 128
test_batch_size: 64 # 128
num_epoch: 100
stream: 'joint'

# log
save_interval: -1
eval_interval: 5

I don't know why; can you help me, please?
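One likely cause, judging from the traceback: torchlight calls os.makedirs on the configured work_dir, and the account running main.py cannot create directories under /data. A quick stdlib check of whether a chosen work_dir is usable (the path below is an illustrative stand-in, not one from the repo):

```python
import os
import tempfile

# Pick a work_dir the current user owns; here a temp-dir path stands in for
# something like a directory under your home. If os.makedirs succeeds and the
# directory is writable, torchlight's save_arg will not hit Errno 13.
work_dir = os.path.join(tempfile.gettempdir(), "aimclr_demo", "linear_eval")
os.makedirs(work_dir, exist_ok=True)
print(os.access(work_dir, os.W_OK))  # True when the directory is writable
```

In practice that means editing work_dir (and the weights/data paths) in the .yaml to point at directories your user can write to.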

How to solve this error when I verify

[04.26.22|13:33:35] Can not find weights [encoder_q.fc.bias].
[04.26.22|13:33:35] Can not find weights [encoder_q.fc.weight].
Traceback (most recent call last):
File "main.py", line 50, in
p = Processor(sys.argv[2:])
File "/home/lailai/lailai_file/action/AimCLR/processor/processor.py", line 60, in init
self.load_data()
File "/home/lailai/lailai_file/action/AimCLR/processor/processor.py", line 92, in load_data
dataset=train_feeder(**self.arg.train_feeder_args),
File "/home/lailai/lailai_file/action/AimCLR/feeder/ntu_feeder.py", line 17, in init
self.load_data(mmap)
File "/home/lailai/lailai_file/action/AimCLR/feeder/ntu_feeder.py", line 22, in load_data
self.sample_name, self.label = pickle.load(f)
_pickle.UnpicklingError: STACK_GLOBAL requires str
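For what it's worth, "STACK_GLOBAL requires str" typically means the bytes being unpickled are not a plain pickle at all - for example, a label file written with np.save but given a .pkl name (the .npy magic byte happens to be the STACK_GLOBAL opcode). A hedged fallback loader (load_labels is a made-up helper, not from the repo):

```python
import os
import pickle
import tempfile

import numpy as np

def load_labels(path):
    """Return (sample_name, label); fall back to np.load on non-pickle bytes."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except pickle.UnpicklingError:
        data = np.load(path, allow_pickle=True)
        return list(data[0]), list(data[1])

# Demo: a file saved the "wrong" way (np.save despite the .pkl suffix)
# reproduces the error with pickle.load, but the fallback reads it fine.
path = os.path.join(tempfile.gettempdir(), "demo_label.pkl")
np.save(open(path, "wb"), np.array([["a", "b"], [0, 1]], dtype=object))
sample_name, label = load_labels(path)
print(sample_name, label)  # ['a', 'b'] [0, 1]
```

If that is the cause, regenerating the label file as a real pickle of the (sample_name, label) tuple also fixes it without touching the feeder.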

How to calculate the comprehensive results of the three streams

Hello, regarding the paper: "For all the reported results of three streams, we use the weights of [0.6, 0.6, 0.4] for weighted fusion like other multi-stream GCN methods."
But I still don't understand; can you explain how to calculate the combined results of the three streams?
For example, how was the 83.8 in the last line of the fourth column of Table II calculated?
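Reading the quoted sentence literally, the fusion is a weighted sum of the per-stream class-score matrices before the argmax. A minimal sketch, with random matrices standing in for the joint / motion / bone evaluation outputs:

```python
import numpy as np

# Per-stream scores: one (num_samples x num_classes) matrix per stream.
rng = np.random.default_rng(0)
n, c = 4, 60
joint, motion, bone = (rng.normal(size=(n, c)) for _ in range(3))

# Weighted fusion with the paper's weights [0.6, 0.6, 0.4], then argmax;
# accuracy of the fused predictions is what the table would report.
fused = 0.6 * joint + 0.6 * motion + 0.4 * bone
pred = fused.argmax(axis=1)
print(pred.shape)  # (4,)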

About KNN evaluation and finetune settings

Hi, thank you for sharing your work! You used a KNN evaluation method in your paper, but I did not find the related implementation details in the article, nor did I find the corresponding code. Would you please provide some more details of the KNN evaluation? And as to the fine-tune setting, did you adjust the weight decay when fine-tuning? Looking forward to your reply.
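The paper does not spell the protocol out, but KNN evaluation for skeleton SSL is usually done like this: freeze the pretrained encoder, extract features for the training and test sets, and classify each test feature by its nearest training neighbours. A sketch with random stand-in features (the choice of k and the cosine metric are common defaults, not confirmed by the authors):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stand-ins for frozen-encoder features and labels.
rng = np.random.default_rng(0)
train_feat = rng.normal(size=(100, 128))
train_y = rng.integers(0, 10, size=100)
test_feat = rng.normal(size=(20, 128))
test_y = rng.integers(0, 10, size=20)

# 1-NN with cosine distance; accuracy of the neighbour votes is the KNN score.
knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
knn.fit(train_feat, train_y)
acc = (knn.predict(test_feat) == test_y).mean()
print(acc)
```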

Request for sharing dataset

Thanks for your time. I encountered some problems using Google Drive to download the PKU-MMD dataset. It would help a lot if both parts of the dataset could be shared on BaiduYun, if that is convenient for you; I appreciate your kindness.

By the way, are the preprocessing and experiment settings the same for part 1 and part 2? Looking forward to your reply.

How to fine-tune

How do I fine-tune the model after the linear evaluation is completed? I don't see a corresponding .yaml file in the project.

the train_mean_loss

Hello, thank you for presenting your excellent work!

While pretraining AimCLR, how do you measure progress with respect to the loss train_mean_loss? What value is low enough to indicate that pretraining can be stopped and the encoder is good?

Thanks

Preprocessing NTU dataset

Hi, thank you for sharing your work with the community!
I was interested in training a model on 150-frame-long sequences, but after preprocessing the NTU dataset I ended up with only .npy files, while .pkl files are needed for the labels. Could you please share the code to convert the .npy files into .pkl?
Thank you,
Paolo
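From the feeder's load_data, the label file just has to unpickle into two parallel lists, (sample_name, label), so a converter only needs to dump that tuple. A sketch with made-up placeholder names and labels (and a temp-dir path for the output):

```python
import os
import pickle
import tempfile

# Placeholder sample names and class labels; in practice these come from
# your preprocessing step (e.g. loaded out of the .npy metadata).
sample_name = ["S001C001P001R001A001.skeleton", "S001C001P001R001A002.skeleton"]
label = [0, 1]

# Write the (sample_name, label) tuple the feeder expects.
path = os.path.join(tempfile.gettempdir(), "train_label.pkl")
with open(path, "wb") as f:
    pickle.dump((sample_name, label), f)

# Round-trip check, mirroring the feeder's pickle.load.
with open(path, "rb") as f:
    names, labels = pickle.load(f)
print(labels)  # [0, 1]
```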

lr

Hello, what initial learning rate is set for fine-tuning?
