achalddave / segment-any-moving
Code for "Towards Segmenting Anything That Moves"
Sorry to disturb you again. The fbms subproject under utils is not accessible to us. I tried to find it on Bitbucket, but it seems to be private.
Hello,
First, this is excellent work! Thank you for sharing!
I'm very interested in the motion network part. I want to test it and visualize the results, but I ran into some difficulties since I'm relatively new to PyTorch. Could you please walk me through the steps needed to run the motion stream code?
Thank you so much for your help!
Hi, how can I obtain the DAVIS-Moving and YTVOS-Moving datasets?
The detectron_pytorch link (https://github.com/achalddave/segment-any-moving/blob/master/detectron_pytorch) doesn't work.
Will this link work instead: https://github.com/roytseng-tw/Detectron.pytorch?
I am trying to run the code with PyTorch 1.9 and CUDA 11.4 on an RTX A6000 GPU. Unfortunately, I can't downgrade CUDA because that GPU doesn't support older CUDA versions. Could you let me know if there is an updated version of the code that works with the latest torch?
I also tried building with CUDA 9.0 and torch 0.4.0, modifying make.sh to use compute_86. The detectron_pytorch build reports an unsupported GPU (which is expected) but completes without errors. However, when I execute
python release/custom/run.py --model appearance --frames-dir DIR --output-dir OUTDIR --filename-format frameN
it gives this error:
Traceback (most recent call last):
File "tools/infer_simple.py", line 38, in <module>
from modeling.model_builder import Generalized_RCNN
File "/home/msiam/Code/segment-any-moving/detectron_pytorch/lib/modeling/model_builder.py", line 11, in <module>
from model.roi_pooling.functions.roi_pool import RoIPoolFunction
File "/home/msiam/Code/segment-any-moving/detectron_pytorch/lib/model/roi_pooling/functions/roi_pool.py", line 3, in <module>
from .._ext import roi_pooling
File "/home/msiam/Code/segment-any-moving/detectron_pytorch/lib/model/roi_pooling/_ext/roi_pooling/__init__.py", line 3, in <module>
from ._roi_pooling import lib as _lib, ffi as _ffi
ImportError: /home/msiam/Code/segment-any-moving/detectron_pytorch/lib/model/roi_pooling/_ext/roi_pooling/_roi_pooling.so: undefined symbol: __cudaRegisterFatBinaryEnd
Is there any way to hack it into working?
Thanks in advance for your help.
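For what it's worth, the undefined `__cudaRegisterFatBinaryEnd` symbol usually indicates a CUDA toolkit/runtime mismatch: that symbol was (to my knowledge) introduced around CUDA 10.1, so an extension object compiled with a newer toolkit cannot resolve it against an older runtime. A minimal sketch of that version check — the 10.1 threshold and the helper names are my assumptions, not part of this repository:

```python
def parse_cuda_version(version: str) -> tuple:
    """Turn a CUDA version string like '11.4' into a comparable (major, minor) tuple."""
    parts = version.split(".")
    return (int(parts[0]), int(parts[1]) if len(parts) > 1 else 0)

def fatbinaryend_mismatch(compile_version: str, runtime_version: str) -> bool:
    """__cudaRegisterFatBinaryEnd first appears around CUDA 10.1 (assumption),
    so a binary built with a >= 10.1 toolkit cannot resolve the symbol
    against an older CUDA runtime."""
    return (parse_cuda_version(compile_version) >= (10, 1)
            and parse_cuda_version(runtime_version) < (10, 1))

# The combination reported above: built with a recent toolkit, run against CUDA 9.0.
print(fatbinaryend_mismatch("11.4", "9.0"))  # True
print(fatbinaryend_mismatch("9.0", "9.0"))   # False
```

If that diagnosis holds, the practical fix would be to rebuild the extension (`make.sh`) with the same CUDA toolkit version that the installed cudart library comes from.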
python: can't open file 'tools/infer_simple.py': [Errno 2] No such file or directory
Excellent work! Thanks for your contribution! Will the training code be released later? How much compute is needed to reproduce your reported results?
Python 2.7 or 3?
Can you share your environment configuration, including the Python and CUDA versions? I successfully configured flownet2 under Python 2.7, but Python 2.7 cannot run segment-any-moving, and if I use Python 3, I cannot configure flownet2. Do you have any suggestions?
Hey,
cv2 is not in requirements.txt.
What is the right version?
I'm asking because I'm seeing this error:
10:12:01 log.py: 51: Writing log file to /segment-any-moving-pytorch/output/appearance/tracks/tracker_Mar01-10-12-01.log
fatal: Not a git repository: git-state/../.git/modules/git-state
fatal: 'git status --porcelain' failed in submodule detectron_pytorch
fatal: Not a git repository: git-state/../.git/modules/git-state
fatal: 'git status --porcelain' failed in submodule detectron_pytorch
Traceback (most recent call last):
File "tracker/track_multiple.py", line 147, in <module>
main()
File "tracker/track_multiple.py", line 143, in main
output_track_file=None)
File "/segment-any-moving-pytorch/tracker/track.py", line 1046, in track_and_visualize
progress=progress)
File "/segment-any-moving-pytorch/tracker/track.py", line 841, in visualize_tracks
visualized = visualize_image(t)
File "/segment-any-moving-pytorch/tracker/track.py", line 811, in visualize_image
vis_label=vis_label)
File "/segment-any-moving-pytorch/tracker/track.py", line 558, in visualize_detections
border_thick=3)
File "/segment-any-moving-pytorch/utils/vis.py", line 86, in vis_mask
mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
ValueError: not enough values to unpack (expected 3, got 2)
Traceback (most recent call last):
File "release/custom/track.py", line 80, in <module>
main()
File "release/custom/track.py", line 76, in main
subprocess_call(cmd)
File "/segment-any-moving-pytorch/release/helpers/misc.py", line 23, in subprocess_call
subprocess.check_call(cmd, **kwargs)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['python', 'tracker/track_multiple.py', '--images-dir', '/segment-any-moving-pytorch/input', '--detections-dir', '/segment-any-moving-pytorch/output/appearance/detections', '--output-dir', '/segment-any-moving-pytorch/output/appearance/tracks', '--save-numpy', 'True', '--save-images', 'False', '--save-video', 'True', '--bidirectional', '--score-init-min', '0.9', '--fps', '30', '--filename-format', 'frame', '--quiet']' returned non-zero exit status 1.
Traceback (most recent call last):
File "release/custom/run.py", line 72, in <module>
main()
File "release/custom/run.py", line 66, in main
'--output-dir', tracks_dir
File "/segment-any-moving-pytorch/release/helpers/misc.py", line 23, in subprocess_call
subprocess.check_call(cmd, **kwargs)
File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['python', 'release/custom/track.py', '--frames-dir', '/segment-any-moving-pytorch/input', '--detections-dir', '/segment-any-moving-pytorch/output/appearance/detections', '--filename-format', 'frame', '--config', '/segment-any-moving-pytorch/release/config.yaml', '--model', 'appearance', '--output-dir', '/segment-any-moving-pytorch/output/appearance/tracks']' returned non-zero exit status 1.
Thanks!
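The root "expected 3, got 2" error in `utils/vis.py` comes from an OpenCV API change: `cv2.findContours` returns `(image, contours, hierarchy)` in OpenCV 3.x but only `(contours, hierarchy)` in 2.x and 4.x, so code written for 3.x breaks under 4.x. A version-agnostic sketch (the wrapper name is mine, not from this repository) slices the last two elements of the return value, which are the same in every version:

```python
def findcontours_compat(result):
    """Normalize cv2.findContours output across OpenCV versions.

    OpenCV 3.x returns (image, contours, hierarchy); OpenCV 2.x and 4.x
    return (contours, hierarchy). The last two elements are identical in
    every version, so slicing them works under either API.
    """
    contours, hierarchy = result[-2:]
    return contours, hierarchy

# Intended use at the failing call site (cv2 assumed available):
# contours, hierarchy = findcontours_compat(
#     cv2.findContours(mask.copy(), cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE))
```

Alternatively, pinning `opencv-python` to a 3.x release should sidestep the change without touching the code.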
Reference: https://github.com/ChaoningZhang/MobileSAM
Our project performs on par with the original SAM and keeps exactly the same pipeline as the original SAM, except for a change to the image encoder; therefore, it is easy to integrate into any project.
MobileSAM is around 60 times smaller and around 50 times faster than the original SAM, and it is around 7 times smaller and around 5 times faster than the concurrent FastSAM. The comparison of the whole pipeline is summarized as follows:
Best Wishes,
Qiao