Comments (10)
Hi DoranLyong,
Thanks for your interest in our work.
Please find attached the extracted frames from the YouTube video, used for creating the gif file.
gif_1.zip
Best,
Giorgos
from dronepose.
Thanks for your fast response, @tzole1155!
I tested with your pretrained Gauss0.1 model mentioned in your ECCVW2020 paper.
However, I still got wrong outputs...
I passed the options as arguments like this:
parser.add_argument("--input_path", type=str, default="./gif_1", help="Path to the root folder containing all the images")
parser.add_argument("--output_path", type=str, default="./output_results", help="Path for saving the blended images")
# model
parser.add_argument("--model", type=str, default="resnet34", help="Model name")
parser.add_argument("--head", type=str, default="continuous", help="Head name")
parser.add_argument("--weights", type=str, default="./pretrained_model/Gauss0.1", help="Path to the trained weights file.")
parser.add_argument("--exocentric_w", type=float, default=0.1, help="Exocentric silhouette supervision loss regulariser.")
parser.add_argument("--colour", type=str, default="red", help="Colour to be used for the final blended image")
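Since everything above is set via defaults, the effective configuration can be double-checked by rebuilding the parser and parsing an empty argument list (a minimal sketch; `infer.py`'s real parser may define more options than these):

```python
import argparse

# Rebuild the same parser to check which values actually reach the model.
parser = argparse.ArgumentParser()
parser.add_argument("--input_path", type=str, default="./gif_1")
parser.add_argument("--output_path", type=str, default="./output_results")
parser.add_argument("--model", type=str, default="resnet34")
parser.add_argument("--head", type=str, default="continuous")
parser.add_argument("--weights", type=str, default="./pretrained_model/Gauss0.1")
parser.add_argument("--exocentric_w", type=float, default=0.1)
parser.add_argument("--colour", type=str, default="red")

# With no CLI overrides, parse_args([]) yields the pure defaults.
args = parser.parse_args([])
print(args.weights)  # -> ./pretrained_model/Gauss0.1
```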
Could you advise me on how to solve this issue?
Hi Lyong,
Could you provide some details about your environment (e.g. PyTorch version, Kaolin, OS, etc.)?
Could you also verify that you have downloaded the weights and set them up properly?
Also, do you use a GPU for running the code?
Apart from these, it would be useful to provide a screenshot from the terminal.
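A short snippet like the following prints the environment details in one go (a sketch using only standard PyTorch attributes; on a CPU-only build, the CUDA/cuDNN fields come back as `None`):

```python
import torch

def runtime_report():
    """Collect the version info most relevant for reproducibility checks."""
    return {
        "torch": torch.__version__,
        "cuda": torch.version.cuda,               # CUDA toolkit torch was built against
        "cudnn": torch.backends.cudnn.version(),  # None on CPU-only builds
        "gpu_available": torch.cuda.is_available(),
    }

print(runtime_report())
```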
Giorgos
Good morning @tzole1155
Let me describe my environment settings:
I downloaded the weights from the links below and chose the Gauss0.1 @ epoch 20 version:
I saved it in a directory named pretrained_model, like:
Finally, I ran the inference code with the following arguments:
It seems that there is some input size issue?!
Hi @DoranLyong,
Yes, something does not seem entirely correct.
Could you print the predicted poses per image and send them to me in order to verify that we get the same network predictions?
You could add a print statement after https://github.com/VCL3D/DronePose/blob/master/infer.py#L71.
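For example, the predictions could be collected into a dict and dumped with `torch.save` (a hypothetical sketch; the names `pred_rot` and `pred_trans` stand in for whatever tensors infer.py actually produces at that line):

```python
import torch

predictions = {}

# Inside the inference loop, after the forward pass, something like:
#   predictions[image_name] = (pred_rot.detach().cpu(), pred_trans.detach().cpu())
# Here we use dummy tensors just to show the file format.
pred_rot = torch.eye(3)
pred_trans = torch.tensor([0.0, 0.0, 1.0])
predictions["frame_0001.png"] = (pred_rot, pred_trans)

torch.save(predictions, "prediction_test.pt")

# Round-trip check: the file loads back into the same structure.
loaded = torch.load("prediction_test.pt")
print(list(loaded.keys()))  # -> ['frame_0001.png']
```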
Best,
Giorgos
Thanks for your help.
I saved the prediction values in a .pt file, along with a test loading script:
- you can download the prediction values from this link.
- the sample code for loading the data is :
import torch

pred = torch.load('prediction_test.pt')
# print(pred.keys())
for key, values in pred.items():
    print("image name:", key)
    pred_rot_matrix = values[0]
    pred_translation = values[1]
    print("rot: ", pred_rot_matrix)
    print("trans: ", pred_translation)
I used the YouTube video frames you gave me before.
I generated the predictions by editing some parts around https://github.com/VCL3D/DronePose/blob/master/infer.py#L71:
Okay, I discovered some reproducibility issues.
I had tested your inference code inside my Docker container.
The issues above are the inference results from inside the Docker container (with conda).
After that, I tested on my local HOST with conda, using the same environment settings,
and I got the correct results.
I set up both environments under the same conditions, following the requirements instructions.
(left) output from the local host, (right) output from the Docker container
Even though I am running exactly the same infer.py code with the same model weights,
I have no idea why the inference model returns different values on the local HOST versus in the Docker container :s.
※ I mounted the local workspace into the Docker container, so the code is identical in both.
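To quantify how far the two environments actually diverge, the two saved prediction dicts can be compared elementwise (a sketch on plain Python lists with made-up values; with real tensors, `torch.allclose` or `(a - b).abs().max()` plays the same role):

```python
def max_abs_diff(a, b):
    """Largest elementwise |a - b| between two equally shaped nested lists."""
    if isinstance(a, (int, float)):
        return abs(a - b)
    return max(max_abs_diff(x, y) for x, y in zip(a, b))

# Hypothetical per-frame (rotation, translation) predictions from each run.
host = {"frame_0001": ([[1.00, 0.00], [0.00, 1.00]], [0.10, 0.20, 1.50])}
docker = {"frame_0001": ([[0.99, 0.01], [0.01, 0.99]], [0.10, 0.21, 1.48])}

for key in host:
    rot_diff = max_abs_diff(host[key][0], docker[key][0])
    trans_diff = max_abs_diff(host[key][1], docker[key][1])
    print(f"{key}: rot diff {rot_diff:.3f}, trans diff {trans_diff:.3f}")
```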
Okay... I think I discovered the reason.
Even though I set up reproducibility as follows:
torch.manual_seed(42)
torch.cuda.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
the Docker results are different and wrong.
When I checked in detail, I found that the two CUDA versions differed:
torch==1.8.1+cu102 (local HOST): correct results
torch==1.8.1+cu111 (Docker container): wrong results
I didn't suspect that the CUDA version could affect the results, but I reinstalled the torch package, switching from torch==1.8.1+cu111 to torch==1.8.1+cu102. Ridiculously, I then got the correct outputs :s ......
I reinstalled:
torch==1.8.1+cu102
torchvision==0.9.1+cu102
I couldn't anticipate that the CUDA version could make such a big difference in the results.
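For completeness: the seeding above makes runs reproducible within one environment, but not bitwise-identical across different CUDA/cuDNN builds (cu102 and cu111 ship different kernels). PyTorch >= 1.8 can at least raise an error when a nondeterministic op is hit (a sketch; per the PyTorch reproducibility notes, `CUBLAS_WORKSPACE_CONFIG` must be set for deterministic cuBLAS on CUDA >= 10.2):

```python
import os
import torch

# Required for deterministic cuBLAS behaviour on CUDA >= 10.2.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.manual_seed(42)
torch.cuda.manual_seed_all(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Raises RuntimeError at runtime if a nondeterministic op is used.
torch.use_deterministic_algorithms(True)
```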
Yes, the official repository also says it is okay to install CUDA >= 10.0.130 :s
Why does the model return bad results with torch==1.8.1+cu111? ...
Hi @DoranLyong,
Thanks for spending time figuring this out.
As mentioned in the README, we have tested our code with Python 3.6 and CUDA 10.1.
Best,
Giorgos