Comments (15)
Hi. To test with your own images, simply read them in as a 4-dimensional PyTorch floating-point tensor of size 1x224x224x3. The raw RGB values in [0, 255] should be divided by a constant factor of 255.0, so that all pixel values fall in the range [0, 1].
from fast-depth.
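A minimal sketch of that loading and scaling step (random data stands in for `plt.imread("img.jpg")`; the dimension ordering is discussed further down in the thread):

```python
import numpy as np

# Stand-in for plt.imread("img.jpg"): raw uint8 RGB values in [0, 255].
raw = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

img = raw / 255.0                  # scale pixel values into [0, 1]
img = np.expand_dims(img, axis=0)  # add a batch dimension -> (1, 224, 224, 3)
print(img.shape, img.min(), img.max())
```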
I tried doing this, but I got this error: "RuntimeError: Given groups=1, weight of size 32 3 3 3, expected input[1, 244, 244, 3] to have 3 channels, but got 244 channels instead". My image is 244x244 and I'm giving it in the right format, as you can see here:
import matplotlib.pyplot as plt
import numpy as np
img = plt.imread("img.jpg")/255.
img.shape
(244, 244, 3)
img = np.expand_dims(img, axis=0)
img.shape
(1, 244, 244, 3)
i = torch.from_numpy(img)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'torch' is not defined
import torch
i = torch.from_numpy(img)
i.shape
torch.Size([1, 244, 244, 3])
So I don't know why this error is occurring if I'm giving it in the exact same format.
The PyTorch conv2d function assumes inputs to be in 'NCHW' format, meaning that the tensor you feed into the network should be of shape [1, 3, 224, 224]. From your code snippet, you may be using a 'NHWC' format -- try permuting the tensor dimensions to change to 'NCHW'.
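A minimal sketch of that permutation, with random data standing in for a real image already scaled to [0, 1]:

```python
import numpy as np
import torch

# Stand-in for a real 224x224 RGB image in NHWC layout, values in [0, 1].
img = np.random.rand(1, 224, 224, 3).astype(np.float32)

x = torch.from_numpy(img)      # shape [1, 224, 224, 3] (NHWC)
x = x.permute(0, 3, 1, 2)      # reorder dimensions to [1, 3, 224, 224] (NCHW)
print(x.shape)
```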
My image is 244x244 and i'm giving the right format
Also, the correct image size is 224 x 224, not 244 x 244
Oh, thanks! It worked. Just one more problem, the results were these:
I placed my input code inside the "args.evaluate" if condition and then saved my results in a .ply file. So my question is whether there is some post-processing needed for a correct depth-map prediction that I forgot to do, or whether it just didn't work for this image.
Have you divided the input RGB values by 255.0, as in this line?
Line 56 in b1266da
Not exactly like this.
This is my input code :
img = plt.imread("img.jpg")/255. # normalization
img = np.reshape(img, (3, 224, 224))
img = np.expand_dims(img, axis=0)
print(img.shape)
with torch.no_grad():
    pred = model(torch.from_numpy(img).float().cuda())
    np.save('pred.npy', pred.cpu())
    print(pred)
import sys
sys.exit(0)
img = np.reshape(img, (3, 224, 224))
I believe it should be a permutation of dimensions here rather than a reshape (which breaks the data ordering). Please try img = np.transpose(img, (2, 0, 1)) and see if it makes a difference.
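A tiny example, using a synthetic 2x2 "image", of why np.reshape scrambles the channels while np.transpose keeps them intact:

```python
import numpy as np

# Tiny 2x2 "image" with 3 channels (HWC layout) to show the difference.
img = np.arange(12).reshape(2, 2, 3)

bad = np.reshape(img, (3, 2, 2))     # just re-reads the buffer; channels get mixed
good = np.transpose(img, (2, 0, 1))  # moves the channel axis to the front (CHW)

# The first channel should be every third value: 0, 3, 6, 9.
print(good[0])   # [[0 3] [6 9]] -- correct
print(bad[0])    # [[0 1] [2 3]] -- wrong: mixes values from all three channels
```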
It worked much better!
I will try other images to get better results. Thanks for the help!
Thanks for the work @dwofk @fangchangma.
I am trying the same thing as @GustavoCamargoRL did.
while True:
    image_cuda = torch.from_numpy(img).float().cuda()
    pred = 0
    print(pred)
    with torch.no_grad():
        pred = model(image_cuda)
        # np.save('pred.npy', pred.cpu())
        print(pred)
The output from the first iteration looks good, but the output changes at each iteration, even with the same input image (see the pic below).
If I kill the thread and re-run the code, the first iteration always gives the same output.
I printed the pred values and confirmed that they do differ from the previous iteration, even with the same input image and the same model.
Is there anything I missed for using the model?
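One generic sanity check for this kind of symptom (a sketch only, not necessarily the cause here): make sure the network is in eval mode, since layers like dropout or batch norm make training-mode forward passes non-deterministic. A toy nn.Sequential stands in for the loaded model:

```python
import torch
import torch.nn as nn

# Stand-in for the loaded network; dropout makes training-mode outputs stochastic.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Dropout(0.5))
model.eval()  # freeze dropout / batch-norm behavior for inference

x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    a = model(x)
    b = model(x)

# In eval mode, two passes over the same input should match exactly.
print(torch.equal(a, b))  # True
```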
@GustavoCamargoRL Do you have the same issue?
@mathmax12 Have you done this using Apache TVM?
@LulaSan It turns out this was caused by TVM; the latest TVM fixed it.
@mathmax12 OK, thank you. Can I ask how you visualize the results? By using their visualize.py code?
You can save the results as in https://github.com/dwofk/fast-depth/blob/master/main.py#L98, or use cv2.imshow() to display them.
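A hedged sketch of saving a depth prediction as a colorized image (synthetic data stands in for an actual prediction; the repo's own saving utilities may differ in detail):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a real 224x224 depth prediction, e.g. pred.squeeze().cpu().numpy().
depth = np.random.rand(224, 224)

# Normalize to [0, 1] and save with a colormap for easy visual inspection.
d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
plt.imsave("depth_vis.png", d, cmap="viridis")
```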