Comments (8)
device: cuda
Namespace(Final=True, R0=False, R20=False, colorize_results=False, data_dir='/content/inputs', depthNet=2, max_res=inf, net_receptive_field_size=None, output_dir='/content/drive/MyDrive/outputs_leres/', output_resolution=1, pix2pixsize=1024, savepatchs=0, savewholeest=0)
----------------- Options ---------------
Final: True [default: False]
R0: False
R20: False
aspect_ratio: 1.0
batch_size: 1
checkpoints_dir: ./pix2pix/checkpoints
colorize_results: False
crop_size: 672
data_dir: /content/inputs [default: None]
dataroot: None
dataset_mode: depthmerge
depthNet: 2 [default: None]
direction: AtoB
display_winsize: 256
epoch: latest
eval: False
generatevideo: None
gpu_ids: 0
init_gain: 0.02
init_type: normal
input_nc: 2
isTrain: False [default: None]
load_iter: 0 [default: 0]
load_size: 672
max_dataset_size: 10000
max_res: inf
model: pix2pix4depth
n_layers_D: 3
name: void
ndf: 64
netD: basic
netG: unet_1024
net_receptive_field_size: None
ngf: 64
no_dropout: False
no_flip: False
norm: none
num_test: 50
num_threads: 4
output_dir: /content/drive/MyDrive/outputs_leres/ [default: None]
output_nc: 1
output_resolution: None
phase: test
pix2pixsize: None
preprocess: resize_and_crop
savecrops: None
savewholeest: None
serial_batches: False
suffix:
verbose: False
----------------- End -------------------
initialize network with normal
loading the model from ./pix2pix/checkpoints/mergemodel/latest_net_G.pth
start processing
processing image 0 : 0
wholeImage being processed in : 2688
Adjust factor is: 1.0
Selecting patchs ...
Target resolution: (3024, 5376, 3)
Dynamicly change merged-in resolution; scale: 0.23809523809523808
Resulted depthmap res will be : (720, 1280)
patchs to process: 55
processing patch 0 | [ 0 0 693 693]
processing patch 1 | [ 80 0 693 693]
processing patch 2 | [160 0 693 693]
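The scale factor in the log above follows directly from the target and output resolutions it prints; a quick sanity check (all values copied from the log, nothing assumed):

```python
# Values copied from the log above.
target_h, target_w = 3024, 5376   # "Target resolution: (3024, 5376, 3)"
out_h, out_w = 720, 1280          # "Resulted depthmap res will be : (720, 1280)"

# The merged-in scale is simply output width / target width
# (equivalently, output height / target height):
scale = out_w / target_w
print(scale)  # 0.23809523809523808, matching the "scale:" line in the log
```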
from boostingmonoculardepth.
I mean actually using the model to predict the result, not just resizing.
Any suggestions?
from boostingmonoculardepth.
Can you share the command you use to run the model?
from boostingmonoculardepth.
!python run.py --Final --data_dir /content/inputs --output_dir /content/outputs_leres/ --depthNet 2
Do I need to add any specific parameters?
from boostingmonoculardepth.
I'm just running the code from the Colab example.
from boostingmonoculardepth.
It seems that your input image resolution is (720, 1280). For consistency, and to avoid overly large output image files, we resize our method's output to the input's original resolution [in this case from (3024, 5376, 3) to (720, 1280, 3)]. If you don't want the output resized to the input resolution, consider using --output_resolution 0.
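A minimal sketch of the behaviour described above, assuming (hypothetically; this is not the repository's actual code) that output_resolution == 1 means "resize back to the input resolution" and 0 means "keep the boosted resolution":

```python
import numpy as np

def finalize_depth(depth, input_hw, output_resolution=1):
    """Optionally resize a boosted depth map back to the input's (H, W).

    Nearest-neighbour index selection stands in here for whatever
    interpolation the repository actually uses.
    """
    if output_resolution == 0:       # keep the boosted resolution as-is
        return depth
    h, w = depth.shape
    oh, ow = input_hw
    rows = np.arange(oh) * h // oh   # source row for each output row
    cols = np.arange(ow) * w // ow   # source column for each output column
    return depth[rows[:, None], cols]

# e.g. the (3024, 5376) result from this thread, resized to the (720, 1280) input:
small = finalize_depth(np.zeros((3024, 5376), np.float32), (720, 1280))
```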
from boostingmonoculardepth.
OK.
I think I understand what's happening.
The model processes the image at its input resolution and, to avoid an overly large output, automatically resizes the result after processing.
That works.
from boostingmonoculardepth.
Maybe there is a max resolution limit? I definitely uploaded a super-high-resolution image.
This one: (from https://kids.nationalgeographic.com/geography/states/article/new-york)
size: 3072 × 1728
Thank you for your help. I appreciate it very much.
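Regarding a possible resolution cap: the options dump at the top of this thread shows max_res: inf, so by default no limit is applied. A trivial check against the image size mentioned in this comment:

```python
max_res = float("inf")       # default value from the options dump above
w, h = 3072, 1728            # image size mentioned in this comment
print(max(w, h) <= max_res)  # True: the image is not being capped by max_res
```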
from boostingmonoculardepth.