
camp_zipnerf's Issues

Camera to worlds

Hi,

Thank you for the open-sourced code.

I am digging the code to understand how to render an image.

However, when I checked the dataset.camtoworlds matrix, the z-axis translations are always 0. The image below shows a piece of the matrix values:

[image: excerpt of the camtoworlds matrix values]

I expected it to contain x, y, and z values.

Can anyone help me understand this?
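
For context, here is a minimal sketch of the usual camera-to-world layout (the common 3x4 [R | t] convention; this is illustrative and not necessarily the exact representation used in this repo):

import numpy as np

# [R | t]: a 3x3 rotation plus a translation column. The world-space camera
# center is the last column, so a ~0 value there just means the camera centers
# happen to lie near that plane after the dataset's recentering/normalization.
camtoworld = np.array([
    [1.0, 0.0, 0.0,  0.2],
    [0.0, 1.0, 0.0, -0.5],
    [0.0, 0.0, 1.0,  0.0],  # a near-zero z translation can be legitimate
])
rotation = camtoworld[:3, :3]
translation = camtoworld[:3, 3]
print('camera position in world space:', translation)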

Thank you

problem while training

oject/gwang/ankan/zipnerf/data/tools/images
*** Loaded 58 images from disk
*** Loaded EXIF data for 58 images
*** Constructed COLMAP-to-world transform.
*** Constructed 120 render poses via ellipse path
*** Constructed train/test split: #train=15 #test=43
*** LLFF successfully loaded!
*** split=DataSplit.TRAIN
*** #images/poses/exposures=15
*** #camtoworlds=15
*** resolution=(2048, 3840)
I0202 13:09:14.556802 23456247948864 train.py:151] Optimization parameter sizes/counts:
I0202 13:09:14.583516 23456247948864 train.py:153] grid_0 6590464
I0202 13:09:14.583668 23456247948864 train.py:153] grid_0/grid_016 (16, 16, 16, 1)
I0202 13:09:14.583768 23456247948864 train.py:153] grid_0/grid_032 (32, 32, 32, 1)
I0202 13:09:14.583875 23456247948864 train.py:153] grid_0/grid_064 (64, 64, 64, 1)
I0202 13:09:14.584216 23456247948864 train.py:153] grid_0/grid_128 (128, 128, 128, 1)
I0202 13:09:14.584261 23456247948864 train.py:153] grid_0/hash_256 (2097152, 1)
I0202 13:09:14.584302 23456247948864 train.py:153] grid_0/hash_512 (2097152, 1)
I0202 13:09:14.585829 23456247948864 train.py:153] MLP_0 897
I0202 13:09:14.585915 23456247948864 train.py:153] MLP_0/Dense_0 832
I0202 13:09:14.585976 23456247948864 train.py:153] MLP_0/Dense_0/kernel (12, 64)
I0202 13:09:14.586011 23456247948864 train.py:153] MLP_0/Dense_0/bias (64,)
I0202 13:09:14.586069 23456247948864 train.py:153] MLP_0/Dense_1 65
I0202 13:09:14.586101 23456247948864 train.py:153] MLP_0/Dense_1/kernel (64, 1)
I0202 13:09:14.586128 23456247948864 train.py:153] MLP_0/Dense_1/bias (1,)
I0202 13:09:14.586176 23456247948864 train.py:153] grid_1 10784768
I0202 13:09:14.586209 23456247948864 train.py:153] grid_1/grid_0016 (16, 16, 16, 1)
I0202 13:09:14.586237 23456247948864 train.py:153] grid_1/grid_0032 (32, 32, 32, 1)
I0202 13:09:14.586264 23456247948864 train.py:153] grid_1/grid_0064 (64, 64, 64, 1)
I0202 13:09:14.586291 23456247948864 train.py:153] grid_1/grid_0128 (128, 128, 128, 1)
I0202 13:09:14.586317 23456247948864 train.py:153] grid_1/hash_0256 (2097152, 1)
I0202 13:09:14.586342 23456247948864 train.py:153] grid_1/hash_0512 (2097152, 1)
I0202 13:09:14.586367 23456247948864 train.py:153] grid_1/hash_1024 (2097152, 1)
I0202 13:09:14.586392 23456247948864 train.py:153] grid_1/hash_2048 (2097152, 1)
I0202 13:09:14.587173 23456247948864 train.py:153] MLP_1 1153
I0202 13:09:14.587485 23456247948864 train.py:153] MLP_1/Dense_0 1088
I0202 13:09:14.587525 23456247948864 train.py:153] MLP_1/Dense_0/kernel (16, 64)
I0202 13:09:14.587555 23456247948864 train.py:153] MLP_1/Dense_0/bias (64,)
I0202 13:09:14.587608 23456247948864 train.py:153] MLP_1/Dense_1 65
I0202 13:09:14.587638 23456247948864 train.py:153] MLP_1/Dense_1/kernel (64, 1)
I0202 13:09:14.587666 23456247948864 train.py:153] MLP_1/Dense_1/bias (1,)
I0202 13:09:14.587712 23456247948864 train.py:153] grid_2 59916288
I0202 13:09:14.587743 23456247948864 train.py:153] grid_2/grid_0016 (16, 16, 16, 4)
I0202 13:09:14.587779 23456247948864 train.py:153] grid_2/grid_0032 (32, 32, 32, 4)
I0202 13:09:14.587811 23456247948864 train.py:153] grid_2/grid_0064 (64, 64, 64, 4)
I0202 13:09:14.587838 23456247948864 train.py:153] grid_2/grid_0128 (128, 128, 128, 4)
I0202 13:09:14.587872 23456247948864 train.py:153] grid_2/hash_0256 (2097152, 4)
I0202 13:09:14.587897 23456247948864 train.py:153] grid_2/hash_0512 (2097152, 4)
I0202 13:09:14.587923 23456247948864 train.py:153] grid_2/hash_1024 (2097152, 4)
I0202 13:09:14.587947 23456247948864 train.py:153] grid_2/hash_2048 (2097152, 4)
I0202 13:09:14.587972 23456247948864 train.py:153] grid_2/hash_4096 (2097152, 4)
I0202 13:09:14.587997 23456247948864 train.py:153] grid_2/hash_8192 (2097152, 4)
I0202 13:09:14.588317 23456247948864 train.py:153] MLP_2 225877
I0202 13:09:14.588375 23456247948864 train.py:153] MLP_2/Dense_0 3264
I0202 13:09:14.588407 23456247948864 train.py:153] MLP_2/Dense_0/kernel (50, 64)
I0202 13:09:14.588435 23456247948864 train.py:153] MLP_2/Dense_0/bias (64,)
I0202 13:09:14.588488 23456247948864 train.py:153] MLP_2/Dense_1 65
I0202 13:09:14.588523 23456247948864 train.py:153] MLP_2/Dense_1/kernel (64, 1)
I0202 13:09:14.588550 23456247948864 train.py:153] MLP_2/Dense_1/bias (1,)
I0202 13:09:14.588595 23456247948864 train.py:153] MLP_2/Dense_2 16640
I0202 13:09:14.588625 23456247948864 train.py:153] MLP_2/Dense_2/kernel (64, 256)
I0202 13:09:14.588653 23456247948864 train.py:153] MLP_2/Dense_2/bias (256,)
I0202 13:09:14.588695 23456247948864 train.py:153] MLP_2/Dense_3 72704
I0202 13:09:14.588736 23456247948864 train.py:153] MLP_2/Dense_3/kernel (283, 256)
I0202 13:09:14.588765 23456247948864 train.py:153] MLP_2/Dense_3/bias (256,)
I0202 13:09:14.589181 23456247948864 train.py:153] MLP_2/Dense_4 65792
I0202 13:09:14.589230 23456247948864 train.py:153] MLP_2/Dense_4/kernel (256, 256)
I0202 13:09:14.589262 23456247948864 train.py:153] MLP_2/Dense_4/bias (256,)
I0202 13:09:14.589311 23456247948864 train.py:153] MLP_2/Dense_5 65792
I0202 13:09:14.589342 23456247948864 train.py:153] MLP_2/Dense_5/kernel (256, 256)
I0202 13:09:14.589369 23456247948864 train.py:153] MLP_2/Dense_5/bias (256,)
I0202 13:09:14.589750 23456247948864 train.py:153] MLP_2/Dense_6 1620
I0202 13:09:14.589790 23456247948864 train.py:153] MLP_2/Dense_6/kernel (539, 3)
I0202 13:09:14.589818 23456247948864 train.py:153] MLP_2/Dense_6/bias (3,)
I0202 13:09:15.044798 23456247948864 checkpoints.py:1101] Found no checkpoint files in /project/gwang/ankan/zipnerf/ckpt/tools with prefix checkpoint_
/home/ad892/anaconda3/envs/camp_zipnerf/lib/python3.11/site-packages/jax/_src/xla_bridge.py:945: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code.
warnings.warn(
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/mmfs1/project/gwang/ankan/zipnerf/train.py", line 557, in
app.run(main)
File "/home/ad892/anaconda3/envs/camp_zipnerf/lib/python3.11/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/ad892/anaconda3/envs/camp_zipnerf/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
^^^^^^^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/train.py", line 210, in main
state, stats, rngs = train_pstep(rngs, state, batch, cameras, train_frac) # pytype: disable=wrong-arg-types # jnp-type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/internal/train_utils.py", line 476, in train_step
(_, (stats, mutable_camera_params)), grad = loss_grad_fn(state.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/internal/train_utils.py", line 357, in loss_fn
renderings, ray_history = model.apply(
^^^^^^^^^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/internal/models.py", line 279, in call
ray_results = mlp(
^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/internal/models.py", line 779, in call
raw_density, x = predict_density(means, covs, **predict_density_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mmfs1/project/gwang/ankan/zipnerf/internal/models.py", line 682, in predict_density
grid(
File "/mmfs1/project/gwang/ankan/zipnerf/internal/grid_utils.py", line 228, in call
values = self.param(f'{datastructure}_{grid_size_str}', init_fn)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
flax.errors.ScopeParamNotFoundError: Could not find parameter named "grid_0000" in scope "/grid_1". (https://flax.readthedocs.io/en/latest/api_reference/flax.errors.html#flax.errors.ScopeParamNotFoundError)

For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.

Test-time optimization

Hi,

Really nice work on pose optimization. However, in order to evaluate the final rendering quality, I believe a test-time optimization is carried out after training. Could you provide the config we need for the synthetic NeRF perturbation experiments?

Thanks!

Apple Metal support

Very excited this has been released.
Will it work on an M3 Max?
If not, any plans to support Macs?

Final result after render

I have run:

python -m train \
    --gin_configs=configs/zipnerf/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'"

python -m eval \
    --gin_configs=configs/zipnerf/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'"

python -m render \
    --gin_configs=configs/zipnerf/360.gin \
    --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
    --gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'" \
    --gin_bindings="Config.render_dir = '${CHECKPOINT_DIR}/render/'" \
    --gin_bindings="Config.render_path = True" \
    --gin_bindings="Config.render_path_frames = 240" \
    --gin_bindings="Config.render_video_fps = 30"

python scripts/zipnerf/generate_tables_360.py 

Now I have 4 videos in the render folder that was created.

Screenshot from 2024-01-22 20-24-38

The output says there are 2 videos missing as certain files were not created during the process.

I0122 20:01:14.943336 140617853252672 render.py:110] Rendered in 11.274s
I0122 20:01:15.526105 140617853252672 render.py:138] Creating videos.
I0122 20:01:15.769567 140617853252672 videos_utils.py:95] Video shape is (840, 1297)
I0122 20:01:15.769668 140617853252672 videos_utils.py:134] Making video /home/user/home/user/Documents/workspace/camp_zipnerf/checkpoints/360_v2_garden_1297x840//render/path_renders_step_200000/videos/360_v2_garden_1297x840_checkpoints_path_renders_step_200000_color.mp4...
I0122 20:01:29.177273 140617853252672 videos_utils.py:132] Images missing for tag normals
I0122 20:01:29.177362 140617853252672 videos_utils.py:132] Images missing for tag normals_rectified
I0122 20:01:29.177408 140617853252672 videos_utils.py:134] Making video /home/user/home/user/Documents/workspace/camp_zipnerf/checkpoints/360_v2_garden_1297x840//render/path_renders_step_200000/videos/360_v2_garden_1297x840_checkpoints_path_renders_step_200000_acc.mp4...
I0122 20:01:32.633051 140617853252672 videos_utils.py:134] Making video /home/user/home/user/Documents/workspace/camp_zipnerf/checkpoints/360_v2_garden_1297x840//render/path_renders_step_200000/videos/360_v2_garden_1297x840_checkpoints_path_renders_step_200000_distance_mean.mp4...
I0122 20:01:49.162443 140617853252672 videos_utils.py:134] Making video /home/user/home/user/Documents/workspace/camp_zipnerf/checkpoints/360_v2_garden_1297x840//render/path_renders_step_200000/videos/360_v2_garden_1297x840_checkpoints_path_renders_step_200000_distance_median.mp4...

All 4 videos are just blank black screens for a few seconds, and the files seem corrupt according to GitHub, as you can see below. The images in the render folder do show content, but probably not as intended. I don't know what went wrong. The dataset and COLMAP output have been used extensively with other repos without issues.

360_v2_garden_1297x840_checkpoints_path_renders_step_200000_distance_mean.mp4

color_213
color_214
color_215

Datasets

Thanks for the great work and code release. Currently trying out a test with the gardenvase dataset.

Not an "issue" with your code to report at the moment, but in this repo you do not refer to some of the datasets which are used in the examples, namely the NYC apartment, Alameda home, etc. Those are restricted?

About Large-scale Dataset Release

Congrats on the impressive work! I believe it will inspire numerous future works.
Also, sorry for the unrelated question, but I would like to know whether there are any plans to release the large-scale datasets shown in the Zip-NeRF teaser video and described in SMERF (Berlin, NYC, Alameda, London)?
Thanks!

Is there any interactive viewer to see results?

Hi,

Thank you so much for your work.

I am testing it and getting nice rendering results.

However, it would be wonderful if I could render and inspect results in an interactive viewer similar to nerfstudio. Is there any viewer that would help me do that, or can you guide me on how to do it?

Thank you in advance; I hope to get your help.

Noob questions

Hi, I hope you don't mind, but could you tell me whether any of the errors/warnings below, selected and copied during the process, are problematic and/or could hurt the performance of your code on my system?

E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered

E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered

E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered

W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

I0124 13:07:20.764660 140638609667136 xla_bridge.py:660] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: CUDA

I0124 13:07:20.768469 140638609667136 xla_bridge.py:660] Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory

Warning: image_path not found for reconstruction

I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.

And one last question: the output of generate_tables_360.py looks like this in my terminal; is the formatting off because there is only one result in my set?

Average Results:
\hline
Our Model &    \cellcolor{red}28.30 &    \cellcolor{red}0.864 & 8.02
Per-Scene Results:
psnr
 & \textit{360_v2_garden_1297x840/checkpoints} \\\hline
\hline
Our Model &    \cellcolor{red}28.30
ssim
 & \textit{360_v2_garden_1297x840/checkpoints} \\\hline
\hline
Our Model &    \cellcolor{red}0.864

Jax Environment for CUDA 11.6

Thanks to the authors for their outstanding work. However, because JAX is tightly coupled to specific CUDA versions, the default environment is only compatible with CUDA 11.8 and CUDA 12.2. Here, I provide an environment that works with CUDA 11.3 through CUDA 11.7.

conda create -n camp_zipnerf python=3.10
conda activate camp_zipnerf 

Then, pip install the following requirements:

numpy==1.26.3
jax==0.4.6
jaxlib==0.4.6
flax==0.6.1
opencv-python==4.9.0.80
pillow==10.2.0
tensorboard==2.10.1
tensorflow==2.10.0
gin-config==0.5.0
dm-pix==0.4.2
rawpy==0.19.0
mediapy==1.2.0
immutabledict==4.1.0
ml_collections
jaxcam==0.1.1
chex==0.1.7

Finally, it's necessary to install the CUDA version of jaxlib to enable GPU-accelerated training.

# python 3.10
wget https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.4.6+cuda11.cudnn82-cp310-cp310-manylinux2014_x86_64.whl
pip install jaxlib-0.4.6+cuda11.cudnn82-cp310-cp310-manylinux2014_x86_64.whl

# python 3.11
wget https://storage.googleapis.com/jax-releases/cuda11/jaxlib-0.4.6+cuda11.cudnn82-cp311-cp311-manylinux2014_x86_64.whl
pip install jaxlib-0.4.6+cuda11.cudnn82-cp311-cp311-manylinux2014_x86_64.whl

After that, you may need to replace flax.core.copy with flax.core.FrozenDict.copy in internal/train_utils.py to save checkpoints correctly.
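
For reference, a minimal sketch of that substitution (the dictionaries here are illustrative; the actual call site is in internal/train_utils.py):

import flax

params = flax.core.FrozenDict({'w': 1.0})
updates = {'b': 0.0}

# Newer flax versions expose a module-level helper:
#   merged = flax.core.copy(params, updates)
# With the flax==0.6.1 pinned above, call the FrozenDict method instead:
merged = flax.core.FrozenDict.copy(params, updates)
print(merged)  # FrozenDict({'w': 1.0, 'b': 0.0})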

flax.errors.ScopeParamShapeError at executing 360_eval.sh or 360_render.sh after executing camp/360_train.sh

Hi, thanks for this great paper and code.
After running camp/360_train.sh, I am trying to run 360_eval.sh or 360_render.sh, but a flax.errors.ScopeParamShapeError always occurs.
Is there a solution?
Below is the console log when running 360_render.sh.

2024-04-25 05:47:31.916448: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-04-25 05:47:31.916484: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-04-25 05:47:31.917323: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-04-25 05:47:32.577797: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
I0425 05:47:41.449953 140104660227264 render.py:144] Rendering config:

Parameters for eval/Config:

==============================================================================

eval/Config.adam_beta1 = 0.9
eval/Config.adam_beta2 = 0.99
eval/Config.adam_eps = 1e-15
eval/Config.arcore_format_pose_file = None
eval/Config.autoexpose_renders = False
eval/Config.batch_size = 8192
eval/Config.batching = 'all_images'
eval/Config.camera_perturb_dolly_use_average = False
eval/Config.camera_perturb_intrinsic_single = True
eval/Config.camera_perturb_sigma_dolly_z = 0.0
eval/Config.camera_perturb_sigma_focal_length = 0.0
eval/Config.camera_perturb_sigma_look_at = 0.0
eval/Config.camera_perturb_sigma_position = 0.0
eval/Config.camera_perturb_zero_distortion = False
eval/Config.cast_rays_in_eval_step = True
eval/Config.cast_rays_in_train_step = True
eval/Config.charb_padding = 0.001
eval/Config.checkpoint_dir =
'/home/user/camp_zipnerf_output/zipnerf/360/garden'
eval/Config.checkpoint_every = 10000
eval/Config.checkpoint_init = False
eval/Config.checkpoint_keep = 2
eval/Config.colmap_subdir = None
eval/Config.compute_disp_metrics = False
eval/Config.compute_normal_metrics = False
eval/Config.compute_procrustes_metric = False
eval/Config.data_coarse_loss_mult = 0.0
eval/Config.data_dir = '/home/user/data/360_v2/garden'
eval/Config.data_loss_mult = 1.0
eval/Config.data_loss_type = 'charb'
eval/Config.dataset_loader = 'llff'
eval/Config.debug_mode = False
eval/Config.deterministic_showcase = True
eval/Config.disable_multiscale_loss = False
eval/Config.disable_pmap_and_jit = False
eval/Config.distortion_loss_curve_fn =
(@math.power_ladder, {'p': -0.25, 'premult': 10000.0})
eval/Config.distortion_loss_mult = 0.01
eval/Config.distortion_loss_target = 'tdist'
eval/Config.donate_args_to_train = True
eval/Config.dtu_light_cond = 3
eval/Config.early_exit_steps = None
eval/Config.eikonal_coarse_loss_mult = 0.0
eval/Config.eikonal_loss_mult = 0.0
eval/Config.enable_grid_c2f = False
eval/Config.enable_loss_scaler = False
eval/Config.eval_crop_borders = 0
eval/Config.eval_dataset_limit = 2147483647
eval/Config.eval_only_once = True
eval/Config.eval_quantize_metrics = True
eval/Config.eval_raw_affine_cc = False
eval/Config.eval_render_interval = 1
eval/Config.eval_save_output = True
eval/Config.eval_save_ray_data = False
eval/Config.exposure_percentile = 97.0
eval/Config.factor = 4
eval/Config.far = 1000000.0
eval/Config.far_plane_meters = None
eval/Config.focal_length_var_loss_mult = 0.0
eval/Config.forward_facing = False
eval/Config.gc_every = 10000
eval/Config.grad_max_norm = 0.0
eval/Config.grad_max_val = 0.0
eval/Config.grid_c2f_weight_method = 'cosine_sequential'
eval/Config.image_subdir = None
eval/Config.jax_rng_seed = 20200823
eval/Config.llff_load_from_poses_bounds = False
eval/Config.llff_use_all_images_for_training = False
eval/Config.llffhold = 8
eval/Config.load_alphabetical = True
eval/Config.load_colmap_points = False
eval/Config.load_ngp_format_poses = False
eval/Config.lock_up = False
eval/Config.loss_scale = 1000.0
eval/Config.lr_delay_mult = 1e-08
eval/Config.lr_delay_steps = 20000
eval/Config.lr_final = 0.000125
eval/Config.lr_final_grid = None
eval/Config.lr_init = 0.00125
eval/Config.lr_init_grid = None
eval/Config.max_steps = 200000
eval/Config.multiscale_train_factors = None
eval/Config.near = 0.0
eval/Config.near_plane_meters = None
eval/Config.np_rng_seed = 20201473
eval/Config.num_border_pixels_to_mask = 0
eval/Config.num_showcase_images = 5
eval/Config.optimize_cameras = False
eval/Config.optimize_test_cameras = False
eval/Config.optimize_test_cameras_batch_size = 10000
eval/Config.optimize_test_cameras_for_n_steps = 200
eval/Config.optimize_test_cameras_lr = 0.001
eval/Config.orientation_coarse_loss_mult = 0.0
eval/Config.orientation_loss_mult = 0.0
eval/Config.orientation_loss_target = 'normals_pred'
eval/Config.param_regularizers =
{'grid_0': (0.1, @jnp.mean, 2, 1),
'grid_1': (0.1, @jnp.mean, 2, 1),
'grid_2': (0.1, @jnp.mean, 2, 1)}
eval/Config.patch_size = 1
eval/Config.predicted_normal_coarse_loss_mult = 0.0
eval/Config.predicted_normal_loss_mult = 0.0
eval/Config.principal_point_reg_loss_mult = 0.0
eval/Config.principal_point_var_loss_mult = 0.0
eval/Config.print_camera_every = 500
eval/Config.print_every = 100
eval/Config.rad_mult_max = 1.0
eval/Config.rad_mult_min = 1.0
eval/Config.radial_distortion_var_loss_mult = 0.0
eval/Config.randomized = True
eval/Config.rawnerf_mode = False
eval/Config.render_calibration_distance = 3.0
eval/Config.render_calibration_keyframes = None
eval/Config.render_camtype = None
eval/Config.render_chunk_size = 32768
eval/Config.render_delete_images_when_done = True
eval/Config.render_dir =
'/home/user/camp_zipnerf_output/zipnerf/360/garden/render/'
eval/Config.render_dist_adaptive = False
eval/Config.render_dist_percentile = 0.5
eval/Config.render_focal = None
eval/Config.render_looped_videos = False
eval/Config.render_path = True
eval/Config.render_path_file = None
eval/Config.render_path_frames = 480
eval/Config.render_resolution = None
eval/Config.render_rgb_only = False
eval/Config.render_rotate_xaxis = 0.0
eval/Config.render_rotate_yaxis = 0.0
eval/Config.render_spherical = False
eval/Config.render_spline_const_speed = False
eval/Config.render_spline_degree = 5
eval/Config.render_spline_fixed_up = False
eval/Config.render_spline_interpolate_exposure = False
eval/Config.render_spline_interpolate_exposure_smoothness = 20
eval/Config.render_spline_keyframes = None
eval/Config.render_spline_keyframes_choices = None
eval/Config.render_spline_lock_up = False
eval/Config.render_spline_lookahead_i = None
eval/Config.render_spline_meters_per_sec = None
eval/Config.render_spline_n_buffer = None
eval/Config.render_spline_n_interp = 30
eval/Config.render_spline_outlier_keyframe_multiplier = None
eval/Config.render_spline_outlier_keyframe_quantile = None
eval/Config.render_spline_rot_weight = 0.1
eval/Config.render_spline_smoothness = 0.03
eval/Config.render_video_crf = 18
eval/Config.render_video_exts = ('mp4',)
eval/Config.render_video_fps = 60
eval/Config.robust_loss_scale = 0.01
eval/Config.save_calibration_to_disk = False
eval/Config.scene_bbox = None
eval/Config.spline_interlevel_params = {'blurs': (0.03, 0.003), 'mults': 0.01}
eval/Config.train_render_every = 0
eval/Config.transform_poses_fn = None
eval/Config.use_exrs = False
eval/Config.use_identity_cameras = False
eval/Config.use_perturbed_cameras = False
eval/Config.use_tiffs = False
eval/Config.vis_decimate = 0
eval/Config.vis_num_rays = 16
eval/Config.visualize_every = 10000
eval/Config.vocab_tree_path = None
eval/Config.world_scale = 1.0
eval/Config.z_max = None
eval/Config.z_min = None
eval/Config.z_phase = 0.0
eval/Config.z_variation = 0.0

I0425 05:47:41.546334 140104660227264 xla_bridge.py:660] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: CUDA
I0425 05:47:41.548914 140104660227264 xla_bridge.py:660] Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
*** using 4x downsampled images
*** Finding COLMAP data
*** Constructing NeRF Scene Manager
Warning: image_path not found for reconstruction
*** Processing COLMAP data
*** Loaded camera parameters for 185 images
*** image names sorted alphabetically
*** Loading images from /home/user/data/360_v2/garden/images_4
*** Loaded 185 images from disk
*** Loaded EXIF data for 185 images
*** Constructed COLMAP-to-world transform.
*** Constructed 480 render poses via ellipse path
*** Constructed train/test split: #train=161 #test=24
*** LLFF successfully loaded!
*** split=DataSplit.TEST
*** #images/poses/exposures=24
*** #camtoworlds=480
*** resolution=(840, 1297)
I0425 05:49:08.581712 140104660227264 checkpoints.py:1062] Restoring orbax checkpoint from //home/user/camp_zipnerf_output/zipnerf/360/garden/checkpoint_200000
I0425 05:49:08.584176 140104660227264 checkpointer.py:164] Restoring item from /home/user/camp_zipnerf_output/zipnerf/360/garden/checkpoint_200000.
W0425 05:49:09.925947 140104660227264 transform_utils.py:229] The transformations API will eventually be replaced by an upgraded design. The current API will not be removed until this point, but it will no longer be actively worked on.
I0425 05:49:09.927694 140104660227264 transform_utils.py:286] The following keys are not loaded from the original tree after applying specified transforms: opt_state/0/count, opt_state/0/mu/params/MLP_0/Dense_0/bias, opt_state/0/mu/params/MLP_0/Dense_0/kernel, opt_state/0/mu/params/MLP_0/Dense_1/bias, opt_state/0/mu/params/MLP_0/Dense_1/kernel, opt_state/0/mu/params/MLP_1/Dense_0/bias, opt_state/0/mu/params/MLP_1/Dense_0/kernel, opt_state/0/mu/params/MLP_1/Dense_1/bias, opt_state/0/mu/params/MLP_1/Dense_1/kernel, opt_state/0/mu/params/MLP_2/Dense_0/bias, opt_state/0/mu/params/MLP_2/Dense_0/kernel, opt_state/0/mu/params/MLP_2/Dense_1/bias, opt_state/0/mu/params/MLP_2/Dense_1/kernel, opt_state/0/mu/params/MLP_2/Dense_2/bias, opt_state/0/mu/params/MLP_2/Dense_2/kernel, opt_state/0/mu/params/MLP_2/Dense_3/bias, opt_state/0/mu/params/MLP_2/Dense_3/kernel, opt_state/0/mu/params/MLP_2/Dense_4/bias, opt_state/0/mu/params/MLP_2/Dense_4/kernel, opt_state/0/mu/params/MLP_2/Dense_5/bias, opt_state/0/mu/params/MLP_2/Dense_5/kernel, opt_state/0/mu/params/MLP_2/Dense_6/bias, opt_state/0/mu/params/MLP_2/Dense_6/kernel, opt_state/0/mu/params/grid_0/grid_016, opt_state/0/mu/params/grid_0/grid_032, opt_state/0/mu/params/grid_0/grid_064, opt_state/0/mu/params/grid_0/grid_128, opt_state/0/mu/params/grid_0/hash_256, opt_state/0/mu/params/grid_0/hash_512, opt_state/0/mu/params/grid_1/grid_0016, opt_state/0/mu/params/grid_1/grid_0032, opt_state/0/mu/params/grid_1/grid_0064, opt_state/0/mu/params/grid_1/grid_0128, opt_state/0/mu/params/grid_1/hash_0256, opt_state/0/mu/params/grid_1/hash_0512, opt_state/0/mu/params/grid_1/hash_1024, opt_state/0/mu/params/grid_1/hash_2048, opt_state/0/mu/params/grid_2/grid_0016, opt_state/0/mu/params/grid_2/grid_0032, opt_state/0/mu/params/grid_2/grid_0064, opt_state/0/mu/params/grid_2/grid_0128, opt_state/0/mu/params/grid_2/hash_0256, opt_state/0/mu/params/grid_2/hash_0512, opt_state/0/mu/params/grid_2/hash_1024, opt_state/0/mu/params/grid_2/hash_2048, opt_state/0/mu/params/grid_2/hash_4096, opt_state/0/mu/params/grid_2/hash_8192, opt_state/0/nu/params/MLP_0/Dense_0/bias, opt_state/0/nu/params/MLP_0/Dense_0/kernel, opt_state/0/nu/params/MLP_0/Dense_1/bias, opt_state/0/nu/params/MLP_0/Dense_1/kernel, opt_state/0/nu/params/MLP_1/Dense_0/bias, opt_state/0/nu/params/MLP_1/Dense_0/kernel, opt_state/0/nu/params/MLP_1/Dense_1/bias, opt_state/0/nu/params/MLP_1/Dense_1/kernel, opt_state/0/nu/params/MLP_2/Dense_0/bias, opt_state/0/nu/params/MLP_2/Dense_0/kernel, opt_state/0/nu/params/MLP_2/Dense_1/bias, opt_state/0/nu/params/MLP_2/Dense_1/kernel, opt_state/0/nu/params/MLP_2/Dense_2/bias, opt_state/0/nu/params/MLP_2/Dense_2/kernel, opt_state/0/nu/params/MLP_2/Dense_3/bias, opt_state/0/nu/params/MLP_2/Dense_3/kernel, opt_state/0/nu/params/MLP_2/Dense_4/bias, opt_state/0/nu/params/MLP_2/Dense_4/kernel, opt_state/0/nu/params/MLP_2/Dense_5/bias, opt_state/0/nu/params/MLP_2/Dense_5/kernel, opt_state/0/nu/params/MLP_2/Dense_6/bias, opt_state/0/nu/params/MLP_2/Dense_6/kernel, opt_state/0/nu/params/grid_0/grid_016, opt_state/0/nu/params/grid_0/grid_032, opt_state/0/nu/params/grid_0/grid_064, opt_state/0/nu/params/grid_0/grid_128, opt_state/0/nu/params/grid_0/hash_256, opt_state/0/nu/params/grid_0/hash_512, opt_state/0/nu/params/grid_1/grid_0016, opt_state/0/nu/params/grid_1/grid_0032, opt_state/0/nu/params/grid_1/grid_0064, opt_state/0/nu/params/grid_1/grid_0128, opt_state/0/nu/params/grid_1/hash_0256, opt_state/0/nu/params/grid_1/hash_0512, opt_state/0/nu/params/grid_1/hash_1024, 
opt_state/0/nu/params/grid_1/hash_2048, opt_state/0/nu/params/grid_2/grid_0016, opt_state/0/nu/params/grid_2/grid_0032, opt_state/0/nu/params/grid_2/grid_0064, opt_state/0/nu/params/grid_2/grid_0128, opt_state/0/nu/params/grid_2/hash_0256, opt_state/0/nu/params/grid_2/hash_0512, opt_state/0/nu/params/grid_2/hash_1024, opt_state/0/nu/params/grid_2/hash_2048, opt_state/0/nu/params/grid_2/hash_4096, opt_state/0/nu/params/grid_2/hash_8192, opt_state/1/count
I0425 05:49:09.967541 140104660227264 checkpointer.py:166] Finished restoring checkpoint from /home/user/camp_zipnerf_output/zipnerf/360/garden/checkpoint_200000.
I0425 05:49:09.968436 140104660227264 render.py:61] Rendering checkpoint at step 200000.
/home/user/miniconda3/envs/camp_zipnerf/lib/python3.11/site-packages/jax/_src/xla_bridge.py:945: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code.
warnings.warn(
I0425 05:49:10.007920 140104660227264 render.py:96] Evaluating image 1/480
I0425 05:49:10.008137 140104660227264 models.py:1046] Rendering chunk 1/34
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/home/user/_NeRF_Test/camp_zipnerf/render.py", line 199, in
app.run(main)
File "/home/user/miniconda3/envs/camp_zipnerf/lib/python3.11/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/user/miniconda3/envs/camp_zipnerf/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
^^^^^^^^^^
File "/home/user/_NeRF_Test/camp_zipnerf/render.py", line 194, in main
render_config(config)
File "/home/user/_NeRF_Test/camp_zipnerf/render.py", line 155, in render_config
render_pipeline(config)
File "/home/user/_NeRF_Test/camp_zipnerf/render.py", line 99, in render_pipeline
rendering = models.render_image( # pytype: disable=wrong-arg-types # jnp-array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/_NeRF_Test/camp_zipnerf/internal/models.py", line 1085, in render_image
chunk_renderings, _ = render_fn(rng, chunk_rays)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/_NeRF_Test/camp_zipnerf/internal/train_utils.py", line 770, in render_eval_fn
model.apply(
File "/home/user/_NeRF_Test/camp_zipnerf/internal/models.py", line 279, in call
ray_results = mlp(
^^^^
File "/home/user/_NeRF_Test/camp_zipnerf/internal/models.py", line 779, in call
raw_density, x = predict_density(means, covs, **predict_density_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/_NeRF_Test/camp_zipnerf/internal/models.py", line 733, in predict_density
x = density_dense_layer(self.net_width)(x)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/camp_zipnerf/lib/python3.11/site-packages/flax/linen/linear.py", line 235, in call
kernel = self.param(
^^^^^^^^^^^
flax.errors.ScopeParamShapeError: Initializer expected to generate shape (36, 64) but got shape (12, 64) instead for parameter "kernel" in "/MLP_0/Dense_0". (https://flax.readthedocs.io/en/latest/api_reference/flax.errors.html#flax.errors.ScopeParamShapeError)
For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.


Two possible bugs found when training CamP

Hi, I have stumbled upon two problems when training CamP. I was using zipnerf/360.gin and camp/camera_optim_perturbed.gin.
The two problems are:

  1. I got the error module 'camp_zipnerf.internal.math' has no attribute 'normalize'. I was able to fix this by changing math.normalize to spin_math.normalize here (see the sketch after this list):
     perturb_dir = math.normalize(random.normal(key, camera_positions.shape))
  2. After resolving the first problem I got the error "Model" object has no attribute "grid_representation", raised here:
     if (
         model.grid_representation is None
         or model.grid_representation.lower() not in ['ngp', 'hash']
     ):
       raise ValueError('Only HashEncoding supports with coarse to fine.')
     Simply commenting out these lines fixed the issue and training worked.
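
A minimal sketch of the first fix, assuming spin_math.normalize unit-normalizes along the last axis (the stand-in below mimics that behavior; key and camera_positions are illustrative):

import jax.numpy as jnp
from jax import random

def normalize(x, eps=1e-10):
  # Stand-in for internal.spin_math.normalize: unit-normalize along the last axis.
  return x / (jnp.linalg.norm(x, axis=-1, keepdims=True) + eps)

key = random.PRNGKey(0)
camera_positions = jnp.zeros((5, 3))  # illustrative shape only

# internal.math has no 'normalize' attribute, so the spin_math version is used instead:
perturb_dir = normalize(random.normal(key, camera_positions.shape))
print(perturb_dir.shape)  # (5, 3)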

Are these actually bugs, or am I missing something?

Running Zip-NeRF + CamP on TPUs

Hi there,
What configurations need to be changed to get Zip-NeRF and CamP to work on TPUs? I tried to use the default configs/zipnerf/360.gin to train on the Mip-NeRF 360 datasets, but I get the following error and can't quite figure out how to disable the hash encoding. In that case, would it be better if I ran Mip-NeRF 360 on the TPUs instead? Thanks!

Traceback (most recent call last):
  File "/home/tnguyen1/camp_zipnerf/train.py", line 557, in <module>
    app.run(main)
  File "/home/tnguyen1/.local/lib/python3.11/site-packages/absl/app.py", line 308, in run
    _run_main(main, args)
  File "/home/tnguyen1/.local/lib/python3.11/site-packages/absl/app.py", line 254, in _run_main
    sys.exit(main(argv))
             ^^^^^^^^^^
  File "/home/tnguyen1/camp_zipnerf/train.py", line 142, in main
    model, state, render_eval_pfn, train_pstep, lr_fn = train_utils.setup_model(
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tnguyen1/camp_zipnerf/internal/train_utils.py", line 828, in setup_model
    model, variables = models.construct_model(
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tnguyen1/camp_zipnerf/internal/models.py", line 444, in construct_model
    init_variables = model.init(
                     ^^^^^^^^^^^
  File "/home/tnguyen1/camp_zipnerf/internal/models.py", line 279, in __call__
    ray_results = mlp(
                  ^^^^
  File "/home/tnguyen1/camp_zipnerf/internal/models.py", line 779, in __call__
    raw_density, x = predict_density(means, covs, **predict_density_kwargs)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tnguyen1/camp_zipnerf/internal/models.py", line 680, in predict_density
    raise ValueError('Hash Encodings should not be used on a TPU.')
ValueError: Hash Encodings should not be used on a TPU.
--------------------
For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.

Issues with JAX and Orbax Dependencies in CUDA 11.8 Environment

Hello,

I'm encountering issues related to the dependencies between JAX and Orbax in my current setup. My environment uses CUDA 11.8, and after running pip install -e ., I configured the GPU version of JAX with the command pip install -U "jax[cuda11]". The process of running ./scripts/run_all_unit_tests.sh completed without errors. However, during the training and saving stages, I encountered the following error:

File "/path_to_anaconda/envs/my_env/lib/python3.11/site-packages/orbax/checkpoint/multihost/utils.py", line 90, in broadcast_one_to_some
    in_tree = jax.tree.map(pre_jit, in_tree)
              ^^^^^^^^
File "/path_to_anaconda/envs/my_env/lib/python3.11/site-packages/jax/_src/deprecations.py", line 53, in getattr
    raise AttributeError(f"module {module!r} has no attribute {name!r}")
AttributeError: module 'jax' has no attribute 'tree'
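
For context, a minimal sketch of the mismatch, assuming the root cause is that the jax.tree namespace called by newer orbax-checkpoint only exists in JAX releases newer than the jax==0.4.23 pinned by camp-zipnerf; jax.tree_util.tree_map is the older spelling:

import jax

x = {'a': 1, 'b': (2, 3)}

# Works on the pinned jax==0.4.23:
doubled = jax.tree_util.tree_map(lambda v: v * 2, x)
print(doubled)  # {'a': 2, 'b': (4, 6)}

# Newer orbax-checkpoint releases call the jax.tree namespace instead, which
# raises AttributeError on older JAX:
# doubled = jax.tree.map(lambda v: v * 2, x)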

After investigating, I found that the code in orbax-checkpoint requires JAX version 0.4.6 or higher. However, when I manually upgraded JAX and JAXlib to version 0.4.6, I received the following error:

camp-zipnerf 0.0.2 requires jax==0.4.23, but you have jax 0.4.6 which is incompatible.

As a result, the training code could not run correctly:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/path_to_project/my_project/train.py", line 26, in <module>
    import flax
  File "/path_to_anaconda/envs/my_env/lib/python3.11/site-packages/flax/__init__.py", line 23, in <module>
    from . import core
  File "/path_to_anaconda/envs/my_env/lib/python3.11/site-packages/flax/core/__init__.py", line 15, in <module>
    from .axes_scan import broadcast as broadcast
  File "/path_to_anaconda/envs/my_env/lib/python3.11/site-packages/flax/core/axes_scan.py", line 22, in <module>
    from jax.extend import linear_util as lu
ModuleNotFoundError: No module named 'jax.extend'

I would appreciate more detailed guidance on how to properly configure my environment. I have already tried using CUDA versions 12.3, 11.8, and even 11.6, but none of them worked successfully.

If possible, could you please provide some assistance or point me toward the correct setup steps?

Thank you in advance for your help!

Failing or Skipping Unit Tests

Hi, I am trying to run the setup and decided to run the unit tests. I see the following output, where some tests have failed or been skipped:

(camp_zipnerf) a@test-nerf-server:~/camp_zipnerf$ ./scripts/run_all_unit_tests.sh 
Running camera_utils_test.py
./home/a/camp_zipnerf/internal/camera_utils.py:1178: RuntimeWarning: divide by zero encountered in divide
  t1 = (corners[0] - ray_o) / ray_d
/home/a/camp_zipnerf/internal/camera_utils.py:1179: RuntimeWarning: divide by zero encountered in divide
  t2 = (corners[1] - ray_o) / ray_d
..........
----------------------------------------------------------------------
Ran 11 tests in 19.966s

OK
Running coord_test.py
..s............F............s....s....s....s....s....s......s....s....s....s.....F....s....s.....
======================================================================
FAIL: test_construct_ray_warps_is_finite_and_in_range6 (0.5) (tests.coord_test.CoordTest)
tests.coord_test.CoordTest.test_construct_ray_warps_is_finite_and_in_range6 (0.5)
test_construct_ray_warps_is_finite_and_in_range(0.5)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/conda/envs/camp_zipnerf/lib/python3.11/site-packages/absl/testing/parameterized.py", line 323, in bound_param_test
    return test_method(self, testcase_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a/camp_zipnerf/tests/coord_test.py", line 93, in test_construct_ray_warps_is_finite_and_in_range
    self.assertTrue(jnp.all(s_recon <= 1))
AssertionError: Array(False, dtype=bool) is not true

======================================================================
FAIL: test_pos_enc_20_0.2 (tests.coord_test.CoordTest)
tests.coord_test.CoordTest.test_pos_enc_20_0.2
test_pos_enc_20_0.2(20, 0.2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/conda/envs/camp_zipnerf/lib/python3.11/site-packages/absl/testing/parameterized.py", line 321, in bound_param_test
    return test_method(self, *testcase_params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/a/camp_zipnerf/tests/coord_test.py", line 305, in test_pos_enc
    self.assertLess(max_err, tol)
AssertionError: 0.23938593 not less than 0.2

----------------------------------------------------------------------
Ran 97 tests in 199.062s

FAILED (failures=2, skipped=13)
Running datasets_test.py
.
----------------------------------------------------------------------
Ran 1 test in 4.156s

OK
terminate called without an active exception
./scripts/run_all_unit_tests.sh: line 17: 70089 Aborted                 python -m unittest "$file"
Running geometry_test.py
........................
----------------------------------------------------------------------
Ran 24 tests in 4.341s

OK
Running geopoly_test.py
.........
----------------------------------------------------------------------
Ran 9 tests in 15.739s

OK
Running grid_utils_test.py
.........................................................................
----------------------------------------------------------------------
Ran 73 tests in 13.668s

OK
Running hash_resample_test.py
....
----------------------------------------------------------------------
Ran 4 tests in 2.613s

OK
Running image_utils_test.py
..........
----------------------------------------------------------------------
Ran 10 tests in 8.186s

OK
Running linspline_test.py
..s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s..
----------------------------------------------------------------------
Ran 145 tests in 15.251s

OK (skipped=29)
Running loss_utils_test.py
.......
----------------------------------------------------------------------
Ran 7 tests in 3.810s

OK
Running math_test.py
/home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/home/a/camp_zipnerf/internal/math.py:340: RuntimeWarning: overflow encountered in square
  return jnp.sign(x) * jnp.sqrt(1 - jnp.exp(-(4 / jnp.pi) * x**2))
/home/a/camp_zipnerf/internal/math.py:340: RuntimeWarning: overflow encountered in multiply
  return jnp.sign(x) * jnp.sqrt(1 - jnp.exp(-(4 / jnp.pi) * x**2))
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
....s....s....s....s....s....s....s....s....s....s....s......./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
.....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s.....s....s....s....s....s....s....s....s....s....s....s......./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
...s../home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: divide by zero encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: overflow encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: invalid value encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: divide by zero encountered in divide
  dn_true = scale / d
/home/a/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: overflow encountered in divide
  dn_true = scale / d
/home/a/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in multiply
  dd_true = -scale * r / d
/home/a/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in divide
  dd_true = -scale * r / d
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: divide by zero encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: overflow encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: invalid value encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: divide by zero encountered in divide
  dn_true = scale / d
/home/a/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: overflow encountered in divide
  dn_true = scale / d
/home/a/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in multiply
  dd_true = -scale * r / d
/home/a/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in divide
  dd_true = -scale * r / d
.s/home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: divide by zero encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: overflow encountered in divide
  r = np.float32(n) / np.float32(d)
/home/a/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: invalid value encountered in divide
  r = np.float32(n) / np.float32(d)
...../home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
.../home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./home/a/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
....
----------------------------------------------------------------------
Ran 247 tests in 23.489s

OK (skipped=43)
Running quaternion_test.py
.......................................................................................................................................
----------------------------------------------------------------------
Ran 135 tests in 5.985s

OK
Running ref_utils_test.py
/home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
./home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s.../home/a/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
.s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s../home/a/camp_zipnerf/tests/ref_utils_test.py:68: RuntimeWarning: invalid value encountered in divide
  xyz / np.sqrt(np.sum(xyz**2, axis=-1, keepdims=True))
..s/home/a/camp_zipnerf/tests/ref_utils_test.py:68: RuntimeWarning: invalid value encountered in divide
  xyz / np.sqrt(np.sum(xyz**2, axis=-1, keepdims=True))
....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s..........
----------------------------------------------------------------------
Ran 648 tests in 25.003s

OK (skipped=128)
Running render_test.py
..............................................................
----------------------------------------------------------------------
Ran 62 tests in 47.985s

OK
Running resample_test.py
....../home/a/camp_zipnerf/internal/resample.py:128: UserWarning: Explicitly requested dtype float64 requested in zeros is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
  output = jnp.zeros((*locations.shape[:-1], data.shape[-1]), dtype=data.dtype)
/opt/conda/envs/camp_zipnerf/lib/python3.11/site-packages/jax/_src/numpy/array_methods.py:66: UserWarning: Explicitly requested dtype float64 requested in astype is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
  return lax_numpy.astype(arr, dtype)
........
----------------------------------------------------------------------
Ran 14 tests in 9.194s

OK
Running rigid_body_test.py
..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
----------------------------------------------------------------------
Ran 522 tests in 21.137s

OK
Running spin_math_test.py
........................................................................................................................................................................................................................................................................................................................................
----------------------------------------------------------------------
Ran 328 tests in 21.759s

OK
Running stepfun_test.py
...................................................................
----------------------------------------------------------------------
Ran 67 tests in 60.031s

OK
Running train_utils_test.py
..
----------------------------------------------------------------------
Ran 2 tests in 1.513s

OK
Running utils_test.py
.......
----------------------------------------------------------------------
Ran 7 tests in 4.942s

OK
(camp_zipnerf) a@test-nerf-server:~/camp_zipnerf$ 

This is my machine:

(camp_zipnerf) a@test-nerf-server:~/camp_zipnerf$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

Can you tell me the reason for these failures, and will they affect the final result?

How to get 3D model (OBJ)

Hi!
Thank you for your great work!
Is there any way to get a 3D model of the result to view it in Blender, for example?
Thank you.

Applying CamP to mipnerf360 datasets degrades PSNR and SSIM values

Hi, first of all thanks for this great paper and code. I finally managed to get your code to work!!
First, I tried to replicate Table 1 in your paper for the single-scale mipnerf360 dataset (7 scenes: bicycle, bonsai, counter, garden, kitchen, room, stump), comparing results with and without CamP.
Unfortunately, the average values degraded from PSNR 30.68 / SSIM 0.89 (without CamP) to PSNR 27.43 / SSIM 0.82 (with CamP)!
For the runs without CamP, I used zipnerf/360_train.sh, zipnerf/360_eval.sh, and zipnerf/360_render.sh, only rewriting DATA_DIR and CHECKPOINT_DIR.
For the runs with CamP, I used camp/360_train.sh, zipnerf/360_eval.sh, and zipnerf/360_render.sh, rewriting DATA_DIR and CHECKPOINT_DIR and adding the option "--gin_configs=configs/camp/camera_optim.gin" to zipnerf/360_eval.sh and zipnerf/360_render.sh.

All unit tests pass (with JAX), and my single-scale mip-NeRF 360 results for ZipNeRF alone (PSNR, SSIM) are roughly the same as reported or better.
What do you think could be the reason CamP isn't working here?

Visualize training results

Is there any straightforward way to visualize the training result by moving the camera manually instead of rendering videos?

Thank you in advance.
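
There is no interactive viewer in this repo, but a low-effort alternative is to render single frames from hand-specified poses. Below is a minimal sketch of building a look-at camera-to-world matrix; the `render_frame` call at the end is hypothetical and stands in for whatever rendering entry point you use (e.g. the logic in render.py), and the axis convention may need adjusting to match the dataset loader.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 3x4 camera-to-world matrix that looks from `eye` toward `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    new_up = np.cross(right, forward)
    # Columns follow an OpenGL-style convention (x right, y up, z backward);
    # adjust if your loader expects a different convention.
    rot = np.stack([right, new_up, -forward], axis=1)
    return np.concatenate([rot, eye[:, None]], axis=1)  # (3, 4)

camtoworld = look_at(eye=np.array([2.0, 0.0, 0.5]), target=np.zeros(3))
# Hypothetical call: feed the pose to the repo's rendering code (see render.py).
# image = render_frame(model, state, camtoworld, height=800, width=800, focal=1000.0)
```

Stepping the `eye` position interactively (e.g. from keyboard input) and re-rendering gives a crude fly-through without producing a full video each time.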

Resume possible?

I ran a test with the gardenvase dataset: trained, evaluated, and then rendered the 480 frames, only for the run to end abruptly because it expected ffmpeg to be installed, which is not listed in the requirements: RuntimeError: Program 'ffmpeg' is not found

Rather than go through the roughly 90 minutes of rendering the frames again, I wondered if I can resume in some way from that point. Any advice? I now have a render folder with 1920 files (4.3 GB) of generated data: acc_, color_, distance_mean_ and distance_median_.
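
Since the frames are already on disk, you should not need to re-render: once ffmpeg is installed you can assemble the video yourself. Below is a minimal sketch, assuming the color frames follow a `color_%05d.png` naming pattern; check the actual filenames in your render folder and adjust the pattern and frame rate accordingly.

```python
import subprocess
from pathlib import Path

render_dir = Path('renders')       # adjust to your render output folder
pattern = 'color_%05d.png'         # assumed frame naming; verify against your files
fps = 30

# Encode the existing frames into an H.264 mp4 without touching the model.
subprocess.run(
    [
        'ffmpeg', '-y',
        '-framerate', str(fps),
        '-i', str(render_dir / pattern),
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p',
        str(render_dir / 'color.mp4'),
    ],
    check=True,
)
```

The same command works for the acc_, distance_mean_ and distance_median_ frame sequences by swapping the pattern and output name.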

Unable to create env for camp_zipnerf

There are many warnings appearing while running the unit tests. Is there any fix?

/mnt/sdb1/sumant/zip_camp/camp_zipnerf$ ./scripts/run_all_unit_tests.sh
./mnt/sdb1/sumant/zip_camp/camp_zipnerf/camp_zipnerf/internal/camera_utils.py:1178: RuntimeWarning: divide by zero encountered in divide
  t1 = (corners[0] - ray_o) / ray_d
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/camp_zipnerf/internal/camera_utils.py:1179: RuntimeWarning: divide by zero encountered in divide
  t2 = (corners[1] - ray_o) / ray_d
............s.........................s....s....s....s....s....s......s....s....s....s..........s....s................................................................................................................................s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s........./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/camp_zipnerf/internal/math.py:340: RuntimeWarning: overflow encountered in square
  return jnp.sign(x) * jnp.sqrt(1 - jnp.exp(-(4 / jnp.pi) * x**2))
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/camp_zipnerf/internal/math.py:340: RuntimeWarning: overflow encountered in multiply
  return jnp.sign(x) * jnp.sqrt(1 - jnp.exp(-(4 / jnp.pi) * x**2))
.....s....s....s....s....s....s....s....s....s....s....s......./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
......s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s.....s....s....s....s....s....s....s....s....s....s....s..........s../mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: divide by zero encountered in divide
  r = np.float32(n) / np.float32(d)
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: overflow encountered in divide
  r = np.float32(n) / np.float32(d)
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: invalid value encountered in divide
  r = np.float32(n) / np.float32(d)
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: divide by zero encountered in divide
  dn_true = scale / d
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:299: RuntimeWarning: overflow encountered in divide
  dn_true = scale / d
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in multiply
  dd_true = -scale * r / d
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:303: RuntimeWarning: overflow encountered in divide
  dd_true = -scale * r / d
./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: divide by zero encountered in divide
  r = np.float32(n) / np.float32(d)
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: overflow encountered in divide
  r = np.float32(n) / np.float32(d)
/mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:282: RuntimeWarning: invalid value encountered in divide
  r = np.float32(n) / np.float32(d)
.s......../mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
./mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/math_test.py:46: RuntimeWarning: overflow encountered in cast
  x = np.float32(np.exp(log_x))
.........................................................................................................................................../mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/ref_utils_test.py:89: RuntimeWarning: invalid value encountered in divide
  grad_true = (xyz[:, 1] ** 2 + xyz[:, 2] ** 2) / (scale * denom**3)
..s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s../mnt/sdb1/sumant/zip_camp/camp_zipnerf/tests/ref_utils_test.py:68: RuntimeWarning: invalid value encountered in divide
  xyz / np.sqrt(np.sum(xyz**2, axis=-1, keepdims=True))
..s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s....s............................................................................../mnt/sdb1/sumant/zip_camp/camp_zipnerf/camp_zipnerf/internal/resample.py:128: UserWarning: Explicitly requested dtype float64 requested in zeros is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
  output = jnp.zeros((*locations.shape[:-1], data.shape[-1]), dtype=data.dtype)
/mnt/sdb1/sumant/zip_camp/lib/python3.11/site-packages/jax/_src/numpy/array_methods.py:66: UserWarning: Explicitly requested dtype float64 requested in astype is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
  return lax_numpy.astype(arr, dtype)
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................://github.com/google/jax#current-gotchas..........
----------------------------------------------------------------------
Ran 2413 tests in 185.657s

OK (skipped=213)




Navigate Zip-NeRF output

Hey NeRF Community,

I am currently running the Zip-NeRF model and have successfully trained it to generate a video (.mp4) using my custom dataset. However, I would like to create a navigation experience similar to what is shown here: SMERF 3D Berlin Scene.

Does anyone know how I can achieve this?

Thanks!

Multi-Resolution Hash Grid Implementation

Hey,

I have been looking forward to this code release for the past few months and now it is finally here! Thank you for your awesome work. The code looks really clean and I think it is great that you uploaded test set renderings for comparisons!

I have a few questions regarding the hash grid implementation.

  1. Where exactly can I find the triton_grids_inferface that is used within grid_utils.py? I might be missing something very simple here, sorry.
  2. I assume I could easily figure this out myself with the answer to 1., but what exactly is the idea behind the precondition_scaling?
  3. Did you somehow evaluate your hash grid implementation compared to the one from tiny-cuda-nn wrt. reconstruction quality and speed?

Looking forward to your answer!

CamP egocentric vs. allocentric parameterization

Hi, first of all thanks for this great paper and code.
I have a few questions regarding the paper:

  1. In the paper you mention two possible ways of modelling the camera's extrinsic pose (egocentric vs. allocentric); however, I can't seem to find which one you actually used in the end. Looking through camera_delta.py, it seems to me that the residuals are modelled as egocentric parameters. Is that correct, or am I misunderstanding something here?
  2. As far as I understand, the preconditioning matrix is derived from how much a projected point's pixel coordinates change with a change of the residuals (see the sketch after this post). If the points used to compute this matrix are sampled from each camera's frustum, meaning they are essentially the same for each camera in its own reference frame, and the residuals are also defined in the camera's reference frame, would that not mean that the preconditioning matrices should be more or less equal for all cameras? Assuming identical focal length and intrinsics, of course.

I hope I am not completely misunderstanding something here. I would really appreciate it if you could find the time to answer these questions!
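
For reference, here is a minimal sketch of the recipe described in question 2: take points in front of the camera, differentiate their projected pixel coordinates with respect to the camera residuals, and form the preconditioner as an inverse matrix square root of the resulting Gauss-Newton approximation. The toy projection model (small-angle rotation, translation and a log-focal residual) and the specific point sampling are assumptions made for illustration only; this is not the repo's camera_delta implementation.

```python
import jax
import jax.numpy as jnp

def project(params, points):
    """Toy egocentric camera: small-angle rotation + translation residual,
    then pinhole projection with a log-focal residual."""
    omega, t, df = params[:3], params[3:6], params[6]
    # Small-angle rotation: R ~= I + [omega]_x.
    skew = jnp.array([[0., -omega[2], omega[1]],
                      [omega[2], 0., -omega[0]],
                      [-omega[1], omega[0], 0.]])
    cam_pts = points @ (jnp.eye(3) + skew).T + t
    focal = 500.0 * jnp.exp(df)
    return focal * cam_pts[:, :2] / cam_pts[:, 2:3]

def camp_preconditioner(points, param_dim=7, eps=1e-6):
    """P = (J^T J + eps*I)^(-1/2), with J the Jacobian of projected pixel
    coordinates w.r.t. the camera residuals, evaluated at zero residuals."""
    params0 = jnp.zeros((param_dim,))
    jac = jax.jacfwd(lambda p: project(p, points).reshape(-1))(params0)  # (2N, D)
    sigma = jac.T @ jac + eps * jnp.eye(param_dim)
    # Inverse matrix square root via eigendecomposition.
    eigval, eigvec = jnp.linalg.eigh(sigma)
    return eigvec @ jnp.diag(eigval ** -0.5) @ eigvec.T

# Points sampled in a crude box in front of the camera (stand-in for the frustum).
key = jax.random.PRNGKey(0)
pts = jax.random.uniform(key, (128, 3),
                         minval=jnp.array([-1., -1., 1.]),
                         maxval=jnp.array([1., 1., 5.]))
P = camp_preconditioner(pts)
```

Under these toy assumptions, cameras that share intrinsics and use the same per-camera frustum sampling would indeed produce the same preconditioner; whether that matches the released code depends on details (depth distribution, intrinsic residuals, regularization) best confirmed against camera_delta.py.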

How to generate this 'render_path_file.npy'?

This is required as input by the SMERF configs for zipnerf teacher models. How do I generate and feed it? - https://github.com/google-research/google-research/blob/master/smerf/configs/zipnerf/london.gin

# london
smerf.internal.configs.Config.data_dir = "datasets/london"
smerf.internal.configs.Config.distill_teacher_ckpt_dir = "teachers/london"

# Downsample pixels
smerf.internal.configs.Config.factor = 2

# Render path config
camp_zipnerf.internal.configs.Config.render_path_file = 'render_path_file.npy'
camp_zipnerf.internal.configs.Config.render_resolution = (1392, 793)  # (width, height)
camp_zipnerf.internal.configs.Config.render_focal = 606.465479  # in pixels
camp_zipnerf.internal.configs.Config.render_camtype = 'perspective'
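
One way to produce such a file is to write out a camera path yourself. The sketch below generates a simple elliptical orbit and saves it as an .npy of camera-to-world matrices; the assumed (N, 3, 4) shape, the orbit parameters, and the axis convention are guesses on my part, so check how render_path_file is actually consumed in the data-loading code before relying on it.

```python
import numpy as np

def ellipse_path(n_frames=120, radius=3.0, height=0.5):
    """Generate a simple elliptical orbit of camera-to-world matrices."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        eye = np.array([radius * np.cos(theta), radius * np.sin(theta), height])
        forward = -eye / np.linalg.norm(eye)             # look at the origin
        right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
        right /= np.linalg.norm(right)
        up = np.cross(right, forward)
        rot = np.stack([right, up, -forward], axis=1)    # (3, 3)
        poses.append(np.concatenate([rot, eye[:, None]], axis=1))  # (3, 4)
    return np.stack(poses)                               # (n_frames, 3, 4)

np.save('render_path_file.npy', ellipse_path().astype(np.float32))
```

The render_resolution, render_focal and render_camtype bindings in the gin file above then describe the intrinsics used along this path.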

Run only CamP and extract optimized camera poses

Hi, thank you for making such a great achievement public!
This is not an issue, but a question. Can I run only CamP and obtain optimized camera poses?
I quickly checked the code, but ZipNeRF and CamP seem to be deeply integrated. If there is any option that allows me to do this, it would be great if you could share it.
Thank you for your help in advance.
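
Until there is an official answer, one thing worth trying is to inspect a trained checkpoint and pull the camera residual parameters out of it. Below is a minimal sketch assuming a standard Flax checkpoint; the key layout (e.g. something like a CameraDelta module under the train state) is an assumption, and if the repo uses a different checkpointing backend the restore call needs to be swapped accordingly.

```python
from flax.training import checkpoints

# Restore the raw train state as a nested dict (no target pytree).
state = checkpoints.restore_checkpoint('/path/to/checkpoint_dir', target=None)

def print_tree(d, prefix=''):
    """Walk the restored state and print parameter paths and shapes."""
    if isinstance(d, dict):
        for k, v in d.items():
            print_tree(v, f'{prefix}/{k}')
    else:
        print(prefix, getattr(d, 'shape', type(d)))

print_tree(state)
# Once the camera-delta parameters are located, apply the residuals to the
# original COLMAP poses to recover the optimized cameras.
```

This only extracts poses after a joint ZipNeRF+CamP run; running CamP fully standalone would still require separating the camera optimization from the NeRF training loop.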

Results on Tanks & Temples Dataset

Hi, thanks for the wonderful job again.

I first conducted tests on the Mip-360 dataset, and the experimental results completely matched those in the paper and README. Subsequently, I tested the Deep Blending and Tanks & Temples datasets used in 3D-GS.

The results for the DB (Deep Blending) dataset are shown in the following table, and they represent state-of-the-art rendering quality:

              drjohnson   playroom
PSNR          30.150      30.946
SSIM          0.90905     0.90808
LPIPS (VGG)   0.21985     0.20522
drjohnson_db_test_preds_step_200000_color.mp4
playroom_db_test_preds_step_200000_color.mp4

However, when running the Tanks & Temples dataset, ZipNeRF appeared unable to converge, leading to highly fragmented outcomes. Theoretically, given that the format of the Tanks dataset is identical to that of DB, and considering the perfect results obtained on DB, this implies that my data loading process should be right. I am uncertain whether this issue pertains to the robustness of the ZipNeRF method itself or if the Tanks dataset necessitates a different configuration to achieve convergence.

I executed the following command:

DATA_DIR=/my/path/to/the/dataset
CHECKPOINT_DIR=./logs/zipnerf/tanks_db
SCENE=truck  # train, drjohnson and playroom are run the same way

CUDA_VISIBLE_DEVICES=0 python -m train \
--gin_configs=configs/zipnerf/360.gin \
--gin_bindings="Config.data_dir = '${DATA_DIR}/${SCENE}'" \
--gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}/${SCENE}'" \
--gin_bindings="Config.factor = 1"

Looking forward to your reply!

train_tandt_test_preds_step_200000_color.mp4
truck_tandt_test_preds_step_200000_color.mp4
