owl-project / nvisii
License: Apache License 2.0
When loading meshes from URDF (the robot scene description format), we sometimes have to redefine the origin of the mesh. Could we do that in visii as well? It would make it easier to align visii with pybullet in the future.
It would be great if there was a visii.add_search_path that we could use to lookup textures and meshes.
However, the ordering of these search paths might add a level of complexity, and it increases technical debt for any file-loading operation we might need to maintain or create in the future.
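A minimal sketch of how ordered search-path resolution could behave (pure Python; `SearchPaths`, `add_search_path`, and `resolve` are hypothetical names here, not part of the visii API):

```python
import os

class SearchPaths:
    """Hypothetical ordered search-path list for textures and meshes."""
    def __init__(self):
        self.paths = []

    def add_search_path(self, path):
        # Paths added earlier take precedence over later ones.
        self.paths.append(path)

    def resolve(self, filename):
        # Return the first registered directory containing the file, else None.
        for p in self.paths:
            candidate = os.path.join(p, filename)
            if os.path.isfile(candidate):
                return candidate
        return None
```

The complexity mentioned above shows up exactly here: when the same filename exists in two registered directories, the chosen ordering rule silently decides which file is loaded.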
Instead, transmission currently uses the normal roughness value, which is a bug.
Remove the visii.vec4 required for the position; turn it into a visii.vec3 (or, even better, a numpy array) throughout that function. Alternatively, update the documentation to describe what it expects.
we should instead throw a more helpful error message
So we can check the visii version
I want to do
visii.material.roughness_from_tex(tex,channel=0)
I have scenes in pybullet saved by the pybullet.saveWorld function. Is there a simple way to load these into ViSII? Or do I have to convert each object in the scene into an OBJ file, record its position and orientation, and recreate it?
This would make it easier to generate colliders with pybullet
Right now they're kinda hidden
When the user has an outdated driver (less than 450), they're given the following error message:
Optix call (optixInit()) failed with code 7801 (line 150)
terminate called without an active exception
Aborted (core dumped)
We should give a more helpful error message to guide users to upgrade their driver.
When generating training data for, e.g., denoising, I want to be able to initialize the render with different seeds for the noisy input vs. reference image, to ensure that the two renders of the same frame use uncorrelated random numbers.
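The idea can be illustrated with ordinary Python RNGs (a sketch of the concept only, not the visii seeding API): renders seeded identically reuse the same random numbers, while different seeds give independent streams.

```python
import random

def sample_stream(seed, n=8):
    # One RNG stream per render pass; the seed decorrelates the passes.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

noisy = sample_stream(seed=1)      # noisy input render
repeat = sample_stream(seed=1)     # same seed -> identical (correlated) samples
reference = sample_stream(seed=2)  # different seed -> uncorrelated samples
```

For denoising data, the noisy input and the reference of the same frame would get different seeds, so their residual noise is statistically independent.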
TODO: make them appear in the documentation
Right now, functions and components are in the global namespace, but should be moved to a "visii" namespace
Need to include exception.i
Run this script: https://github.com/owl-project/ViSII/blob/dr/install/simple_animation.py. You should get the error around frame 100.
Would be nice to support other python vector libraries like pyquaternion.
This can waste a lot of time if you render out a series of images, only to find that none of them actually saved to disk.
I would like to be able to move up and down the tree. get_children should return a list of transforms; get_parent should return a single transform. If either doesn't exist, just return None.
should probably throw an exception instead
Can we change the color of the dome, e.g., with a set_color_dome function?
visii.load_obj separates objects by material ID, creating a list of entities containing transform, mesh, and material components.
visii.mesh.create_from_obj ignores the corresponding .mtl file, and instead merges all shapes in the OBJ into a single mesh component.
This would be how we might compute those.
AffineSpace3fa mtxT0 = m_background->getProbeToWorldT0();
AffineSpace3fa mtxT1Inv = m_background->getTransformT1();
const float envDist = 10000.0f; // large value
// Point far away
Vec3fa pfar = ray.org + ray.dir * envDist;
// Apply transform
Vec3fa pT0 = xfmPoint(mtxT1Inv, xfmPoint(mtxT0, pfar));
Vec3fa pT1 = pfar;
// World to camera space
Vec3fa p0c = xfmPoint(world2local, pT0);
Vec3fa p1c = xfmPoint(world2local, pT1);
// Project to screen space
float aspect = (float)m_width / (float)m_height;
float x0s = m_fovscale / aspect * p0c.x / p0c.z;
float y0s = m_fovscale * p0c.y / p0c.z;
float x1s = m_fovscale / aspect * p1c.x / p1c.z;
float y1s = m_fovscale * p1c.y / p1c.z;
float CoC = m_render_param.apertureSize / m_render_param.focusDepth;
MV = Vec3ff(0.5f * (x0s - x1s), 0.5f * (y1s - y0s), CoC, 1.0f); // Screen space MV
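For reference, the screen-space projection and motion-vector arithmetic above can be mirrored in plain Python (a sketch of the same math, with names following the C++ snippet; the two input points are assumed to already be in camera space):

```python
def screen_space_mv(p0c, p1c, fovscale, width, height, aperture_size, focus_depth):
    # p0c, p1c: (x, y, z) camera-space positions at the two shutter times.
    aspect = width / height
    # Perspective-project each point to screen space.
    x0s = fovscale / aspect * p0c[0] / p0c[2]
    y0s = fovscale * p0c[1] / p0c[2]
    x1s = fovscale / aspect * p1c[0] / p1c[2]
    y1s = fovscale * p1c[1] / p1c[2]
    # Circle of confusion, as in the C++ above.
    coc = aperture_size / focus_depth
    # Screen-space motion vector plus CoC in the third channel.
    return (0.5 * (x0s - x1s), 0.5 * (y1s - y0s), coc)
```

A static point produces a zero motion vector but still carries its circle of confusion, which is why the CoC rides along in the third channel.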
Returns True if we can see the object, otherwise returns False.
Right now, multi-GPU configurations have only been tested on Linux builds of ViSII. We should also test on multi-GPU Windows machines.
It's easy to think that the following is correct:
a = entity.get_name_to_id_map()
for i, v in enumerate(a):
    entity_id = i
However! The "a" here that's returned by get_name_to_id_map is a python dictionary containing both keys as well as values.
And so when doing enumerate(a), the "i" that's returned by enumerate is not the ID of the entity, but rather, some arbitrary incrementing index.
Instead, you should do:
a = entity.get_name_to_id_map()
for key, value in a.items():
    entity_name = key
    entity_id = value
It might be a good idea to spell this out a bit more in the documentation and/or examples.
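The pitfall is ordinary Python dict behavior, so it can be demonstrated without visii at all (the dict below is a stand-in for what get_name_to_id_map returns, with entity names as keys and IDs as values):

```python
# Stand-in for entity.get_name_to_id_map(): entity names -> entity IDs.
name_to_id = {"camera": 0, "floor": 1, "teapot": 2}

# Wrong: enumerate() iterates over the keys and pairs each with an
# arbitrary incrementing index, not with the entity's actual ID.
wrong = [(i, name) for i, name in enumerate(name_to_id)]

# Right: items() yields the real (name, ID) pairs.
right = [(name, eid) for name, eid in name_to_id.items()]
```

The two happen to agree here only because the IDs are 0, 1, 2 in insertion order; as soon as entities are deleted and recreated, the enumerate index and the real ID diverge.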
For animated sequences, add support for dumping the motion vectors
Right now, visii.texture.remove() does not release the memory of that texture.
Something to do with mat4[x] returning by value instead of reference...
maya / fbx would be nice as well
When I installed ViSII using artifact, I met some issues. Here are some solutions:
First, make sure VISII_HOME
includes the directory that contains the artifact files. Then add these lines to your .bashrc
:
export PATH="${PATH}:${VISII_HOME}"
export PYTHONPATH=${VISII_HOME}
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${VISII_HOME}"
(1) If you are using conda, make sure your LD_LIBRARY_PATH
includes anaconda libraries. One way to guarantee that is to add something like this line in your .bashrc
:
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/home/guanya/anaconda3/lib"
(2) Make sure your NVIDIA driver and CUDA are updated. See https://ingowald.blog/installing-the-latest-nvidia-driver-cuda-and-optix-on-linux-ubuntu-18-04/ about how to install the latest NVIDIA driver and CUDA on Ubuntu. For me (GTX 1080), my driver version is 450.36.06 and my CUDA version is 11.0. You can check them using nvidia-smi
and nvcc --version
.
(3) One way to check the installation is to do
import visii
visii.initialize_interactive()
in Python. You will see a box like this if installing succesfully.
so we dont always have the spp count displayed.
voila
Ability to not sample the pixel area, i.e. always shoot rays from center of pixel but still sample bsdf
Some race condition... Adding a pause between initialize_headless and read_framebuffer seems to fix it.
This causes race conditions between the renderer and Python.
E.g., when creating an empty mesh and then loading an OBJ, the mutex is temporarily unlocked between these two steps.
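The failure mode can be sketched with a plain threading.Lock (illustrative only; the real mutex lives inside the C++ renderer): if each step takes the lock separately, another thread can observe the half-built state between them, whereas one critical section covering both steps cannot be interrupted.

```python
import threading

lock = threading.Lock()
state = {"mesh": None}

def create_and_load_unsafe(obj_data):
    # Two separate critical sections: the renderer thread can run between
    # them and observe an empty, half-initialized mesh.
    with lock:
        state["mesh"] = []              # step 1: create empty mesh
    with lock:
        state["mesh"].extend(obj_data)  # step 2: load the OBJ data

def create_and_load_safe(obj_data):
    # One critical section covering both steps: no intermediate state is
    # ever visible to other threads.
    with lock:
        state["mesh"] = []
        state["mesh"].extend(obj_data)
```

Holding the lock across the whole compound operation is the usual fix; sleeping between the steps only makes the window smaller, not closed.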
from visii import *
import visii
visii.Initialize()
a = visii.Entity_Create("john")
print(a.to_string())
visii.Entity_Delete("john")
print(a)              # "a" now refers to a deleted entity
print(a.to_string())  # undefined behavior: the entity no longer exists
Or maybe an example of creating a mesh from raw data, or of loading an STL from Python into ViSII, or of loading an STL directly in ViSII.
For computer vision problems, it's often expected to use an "intrinsic" matrix instead of a "projection" matrix.
https://en.wikipedia.org/wiki/Camera_resectioning
We're thinking converting a projection matrix to an intrinsic matrix might be as simple as the following:
[ width   0       0 ]   [ 2n/(r-l)   0          (r+l)/(r-l) ]
[ 0       height  0 ] * [ 0          2n/(t-b)   (t+b)/(t-b) ]
[ 0       0       1 ]   [ 0          0          1           ]
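A quick sketch of that product in plain Python (a transcription of the matrices above as written, not a verified derivation; note that many intrinsics conventions also fold in a factor of 1/2 to map NDC coordinates in [-1, 1] onto pixel ranges):

```python
def matmul3(a, b):
    # 3x3 matrix product over row-major nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def intrinsics_from_projection(width, height, n, l, r, b, t):
    # Screen-scaling matrix times the upper-left 3x3 of the perspective
    # projection matrix, exactly as in the product above.
    screen = [[width, 0, 0],
              [0, height, 0],
              [0, 0, 1]]
    proj = [[2 * n / (r - l), 0, (r + l) / (r - l)],
            [0, 2 * n / (t - b), (t + b) / (t - b)],
            [0, 0, 1]]
    return matmul3(screen, proj)
```

For a symmetric frustum (l = -r, b = -t) the off-diagonal terms vanish and the result is a diagonal matrix of focal lengths in pixels plus the homogeneous 1.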
Should be building with Python 3.7, 3.6, and 2.7.
Looks like CMake is always picking up 3.8 from somewhere else.