gltf-rs / gltf
A crate for loading glTF 2.0
License: Apache License 2.0
The library would be much more useful if it included the extensions and extras fields that are found on most objects in the specification. Until version 0.4.0 this was achieved with an untyped Map<String, Value> structure taken directly from serde_json. This was a lazy implementation and not the best option in terms of user-friendliness or efficiency.
I propose we add support for extensions and extras through Extensions and Extras traits respectively. Each trait will contain user-defined types to be (de)serialised by the library. These types could then be passed through the glTF tree structure as follows:
Edit: See updates posted below.
// User-defined data is declared with the `Extras` trait
trait Extras {
    type Accessor: Clone + Debug + Default + Deserialize + Serialize;
    ...
}

// Data targeting official extensions is declared with the `Extensions` trait
trait Extensions {
    type Accessor: Clone + Debug + Default + Deserialize + Serialize;
    ...
}

// This is the 'entry point' for the `Extras` and `Extensions` definitions
pub fn import<P, E, X>(path: P) -> Result<Root<E, X>, ImportError>
    where P: AsRef<Path>, E: Extensions, X: Extras
{
    ...
}

// The `Extensions` and `Extras` types are passed down through the glTF tree structure
struct Root<E: Extensions, X: Extras> {
    accessors: Vec<Accessor<E, X>>,
    ...
}

// Here we resolve the user-provided extensions and extras
struct Accessor<E: Extensions, X: Extras> {
    ...
    extensions: <E as Extensions>::Accessor,
    extras: <X as Extras>::Accessor,
}
The implementation might be a bit bonkers, but it does allow for user-data (de)serialisation to be a zero-cost abstraction.
As far as extensions are concerned, it will be the responsibility of the library to provide the structures necessary to represent official extensions. These structures will be defined in an extensions
module. The user should not be able to implement their own Extensions
; only those offered by the library will be supported. If an extension is required but not supported, the library will return an appropriate error during importing / exporting.
Build error:
Running `rustc --crate-name gltf src/lib.rs --crate-type lib -g -C metadata=b55367fa1a6bd86a -C extra-filename=-b55367fa1a6bd86a --out-dir /Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps --emit=dep-info,link -L dependency=/Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps --extern gl=/Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps/libgl-24b429b1b60cf506.rlib --extern serde_json=/Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps/libserde_json-3d35c676bbe42cf7.rlib --extern serde=/Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps/libserde-256e72629d0f4e21.rlib --extern serde_derive=/Users/user/Repos/daeTo3DTiles/gltf/target/debug/deps/libserde_derive-64a60f35c65de494.dylib`
error: custom derive attribute panicked
--> src/lib.in.rs:53:17
|
53 | #[derive(Debug, Deserialize, Serialize)]
| ^^^^^^^^^^^
|
= help: message: assertion failed: !p.is_null()
error: Could not compile `gltf`.
Currently, one can't own the children of nodes due to incorrect lifetimes.
fn process(node: gltf::Node) {
    struct Item {
        node: gltf::Node,
    }
    let mut stack = vec![Item { node }];
    while let Some(item) = stack.pop() {
        for child in item.node.children() {
            // Error: lifetime of `child` bound to `item.node`.
            stack.push(Item { node: child });
        }
    }
}
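One way around this, sketched here with stand-in Node and Root types rather than the crate's actual API, is to traverse by node index so that no borrow taken from a popped item outlives the loop iteration:

```rust
// Stand-in types; the real crate wraps JSON structures.
struct Node { children: Vec<usize> }
struct Root { nodes: Vec<Node> }

// Traverse by index: children are resolved through the root on each
// visit, so nothing borrowed from a stack item needs to be stored.
fn process(root: &Root, start: usize) -> Vec<usize> {
    let mut visited = Vec::new();
    let mut stack = vec![start];
    while let Some(index) = stack.pop() {
        visited.push(index);
        for &child in &root.nodes[index].children {
            stack.push(child);
        }
    }
    visited
}
```

This suggests an API fix: a child accessor whose lifetime is tied to the root rather than to the parent node.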
Currently the Material
type has Option<PbrMetallicRoughness>
instead of PbrMetallicRoughness
with default values. This is an oversight which needs to be fixed.
This should be pub(crate)
- link to documentation.
Each of those dependencies has its own license associated with it. Since you aren't including their source code, it isn't necessary to include all of them.
@warmwaffles brought up the topic of references to resources in #4. Whilst it's probably infeasible to return such references directly from import()
, it is certainly possible to write a wrapper around Root
to 'walk the tree' using iterators.
Here is an experimental wrapper interface and an example of its usage.
The question is, should the library provide this functionality and, if so, what design considerations need to be accounted for?
Generating tangents and normals has been an elephant in the room for a while. The specification says we should use the mikktspace algorithm when tangents are not provided and compute flat normals when normals are not provided. There is an important caveat, namely that the mikktspace algorithm reorders the existing geometry and flattens any original index list - that is, the output data is suitable for glDrawArrays. We have to recreate an index list ourselves if desired.
Read ahead for new ideas and updates
I propose the following ideas to deal with these concerns. (N.B. this is pseudo-code and probably won't compile.)
/// Geometry to be rendered with the given material.
#[derive(Clone, Debug)]
pub struct Primitive<'a> {
    /// The parent `Mesh` struct.
    mesh: &'a Mesh<'a>,
    /// The corresponding JSON index.
    index: usize,
    /// The corresponding JSON struct.
    json: &'a json::mesh::Primitive,
    /// New: Computed vertex normals.
    computed_normals: Option<Vec<[f32; 3]>>,
}

impl<'a> Primitive<'a> {
    /// Same as before, returning `None` if there are no tangents available.
    pub fn tangents(&self) -> Option<Tangents<'a>> {
        self.find_accessor_with_semantic(Semantic::Tangents)
    }

    /// New function: Generates flat normals, returning `None` if there are no positions available.
    pub fn normals(&mut self) -> Option<Normals<'a>> {
        if let Some(iter) = self.find_accessor_with_semantic(Semantic::Normals) {
            Some(Normals(Either::Left(iter)))
        } else {
            // Compute and cache the flat normals on first use.
            let normals = self.computed_normals
                .get_or_insert_with(|| self.compute_normals());
            Some(Normals(Either::Right(normals.iter())))
        }
    }

    /// New function: Generates tangents, returning `None` if the primitive lacks positions
    /// or texture co-ordinates - we can generate the normals if necessary.
    pub fn mikk_tspace(self) -> Option<MikkTSpace<'a>> {
        // *Magic*
    }
}

/// We might have to duplicate all the other iterators.
/// Alternatively, iterators could take an ordering argument instead (`Incremental` or `Explicit`).
pub mod mikk_tspace {
    /// New: Geometry guaranteed to have positions, normals, and tangents, but not in original order.
    #[derive(Clone, Debug)]
    pub struct MikkTSpace<'a> {
        /// The original `Primitive` struct.
        original: &'a Primitive<'a>,
        /// New ordering of vertex data.
        ordering: Vec<u32>,
        /// Newly computed tangents.
        tangents: Vec<[f32; 4]>,
        /// Newly computed index list.
        indices: Vec<u32>,
    }

    /// New: Example color iterator.
    pub struct Colors<'a> {
        original: &'a super::Colors<'a>,
        order: std::slice::Iter<'a, u32>,
    }

    impl<'a> Iterator for Colors<'a> {
        type Item = [f32; 4];
        fn next(&mut self) -> Option<Self::Item> {
            // Visit the original data in the new order.
            self.order
                .next()
                .map(|&index| self.original.clone().nth(index as usize).unwrap())
        }
    }
}
This thread exists as a place to discuss the modules, functions, data structures, and their names included in the library. In version 1.0.0
names must be agreed upon and remain immutable for backward compatibility. Until then, we should experiment as much as possible and find out what works best.
- ... semver matches that of gltf for convenience but is otherwise unstable.
- ... gltf crate.
- ... gltf 1.0.
.The new model introduced in KhronosGroup/glTF-Sample-Models#84 makes the tests fail with
---- import stdout ----
"./glTF-Sample-Models/2.0/BoxInterleaved/glTF/BoxInterleaved.gltf": Io(
Error {
repr: Os {
code: 2,
message: "No such file or directory"
}
}
)
thread 'import' panicked at 'explicit panic', tests/import_examples.rs:18
The first bufferView in this file does not have a byteOffset.
https://github.com/KhronosGroup/glTF-Sample-Models/blob/master/2.0/Lantern/glTF/Lantern.gltf#L237
I am not sure if there is a formal spec yet, but it should probably default to 0 or be optional. I'll understand if you want to wait until the spec has been finalised.
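Defaulting to 0 could be handled at the point where the buffer view is resolved; a minimal sketch with a stand-in BufferView type:

```rust
// Simplified stand-in for the JSON bufferView structure.
struct BufferView {
    byte_offset: Option<usize>,
    byte_length: usize,
}

// Resolve a view against its parent buffer, treating a missing
// byteOffset as 0.
fn resolve<'a>(view: &BufferView, buffer: &'a [u8]) -> &'a [u8] {
    let offset = view.byte_offset.unwrap_or(0);
    &buffer[offset..offset + view.byte_length]
}
```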
This is pretty much done, but needs some testing.
Another case where there's an Option
, but the spec defines a default value:
If material is not specified, then a default material is used.
The default material, used when a mesh does not specify a material, is defined to be a material with no properties specified. All the default values of material apply. Note that this material does not emit light and will be black unless some lighting is present in the scene.
(src)
See https://github.com/alteous/gltf/blob/master/src/v1/json/accessor.rs#L11 and https://github.com/alteous/gltf/blob/master/src/v1/json/accessor.rs#L24
Is there a reason why v2 doesn't have those enums?
gltf-viewer /home/alteous/gltf/glTF-Sample-Models/2.0/VC/glTF-Embedded/VC.gltf
fails with error Source(Io(Error { repr: Os { code: 36, message: "File name too long" } }))
. This is because the URI has MIME type data:image/jpeg;base64
, which isn't explicitly handled by FromPath
.
Suggested fix: Use the mime
crate or otherwise to parse the data URI and handle loading in a more robust way.
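A minimal stdlib-only sketch of the idea (the mime crate would handle edge cases more robustly); parse_data_uri is a hypothetical helper:

```rust
// Split a base64 data URI into its MIME type and payload, returning
// `None` for anything else (e.g. an ordinary file path).
fn parse_data_uri(uri: &str) -> Option<(&str, &str)> {
    let rest = uri.strip_prefix("data:")?;
    let (mime, payload) = rest.split_once(";base64,")?;
    Some((mime, payload))
}
```

An importer could then branch on the result instead of passing every URI to the filesystem.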
Hi,
Have you considered generating the Rust code from the official json schema?
I did a quick experiment with schemafy, but immediately ran into Marwes/schemafy#3 (panic on external references).
I got the idea initially from https://github.com/slack-rs/slack-rs-api/tree/master/codegen. They use a custom generator to generate bindings to the Slack API. It works pretty well there (I've worked with the generated code).
Side question: How far along is the 2.0 branch?
As the title suggests, I'm experimenting with a new strategy for validating the glTF JSON data. This new strategy uses the proc_macro
feature to generate implementations of the Validate
trait. You can browse the code in the validation
branch.
The Validate
trait ignores 'untestable' values and propagates through containers such as Vec
, Option
, and HashMap
. The real power of the trait comes with non-trivial Validate
implementations such as Index
. This new strategy uses the type system to guarantee that all Index
types are tested.
So far only index bounds checking has been implemented but the trait should be flexible enough to handle any future use case.
I found these implementation notes in the 2.0 spec draft (under Meshes):
Implementation note: When normals are not specified, implementations should calculate flat normals.
Implementation note: When tangents are not specified, implementations should calculate tangents using default MikkTSpace algorithms. For best results, the mesh triangles should also be processed using default MikkTSpace algorithms.
Implementation note: When normals and tangents are specified, implementations should compute the bitangent by taking the cross product of the normal and tangent xyz vectors and multiplying against the w component of the tangent:
bitangent = cross(normal, tangent.xyz) * tangent.w
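The third note translates directly into code; bitangent here is a hypothetical helper:

```rust
// bitangent = cross(normal, tangent.xyz) * tangent.w
fn bitangent(n: [f32; 3], t: [f32; 4]) -> [f32; 3] {
    // cross(normal, tangent.xyz)
    let c = [
        n[1] * t[2] - n[2] * t[1],
        n[2] * t[0] - n[0] * t[2],
        n[0] * t[1] - n[1] * t[0],
    ];
    // ... scaled by the tangent's w component (handedness).
    [c[0] * t[3], c[1] * t[3], c[2] * t[3]]
}
```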
This should happen when importing images and buffers with data URIs. Use the base64
crate or otherwise.
Tracking progress of new-structure
.
A large part of the wrapper interface was generated from a script and so contains minor errors here and there.
I think that, considering glTF is a non-streaming format, it's fair to implement ExactSizeIterator for the iterators that give out glTF objects.
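A sketch of what this could look like for a position iterator backed by a slice; Positions here is a simplified stand-in for the crate's accessor iterator:

```rust
// The accessor's `count` is known up front from the JSON, so an exact
// length can always be reported.
struct Positions<'a> {
    inner: std::slice::Iter<'a, [f32; 3]>,
}

impl<'a> Iterator for Positions<'a> {
    type Item = [f32; 3];
    fn next(&mut self) -> Option<Self::Item> {
        self.inner.next().copied()
    }
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.inner.size_hint()
    }
}

// `len()` comes for free once the size hint is exact.
impl<'a> ExactSizeIterator for Positions<'a> {}
```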
Issue https://github.com/alteous/gltf/issues/32 arose due to the naïve way the basic import test (tests/import_examples.rs) searches for glTF to load. The tests currently assume .gltf / .glb files of the form x/glTF/x.gltf or x/glTF-Binary/x.glb, where x is the name of the sample model. The test failed because one model had the path BoxInterleaved/glTF/Box_Interleaved.gltf, which led to a file not found error.
The suggested solution is to rewrite the tests to recursively scan the sample models directory (using walkdir or otherwise), searching for .gltf / .glb files to load. This should future-proof the basic import tests.
It is worth noting that these tests serve more as sanity checks rather than actual rigorous testing of the crate. Testing the importer and wrapper is a difficult task which I'm not sure how to approach yet.
The crate build time is getting out of hand (again), and it affects the iteration time of other projects too. People may be dissuaded from using gltf
if this continues for too long, so it's important we do something about this sooner rather than later. The trouble is, I have no idea what to do about it. So, if you have any ideas, please discuss!
https://github.com/alteous/gltf/blob/master/Cargo.toml#L26
[features]
default = []
extras = ["gltf-json/names"]
names = ["gltf-json/extras"]
Buffer data is required to be loaded so that accessors can iterate over it. Image data, however, has no real reason to be preloaded other than convenience. Some rendering APIs, notably three-rs, only allow loading images from a path, so preloading this data is a complete waste of resources.
The new Loaded
type was designed to add extra functionality to the basic 'tree-traversal' library (gltf
). This functionality may be better suited in its own crate.
Example contents:
gltf_utils::iter_accessor<T: Copy>(accessor: &gltf::Primitive, source: &gltf_utils::Source) -> Iter<T>
extern crate gltf;
extern crate gltf_utils;
use gltf_utils::iter_accessor;
for position in iter_accessor::<cgmath::Point3<f32>>(&position_accessor, &buffer_source) {
    ...
}
gltf_utils::colors(primitive: &Primitive) -> gltf_utils::Colors
gltf_utils::generate_flat_normals(primitive: &Primitive) -> Vec<(usize, [f32; 3])>
Source would be moved to gltf_utils.
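The proposed generate_flat_normals could be sketched as follows, with a simplified signature taking raw positions and indices; the real gltf_utils API would presumably read these from a Primitive:

```rust
// One unit normal per triangle, computed from the cross product of two
// edges.
fn generate_flat_normals(positions: &[[f32; 3]], indices: &[u32]) -> Vec<[f32; 3]> {
    indices
        .chunks(3)
        .map(|tri| {
            let a = positions[tri[0] as usize];
            let b = positions[tri[1] as usize];
            let c = positions[tri[2] as usize];
            // Edge vectors from vertex `a`.
            let u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
            let v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
            // Cross product gives the (unnormalised) face normal.
            let n = [
                u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0],
            ];
            let len = (n[0] * n[0] + n[1] * n[1] + n[2] * n[2]).sqrt();
            [n[0] / len, n[1] / len, n[2] / len]
        })
        .collect()
}
```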
From the 2.0 draft spec:
Matrices must be decomposable to TRS. This implies that transformation matrices cannot skew or shear.
(Source)
I think it would be good to check the matrices on load. Perhaps this could be optional, so it can be skipped e.g. on release builds.
Relevant discussion on the spec repo: KhronosGroup/glTF#892
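A sketch of such a check: the upper-left 3x3 block of a TRS matrix is a rotation times a diagonal scale, so its columns must be mutually orthogonal; has_no_shear is a hypothetical helper taking that block in column-major order:

```rust
// A matrix is TRS-decomposable only if its rotation/scale block has
// mutually orthogonal columns (no skew or shear).
fn has_no_shear(m: &[[f32; 3]; 3]) -> bool {
    let dot = |a: &[f32; 3], b: &[f32; 3]| a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    const EPS: f32 = 1e-5;
    dot(&m[0], &m[1]).abs() < EPS
        && dot(&m[1], &m[2]).abs() < EPS
        && dot(&m[0], &m[2]).abs() < EPS
}
```

Gating this behind debug builds, as suggested, would keep release imports fast.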
There are multiple methods on Texture that conflict with Info, which is confusing. The suggested action is to remove them in favour of a fn texture(&self) -> Texture<'a> method instead.
This should happen during the import process. The only valid MIME types in core glTF are image/jpeg
and image/png
, however we should try to abstract over image decoding to account for new formats, user specific formats, and image extensions.
This pattern arises when working with data with multiple representations:
let indices: Vec<u32> = match primitive.indices().unwrap() {
    Indices::U8(iter) => iter.map(|x| x as u32).collect(),
    Indices::U16(iter) => iter.map(|x| x as u32).collect(),
    Indices::U32(iter) => iter.collect(),
};
I would like to relieve the user of this burden with the help of some extra iterator adaptors. The following comes to mind:
let indices: Vec<u32> = primitive.indices().unwrap().map_u32().collect();
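A sketch of how map_u32 could be implemented, with simplified owned iterators standing in for the crate's accessor iterators:

```rust
// Simplified stand-in for the crate's index iterator enum.
enum Indices {
    U8(std::vec::IntoIter<u8>),
    U16(std::vec::IntoIter<u16>),
    U32(std::vec::IntoIter<u32>),
}

// Adaptor that widens any index component type to u32.
struct MapU32(Indices);

impl Indices {
    fn map_u32(self) -> MapU32 {
        MapU32(self)
    }
}

impl Iterator for MapU32 {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        match &mut self.0 {
            Indices::U8(iter) => iter.next().map(u32::from),
            Indices::U16(iter) => iter.next().map(u32::from),
            Indices::U32(iter) => iter.next(),
        }
    }
}
```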
The current Image abstraction is too basic. It ought to be redesigned with at least the following features:
- ... Rgb.

A nice-to-have feature would be the ability to choose the pixel type, i.e. u8 or f32 etc.
I considered briefly re-exporting DynamicImage
from the image
crate but gut feeling tells me that's not a good idea. We could design the image abstraction closely following this type, however - perhaps even providing From
/ Into
conversions.
Here's a draft of my ideas:
#[derive(Clone, Debug)]
pub enum Image<T: Copy> {
    Gray(Vec<[T; 1]>),
    GrayAlpha(Vec<[T; 2]>),
    Rgb(Vec<[T; 3]>),
    Rgba(Vec<[T; 4]>),
}
impl From<image::DynamicImage> for Image<u8> { ... }
impl Into<image::DynamicImage> for Image<u8> { ... }
On a side note there are lots of image types now, for example image::Image
, json::image::Image
, extensions::image::Image
, json::extensions::image::Image
, import::data::AsyncImage
, import::data::EncodedImage
, etc. Is this getting out of hand?!
Had two failed builds in my project (which has a dependency on the wrapper branch) and saw it also happened once here:
https://travis-ci.org/alteous/gltf/jobs/242077737
Perhaps an alternative would be to download the zip file (https://github.com/KhronosGroup/glTF-Sample-Models/archive/master.zip). The repo is ~1GB after cloning; the .zip is only 540MB and downloads in ~2min for me.
An easy task for a new contributor / Rust beginner.
fn default_scene(&self) -> Option<Scene> {
    // If `json.scene` is present, return the indexed scene, otherwise return `None`.
}
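A sketch of the expected behaviour with stand-in types (the real implementation would wrap the JSON structures):

```rust
struct Scene {
    name: String,
}

struct Root {
    // Index of the default scene, if any.
    scene: Option<usize>,
    scenes: Vec<Scene>,
}

impl Root {
    // Return the indexed scene if `scene` is present, otherwise `None`.
    fn default_scene(&self) -> Option<&Scene> {
        self.scene.map(|index| &self.scenes[index])
    }
}
```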
Currently implementations exist for loading both the 1.0 and draft 2.0 version of the glTF specification in the incoming
branch. In #8 I mention the possibility of implementing conversion functions between different glTF versions. However, after some experimentation I don't believe a simple 1.0 -> 2.0 conversion is feasible, since we can't choose the PBR values that determine the look of the models without extra input from the user. Therefore I propose the library does not attempt such conversions and instead requires the user to choose a version upfront.
The implementation will be very simple. The 2.0 glTF tree is implemented in a v2 module alongside the existing v1 module. We provide functions v1::import() and v2::import() that load only glTF version 1.0 and version 2.0 respectively. An error is returned if the user tries to call import() on glTF that doesn't match the required version.
I am tempted to deprecate the v1
module and remove it in the planned 1.0.0
release. The contents of the v2
module would then be moved to the crate root.
Rationale:
- The v1 module feels like legacy code which I am not personally interested in maintaining.
- ... the v1 module since it's far more restrictive than 2.0.

There has been a lot of hard work to get the 1.0 version up to scratch, and it would be a shame to abandon it entirely. Perhaps it could be migrated to another crate, say gltf-legacy?
Personally, I would like to see glTF 2.0 become a widely adopted 3D interchange format. I think having a solid idiomatic Rust library for glTF 2.0 will encourage game / graphics development in Rust and help support glTF 2.0 adoption, which is one of the main reasons I began working on gltf
in the first place.
If you are actively using the v1
module, please speak out now!
Many consumers require width and height of images to be specified. This should be provided by the crate importer.
In the upcoming version (0.9) some of the existing functionality, notably accessor iteration, will be moved to a new crate, namely gltf-utils
. This issue exists to discuss the design and implementation of this new crate.
- The gltf crate may stabilise more quickly.
- ... gltf 1.0.
- gltf-importer may stabilise with gltf.
- A Source trait for sourcing pre-existing buffer data. This must be implemented by the user.
- An Iterator that visits the components of an Accessor.
- ... Primitive.
.The gltf-json
crate will continue to exist in order to reduce compile times. Its semver post gltf 1.0
will match that of gltf
although the crate is not intended to stabilise. This should be mentioned in the documentation.
Tracking progress of mint
integration.
The final design of the Source
trait needs to be agreed upon prior to the planned 1.0 release.
The final design should handle at least:
- ... (Source::source_external_data)

Edit: https://github.com/alteous/gltf/issues/31 discusses whether a Source implementation should be mutable, which is an interesting consideration.
Now that the implementation of the glTF 1.0 specification is complete, the next steps are to consolidate the library to a production standard. 1.0.0
will be the version where this condition is met.
Below is a non-exhaustive list of things I would expect from a 1.0.0 release:
- ... 1.x.x releases
- KHR_binary_glTF support
- Validate implementations for all 2.0 data structures

The current draft of the 2.0 specification is mostly a refinement of the current 1.0 specification. One of the major changes is that shaders are removed from materials and replaced with Physically-Based Rendering (PBR) values. Once the 2.0 specification is released, the question is how the library should continue to support 1.0, if at all? I'm considering one of three options:
- ... the version field of the asset object to return either a Gltf::V1 or Gltf::V2 and let the user deal with it

The Validate
trait is used for checking the correctness of glTF JSON. The wrapper library assumes validation has succeeded as an invariant. This ensures classic bugs such as out-of-range indices are checked upfront on behalf of the user. The final design of the Validate
trait needs to be agreed upon prior to the 1.0 release.
Final features:
Index range checks for all top-level glTF objects.
Index range checks for animation samplers.
Semantic name correctness.
Restrictions on parameter values.
Ability to opt-out of validation.
Ability to ignore 'less severe' validation errors - implemented with minimal / complete flavours.
Ability to terminate validation early (more important; see https://github.com/alteous/gltf/pull/42#issuecomment-312511952).
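A self-contained sketch of the core idea (types simplified from those in the validation branch): Validate propagates through containers, and the Index implementation performs the actual range check.

```rust
// Simplified stand-in for the JSON root.
struct Root {
    accessor_count: usize,
}

trait Validate {
    fn validate(&self, root: &Root, errors: &mut Vec<String>);
}

// Simplified index-into-root type.
struct Index(usize);

impl Validate for Index {
    fn validate(&self, root: &Root, errors: &mut Vec<String>) {
        if self.0 >= root.accessor_count {
            errors.push(format!("index {} out of range", self.0));
        }
    }
}

// Propagation: containers validate each of their items.
impl<T: Validate> Validate for Option<T> {
    fn validate(&self, root: &Root, errors: &mut Vec<String>) {
        if let Some(value) = self {
            value.validate(root, errors);
        }
    }
}

impl<T: Validate> Validate for Vec<T> {
    fn validate(&self, root: &Root, errors: &mut Vec<String>) {
        for value in self {
            value.validate(root, errors);
        }
    }
}
```

Early termination could be added by having validate return a control-flow value instead of pushing into a Vec.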
@bwasty quite rightly reported that tests/import_v2.rs
is failing on the latest 2.0 sample models.
On a side note, it would be worth rewriting the test to read the directory entries.
Right now something like Indices is defined like this:
pub enum Indices<'a> {
    U8(Iter<'a, u8>),
    U16(Iter<'a, u16>),
    U32(Iter<'a, u32>),
}
And because it exposes Iter, we can use .len()
easily, but that is not possible for Positions
- it hides the Iter field.
I'm interested in working to get this lib up to glTF 1.0 compliance.
Definitely need to add more tests to ensure that it is 1.0 compliant.
How are buffer binary files going to be handled?
Should this library load them up into memory? Lazy load?
Buffers can be base64-encoded URIs, and I don't know whether those are decoded.
There is a spec for 2.0 that is currently under way, and it introduces a .glb format which, from the looks of it, is just the binary glTF extension.
In 2.0 it looks like they are going to pull the shader code stuff out of the spec and make it an extension instead. https://github.com/KhronosGroup/glTF/tree/2.0/extensions/Khronos/KHR_technique_webgl
Reproduce with:
fn main() {
    let gltf = gltf::Import::from_path("glTF-Sample-Models/2.0/Avocado/glTF/Avocado.gltf").sync().unwrap();
    let mesh = gltf.meshes().nth(0).unwrap();
    let primitive = mesh.primitives().nth(0).unwrap();
    let mut tex_coords = vec![];
    match primitive.tex_coords(0).unwrap() {
        gltf::mesh::TexCoords::U8(iter) => {
            for x in iter {
                tex_coords.push([x[0] as f32 / 255.0, x[1] as f32 / 255.0]);
            }
        },
        gltf::mesh::TexCoords::U16(iter) => {
            for x in iter {
                tex_coords.push([x[0] as f32 / 65535.0, x[1] as f32 / 65535.0]);
            }
        },
        gltf::mesh::TexCoords::F32(iter) => {
            for x in iter {
                tex_coords.push([x[0], x[1]]);
            }
        },
    }
    println!("{:?}", tex_coords);
}
Note that all the x[1]
values are negative but otherwise valid in unit range.
https://github.com/alteous/gltf/blob/wrapper/src/import.rs#L79
Is there a reason to have &mut self instead of &self? Logically, we are not changing the Source when reading from it. If an implementer needs it to be mutable, they could use RefCell, Mutex, or similar inside their implementation. This would make user code working with functions that take a Source cleaner, as the locks in non-concurrent Sources would be hidden inside the implementation instead of being propagated to the user code.
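A sketch of the &self variant, where an implementation that needs mutable state (here, a hypothetical call counter) hides it behind RefCell:

```rust
use std::cell::RefCell;

// The trait only requires shared access.
trait Source {
    fn read(&self, uri: &str) -> Vec<u8>;
}

// An implementation with internal mutable state.
struct CountingSource {
    calls: RefCell<u32>,
}

impl Source for CountingSource {
    fn read(&self, _uri: &str) -> Vec<u8> {
        // Interior mutability, invisible to callers.
        *self.calls.borrow_mut() += 1;
        Vec::new()
    }
}
```

Callers never see the lock or the cell; a concurrent implementation could swap RefCell for Mutex without changing the trait.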