nasirkhalid24 / clip-mesh
Official implementation of CLIP-Mesh: Generating textured meshes from text using pretrained image-text models
License: MIT License
Hello, the Colab links are dead; I get the message: "Sorry, the file you have requested does not exist." Could you please provide new ones? Thank you in advance.
I see you have no LICENSE file for this project. Without one, the default is all rights reserved.
I would suggest releasing the code under the GPL-3.0-or-later or Apache-2.0 license so that others are encouraged to contribute changes back to your project.
Dear Prof.,
Thanks for your excellent paper. I just wanted to know how to visualize multiple generated objects simultaneously, as in Figure 1 of your paper.
Does the diffusion prior need to come from the pretrained model in the README, or can we swap in any prior?
That is, assuming the prior works with whatever code is loading it; for instance, if I wanted to swap in a latent diffusion model other than DALLE-2-pytorch.
Hi, I noticed a typo in the BibTeX on the project page: journal is misspelled as joural. This will cause a citation problem when others copy the BibTeX from the project page to cite CLIP-Mesh in their papers. I ran into this problem in a previous paper submission :(. I hope you can fix it soon.
Hi Nasir,
Thanks for the great work! This is super creative and the results look better than DreamFields.
Quick question: how do I visualize the generated meshes on a website, like the meshes on your project website (https://github.com/NasirKhalid24/CLIP-Mesh)?
I tried to convert the .obj to .gltf but without any success. The converted texture looks very different on https://modelviewer.dev/.
Thanks!
Getting errors when introducing a new mesh for texture generation. The in-house meshes like spot.obj work fine, but meshes generated by OpenAI's Point-E do not.
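One thing worth checking before filing a bug: the bundled meshes like spot.obj are roughly centered and unit-scaled, while Point-E outputs may not be. A minimal numpy sketch that re-centers a vertex array and scales it into a unit sphere (the function name and scaling convention are my assumptions, not the repo's code):

```python
import numpy as np

def normalize_vertices(vertices):
    """Center an (N, 3) vertex array at the origin and scale it to fit
    inside a unit sphere. Convention is an assumption, not repo code."""
    v = np.asarray(vertices, dtype=np.float64)
    v = v - v.mean(axis=0)                    # move the centroid to the origin
    radius = np.linalg.norm(v, axis=1).max()  # farthest vertex from the origin
    if radius > 0:
        v = v / radius                        # max distance from origin is now 1
    return v
```

Point-E meshes can also carry degenerate faces or no UVs at all, so a mesh-cleanup pass may still be needed on top of rescaling.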
Hi, congratulations on your fantastic work! Could you please share the code for training the diffusion prior? I want to be clearer about some training details. Also, I'm curious whether the CLIP text embeddings and image embeddings are normalized when training the diffusion prior network. Thanks!
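For context on the normalization question: CLIP's own zero-shot pipeline L2-normalizes both text and image embeddings before computing cosine similarity, so the operation in question is just the following numpy sketch (whether CLIP-Mesh's prior training applies it is exactly the open question above):

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Scale each embedding vector to unit L2 norm; eps avoids division by zero."""
    x = np.asarray(x, dtype=np.float64)
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)
```

Whether the prior is trained on raw or unit-norm embeddings changes the scale of its targets, so it matters that inference matches training here.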