yanght321 / detailed3dface
License: MIT License
It seems the pix2pixHD model doesn't handle the CPU-only case. It can probably be fixed by adding input_concat = input_concat.to(torch.device("cpu")) in the code.
class Pix2PixHDModel(BaseModel):
    def name(self):
        return 'Pix2PixHDModel'

    def initialize(self, opt):
        BaseModel.initialize(self, opt)
        torch.backends.cudnn.benchmark = True
        input_nc = opt.input_nc

        ##### define networks
        # Generator network
        netG_input_nc = input_nc
        self.netG = networks.define_G(netG_input_nc, 1, 64, 4, 9, 'instance', gpu_ids=self.gpu_ids)

        # load networks
        self.load_network(self.netG, 'G', '')

    def encode_input(self, label_map, inst_map=None, real_image=None, feat_map=None):
        input_label = label_map.data.cuda()
        input_label = Variable(input_label)
        return input_label, inst_map, real_image, feat_map

    def inference(self, label, inst, image=None):
        # Encode Inputs
        image = Variable(image) if image is not None else None
        input_label, inst_map, real_image, _ = self.encode_input(Variable(label), Variable(inst), image)
        input_concat = input_label
        if torch.__version__.startswith('0.4'):
            with torch.no_grad():
                fake_image = self.netG.forward(input_concat)
        else:
            if not self.gpu_ids:
                input_concat = input_concat.to(torch.device("cpu"))
            fake_image = self.netG.forward(input_concat)
        return fake_image
Thank you very much for sharing your great work on face reconstruction.
Your method can both reconstruct the facial pose of the input image and generate other riggable blendshapes.
Can I ask how the riggable blendshapes are achieved?
Does it mean that after the fitting process, once the identity weights over the 50 different identities have been determined, you fix those identity weights for this specific person and then set each of this person's blendshape weights to 1 in turn? For example, setting blendshape A's weight to one and the other 51 blendshapes' weights to zero yields blendshape A's shape?
I am very grateful for your time.
Really looking forward to your reply.
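Not the authors, but that reading of "riggable" can be sketched in a few lines of NumPy. The core tensor layout (n_id, n_exp, n_verts), matching the (50, 52, 78951) core mentioned elsewhere in this thread, is an assumption; this is illustrative, not the authors' code:

```python
import numpy as np

# Illustrative sketch (not the authors' code): fixing fitted identity
# weights against a bilinear core of shape (n_id, n_exp, n_verts)
# turns each expression axis into one person-specific blendshape.
def rig_blendshapes(core, w_id):
    # contract the identity axis: (n_id, n_exp, V) x (n_id,) -> (n_exp, V)
    return np.einsum("iev,i->ev", core, w_id)

def select_blendshape(person_shapes, k):
    # setting blendshape k's weight to 1 and the rest to 0 simply
    # picks out row k of the person-specific shapes
    w_exp = np.zeros(person_shapes.shape[0])
    w_exp[k] = 1.0
    return w_exp @ person_shapes  # identical to person_shapes[k]
```

Under this reading, the shapes are "riggable" because, once identity is fixed, any weighted blend of the per-person expression shapes is a valid mesh.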
Hello, I tested the code you provided, but it seems to only generate the base 3D face model described in the FaceScape paper; it does not generate the detailed 3D face model shown in the paper. Has the code for generating the detailed 3D face model not been released? Looking forward to your reply.
Hello, I'm running your Python file as per the instructions on PyTorch 1.12.1 CU102. How can I resolve this, please?
When the program reaches this line in render.py,
srf = pygame.display.set_mode(viewport, pygame.OPENGL | pygame.DOUBLEBUF)
I hit the error "pygame.error: No available video device". I searched for solutions such as os.environ["SDL_VIDEODRIVER"] = "dummy", but that raised another error, "pygame.error: OpenGL not available".
I installed all the packages from requirements.txt. Could anyone give advice on how to fix this problem?
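Not the maintainer, but "No available video device" usually means there is no X display, and the SDL "dummy" driver cannot create OpenGL contexts, which explains the second error. The usual headless fix for pygame.OPENGL windows is a virtual X server (Xvfb) rather than the dummy driver. A small sketch that encodes that decision (main.py is a placeholder for the actual entry point):

```python
import os

def headless_render_hint(script="main.py"):
    """Suggest how to launch a pygame.OPENGL script on this machine.

    The SDL 'dummy' video driver has no OpenGL support, so when no
    display is present the common workaround is xvfb-run, which starts
    a throwaway X server with an OpenGL-capable virtual screen."""
    if os.environ.get("DISPLAY"):
        return "python " + script
    return 'xvfb-run -a -s "-screen 0 1024x768x24" python ' + script

print(headless_render_hint())
```

On Debian/Ubuntu, Xvfb comes from the xvfb package.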
Hi, thanks for your wonderful work!
I have some questions when running main.py.
When I load "front_texcoords.pkl", pickle returns an error "ValueError: could not convert string to float"
When I load "front_faces.pkl", pickle returns an error "_pickle.UnpicklingError: the STRING opcode argument must be quoted"
The other .pkl files are ok.
I am using Ubuntu 18.04, Python 3.6 and PyTorch 1.6.0
Thank you very much!
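Not the authors, but "the STRING opcode argument must be quoted" is a classic symptom of a protocol-0 (ASCII) pickle whose LF newlines were rewritten as CRLF, e.g. by git's autocrlf or a Windows download, and that can also surface as a float-parsing error. A sketch of a loader that normalizes line endings before unpickling (the path argument is a placeholder):

```python
import pickle

def load_pkl_normalized(path):
    """Load a protocol-0 pickle whose LF newlines may have become CRLF.

    Protocol-0 pickles escape any carriage return inside string
    payloads, so rewriting raw CRLF byte pairs back to LF is safe
    for them."""
    with open(path, "rb") as f:
        raw = f.read()
    return pickle.loads(raw.replace(b"\r\n", b"\n"))
```

If this loads the two failing files, converting them once on disk (or setting the files to binary in .gitattributes) is the permanent fix.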
Hi,
Thank you for your awesome paper and code.
I have a few questions related to base model fitting.
Photometric loss. Looking into your code (the bilinear_model), I found that the base model fitting part is not the same as described in your paper, which is supposed to combine pixel-level consistency and regularizations.
Landmark loss. Another part is the landmark detector you mentioned. I wonder how much the performance will degrade using dlib's algorithm. I've tried dlib's and FAN-2D; both are not very accurate and thus affect the model fitting.
I'm thinking of using the FAN detector and differentiable rendering (PyTorch3D) to do base model fitting. I've tried model fitting with the Basel Face Model using PyTorch3D, but it's very difficult to control the lambda_{id, exp, alb} weights of the regularization terms.
The bilinear model looks better than BFM. I would like to know more details about the model fitting.
Alternatively, if I use an inaccurate base model fitting, do I need to re-prepare the deforming maps and other data to re-train pix2pixHD?
Thanks.
Huyi
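For others reading along, the combined objective being discussed can be sketched as follows. All names, the exact term structure, and the lambda values are illustrative placeholders, not the authors' formulation:

```python
import numpy as np

# Illustrative sketch of a base-model fitting energy combining
# pixel-level consistency, a landmark term, and regularization on
# identity/expression weights. Lambdas are placeholders.
def fitting_energy(rendered, target, proj_lmk, gt_lmk, w_id, w_exp,
                   lam_lmk=1.0, lam_id=1e-3, lam_exp=1e-3):
    e_photo = np.mean((rendered - target) ** 2)                  # photometric loss
    e_lmk = np.mean(np.sum((proj_lmk - gt_lmk) ** 2, axis=-1))   # landmark loss
    e_reg = lam_id * np.sum(w_id ** 2) + lam_exp * np.sum(w_exp ** 2)
    return e_photo + lam_lmk * e_lmk + e_reg
```

Tuning the lambdas is indeed the hard part: too little regularization lets the identity weights absorb expression (and vice versa), too much washes out the likeness.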
It seems that the bilinear_model.fit_image operation takes a lot of time on CPU. Is there any way to run this on GPU or otherwise speed it up?
Thanks for your great work first! I notice that you have made your bilinear model v1.6 public. However, I did not find any file that corresponds to the 'core_847_50_52.npy' stated in your instructions. Can you please tell me how to use your bilinear model v1.6 with this code?
I can see the correct result in pygame (the face with the model), but during the save step I just get a black image. Thanks for your contribution; I hope you can give me some advice.
In renderer.py :
pygame.display.flip()
pygame.image.save(srf, out_path)
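Not the maintainer, but a pygame.OPENGL display surface often saves as black because pygame.image.save reads the SDL surface, not the GL framebuffer. A common workaround is to read the pixels back through OpenGL first; a hedged sketch (assumes PyOpenGL and a live GL context, so imports are deferred into the function):

```python
def save_gl_frame(width, height, out_path):
    """Read the current OpenGL framebuffer and save it via pygame.

    Assumes a pygame.OPENGL window is current; call this right after
    drawing the frame."""
    import pygame
    from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

    data = glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE)
    # GL rows are bottom-up; flipped=True reorders them for pygame
    surf = pygame.image.fromstring(data, (width, height), "RGB", True)
    pygame.image.save(surf, out_path)
```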
Hello! Thanks for sharing your great work! Could you please provide a download link for bilinear model ver 1.3? This would make it more convenient for us to evaluate this work. Since the bilinear model ver 1.6 already has an external link, I put forward this proposal :)
When using the BFM or FaceWarehouse face models, there are usually a ready-made mean shape, identity blendshapes (number of identity bases × vertices), and expression blendshapes (number of expression bases × vertices). How can I extract, say, 20 identity blendshapes from the (50, 52, 78951) core? Can I just run PCA on [:50, 0, :78951] to get the identity basis? As for the mean shape, is it the first component of that PCA? Thanks for your foundational work!
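One plausible reading of that idea, offered as an assumption rather than the authors' recipe: fix the expression axis at the neutral slice, treat the 50 identity shapes as samples, and run PCA over them. Note the mean shape would then be the PCA mean (the average over identities), not the first principal component:

```python
import numpy as np

# Sketch (an assumption, not the authors' recipe): derive a mean shape
# and k identity bases from a bilinear core of shape (n_id, n_exp, V),
# e.g. (50, 52, 78951), by fixing the expression axis at a neutral
# slice (index 0 here, itself an assumption) and running PCA.
def identity_basis_from_core(core, k=20, neutral_idx=0):
    id_shapes = core[:, neutral_idx, :]       # (n_id, V)
    mean_shape = id_shapes.mean(axis=0)       # the PCA mean, not PC 1
    centered = id_shapes - mean_shape
    # SVD of the centered data: rows of vt are the principal directions
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_shape, vt[:k], s[:k]
```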
Does anyone know how to use Python to render the reconstructed mesh with the generated texture? @yanght321
Hi,
I want to test some other 3d face reconstruction methods on the FaceScape dataset.
As the predicted meshes from different methods are in different coordinate systems, how can I align the predicted meshes to the ground-truth meshes?
Can you share the code to calculate the point-to-plane reconstruction error mentioned in the paper?
Also, can you share some tips on how to draw an error heatmap like the one below? I find there is not much reference code.
Any help would be appreciated! 😘
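Not affiliated with the authors, but the point-to-plane error is commonly computed, after rigid alignment (e.g. ICP), as the distance from each predicted vertex to the tangent plane of its nearest ground-truth point. A brute-force NumPy sketch, fine as a sanity check but too slow for dense meshes (a k-d tree would be the practical choice):

```python
import numpy as np

def point_to_plane_error(pred_pts, gt_pts, gt_normals):
    """pred_pts: (N, 3); gt_pts, gt_normals: (M, 3), unit normals.

    For each predicted point, find the nearest ground-truth point and
    measure the offset along that point's normal."""
    # brute-force nearest neighbour: (N, M) squared distances
    d2 = ((pred_pts[:, None, :] - gt_pts[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    diff = pred_pts - gt_pts[idx]
    return np.abs((diff * gt_normals[idx]).sum(axis=1))
```

The heatmap is then just these per-vertex errors mapped through a colormap (e.g. matplotlib's cm) onto the predicted mesh's vertex colors in any mesh viewer.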
Hi, when I run main.py there is a problem: "no such file or directory: './predef/core_847_50_52.npy'". How can I get this file? Can anybody share it?
Running the demo code seems to require the v1.3 model, which has since been updated to v1.6. How can I get the v1.3 model? The new version of the model doesn't seem to work with the demo code anymore.
As the title suggests.
For example, what is the meaning of the data in front_texcoords.pkl? Are these .pkl files derived from some kind of 3D model? Is there documentation to consult?
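Not documentation, but a quick way to answer this yourself is to inspect each file. A small sketch (the file names come from this repo's predef folder; whether they hold arrays or lists is a guess):

```python
import pickle

def describe_pkl(path):
    """Return a one-line summary of whatever a .pkl file contains."""
    with open(path, "rb") as f:
        obj = pickle.load(f)
    shape = getattr(obj, "shape", None)          # numpy arrays expose .shape
    size = shape if shape is not None else len(obj)
    return "%s: %s, %s" % (path, type(obj).__name__, size)
```

From the names alone, front_texcoords.pkl most likely holds per-vertex UV texture coordinates and front_faces.pkl the triangle index list of the template head mesh, but that is inferred from the naming, not from documentation.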