aiff22 / pynet-bokeh
Rendering Realistic Bokeh Images with PyNET
Home Page: http://people.ee.ethz.ch/~ihnatova/pynet-bokeh.html
License: Other
When I tried to download the pretrained model from Google Drive, it said that I needed permission to access it. I applied for access, but one day later I still could not open the link. Could you please check whether the link's sharing permissions are set correctly for people who need to download the model? Thank you very much!
Hi @aiff22,
I'm training from scratch with the EBB dataset, and the result at train_iters 40000 of level 1 looks wrong, like this:
https://ibb.co/bByGP4T
Is this normal?
Should I continue training to 100000 iterations as you recommend?
Thanks.
If yes, then how?
In detail: I want to use the pretrained model to convert a normally captured image into a bokeh image, without providing any depth map to the model.
Hello, I downloaded the EBB dataset, but I didn't see the depth maps. Do they need to be generated by ourselves?
Dear Mr. Ignatov,
I have been doing comparative experiments on bokeh rendering, and the lack of ground truth makes my work less convincing. I was wondering if you could kindly share the ground truth for images 0-10 of the test set. I promise they will be used only for research purposes.
Thank you very much for your kind consideration
Hi @aiff22,
Thanks for your great work; I am quite enlightened by your insights. I attempted to generate images like those in Figure 8 of your paper, where you disabled PyNET levels from the 7th down to the 4th by setting their outputs/activations to zero. Could you provide the code for this operation? Thank you very much.
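In the absence of the authors' code, the operation described above can be sketched independently of the repo: zero out the activations produced by the disabled levels before they are merged into the higher-resolution levels. The function name, the 1-indexed level convention, and the toy tensor shapes below are illustrative assumptions, not the actual PyNET graph.

```python
import numpy as np

def disable_levels(level_outputs, disabled):
    """Zero the outputs of the given PyNET levels (1-indexed, level 1 =
    full resolution) so they contribute nothing when merged upward.
    A NumPy sketch of the idea only; the real model operates on
    TensorFlow tensors inside the computation graph."""
    return [np.zeros_like(out) if i in disabled else out
            for i, out in enumerate(level_outputs, start=1)]

# Toy activations for levels 1..7 (shapes are illustrative).
outputs = [np.ones((1, 2 ** (8 - i), 2 ** (8 - i), 3)) for i in range(1, 8)]
# Disable levels 7 down to 4, as in the Figure 8 experiment.
masked = disable_levels(outputs, disabled={4, 5, 6, 7})
```

In the real graph the same effect can be obtained by multiplying each disabled level's output tensor by zero before it is upsampled and concatenated.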
Hi, where can I find the complete EBB dataset?
The competition site, to which I'm directed to get at least the test dataset, has not been updated since 2020, so it does not provide the complete dataset.
This is for a university project. It would be very much appreciated if this could be addressed as soon as possible.
Hi,
I want to run this model on my Android phone.
I converted it to a frozen graph and used the following code to convert it to TFLite:
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph('./models/pb/output_graph.pb', input_arrays=["Placeholder"], output_arrays=['save/restore_all'], input_shapes={"Placeholder":[1,768,512,4]})
But an error occurred: ValueError: Input 0 of node save/AssignVariableOp was passed float from generator/Variable:0 incompatible with expected resource.
Could you give me a hand?
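The error above most likely comes from pointing output_arrays at a Saver op ('save/restore_all') rather than at the tensor the generator actually produces; output_arrays must name ops in the inference path of the frozen graph. The sketch below demonstrates this on a self-contained toy graph, since the real output node name in PyNET's frozen graph is not shown in the question; the node names "Placeholder" and "output" here are illustrative assumptions.

```python
import tensorflow as tf

# Build a tiny variable-free graph standing in for the frozen generator.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 768, 512, 4], name="Placeholder")
    y = tf.identity(tf.nn.relu(x), name="output")  # stand-in for the model output

# Write the graph to a binary .pb, as a frozen graph would be.
tf.io.write_graph(g.as_graph_def(), ".", "toy_frozen.pb", as_text=False)

# Convert, pointing output_arrays at the real output op,
# not at a training/Saver op such as save/restore_all.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    "./toy_frozen.pb",
    input_arrays=["Placeholder"],
    output_arrays=["output"],
    input_shapes={"Placeholder": [1, 768, 512, 4]},
)
tflite_model = converter.convert()
```

For the actual PyNET graph, the output node name can be found by inspecting the frozen graph's ops (for example with Netron) and should replace "output" above.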
You say that this method does not require any special hardware, but the code seems to require depth information. How can the pre-trained models be tested without a depth map?
Hello, I hope you are doing well. I was attempting to reproduce the work here, but the .meta file needed to restore the original model is missing from the .zip provided with the original PyNet metadata. May I ask why this is, and will the folder be updated with all the necessary files? Also, after registering for the competition, the page says it is not accepting new participants, which seems to prevent me from acquiring the dataset. Is there a workaround for this?
Can you please provide a few more details on how to download the EBB dataset and arrange the files properly? It looks like certain parts of the EBB dataset are currently withheld on the website, so it's not totally clear which parts are required to actually train the model.
If it's possible to download a fully trained PyNet-Bokeh model, that would also be super helpful.
I set os.environ["CUDA_VISIBLE_DEVICES"] = " ", but it does not work. Why?
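Two things are likely going wrong in the line above: the variable must be set before TensorFlow is first imported anywhere in the process, and a single space is not a valid device list, so some CUDA builds ignore it. An empty string or "-1" reliably hides all GPUs. A minimal sketch:

```python
import os

# Must run before `import tensorflow` anywhere in the process.
# "" or "-1" hides every GPU; " " (a space) is not a valid device
# index and may be silently ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf  # import only after the variable is set
```

If TensorFlow has already been imported by the time this runs, the CUDA context is already initialized and changing the variable has no effect.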
Will you provide Pytorch version of the code and model in the future?
Hi,
All requests are for academic research.
Dear @aiff22,
I'm trying to convert the checkpoint to TFLite format without Instance Norm, as described in Section 4.4 of the paper, but have not been successful.
I tried to train the model on EBB but could not find the original_depth files, so I assumed, based on the paper, that I had to generate them with MegaDepth. Upon doing so, I am getting multiple errors in the code. Could you kindly explain how to generate the original_depth images?