Comments (15)
Hi @okasanasan!
tools/classify.py
is a simple example/demonstrator. It does not use the GPU at all, does not run inference in batches, and it also has to load the checkpoint and initialize the graph before it can classify an image.
So, the answer is yes, it's possible to speed it up.
Hi @gkrasin!
Thanks for your answer!
Could you give me a tip on how to force it to use the GPU?
@okasanasan this is covered very well in the Using GPUs docs.
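As a quick sanity-check of GPU placement, here is a minimal TF 1.x session-configuration sketch (not from the thread; the constants are placeholders) that pins ops to the first GPU and logs where each op actually runs:

```python
import tensorflow as tf

# Place these ops on the first GPU explicitly.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0])
    b = a * 2.0

# log_device_placement prints each op's device assignment;
# allow_soft_placement falls back to CPU if no GPU is present.
config = tf.ConfigProto(log_device_placement=True,
                        allow_soft_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(b))
```

If the log shows ops mapped to `/cpu:0` despite this, the install is likely the CPU-only TensorFlow package.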
I also suspect that currently the majority of the time goes into loading the checkpoint. If you're doing a bulk classification, getting rid of this overhead might be even more important than running on GPU.
Thank you @gkrasin.
I changed the script a little bit - added a possibility to put multiple paths at once and process them in a loop.
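That looping change can be sketched in plain Python (the `batched` helper and the file names are illustrative, not taken from the modified script): the point is that the checkpoint is loaded once, and only the cheap per-image work happens inside the loop.

```python
def batched(paths, batch_size):
    """Yield successive batches of file paths, so a classifier
    session that was set up once can process many images per run."""
    for i in range(0, len(paths), batch_size):
        yield paths[i:i + batch_size]

paths = ['a.jpg', 'b.jpg', 'c.jpg', 'd.jpg', 'e.jpg']
batches = list(batched(paths, 2))
# batches == [['a.jpg', 'b.jpg'], ['c.jpg', 'd.jpg'], ['e.jpg']]
```

Each batch would then be fed to the already-initialized session, amortizing the load/initialize cost over the whole run.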
Hi @okasanasan, I'm working on this right now, and was curious about the speed improvement you achieved.
@gkrasin In order to avoid the I/O latency of TensorFlow-Slim loading the architecture on each run, I first dumped the architecture initialized by slim into a protobuf file, say 'init_graph.pb'. I then used freeze_graph.py to combine the architecture from 'init_graph.pb' with the trained weights from data/model.ckpt and dumped them into a second .pb file, 'graph.pb'. I then loaded graph.pb in a session and ran the session, feeding in a single pre-processed image. The resulting output, however, seems binary.
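For readers following along, the freezing step described above corresponds roughly to an invocation like this (the file names are the ones from the description; the output node name is an assumption based on the tensor fetched later in the thread):

```shell
# freeze_graph.py ships with TensorFlow; it folds checkpoint
# variables into graph constants and writes a single .pb file.
python freeze_graph.py \
  --input_graph=init_graph.pb \
  --input_checkpoint=data/model.ckpt \
  --output_graph=graph.pb \
  --output_node_names=multi_predictions
```

Everything not reachable from the listed output nodes is pruned from the frozen graph, so the output node names must be correct.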
@AdityaChaganti how do I reproduce your issue? Any code I can patch and run? Also, based on the description, I am unsure what you mean by the resulting output being binary.
@gkrasin I can email you my code with steps to reproduce it. Would that work?
And sorry for the ambiguity! I meant that label classifications coming out of the graph (The output of the final sigmoid layer) are made with a confidence of either 0, or 1.
> I can email you my code with steps to reproduce it. Would that work?
@AdityaChaganti probably not, as that will move the whole discussion into a private area, and therefore will not help others with the same issue in the future.
> And sorry for the ambiguity! I meant that label classifications coming out of the graph (the output of the final sigmoid layer) are made with a confidence of either 0 or 1.
That's an interesting glitch! I don't have any guesses at the moment.
@gkrasin Ah, of course. I'll post the code here in a while. Thank you!
Hi @gkrasin,
Run the attached script from the root of the dataset directory, specifying the path to the image you're using. Any JPEG image should work, but I've also attached the specific image I was using for testing. My output is as follows:
@AdityaChaganti I have taken a quick look, and I find the following snippet very confusing. Can you please clarify why it is implemented like that?
image = tf.gfile.FastGFile(FLAGS.image_path, 'rb').read()
image = PreprocessImage(image)
image = image.eval(session=sess) # Returns a numpy array
image = image[0] # Extracts Row*Column*Channel
print(image.shape)
image = Image.fromarray(image, 'RGB')
image.save('./tmp/image.jpg')
with tf.gfile.FastGFile('./tmp/image.jpg', 'r') as f:
image = f.read()
predictions = sess.graph.get_tensor_by_name('multi_predictions:0')
predictions = np.squeeze(sess.run(predictions, {'input:0': image}))
I am also confused by the fact that 'input:0' is fed an already preprocessed, JPEG-encoded image. Wouldn't it be preprocessed by the graph again?
@gkrasin You're right, it would. My bad. I missed the fact that the preprocessing is already fed into the graph while saving it. Accounting for that change, my modified script is attached. The output is still similar:
freeze.zip
My recommendation would be to compare the layer activations of the frozen vs. the normal graph and find the first layer with a material difference.
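Once the activations have been fetched from both graphs (e.g. via `sess.run` on each intermediate tensor), that comparison can be sketched in pure NumPy; the layer names and arrays below are toy stand-ins, not real activations:

```python
import numpy as np

def first_divergent_layer(acts_a, acts_b, names, tol=1e-4):
    """Given activations from the frozen and the normal graph
    (lists of arrays in layer order), return the name of the
    first layer whose outputs differ materially, or None."""
    for name, a, b in zip(names, acts_a, acts_b):
        if not np.allclose(a, b, atol=tol):
            return name
    return None

# Toy stand-ins for real layer activations.
frozen = [np.zeros(3), np.ones(3)]
normal = [np.zeros(3), np.full(3, 0.2)]
print(first_divergent_layer(frozen, normal, ['conv1', 'logits']))  # logits
```

Walking the layers in order localizes the bug: everything before the first divergent layer was frozen correctly, so the problem lies in that layer's weights or inputs.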