Comments (7)
TensorFlow VGG evaluation code has some issues:
Below is the log from training with the tf.keras API (fit) for the first 50 epochs:
Epoch 1/1500
78/78 [==============================] - 3s 43ms/step - loss: 4.6548 - acc: 0.0205 - val_loss: 4.6029 - val_acc: 0.0094
Epoch 2/1500
78/78 [==============================] - 1s 15ms/step - loss: 4.4059 - acc: 0.0509 - val_loss: 4.6167 - val_acc: 0.0146
Epoch 3/1500
78/78 [==============================] - 1s 15ms/step - loss: 4.2427 - acc: 0.0768 - val_loss: 4.5593 - val_acc: 0.0271
Epoch 4/1500
78/78 [==============================] - 1s 15ms/step - loss: 4.1213 - acc: 0.0952 - val_loss: 4.5247 - val_acc: 0.0318
Epoch 5/1500
78/78 [==============================] - 1s 15ms/step - loss: 4.0066 - acc: 0.1188 - val_loss: 4.3855 - val_acc: 0.0474
Epoch 6/1500
78/78 [==============================] - 1s 16ms/step - loss: 3.9035 - acc: 0.1377 - val_loss: 4.3042 - val_acc: 0.0490
Epoch 7/1500
78/78 [==============================] - 1s 16ms/step - loss: 3.7998 - acc: 0.1591 - val_loss: 4.1838 - val_acc: 0.0719
Epoch 8/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.6984 - acc: 0.1784 - val_loss: 4.1883 - val_acc: 0.0760
Epoch 9/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.6032 - acc: 0.1982 - val_loss: 4.1549 - val_acc: 0.0812
Epoch 10/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.5146 - acc: 0.2225 - val_loss: 4.1804 - val_acc: 0.0786
Epoch 11/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.4256 - acc: 0.2387 - val_loss: 4.0551 - val_acc: 0.0891
Epoch 12/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.3483 - acc: 0.2549 - val_loss: 3.9600 - val_acc: 0.1068
Epoch 13/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.2747 - acc: 0.2725 - val_loss: 4.0166 - val_acc: 0.1052
Epoch 14/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.2003 - acc: 0.2882 - val_loss: 4.1438 - val_acc: 0.0906
Epoch 15/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.1426 - acc: 0.3012 - val_loss: 4.0813 - val_acc: 0.0938
Epoch 16/1500
78/78 [==============================] - 1s 15ms/step - loss: 3.0739 - acc: 0.3124 - val_loss: 4.1553 - val_acc: 0.0818
Epoch 17/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.9793 - acc: 0.3322 - val_loss: 4.2024 - val_acc: 0.0812
Epoch 18/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.9173 - acc: 0.3499 - val_loss: 4.2972 - val_acc: 0.0880
Epoch 19/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.8376 - acc: 0.3712 - val_loss: 4.2470 - val_acc: 0.0786
Epoch 20/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.7817 - acc: 0.3832 - val_loss: 4.0044 - val_acc: 0.1052
Epoch 21/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.7294 - acc: 0.3922 - val_loss: 3.9743 - val_acc: 0.1052
Epoch 22/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.6682 - acc: 0.4050 - val_loss: 4.4385 - val_acc: 0.0776
Epoch 23/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.5921 - acc: 0.4271 - val_loss: 4.5028 - val_acc: 0.0750
Epoch 24/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.5386 - acc: 0.4356 - val_loss: 4.0264 - val_acc: 0.1214
Epoch 25/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.4445 - acc: 0.4580 - val_loss: 4.1477 - val_acc: 0.0974
Epoch 26/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.3762 - acc: 0.4760 - val_loss: 4.4207 - val_acc: 0.1021
Epoch 27/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.2880 - acc: 0.4990 - val_loss: 4.4048 - val_acc: 0.0922
Epoch 28/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.1901 - acc: 0.5275 - val_loss: 4.3756 - val_acc: 0.0885
Epoch 29/1500
78/78 [==============================] - 1s 15ms/step - loss: 2.1030 - acc: 0.5519 - val_loss: 4.3486 - val_acc: 0.0938
Epoch 30/1500
78/78 [==============================] - 1s 16ms/step - loss: 2.0261 - acc: 0.5718 - val_loss: 4.3443 - val_acc: 0.0938
Epoch 31/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.9430 - acc: 0.5968 - val_loss: 4.2290 - val_acc: 0.0943
Epoch 32/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.8926 - acc: 0.6058 - val_loss: 4.2823 - val_acc: 0.1021
Epoch 33/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.8525 - acc: 0.6130 - val_loss: 4.1456 - val_acc: 0.1151
Epoch 34/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.7682 - acc: 0.6381 - val_loss: 4.1099 - val_acc: 0.1203
Epoch 35/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.6829 - acc: 0.6610 - val_loss: 4.0539 - val_acc: 0.1312
Epoch 36/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.5999 - acc: 0.6825 - val_loss: 4.3446 - val_acc: 0.1099
Epoch 37/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.5149 - acc: 0.7082 - val_loss: 4.3879 - val_acc: 0.1026
Epoch 38/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.4506 - acc: 0.7226 - val_loss: 4.1975 - val_acc: 0.1167
Epoch 39/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.3767 - acc: 0.7432 - val_loss: 4.1234 - val_acc: 0.1193
Epoch 40/1500
78/78 [==============================] - 1s 15ms/step - loss: 1.3143 - acc: 0.7571 - val_loss: 4.3449 - val_acc: 0.1083
Epoch 41/1500
78/78 [==============================] - 1s 15ms/step - loss: 1.2781 - acc: 0.7585 - val_loss: 4.4556 - val_acc: 0.1000
Epoch 42/1500
78/78 [==============================] - 1s 16ms/step - loss: 1.2420 - acc: 0.7735 - val_loss: 4.2794 - val_acc: 0.1219
Epoch 43/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.1939 - acc: 0.7777 - val_loss: 4.2860 - val_acc: 0.1255
Epoch 44/1500
78/78 [==============================] - 1s 18ms/step - loss: 1.1164 - acc: 0.8031 - val_loss: 4.5744 - val_acc: 0.1021
Epoch 45/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.0834 - acc: 0.8104 - val_loss: 4.4412 - val_acc: 0.0984
Epoch 46/1500
78/78 [==============================] - 1s 17ms/step - loss: 1.0332 - acc: 0.8190 - val_loss: 4.4948 - val_acc: 0.1151
Epoch 47/1500
78/78 [==============================] - 1s 17ms/step - loss: 0.9426 - acc: 0.8431 - val_loss: 4.3600 - val_acc: 0.1286
Epoch 48/1500
78/78 [==============================] - 1s 17ms/step - loss: 0.8718 - acc: 0.8648 - val_loss: 4.3338 - val_acc: 0.1271
Epoch 49/1500
78/78 [==============================] - 1s 16ms/step - loss: 0.8247 - acc: 0.8768 - val_loss: 4.4281 - val_acc: 0.1286
Epoch 50/1500
78/78 [==============================] - 1s 16ms/step - loss: 0.8040 - acc: 0.8792 - val_loss: 4.6010 - val_acc: 0.1187
Final training accuracy is 87.92% with a loss of 0.8040.
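For reference, a log in this format is typically produced by a compile/fit setup along the following lines. This is only an illustrative sketch, not the application code from the repository: the model layout, optimizer, batch size, and the use of CIFAR-100 (picked only because the initial loss of ~4.6 matches ln(100), i.e. 100 classes) are assumptions made to keep the example self-contained.

```python
# Illustrative sketch of a tf.keras fit() setup that would produce a log like the
# one above. Model layout, optimizer, batch size, and dataset are assumptions.
import tensorflow as tf

NUM_CLASSES = 100     # assumption: the initial loss of ~4.6 matches ln(100)
BATCH_SIZE = 128      # assumption; the log shows 78 steps per epoch
EPOCHS = 1500         # the log is configured for 1500 epochs; only 50 are shown

(x_train, y_train), (x_val, y_val) = tf.keras.datasets.cifar100.load_data()
x_train, x_val = x_train / 255.0, x_val / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu",
                           input_shape=(32, 32, 3)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["acc"])

history = model.fit(x_train, y_train,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(x_val, y_val))
```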
The results from manual training on the same data are shown below:
#1/1500 - Training Loss: 4.650303 - Training Accuracy: 1.893029 >> [ Accuracy: 0.989583% - Validation Loss : 4.609450 ]
#2/1500 - Training Loss: 4.405456 - Training Accuracy: 5.268429 >> [ Accuracy: 0.989583% - Validation Loss : 4.598562 ]
#3/1500 - Training Loss: 4.242596 - Training Accuracy: 8.082933 >> [ Accuracy: 1.666667% - Validation Loss : 4.592079 ]
#4/1500 - Training Loss: 4.117529 - Training Accuracy: 10.136218 >> [ Accuracy: 1.041667% - Validation Loss : 4.589799 ]
#5/1500 - Training Loss: 4.001147 - Training Accuracy: 12.069311 >> [ Accuracy: 1.250000% - Validation Loss : 4.587155 ]
#6/1500 - Training Loss: 3.889400 - Training Accuracy: 14.132612 >> [ Accuracy: 1.770833% - Validation Loss : 4.585478 ]
#7/1500 - Training Loss: 3.776264 - Training Accuracy: 16.686699 >> [ Accuracy: 1.718750% - Validation Loss : 4.586060 ]
#8/1500 - Training Loss: 3.676095 - Training Accuracy: 19.290865 >> [ Accuracy: 1.979167% - Validation Loss : 4.584372 ]
#9/1500 - Training Loss: 3.573223 - Training Accuracy: 21.424279 >> [ Accuracy: 1.770833% - Validation Loss : 4.583361 ]
#10/1500 - Training Loss: 3.481684 - Training Accuracy: 23.407452 >> [ Accuracy: 1.458333% - Validation Loss : 4.585159 ]
#11/1500 - Training Loss: 3.400188 - Training Accuracy: 25.170272 >> [ Accuracy: 1.666667% - Validation Loss : 4.584146 ]
#12/1500 - Training Loss: 3.327522 - Training Accuracy: 26.221955 >> [ Accuracy: 2.239583% - Validation Loss : 4.583616 ]
#13/1500 - Training Loss: 3.257803 - Training Accuracy: 28.195112 >> [ Accuracy: 2.031250% - Validation Loss : 4.582268 ]
#14/1500 - Training Loss: 3.193364 - Training Accuracy: 29.026442 >> [ Accuracy: 2.552083% - Validation Loss : 4.577587 ]
#15/1500 - Training Loss: 3.110100 - Training Accuracy: 31.330128 >> [ Accuracy: 2.343750% - Validation Loss : 4.579284 ]
#16/1500 - Training Loss: 3.028710 - Training Accuracy: 32.942708 >> [ Accuracy: 2.447917% - Validation Loss : 4.579006 ]
#17/1500 - Training Loss: 2.928692 - Training Accuracy: 35.797276 >> [ Accuracy: 3.020833% - Validation Loss : 4.578534 ]
#18/1500 - Training Loss: 2.850614 - Training Accuracy: 36.919071 >> [ Accuracy: 3.281250% - Validation Loss : 4.577612 ]
#19/1500 - Training Loss: 2.768969 - Training Accuracy: 39.883814 >> [ Accuracy: 2.083333% - Validation Loss : 4.577630 ]
#20/1500 - Training Loss: 2.711938 - Training Accuracy: 40.825321 >> [ Accuracy: 2.135417% - Validation Loss : 4.576182 ]
#21/1500 - Training Loss: 2.643485 - Training Accuracy: 42.377804 >> [ Accuracy: 2.708333% - Validation Loss : 4.573058 ]
#22/1500 - Training Loss: 2.577051 - Training Accuracy: 43.790064 >> [ Accuracy: 2.604167% - Validation Loss : 4.572488 ]
#23/1500 - Training Loss: 2.464163 - Training Accuracy: 46.854968 >> [ Accuracy: 2.500000% - Validation Loss : 4.571971 ]
#24/1500 - Training Loss: 2.365925 - Training Accuracy: 49.188702 >> [ Accuracy: 2.500000% - Validation Loss : 4.571719 ]
#25/1500 - Training Loss: 2.337371 - Training Accuracy: 49.358974 >> [ Accuracy: 2.239583% - Validation Loss : 4.571905 ]
#26/1500 - Training Loss: 2.308354 - Training Accuracy: 50.190304 >> [ Accuracy: 1.979167% - Validation Loss : 4.567413 ]
#27/1500 - Training Loss: 2.257775 - Training Accuracy: 50.701122 >> [ Accuracy: 1.614583% - Validation Loss : 4.566603 ]
#28/1500 - Training Loss: 2.117624 - Training Accuracy: 54.717548 >> [ Accuracy: 1.614583% - Validation Loss : 4.567722 ]
#29/1500 - Training Loss: 2.022456 - Training Accuracy: 57.181490 >> [ Accuracy: 1.510417% - Validation Loss : 4.570506 ]
#30/1500 - Training Loss: 1.981240 - Training Accuracy: 58.213141 >> [ Accuracy: 1.562500% - Validation Loss : 4.572815 ]
#31/1500 - Training Loss: 1.903474 - Training Accuracy: 60.416667 >> [ Accuracy: 1.406250% - Validation Loss : 4.574159 ]
#32/1500 - Training Loss: 1.830212 - Training Accuracy: 62.540064 >> [ Accuracy: 1.510417% - Validation Loss : 4.572250 ]
#33/1500 - Training Loss: 1.764051 - Training Accuracy: 64.132612 >> [ Accuracy: 1.510417% - Validation Loss : 4.576595 ]
#34/1500 - Training Loss: 1.697665 - Training Accuracy: 65.234375 >> [ Accuracy: 1.458333% - Validation Loss : 4.579023 ]
#35/1500 - Training Loss: 1.640916 - Training Accuracy: 66.897035 >> [ Accuracy: 1.354167% - Validation Loss : 4.578002 ]
#36/1500 - Training Loss: 1.579246 - Training Accuracy: 68.399439 >> [ Accuracy: 1.666667% - Validation Loss : 4.578105 ]
#37/1500 - Training Loss: 1.455882 - Training Accuracy: 72.365785 >> [ Accuracy: 1.875000% - Validation Loss : 4.578586 ]
#38/1500 - Training Loss: 1.357298 - Training Accuracy: 74.899840 >> [ Accuracy: 1.666667% - Validation Loss : 4.579042 ]
#39/1500 - Training Loss: 1.279279 - Training Accuracy: 76.913061 >> [ Accuracy: 1.354167% - Validation Loss : 4.581800 ]
#40/1500 - Training Loss: 1.212666 - Training Accuracy: 78.445513 >> [ Accuracy: 1.250000% - Validation Loss : 4.584252 ]
#41/1500 - Training Loss: 1.170250 - Training Accuracy: 79.547276 >> [ Accuracy: 1.093750% - Validation Loss : 4.584248 ]
#42/1500 - Training Loss: 1.149094 - Training Accuracy: 79.557292 >> [ Accuracy: 1.197917% - Validation Loss : 4.582602 ]
#43/1500 - Training Loss: 1.106521 - Training Accuracy: 80.899439 >> [ Accuracy: 1.093750% - Validation Loss : 4.582510 ]
#44/1500 - Training Loss: 1.058206 - Training Accuracy: 81.450321 >> [ Accuracy: 1.145833% - Validation Loss : 4.582915 ]
#45/1500 - Training Loss: 1.037295 - Training Accuracy: 81.430288 >> [ Accuracy: 1.093750% - Validation Loss : 4.585850 ]
#46/1500 - Training Loss: 1.025492 - Training Accuracy: 81.260016 >> [ Accuracy: 1.458333% - Validation Loss : 4.589176 ]
#47/1500 - Training Loss: 1.016521 - Training Accuracy: 80.849359 >> [ Accuracy: 1.666667% - Validation Loss : 4.586395 ]
#48/1500 - Training Loss: 1.006226 - Training Accuracy: 81.320112 >> [ Accuracy: 1.927083% - Validation Loss : 4.587876 ]
#49/1500 - Training Loss: 0.955629 - Training Accuracy: 82.211538 >> [ Accuracy: 1.979167% - Validation Loss : 4.586110 ]
#50/1500 - Training Loss: 0.916124 - Training Accuracy: 83.153045 >> [ Accuracy: 1.666667% - Validation Loss : 4.587056 ]
As can be seen, the training results are quite close. However, validation has a bug (batch normalization mode), which is being fixed; #657 fixes this.
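Regarding the batch normalization mode: if the manual run is a TensorFlow GradientTape-style loop, one common cause of a flat validation loss like the one above is not switching the model's `training` flag at evaluation time, so BatchNormalization never falls back to its accumulated moving statistics. A minimal sketch of the distinction (assumed code, not the actual loop from the issue):

```python
# Illustrative GradientTape-style loop (an assumption about what "manual training"
# means here). The key detail is the `training` flag: with training=True,
# BatchNormalization normalizes with batch statistics and updates its moving
# averages; with training=False, it uses the accumulated moving mean/variance,
# which is what validation needs.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(1e-3)
train_acc = tf.keras.metrics.SparseCategoricalAccuracy()
val_acc = tf.keras.metrics.SparseCategoricalAccuracy()

@tf.function
def train_step(model, x, y):
    with tf.GradientTape() as tape:
        preds = model(x, training=True)   # BN uses batch statistics
        loss = loss_fn(y, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    train_acc.update_state(y, preds)
    return loss

@tf.function
def eval_step(model, x, y):
    preds = model(x, training=False)      # BN uses moving mean/variance
    val_acc.update_state(y, preds)
    return loss_fn(y, preds)
```

model.fit() toggles this flag automatically, which would be consistent with the fit() run above reaching over 10% validation accuracy while the manual run stays near chance until the fix lands.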
cibot: Thank you for posting issue #200. The person in charge will reply soon.
- Wait for batch normalization
Close with #639
@jijoongmoon Can you please post the training results here from both TensorFlow and NNTrainer?
Let's keep this open till we post the final comparison.
I think it is now okay to close this.