Hi @coolrishi2005,
The input layer of Keras's InceptionV3 doesn't carry any shape info; that is stored in the model config as batch_input_shape, so it can't be parsed correctly.
To fix it, please change the code
BackEndModel = applications.InceptionV3(include_top=False, weights='imagenet')
to
BackEndModel = applications.InceptionV3(input_shape=(299, 299, 3), include_top=False, weights='imagenet')
I tested it with the following scripts:
$ python -m mmdnn.conversion._script.convertToIR -f keras -d ./kit_imagenet -n incpetionTopModel.json -w incpetionTopModel.h5
$ python -m mmdnn.conversion._script.IRToCode --dstModelFormat cntk --IRModelPath kit_imagenet.pb --dstModelPath kit_imagenet.py --IRWeightPath kit_imagenet.npy
$ python -m mmdnn.conversion.examples.cntk.imagenet_test -n kit_imagenet.py -w kit_imagenet.npy --dump cntkmodel.dnn
CNTK model file is saved as [cntkmodel.dnn], generated by [kit_imagenet.py] and [kit_imagenet.npy].
Hope it can help you.
from mmdnn.
Hi kitstar,
Thanks for the help. The model has been converted to *.dnn, but with the following warning:
(C:\Program Files\Anaconda3\envs\py35) D:\mmdnn\conversion\cntk>python -m mmdnn.conversion.examples.cntk.imagenet_test -n cntkInceptionTopModel.py -w IRInceptionTopModel.npy --dump cntkInceptionTopModel.dnn
Selected CPU as the process wide default device.
C:\Program Files\Anaconda3\envs\py35\lib\site-packages\cntk\core.py:82: RuntimeWarning: data is not C contiguous; rearrange your data/computation to avoid costly data conversions
RuntimeWarning)
CNTK model file is saved as [cntkInceptionTopModel.dnn], generated by [cntkInceptionTopModel.py] and [IRInceptionTopModel.npy].
Is it fine?
Supposed to be fine.
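For context, that RuntimeWarning is about memory layout, not model correctness: numpy arrays that have been transposed or sliced become non-C-contiguous views, and CNTK then has to copy them into contiguous memory before use. A minimal numpy illustration (nothing model-specific assumed):

```python
import numpy as np

a = np.arange(12, dtype=np.float32).reshape(3, 4)
t = a.T                      # a transposed view: no longer C-contiguous
assert not t.flags['C_CONTIGUOUS']

c = np.ascontiguousarray(t)  # explicit contiguous copy silences the warning
assert c.flags['C_CONTIGUOUS']
assert (c == t).all()        # same values, different memory layout
```

Calling np.ascontiguousarray on the weights before feeding them to CNTK avoids the repeated internal conversion, but the converted model is the same either way.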
Great!! Thanks.
Hi kitstar,
I have created and saved the Inception model with the following parameters:
BackEndModel = applications.InceptionV3(input_shape=(150, 150, 3), include_top=False, weights='imagenet')
I am trying to load the saved model (cntkInceptionTopModel.dnn) using the CNTK C++ evaluation example provided by Microsoft (https://github.com/Microsoft/CNTK/blob/release/2.3/Examples/Evaluation/CNTKLibraryCPPEvalCPUOnlyExamples/CNTKLibraryCPPEvalCPUOnlyExamples.cpp).
For evaluation, I am providing binary RGB data as input (of size 150x150x3, i.e. 67500 bytes). Here is the code:
#include <fstream>
#include <iostream>
#include <vector>
#include <unordered_map>
#include "CNTKLibrary.h"

using namespace CNTK;

void EvaluationSingleSampleUsingDense(const wchar_t* modelFile, const DeviceDescriptor& device)
{
    printf("\n===== Evaluate single sample using dense format.\n");

    // Load the converted model.
    FunctionPtr modelFunc = Function::Load(modelFile, device);

    // Get the input variable. The model has a single input.
    Variable inputVar = modelFunc->Arguments()[0];

    // The model has a single output.
    // If the model has more than one output, use modelFunc->Outputs() to get the list of output variables.
    Variable outputVar = modelFunc->Output();
    NDShape outputShape = outputVar.Shape();
    std::vector<size_t> outputShapeDim = outputShape.Dimensions();

    // Prepare the input data.
    // Note that the model expects the CHW image layout:
    // inputVar.Shape()[0] is image width, inputVar.Shape()[1] is image height, and inputVar.Shape()[2] is channels.
    NDShape inputShape = inputVar.Shape();
    std::vector<size_t> inputShapeDim = inputShape.Dimensions();

    const size_t inputSize = 150 * 150 * 3; // 67500 bytes of raw RGB data
    std::vector<unsigned char> inputDataChar(inputSize);
    std::string filepath = "D:\\Rishi\\CNTK Evaluation\\TestCNTK\\TestCNTK\\videoRGBData.rgb";
    std::ifstream infile(filepath, std::ios::binary | std::ios::in);
    infile.read(reinterpret_cast<char*>(inputDataChar.data()), inputSize);
    infile.close();

    std::vector<float> inputData(inputVar.Shape().TotalSize());
    for (size_t i = 0; i < inputSize; i++)
        inputData[i] = static_cast<float>(inputDataChar[i]);

    // Create the input value and the input data map.
    std::cout << inputVar.Shape().AsString();
    ValuePtr inputVal = Value::CreateBatch(inputVar.Shape(), inputData, device);
    std::unordered_map<Variable, ValuePtr> inputDataMap = { { inputVar, inputVal } };

    // Create the output data map. Using nullptr as the Value indicates system-allocated memory.
    // Alternatively, create a Value object and add it to the data map.
    std::unordered_map<Variable, ValuePtr> outputDataMap = { { outputVar, nullptr } };

    // Start evaluation on the device.
    modelFunc->Evaluate(inputDataMap, outputDataMap, device);

    // Get the evaluation result as dense output.
    ValuePtr outputVal = outputDataMap[outputVar];
    std::vector<std::vector<float>> outputData;
    outputVal->CopyVariableValueTo(outputVar, outputData);
    PrintOutput<float>(outputVar.Shape().TotalSize(), outputData);
}
But during the call to modelFunc->Evaluate(inputDataMap, outputDataMap, device), I get the following runtime error:
Unhandled exception at 0x00007FFE1527FEDE (Cntk.Core-2.3d.dll) in CNTKLibraryCPPEvalCPUOnlyExamples.exe: 0xC00000FD: Stack overflow (parameters: 0x0000000000000001, 0x000000474F603FD0). occurred
Can you please help?
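Two things worth double-checking before the Evaluate call, independent of the crash: the raw buffer is interleaved HWC while CNTK expects planar CHW, and its length must equal inputVar.Shape().TotalSize(). A quick numpy sanity check (a stand-in byte buffer is used here instead of the actual videoRGBData.rgb file; sizes match the code above):

```python
import numpy as np

# Hypothetical stand-in for the 67500-byte videoRGBData.rgb file.
H, W, C = 150, 150, 3
raw = (np.arange(H * W * C) % 256).astype(np.uint8)

# 1) The buffer length must match the model input exactly.
assert raw.size == H * W * C  # 67500

# 2) The file is interleaved HWC; CNTK expects planar CHW,
#    so transpose before flattening into the input vector.
chw = raw.reshape(H, W, C).transpose(2, 0, 1)
inputData = np.ascontiguousarray(chw, dtype=np.float32).ravel()
```

If the C++ code copies the bytes in file order without this reordering, the model still runs but sees scrambled channels, so it is worth ruling out alongside the crash itself.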
Hi,
Does the converted model with the original input shape (299, 299, 3) work?
If so, resizing your image inside your application may be a workaround.
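For the resizing workaround, the 150x150 frames can be upscaled to 299x299 before evaluation. A minimal nearest-neighbour sketch in numpy (no PIL/OpenCV dependency; the HWC uint8 layout and sizes are assumptions taken from this thread):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an HWC image via index mapping."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows][:, cols]

img = np.zeros((150, 150, 3), dtype=np.uint8)   # stand-in 150x150 RGB frame
resized = resize_nearest(img, 299, 299)         # shape (299, 299, 3)
```

A proper image library (PIL's Image.resize, cv2.resize) with bilinear interpolation would give better quality; this only shows that the reshape itself is cheap to do in the application.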
Hey, no issue. I am working on it.
Thanks :)