The classification application runs on the Atlas 200 DK developer board or on an AI acceleration cloud server. It performs inference with a common classification network and outputs the top n inference results.
Before using an open source application, ensure that:
- MindSpore Studio has been installed.
- The Atlas 200 DK developer board has been connected to MindSpore Studio, the cross compiler has been installed, the SD card has been prepared, and basic information has been configured.
Before running the application, obtain the source code package and configure the environment as follows.
- Obtain the source code package.
  Download all the code in the sample-classification repository at https://github.com/Ascend/sample-classification to any directory on the Ubuntu server where MindSpore Studio is located, as the MindSpore Studio installation user, for example, /home/ascend/sample-classification.
- Log in to the Ubuntu server where MindSpore Studio is located as the MindSpore Studio installation user and set the environment variable DDK_HOME.
vim ~/.bashrc
Add the environment variables DDK_HOME and LD_LIBRARY_PATH at the end of the file:
export DDK_HOME=/home/XXX/tools/che/ddk/ddk
export LD_LIBRARY_PATH=$DDK_HOME/uihost/lib
- XXX indicates the MindSpore Studio installation user, and /home/XXX/tools indicates the default installation path of the DDK.
- If the environment variables have been added, skip this step.
Enter :wq! to save and exit.
Run the following command to make the environment variables take effect:
source ~/.bashrc
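If you prefer not to edit the file in vim, the same change can be made non-interactively. The sketch below assumes the default DDK location for an installation user named ascend; substitute your own user name.

```shell
# Append the variables to ~/.bashrc instead of editing in vim.
# The path below is the default DDK location for an installation
# user named "ascend"; adjust it to your environment.
cat >> ~/.bashrc << 'EOF'
export DDK_HOME=/home/ascend/tools/che/ddk/ddk
export LD_LIBRARY_PATH=$DDK_HOME/uihost/lib
EOF
# confirm the lines were written
grep DDK_HOME ~/.bashrc
```

Run source ~/.bashrc afterwards, as above, so the variables take effect in the current shell.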
- Go to the root directory of the classification application code as the MindSpore Studio installation user, for example, /home/ascend/sample-classification.
- Run the deployment script to prepare the project environment, including compiling and deploying the ascenddk public library and the application.
bash deploy.sh host_ip model_mode
- host_ip: for the Atlas 200 DK developer board, this parameter indicates the IP address of the developer board. For the AI acceleration cloud server, it indicates the IP address of the host.
- model_mode: deployment mode of the model file. The default value is internet.
  - local: if the Ubuntu system where MindSpore Studio is located has no network access, use the local mode. In this case, download the dependent common code library ezdvpp to the sample-classification/script directory in advance by referring to Downloading Network Models and Dependency Code Library.
  - internet: online deployment mode. If the Ubuntu system where MindSpore Studio is located has network access, use the internet mode; the dependency code library ezdvpp is downloaded online.
Example command:
bash deploy.sh 192.168.1.2 internet
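The mode choice above follows directly from whether the Ubuntu system has network access, so it can be automated. The wrapper below is a hypothetical helper, not part of the sample; it prints the deploy command for review rather than running it.

```shell
# Hypothetical helper (not part of the sample): choose the deployment
# mode automatically, then print the deploy command for review.
HOST_IP=192.168.1.2   # Atlas 200 DK default over the USB connection

# "internet" if GitHub is reachable, otherwise "local" (which requires
# ezdvpp to be present in sample-classification/script beforehand).
if timeout 2 bash -c 'echo > /dev/tcp/github.com/443' 2>/dev/null; then
    MODE=internet
else
    MODE=local
fi

echo "bash deploy.sh $HOST_IP $MODE"
```

Drop the echo to actually execute the script, and run it from the sample-classification root directory.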
- Upload the offline model file to be used and the images to be inferred to the directory of the HwHiAiUser user on the host. For details, see Downloading Network Models and Dependency Code Library.
  For example, upload the model file alexnet.om to the /home/HwHiAiUser/models directory on the host.
The image requirements are as follows:
- Format: JPG, PNG, or BMP.
- Width of the input image: an integer ranging from 16 to 4096 pixels.
- Height of the input image: an integer ranging from 16 to 4096 pixels.
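Before uploading, the format requirement can be pre-checked from the file extension. The helper below is a sketch, not part of the sample (it checks only the extension, not the image dimensions).

```shell
# Quick pre-check (not part of the sample): verify that a file has one
# of the supported extensions before uploading it to the host.
supported_format() {
    case "${1##*.}" in
        jpg|JPG|png|PNG|bmp|BMP) return 0 ;;
        *) return 1 ;;
    esac
}

supported_format example.jpg && echo "ok: example.jpg"
supported_format example.gif || echo "skip: example.gif (unsupported)"
```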
- Log in to the host over SSH as the HwHiAiUser user from the Ubuntu server where MindSpore Studio is located.
ssh HwHiAiUser@host_ip
For the Atlas 200 DK, the default value of host_ip is 192.168.1.2 (USB connection mode) or 192.168.0.2 (NIC connection mode).
For the AI acceleration cloud server, host_ip indicates the IP address of the server where MindSpore Studio is located.
- Go to the directory that contains the executable file of the classification application.
cd ~/HIAI_PROJECTS/ascend_workspace/classification/out
- Run the application.
Run the run_classification.py script to print the inference result on the execution terminal.
Example command:
python3 run_classification.py -m ~/models/vgg16.om -w 224 -h 224 -i ./example.jpg -n 10
- -m/--model_path: path of the offline model file.
- -w/--model_width: width of the model input image. The value is an integer ranging from 16 to 4096.
- -h/--model_height: height of the model input image. The value is an integer ranging from 16 to 4096.
- -i/--input_path: path of the input image. It can be a directory, in which case all images in that directory are used as input. Multiple inputs can be specified.
- -n/--top_n: number of top inference results to output.
For other parameters, run the python3 run_classification.py --help command and see the help information.
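The -n/--top_n selection amounts to sorting the classes by score, descending, and keeping the first n. The sketch below illustrates this with standard shell tools; the labels and scores are invented for illustration and are not output of the sample.

```shell
# Top-n selection sketch with made-up "label score" pairs: sort by the
# score column, descending, and keep the first n entries (here n=3).
n=3
printf 'cat 0.62\ndog 0.21\nfox 0.09\ncar 0.05\nbus 0.03\n' \
    | sort -k2 -rn | head -n "$n"
```

This prints the cat, dog, and fox lines, i.e., the three highest-scoring classes.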
Downloading Network Models
The models used in the application are converted models adapted to the Ascend 310 chip. For details about how to download these models and the original network models, see Table 1. If you have a better model solution, you are welcome to share it at https://github.com/Ascend/models.
Upload the network model files (.om files) to the directory of the HwHiAiUser user on the Host.
Table 1 Models used in the classification application
All of the following models are used in the classification application. Download each model from the listed directory of the https://github.com/Ascend/models repository; for the version description and usage details, see the README.md file in that directory. For the models marked with a conversion note, the classification application processes one picture at a time, so the value of N in Input Shape must be changed to 1 during model conversion, as shown in Figure 1.
- alexnet: computer_vision/classification/alexnet (conversion note applies)
- caffenet: computer_vision/classification/caffenet (conversion note applies)
- densenet: computer_vision/classification/densenet
- googlenet: computer_vision/classification/googlenet (conversion note applies)
- inception_v2: computer_vision/classification/inception_v2 (conversion note applies)
- inception_v3: computer_vision/classification/inception_v3
- inception_v4: computer_vision/classification/inception_v4
- mobilenet_v1: computer_vision/classification/mobilenet_v1
- mobilenet_v2: computer_vision/classification/mobilenet_v2
- resnet18: computer_vision/classification/resnet18
- resnet50: computer_vision/classification/resnet50
- resnet101: computer_vision/classification/resnet101
- resnet152: computer_vision/classification/resnet152
- vgg16: computer_vision/classification/vgg16
- vgg19: computer_vision/classification/vgg19
- squeezenet: computer_vision/classification/squeezenet (conversion note applies)
- dpn98: computer_vision/classification/dpn98
Figure 1 Configuration for the classification model during conversion
The classification application processes one picture at a time. Therefore, the value of N (batch size) must be changed to 1 during model conversion.
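If the original model is a Caffe model, this change typically corresponds to setting the batch (N) dimension of the input definition in the .prototxt file to 1. The block below is illustrative only; the channel and spatial dimensions are model-dependent.

```text
input: "data"
input_dim: 1    # N (batch size): set to 1, one picture at a time
input_dim: 3    # channels
input_dim: 224  # height (model-dependent)
input_dim: 224  # width (model-dependent)
```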
Downloading the Dependent Software Library
Download the dependent software library to the sample-classification/script directory.
Table 2 Dependent software library
- ezdvpp: encapsulates the DVPP interface and provides image and video processing capabilities, such as color gamut conversion and image/video format conversion.