is here to talk with you from time to time.
With this app your Xiaomi Vacuum Cleaner will be able to react to the things it encounters during its daily routine. Forget about limited voice packages: define any sound you would like to hear from your Cleaner simply by adding it to the ftp folder on your computer. It will react differently to a person, a dog, a chair, or any other recognizable object in your place. In training mode you can collect annotated data for transfer learning and then train your Xiaomi Vacuum Cleaner to recognize family members or specific objects.
- Your Xiaomi Vacuum Cleaner V1 must be rooted. All procedures below were done on V1; I'm not sure whether the same applies to later versions
- Webcam
- Local http or ftp Server (optional)
- Joystick for movement control (optional)
1. Root Xiaomi Vacuum Cleaner.

Option A (the easy way):

- **Obtain token.** The easiest way is to install `MiHome_5.6.10_vevs.apk` (from the `robot_root` folder) on your Android device. The apk is taken from this guy. Once it is installed and linked with your Vacuum, open the MiHome app and go to Access -> Access to the Device. You will see the token under your Vacuum's name.
- **Obtain a package with root access.** Copy `v11_003532.fullos_root.pkg` from the `robot_root` folder to the root folder of the sdcard on your Android device. The package is taken from here.
- **Flash the rooted firmware.** Install `XVacuum_Firmware_3.3.0.apk` (from the `robot_root` folder) on your Android device and follow these instructions. Once the firmware is installed, log in to your Vacuum via ssh (user: cleaner, password: cleaner) and change the password!!! Don't make a hacker's life too easy.

Option B (if you don't trust prebuilt packages):

- **Obtain token.** If you don't trust those evil capybaras, here is another way: install version 5.4.49 of the MiHome app, connect to your Vacuum, then find the MiHome logs somewhere on your Android device and search for the token.
- **Obtain a package with root access.** If you don't trust those evil Russians, follow the instructions from here to create the package file yourself.
- **Flash the rooted firmware.** Follow the same instructions to install the created package.
2. Install requirements.

```
pip install -r requirements.txt
```

IMPORTANT: Python 3.6 (64-bit in case of Windows), tensorflow 1.15, opencv 3.4.
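Since the pinned stack (Python 3.6, tensorflow 1.15, opencv 3.4) is old, it can save time to check the interpreter before installing. A minimal sketch (the helper name is my own, not part of the project):

```python
import sys

def check_python_version(required=(3, 6)):
    """Warn if the interpreter doesn't match the version the project pins."""
    found = sys.version_info[:2]
    if found != required:
        print("Warning: project pins Python %d.%d, found %d.%d"
              % (required + found))
    return found
```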
3. Set up sound capabilities.

- Set up a local http/ftp server on your local machine and share the `sounds` directory. You may choose other files and create other folders; the idea is that each folder corresponds to a detected object class.
- On your Vacuum, install sox:

```
sudo apt-get update
sudo apt-get install sox
sudo apt-get install libsox-fmt-mp3
sudo apt-get install wget
```

- Grant the user cleaner the rights to play sounds:

```
sudo usermod -a -G audio cleaner
sudo reboot
```

- Copy `sound_server.pl` (from the `sounds` folder) to the `/usr/bin` directory of your Vacuum. Then run it:

```
cd /usr/bin
perl sound_server.pl
```
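The folder-per-class layout means the playback side only has to map a detected class to a random file in the matching folder. A minimal sketch of that lookup (function name and extension list are my own, not the project's actual code):

```python
import os
import random

def pick_sound(sound_dir, detected_class, extensions=(".mp3", ".wav")):
    """Return a random sound file for the detected class, or None if absent."""
    class_dir = os.path.join(sound_dir, detected_class)
    if not os.path.isdir(class_dir):
        return None
    candidates = [f for f in os.listdir(class_dir)
                  if f.lower().endswith(extensions)]
    if not candidates:
        return None
    return os.path.join(class_dir, random.choice(candidates))
```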
4. Edit `config.yaml` with your local settings:

```yaml
VIDEO_SOURCE:
IP:
TOKEN:
SOUND_DIR_HTTP:
SOUND_DIR_FTP:
FAN_SPEED: # I reduce it to 1 in mi_control.py; it can be set up to 70 max
MODEL_NAME:
```

You may find models here. You may use segmentation models, but they won't show masks and they are too slow anyway.
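If you script fan speed changes yourself, it is worth clamping the value to the 1–70 range mentioned above before sending it to the robot. A hypothetical helper (`clamp_fan_speed` is not part of the project):

```python
def clamp_fan_speed(value, low=1, high=70):
    """Clamp FAN_SPEED to the range the note above says is usable (1-70)."""
    return max(low, min(high, int(value)))
```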
5. Open `start.ipynb` in Jupyter and make sure all pieces of the puzzle are in place.

- The webcam is connected and recognition is working (run `recognition_thread`).
- The Vacuum plays the sounds (run `sound_thread`), which means `sound_server` and the http/ftp server are running. Don't forget about the `SOUND_PROBABILITY` parameter from `config.yaml`: it defines how often your Vacuum will react to detected objects.
- The joystick is connected and you can control the Vacuum with it (run `moving_thread`).
- Once you are satisfied with the result, kill the kernel in Jupyter.

6. Run the following via CLI and have fun:

```
python start.py
```
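The `SOUND_PROBABILITY` gate can be as simple as a coin flip per detection; a sketch (the function is illustrative, not the project's actual code):

```python
import random

def should_play_sound(sound_probability):
    """Return True with the configured probability, throttling reactions
    so the Vacuum doesn't comment on every single detected frame."""
    return random.random() < sound_probability
```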
1. Collecting data for training.

- Create a label map with your own labels in the `~/object_detection/data` directory. See `afternoon_cleaner_label_map.pbtxt` as an example. It is important to start from id = 1.
- Set the `TRAINING_MODE` parameter in `config.yaml` to 1.
- If you have subclasses, define the `MAIN_CLASS` and `SUBCLASS` parameters in `config.yaml`. Default models from the tensorflow model zoo do not identify subclasses. You need to collect data for only one subclass per session, i.e. if you want to train the VC to recognize three persons, you need to run a separate photo session for each person. If you don't have subclasses, set these parameters to ''.
- Run `object_detection` and `moving_thread`. Catch objects that you would like to train the Vacuum Cleaner on. The `SAVE_FRAME` parameter defines how often a frame with a detected object will be saved to the dataset folder. You can gain a lot of data really fast without this limitation, but it will affect the variety of the data.
- All data collected in the `~/object_detection/datasets/my_dataset` folder have annotations in the `annotations.json` file.
- Once you have finished collecting data for all classes in your label map, run `create_ft_records.py`. It will divide your data into train and validation sets (with `val_size=0.25` by default) and create tf_records from these sets.
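For reference, a label map in the TF Object Detection API format looks like this (the names below are just examples). Ids must start at 1, since 0 is reserved for the background class:

```
item {
  id: 1
  name: 'alice'
}
item {
  id: 2
  name: 'dog_toy'
}
```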
2. Run training.

- Run `python setup.py install` from the `slim` folder.
- Open `pipeline.config` in your model folder and edit `num_classes:`, `fine_tune_checkpoint:`, `label_map_path:` and `input_path:` for the train and validation readers.
- Open `train.py` and set the `num_train_steps` parameter. Then run `train.py`.
- To see the progress in TensorBoard, open another command line and run `tensorboard --logdir=${MODEL_DIR}`, where `${MODEL_DIR}` is the directory with the train and eval datasets.
3. Run inference on trained data.

- When training is over, a number of checkpoints are saved in the `results` folder. You need to create a `frozen_inference_graph.pb` from one of them. To do this, set `PIPELINE_CONFIG_PATH`, `CHECKPOINT` and `GRAPH_DIRECTORY` in `config.yaml`, then run `export_inference_graph.py`.
- If problems with imports from the `object_detection` folder occur, install all missing dependencies with `pip install tensorflow-object-detection-api`.
- Put `my_model` under the `MODEL_NAME` parameter in `config.yaml` and edit `NUM_CLASSES` with your number of classes.
- Set `PATH_TO_LABELS` in `config.yaml` to the path to your labels.
- Run `start.py` as usual.