
Towards Complete and Accurate Iris Segmentation Using Deep Multi-Task Attention Network for Non-Cooperative Iris Recognition

Created by Caiyong Wang @ Institute of Automation, Chinese Academy of Sciences (CASIA)

Call for participation in the IJCB 2021 official competition, the NIR Iris Challenge Evaluation in Non-cooperative Environments: Segmentation and Localization (NIR-ISL 2021). Welcome to visit our competition website: https://sites.google.com/view/nir-isl2021/home. Competition dates: February 15 - April 30, 2021. [Registration closed!]

We propose a high-efficiency, deep-learning-based iris segmentation approach named IrisParseNet. The proposed approach first applies a multi-task attention network to simultaneously predict the iris mask, the pupil mask, and the iris outer boundary. Then, based on the predicted pupil mask and iris outer boundary, parameterized inner and outer iris boundaries are obtained by a simple yet effective post-processing method. Overall, the proposed approach is a complete iris segmentation solution, i.e., the iris mask and the parameterized inner and outer iris boundaries are obtained jointly, which facilitates the subsequent iris normalization as well as iris feature extraction and matching. Hence, the proposed approach can be used as a general drop-in replacement in an iris recognition pipeline. To help reproduce our method, we have made the models, manual annotations, and evaluation protocol codes freely available to the community.

In the above figure, shown from left to right: (a) original iris images from the CASIA.v4-distance (top two), MICHE-I (middle three), and UBIRIS.v2 (bottom three) iris databases; (b) ground truth iris mask (blue) and inner (green) and outer (red) iris boundaries; (c) segmentation results of IrisParseNet (false positive pixels in green, false negative pixels in red, true positive pixels in blue); (d) iris outer boundary predicted by IrisParseNet; (e) pupil mask predicted by IrisParseNet; and (f) localization results of IrisParseNet after post-processing (inner boundary in green, outer boundary in red).
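As a rough illustration of the post-processing idea (recovering parameterized boundaries from the predicted maps), the sketch below fits enclosing circles to a predicted pupil mask and a predicted iris outer boundary map with OpenCV. It is only a minimal sketch under assumed file names, thresholds, and circular boundaries, not the exact post-processing used in the paper.

# Minimal sketch of the post-processing idea: fit a circle to the predicted
# pupil mask and to the predicted iris outer boundary map. This is NOT the
# paper's exact post-processing; file names and thresholds are placeholders.
# Requires OpenCV (4.x) and NumPy.
import cv2
import numpy as np

def fit_circle_from_map(prob_map, threshold=128):
    """Fit a circle (cx, cy, r) to the largest connected region of a mask/edge map."""
    binary = (prob_map >= threshold).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(largest)
    return cx, cy, r

pupil_mask = cv2.imread("pupil_mask.png", cv2.IMREAD_GRAYSCALE)            # predicted pupil mask
outer_edge = cv2.imread("iris_outer_boundary.png", cv2.IMREAD_GRAYSCALE)   # predicted iris outer boundary
inner = fit_circle_from_map(pupil_mask)   # parameterized inner (pupil) boundary
outer = fit_circle_from_map(outer_edge)   # parameterized outer (iris) boundary
print(inner, outer)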

Citation

If you use this model or the corresponding code/data for your research, please cite our paper.

@article{wang2020iris,
  title={Towards Complete and Accurate Iris Segmentation Using Deep Multi-Task Attention Network for Non-Cooperative Iris Recognition}, 
  author={Wang, Caiyong and Muhammad, Jawad and Wang, Yunlong and He, Zhaofeng and Sun, Zhenan},
  journal={IEEE Transactions on Information Forensics and Security}, 
  year={2020},
  volume={15},
  pages={2944-2959},
  publisher={IEEE}
}

Anyone is permitted to use, distribute, and change this program for any non-commercial usage. However, each such usage and/or publication must include the above citation of this paper. For any commercial usage, please send an email to [email protected] or [email protected].

Prerequisites

  • Linux
  • Python 2.7 (Python 3 is not supported!)
  • CPU or NVIDIA GPU + CUDA cuDNN
  • Caffe
  • MATLAB R2016a
  • Halcon 10.0/13.0 or above

Getting Started

Installing Caffe

We have provided the complete Caffe code. Just install it following the official guide. You can also refer to our extended Caffe version.
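Once Caffe is built, a quick sanity check that the Python bindings work (assuming <caffe_root>/python is on your PYTHONPATH) is simply:

# Sanity check that pycaffe is importable (assumes <caffe_root>/python is on PYTHONPATH).
import caffe
print("pycaffe imported OK, version: " + getattr(caffe, "__version__", "unknown"))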

We have also built an out-of-the-box Caffe Docker image (https://www.codewithgpu.com/i/xiamenwcy/IrisParseNet/casia_caffe_tifs), which is deployed on the AutoDL cloud server. Feel free to use it; there is nothing stopping you from using IrisParseNet. 👏👏

Model training and testing

The complete code for training and testing the model is located in Codes/IrisParseNet, and the post-processing executable program is located in Codes/Post-processing.

We have released the models trained on CASIA.v4-distance (casia for short), MICHE-I (miche for short), and UBIRIS.v2 (nice for short).
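For reference, loading one of the released models with pycaffe and running a forward pass looks roughly like the sketch below. The prototxt/caffemodel file names and the "data" blob name are placeholders; the actual test pipeline is in the scripts under Codes/IrisParseNet/test.

# Minimal pycaffe inference sketch. File names and the "data" blob name are
# placeholders; see Codes/IrisParseNet/test for the real test scripts.
import numpy as np
import caffe

caffe.set_mode_gpu()   # or caffe.set_mode_cpu()
net = caffe.Net("deploy.prototxt", "irisparsenet_casia.caffemodel", caffe.TEST)

# Caffe expects a mean-subtracted BGR image in N x C x H x W layout.
img = np.load("preprocessed_bgr_chw.npy").astype(np.float32)   # shape (3, H, W)
net.blobs["data"].reshape(1, *img.shape)
net.blobs["data"].data[...] = img
outputs = net.forward()
for name in outputs:
    print(name + ": " + str(outputs[name].shape))   # predicted masks / boundary maps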

Evaluation protocols

The iris segmentation and localization evaluation codes are provided. When implementing the evaluation protocols, we referenced many open-source projects. We thank their authors, especially USIT Iris Toolkit v2, TVM-iris segmentation, and the GlaS Challenge Contest.

Please read our paper for details. The evaluation codes can be found in the evaluation folder.
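As a quick reference, the core segmentation metric used in this line of work (the NICE.I-style average segmentation error, i.e. the mean fraction of pixels on which the predicted and ground-truth masks disagree) can be sketched in a few lines of NumPy. This is only the basic metric; the full protocol, including the localization metrics, is in the evaluation folder.

# Sketch of the NICE.I-style average segmentation error: the mean fraction of
# pixels on which the predicted and ground-truth masks disagree. This is only
# the basic metric, not the full protocol implemented in the evaluation folder.
import numpy as np

def average_segmentation_error(pred_masks, gt_masks):
    """pred_masks, gt_masks: sequences of binary (H, W) arrays of equal size."""
    errors = [np.logical_xor(p.astype(bool), g.astype(bool)).mean()
              for p, g in zip(pred_masks, gt_masks)]
    return float(np.mean(errors))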

Annotation codes

We use the interactive development environment (HDevelop) provided by the machine vision software MVTec Halcon. Before labeling, you need to install the Halcon software. Halcon is paid software, but a free trial is available; please refer to the page: https://www.mvtec.com/products/halcon/now/.

Our Halcon-based annotation codes can be found in annotation. The code helps us label the iris inner/outer boundaries and output as many kinds of annotation results as possible.

Data

We have provided all training and testing datasets with ground truths to help reproduce our method. Since we do not have permission to release the original iris images of the MICHE-I and UBIRIS.v2 databases, if you want to use the ground truths of these two databases, please email the owners of the databases to request permission and let us know once permission is granted. We will then provide the password for the ground truth files.

Original iris database:

Ground truth:

Reference

[1] Z. Zhao and A. Kumar, "An accurate iris segmentation framework under relaxed imaging constraints using total variation model," in Proc. IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3828-3836.

[2] N. Liu et al., "Accurate iris segmentation in non-cooperative environments using fully convolutional networks," in Proc. International Conference on Biometrics (ICB), 2016.

[3] Y. Hu, K. Sirlantzis, and G. Howells, "Improving colour iris segmentation using a model selection technique," Pattern Recognition Letters, vol. 57, pp. 24-32, 2015.

[4] H. Proença and L. A. Alexandre, "The NICE.I: Noisy Iris Challenge Evaluation - Part I," in Proc. First IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS), 2007, pp. 1-4.

[5] M. De Marsico, M. Nappi, D. Riccio, et al., "Mobile Iris Challenge Evaluation (MICHE)-I, biometric iris dataset and protocols," Pattern Recognition Letters, vol. 57, pp. 17-23, 2015.

Disclaimer

This package is provided on an "as is" basis and does not include any warranty of any kind.

Questions

Please contact [email protected] or [email protected].


irisparsenet's Issues

How to train on our own dataset?

We created labels for the pupil mask (set to 1) and the iris edge (set to 2); the dimension is (N, 1, W, H).
But we have a problem with the cross-entropy loss.

Is there any problem with the size or dimensions of the images I provided?

DATASET FILE PASSWORD

The files are password-protected; unfortunately, I can't access them.

What is the password for the dataset?

Iris datasets

Hello author, do you have the ND0405, 2013, or other iris datasets that you could share? I currently only have the CASIA dataset from the Chinese Academy of Sciences, which feels a bit small for doing iris recognition. Thank you very much.

What's in the training list?

The root_folder and source should be configured in the ImageSegData layer, and I noticed that the SliceLabel layer comes afterwards to obtain pupil_mask_gt and iris_edge_gt, so what's in the training list? I've checked the two .cpp files, but I can't be sure. Could somebody give me some help?

model/training code

Hello,

I cannot seem to find the trained model or the training code.
Any chance you could share the model?

Best,

Can't reproduce results.

Hi!
I cannot reproduce your results.

I cannot run Caffe on my PC, because it is no longer supported on my system (Ubuntu 22.04 is too new for Caffe).
I've tried both the original Caffe and your extended Caffe.

The sites www.codewithgpu.com and www.autodl.com are in Chinese, and I cannot read them. Google Translate also fails to translate them.

I've tried to convert your model to ONNX format, but several converters failed, reporting that they don't recognize parameters listed in the proto file.

I've managed to port your code to PyTorch. The model weights were converted to a PyTorch state dict using the script from https://github.com/vadimkantorov/caffemodel2pytorch.
I've checked the tensor shapes in the debugger, comparing them with the .caffemodel files loaded in https://netron.app and with the paper.

However, the result is wrong even for the image from Codes/Post-processing/data/img: I get a black image instead of the pupil mask, and incorrect iris and iris edge masks.

Here is what I get, left to right: source image (001_IP5_IN_R_RI_01_2.JPEG, converted to BGR and with the per-channel mean subtracted), pupil mask, iris mask, iris edge mask.

I suspect that this is due to the preprocessing applied to the image.
I've tried converting the image to BGR format and subtracting (104.00698793, 116.66876762, 122.67891434) from the blue, green, and red channels respectively.
I found these values in the Python scripts located under Codes/IrisParseNet/test.
However, the result is still wrong.

What preprocessing do you use?
What data layout does Caffe expect? Is it row-wise or column-wise?

Could you provide your models in ONNX format?
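For reference, the usual Caffe convention is a BGR image with the per-channel mean subtracted, laid out as N x C x H x W in row-major order. A minimal preprocessing sketch under that assumption (using the mean values quoted above; not confirmed as the authors' exact pipeline) is:

# Standard Caffe-style preprocessing sketch (assumed convention, not confirmed
# by the authors): BGR order, per-channel mean subtraction, N x C x H x W layout.
import numpy as np
import cv2

MEAN_BGR = np.array([104.00698793, 116.66876762, 122.67891434], dtype=np.float32)

def preprocess(path):
    img = cv2.imread(path)                    # OpenCV loads images as BGR, H x W x C
    img = img.astype(np.float32) - MEAN_BGR   # subtract the per-channel (B, G, R) mean
    img = img.transpose(2, 0, 1)              # H x W x C -> C x H x W
    return img[np.newaxis, ...]               # add batch dimension: 1 x C x H x W

blob = preprocess("001_IP5_IN_R_RI_01_2.JPEG")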

Dataset inquiry

Hello, I downloaded the CASIA-distance dataset from Baidu Netdisk, but the compressed file requires a password to extract. Could you please provide it? Thank you very much.
