This repository uses cycleGAN for the augmentation of mammography samples. The GANs used for the augmentation were pretrained on an in-house dataset (UKE dataset) subdivided into three visually different domains. The pretrained generators can be downloaded, and the model architectures are stored in `generator_model.py`; alternatively, simply follow the steps in the Usage section. The idea is to use the cycleGAN generators for the augmentation of training data. An overview of the complete setup:

- A cycleGAN is trained to translate between the `BRIGHT`, `NORMAL` and `DARK` subdomains of the UKE dataset. The cycleGAN model architectures were modified with various cyclic (black loss) and acyclic (orange loss) loss functions.
- The cycleGAN generators can then be extracted and reused for the augmentation of training data, thereby improving the robustness and generalizability of a model trained on the input data (e.g. a YOLO breast lesion detector).
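The cyclic losses mentioned above enforce that translating an image to another domain and back reproduces the original. A minimal NumPy sketch of an L1 cycle-consistency term, with toy brightness-shift "generators" standing in for the real UNet generators (all names here are illustrative, not from this repository):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    # Cyclic loss: map A -> B -> A and measure the L1 distance to the original.
    x_reconstructed = g_ba(g_ab(x))
    return float(np.mean(np.abs(x - x_reconstructed)))

# Toy "generators": a brightness shift stands in for a NORMAL <-> BRIGHT translation.
g_ab = lambda img: np.clip(img + 0.2, 0.0, 1.0)  # brighten
g_ba = lambda img: np.clip(img - 0.2, 0.0, 1.0)  # darken

x = np.full((4, 4), 0.5)              # a flat dummy "image"
loss = cycle_consistency_loss(x, g_ab, g_ba)  # ~0 for this invertible pair
```

During cycleGAN training this term (summed for both translation directions) is minimized alongside the adversarial losses.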
```
├── Input                      # put your input files here
│   ├── BRIGHT
│   │   ├── file_bright01.png
│   │   ├── file_bright02.png
│   │   └── ...
│   ├── DARK
│   │   ├── file_dark01.png
│   │   ├── file_dark02.png
│   │   └── ...
│   └── NORMAL
│       ├── file_normal01.png
│       ├── file_normal02.png
│       └── ...
├── pretrained_models
│   ├── UNet_acyc_geo/
│   ├── UNet_acyc_perc/
│   ├── UNet_adversarial/
│   ├── UNet_cyc_geo/
│   └── UNet_cyc_perc/
└── Output
    ├── BRIGHT_NORMAL
    │   ├── file_bright01.png
    │   ├── file_bright02.png
    │   └── ...
    ├── DARK_NORMAL
    │   ├── file_dark01.png
    │   ├── file_dark02.png
    │   └── ...
    ├── NORMAL_BRIGHT
    │   ├── file_normal01.png
    │   ├── file_normal02.png
    │   └── ...
    └── NORMAL_DARK
        ├── file_normal01.png
        ├── file_normal02.png
        └── ...
```
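The layout above implies a fixed mapping from each input domain to the translation outputs it produces. A small illustrative sketch of that relationship (this mapping helper is hypothetical, not code from the repository):

```python
from pathlib import Path

# Hypothetical mapping from input domain to the Output/ subdirectories
# shown in the tree above: each domain is translated toward NORMAL, and
# NORMAL is translated toward both BRIGHT and DARK.
TRANSLATIONS = {
    "BRIGHT": ["BRIGHT_NORMAL"],
    "DARK": ["DARK_NORMAL"],
    "NORMAL": ["NORMAL_BRIGHT", "NORMAL_DARK"],
}

def output_dirs_for(domain):
    """Return the Output subdirectories that images from `domain` end up in."""
    return [Path("Output") / t for t in TRANSLATIONS[domain]]
```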
NOTE: this code does not require any GPUs!
- Clone the repository.
- Download `model.zip` from here, extract the file and put it in the models directory.
- Install the requirements, either with `pip install -r requirements.txt` or simply use conda.
- Choose a model out of the listed models with the flag `-md`/`--model`: `UNet_acyc_geo`, `UNet_acyc_perc`, `UNet_adversarial`, `UNet_cyc_geo`, `UNet_cyc_perc`.
- Put your input files into the respective `BRIGHT`, `NORMAL` and `DARK` folders in `Input`; files have to be in either JPEG or PNG format.
- Generate images using `python3 generate.py -md UNet_acyc_geo`; generated images are saved into the respective subdirectory `Output/BRIGHT_NORMAL`, `Output/NORMAL_BRIGHT`, `Output/DARK_NORMAL` or `Output/NORMAL_DARK`.
- `-m`/`--model`: choose the generator model out of the list; default is `UNet_adversarial`
- `-d`/`--delete_input`: delete all previous input files in `BRIGHT`, `NORMAL` and `DARK`; default is `False`
- `-s`/`--size`: set the generated image size; default is `512`
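The documented flags correspond to a small command-line interface. A minimal `argparse` sketch mirroring them (this is an illustrative reconstruction, not the repository's actual parser):

```python
import argparse

# Illustrative parser mirroring the documented generate.py flags and defaults.
parser = argparse.ArgumentParser(description="cycleGAN-based mammogram augmentation")
parser.add_argument("-m", "--model", default="UNet_adversarial",
                    choices=["UNet_acyc_geo", "UNet_acyc_perc", "UNet_adversarial",
                             "UNet_cyc_geo", "UNet_cyc_perc"],
                    help="generator model to use")
parser.add_argument("-d", "--delete_input", action="store_true",
                    help="delete all previous input files in BRIGHT, NORMAL and DARK")
parser.add_argument("-s", "--size", type=int, default=512,
                    help="generated image size")

# Example invocation (arguments passed explicitly for demonstration):
args = parser.parse_args(["-m", "UNet_acyc_geo", "-s", "256"])
```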
- upload cycleGAN training code
If you use any of this code for your research, please cite this paper.
```bibtex
@InProceedings{CycleAugmentGAN2022,
  author="El-Ghoussani, Amir
    and Rodr{\'i}guez-Salas, Dalia
    and Seuret, Mathias
    and Maier, Andreas",
  editor="Maier-Hein, Klaus
    and Deserno, Thomas M.
    and Handels, Heinz
    and Maier, Andreas
    and Palm, Christoph
    and Tolxdorff, Thomas",
  title="GAN-based Augmentation of Mammograms to Improve Breast Lesion Detection",
  booktitle="Bildverarbeitung f{\"u}r die Medizin 2022",
  year="2022",
  publisher="Springer Fachmedien Wiesbaden",
  address="Wiesbaden",
  pages="321--326",
  isbn="978-3-658-36932-3"
}
```