
Reading notes: Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation

This post is a reading note for the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" by Utku Ozbulak et al., MICCAI 2019. Broadly speaking, the authors aim to generate more realistic adversarial attacks on medical image segmentation tasks. Specifically, they design a novel generation method that produces an adversarial example that misleads a trained model into predicting a targeted, realistic segmentation mask, using only limited perturbation.

Introduction

Recent studies have developed various deep learning-based models for medical imaging systems. These models have shown strong performance on several clinical prediction tasks with minimal labor expense. However, deep learning-based models come with a major security problem: they can be completely fooled by so-called adversarial examples. Adversarial examples are model inputs with imperceptibly small perturbations that lead to incorrect predictions. Consequently, the users of these systems are exposed to a serious security risk.
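To make the idea of a targeted attack on segmentation concrete, here is a minimal PyTorch sketch of a one-step, FGSM-style targeted perturbation. This is a generic illustration under assumed shapes and names, not the paper's method (the authors develop a more elaborate targeted optimization); the function `targeted_fgsm`, the step size `epsilon`, and the model/input conventions are all assumptions for the example.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_mask, epsilon=0.01):
    """One-step targeted perturbation (FGSM-style) for segmentation.

    image:       (N, 3, H, W) float tensor, values in [0, 1]
    target_mask: (N, H, W) long tensor of attacker-chosen class labels
    Returns an adversarial image nudged toward predicting target_mask.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                         # (N, C, H, W) class scores
    loss = F.cross_entropy(logits, target_mask)   # loss w.r.t. the *target* mask
    loss.backward()
    # Step against the gradient to decrease the loss toward the target,
    # then clamp back into the valid image range.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

In practice, a single step rarely yields a coherent target mask; iterating this update with a small `epsilon` (a PGD-style loop) is closer in spirit to the kind of targeted segmentation attack the paper studies.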